2016-04-03

I'll start by explaining the problem with the code below: string replacement in a Spark RDD.

numPartitions = 2
rawData1 = sc.textFile('train_new.csv', numPartitions, use_unicode=False)


rawData1.take(1) 

['1,0,0,0,0,0,0,0,0,0,0,1,0,0,5,0,0,0,0,0,0,0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,2,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,9,0,0,0,0,0,Class_2'] 

Now I want to replace Class_2 with 2.

After the replacement, the result should be:

['1,0,0,0,0,0,0,0,0,0,0,1,0,0,5,0,0,0,0,0,0,0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,2,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,9,0,0,0,0,0,2'] 

Once I understand how to do this for one row, I'll apply the same operation to the whole dataset.

Thanks in advance, Aashish

Answer

result = rawData1.map(lambda element: ','.join(element.split(',')[:-1] + ['2'])) 

That should do it. It works by mapping each element of the RDD through the lambda function, which returns a new RDD.

Each element is split on ',' into a list, the last item (the class label) is dropped, the single-item list ['2'] is appended, and the pieces are joined back together with ','.
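You can check the transformation without a Spark cluster by applying the same lambda to a plain string, since `map` just applies it to each element. This is a minimal sketch using a shortened example row:

```python
# Same lambda as in the answer: drop the last field, append '2', rejoin.
transform = lambda element: ','.join(element.split(',')[:-1] + ['2'])

row = '1,0,0,5,Class_2'
print(transform(row))  # → '1,0,0,5,2'
```

Once the lambda behaves as expected on a single string, `rawData1.map(transform)` applies it across the RDD.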

More elaborate transformations can be achieved by modifying the lambda function appropriately.
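For instance, rather than hard-coding '2', a hypothetical generalisation (not part of the original answer) could strip the "Class_" prefix so that Class_1 becomes 1, Class_9 becomes 9, and so on:

```python
def replace_label(element):
    # Split the CSV row, strip the assumed "Class_" prefix from the
    # final field, and rejoin — a sketch, assuming every row ends
    # with a label of the form Class_N.
    fields = element.split(',')
    fields[-1] = fields[-1].replace('Class_', '')
    return ','.join(fields)

print(replace_label('1,0,4,Class_7'))  # → '1,0,4,7'
```

This would be applied the same way, e.g. `rawData1.map(replace_label)`.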