
Creating a new RDD with wholeTextFiles in Spark

I want to use Spark's wholeTextFiles to read a directory. The resulting RDD contains (String, String) pairs, where the first String is the file name and the second is the file content.

I want to map this RDD to another one that contains only the file contents. How can I do that?

Thanks!

val file = sc.wholeTextFiles("./Desktop/093") 

file.first 
res0: (String, String) = 
(file:/Users/Desktop/093/nc-no-na.clusters.093.001.txt,"199 197 5 5 168 0 0.932125 11101111000000110100000000000000000000000000001010100000011100001000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001101101111100000000000000000011100000000000000000000000000100000111011000000000000000000000000000000000000000000000000000000000000000011110010111001001110000000011100000000010000000000000000000000000010000000000000000000000000000000000000000011111111111101010111000000000000000000000000000000000000000000000000000000000000000001100000000000000000000000000000000000000000101110101110101011010000000000000000001100001100000011110000000000000000000011111011110011100... 

Answer


For example, like this:

import org.apache.spark.rdd.RDD 

// keep only the second element (the file content) of each (fileName, content) pair
val content: RDD[String] = file.map(_._2) 
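For context, here is a minimal, self-contained sketch of the same idea (the local[*] master, the application name, and the ContentsOnly object are assumptions made for illustration; the directory path is the one from the question). It also shows file.values, which is an equivalent way to drop the keys of a pair RDD:

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

object ContentsOnly {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[*]", "contents-only") // assumption: local mode for the example

    // (fileName, fileContent) pairs, one per file in the directory
    val file: RDD[(String, String)] = sc.wholeTextFiles("./Desktop/093")

    // same projection as in the answer above
    val content: RDD[String] = file.map(_._2)

    // equivalent: pair RDDs also expose .values, which drops the keys
    val contentAlt: RDD[String] = file.values

    content.take(1).foreach(println)
    sc.stop()
  }
}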

How does file.map(_._2) work? – user3180835


[What are all the uses of an underscore in Scala?](http://stackoverflow.com/q/8000903/1560062), http://www.scala-lang.org/api/current/index.html#scala.Tuple2@_2:T2 – zero323
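To make the comment above concrete, the underscore here is just placeholder syntax for the function argument, so file.map(_._2) is shorthand for an explicit lambda that returns each pair's second element. A small sketch of the equivalent forms (assuming file: RDD[(String, String)] as in the question):

// all three keep only the second element of every (fileName, content) pair
val contents1 = file.map(_._2)                          // placeholder syntax
val contents2 = file.map(pair => pair._2)               // explicit lambda
val contents3 = file.map { case (name, text) => text }  // pattern match on the pair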
