2017-04-07

Unable to write a sequence file with the Spark RDD API

I am using the following code to write an RDD as a sequence file:

import org.apache.hadoop.io.{IntWritable, Text}
import org.apache.spark.{SparkConf, SparkContext}
import org.junit.Test

@Test
def testSparkWordCount(): Unit = {
    val words = Array("Hello", "Hello", "World", "Hello", "Welcome", "World")
    val conf = new SparkConf().setMaster("local").setAppName("testSparkWordCount")
    val sc = new SparkContext(conf)

    val dir = "file:///" + System.currentTimeMillis()
    sc.parallelize(words).map(x => (x, 1)).saveAsHadoopFile(
      dir,
      classOf[Text],
      classOf[IntWritable],
      classOf[org.apache.hadoop.mapred.SequenceFileOutputFormat[Text, IntWritable]]
    )

    sc.stop()
}

When I run it, it fails with:

Caused by: java.io.IOException: wrong key class: java.lang.String is not class org.apache.hadoop.io.Text 
    at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1373) 
    at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:76) 
    at org.apache.spark.internal.io.SparkHadoopWriter.write(SparkHadoopWriter.scala:94) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1139) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1137) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1137) 
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1360) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1145) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1125) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 

Should I have to use sc.parallelize(words).map(x => (new Text(x), new IntWritable(1))) instead of sc.parallelize(words).map(x => (x, 1))? I don't think I should need to wrap the values explicitly, since SparkContext already provides implicits that wrap the primitive types into their corresponding Writables.

So what should I do to make this code work?

Answer


Yes, SparkContext provides implicit conversions. However, those conversions are not applied during the save itself; they have to be used in the usual Scala way:

import org.apache.spark.SparkContext._
val mapperFunction: String => (Text, IntWritable) = x => (x, 1)
... parallelize(words).map(mapperFunction).saveAsHadoopFile ...
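
Applied to the test from the question, the end-to-end fix might look like the sketch below (untested here; it reuses the question's words array and output dir, and wrapping the pair explicitly with new Text(x) and new IntWritable(1) would work just as well as relying on the imported implicits):

import org.apache.hadoop.io.{IntWritable, Text}
import org.apache.hadoop.mapred.SequenceFileOutputFormat
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // brings the String -> Text and Int -> IntWritable implicits into scope
import org.junit.Test

@Test
def testSparkWordCount(): Unit = {
    val words = Array("Hello", "Hello", "World", "Hello", "Welcome", "World")
    val conf = new SparkConf().setMaster("local").setAppName("testSparkWordCount")
    val sc = new SparkContext(conf)
    val dir = "file:///" + System.currentTimeMillis()

    // Declaring the result type as (Text, IntWritable) makes the implicit
    // conversions fire before the records reach the Hadoop writer.
    val mapperFunction: String => (Text, IntWritable) = x => (x, 1)

    sc.parallelize(words)
      .map(mapperFunction)
      .saveAsHadoopFile(dir, classOf[Text], classOf[IntWritable],
        classOf[SequenceFileOutputFormat[Text, IntWritable]])

    sc.stop()
}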

Got it, thanks @pashaz for the helpful answer – Tom


Also, the saveAsSequenceFile method can be used, since it includes the implicit conversions: .map(x => (x, 1)).saveAsSequenceFile(dir) – pasha701
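
For completeness, a minimal sketch of that suggestion, assuming the same sc, words and dir as in the question (saveAsSequenceFile converts the keys and values to Writables itself, so the plain (String, Int) pairs are fine):

// saveAsSequenceFile wraps the key/value types into Writables internally,
// so no explicit Text/IntWritable mapping is needed before saving.
sc.parallelize(words)
  .map(x => (x, 1))
  .saveAsSequenceFile(dir)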