2017-06-01 54 views
0

Spark 2.0: how to convert an RDD of tuples to a DF

I upgraded one of my projects from Spark 1.6 to Spark 2.0.1. The following code works in Spark 1.6, but does not work in 2.0.1:

def count(df: DataFrame): DataFrame = {
  val sqlContext = df.sqlContext
  import sqlContext.implicits._

  df.map { case Row(userId: String, itemId: String, count: Double) =>
    (userId, itemId, count)
  }.toDF("userId", "itemId", "count")
}

Here is the error message:

Error:(53, 12) Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases. 
    df.map { case Row(userId: String, itemId: String, count: Double) => 
     ^
Error:(53, 12) not enough arguments for method map: (implicit evidence$7: org.apache.spark.sql.Encoder[(String, String, Double)])org.apache.spark.sql.Dataset[(String, String, Double)]. 
Unspecified value parameter evidence$7. 
    df.map { case Row(userId: String, itemId: String, count: Double) => 
    ^

I tried using df.rdd.map instead of df.map, and then got the following error:

Error:(55, 7) value toDF is not a member of org.apache.spark.rdd.RDD[(String, String, Double)] 
possible cause: maybe a semicolon is missing before `value toDF'? 
    }.toDF("userId", "itemId", "count") 
    ^

How do I convert an RDD of tuples to a DataFrame in Spark 2.0?

+0

Did you try importing 'import spark.implicits._'? –

+0

@rogue-one Yes, I tried changing 'val sqlContext = df.sqlContext; import sqlContext.implicits._' to 'val spark = df.sparkSession; import spark.implicits._', but got the same error. – Rainfield

Answer

0

There is most likely a syntax error somewhere else in your code, since your map function appears to be written correctly when you get

Error:(53, 12) not enough arguments for method map: (implicit evidence$7: org.apache.spark.sql.Encoder[(String, String, Double)])org.apache.spark.sql.Dataset[(String, String, Double)]. Unspecified value parameter evidence$7

Your code worked when I tested it in my Spark shell.
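
For reference, here is a minimal sketch of the kind of check I mean, run in spark-shell (the sample rows and column values below are made up purely for illustration):

    import org.apache.spark.sql.{DataFrame, Row}
    import spark.implicits._   // `spark` is the SparkSession predefined in spark-shell

    // Hypothetical sample data, only to exercise the conversion
    val df: DataFrame = Seq(
      ("u1", "i1", 2.0),
      ("u2", "i2", 5.0)
    ).toDF("userId", "itemId", "count")

    // Same pattern as in the question: Row => tuple, then back to a DataFrame
    val counted = df.map { case Row(userId: String, itemId: String, count: Double) =>
      (userId, itemId, count)
    }.toDF("userId", "itemId", "count")

    counted.show()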
