
Spark SQL is quite clear to me, but I am only just getting started with Spark's RDD API. As spark apply function to columns in parallel points out, this should allow me to get rid of slow shuffles for:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// `this.target` and the column `cnt_foo_eq_1` come from the enclosing class / earlier steps
def handleBias(df: DataFrame, colName: String, target: String = this.target) = {
  val w1 = Window.partitionBy(colName)           // window per value of colName
  val w2 = Window.partitionBy(colName, target)   // window per (colName, target) pair

  df.withColumn("cnt_group", count("*").over(w2))
    .withColumn("pre2_" + colName, mean(target).over(w1))
    .withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
    .drop("cnt_group")
}

In pseudocode: df foreach column (handleBias(column)) — a sketch of this loop is given after the sample data. So a minimal DataFrame is loaded up:

val input = Seq(
    (0, "A", "B", "C", "D"), 
    (1, "A", "B", "C", "D"), 
    (0, "d", "a", "jkl", "d"), 
    (0, "d", "g", "C", "D"), 
    (1, "A", "d", "t", "k"), 
    (1, "d", "c", "C", "D"), 
    (1, "c", "B", "C", "D") 
) 
val inputDf = input.toDF("TARGET", "col1", "col2", "col3TooMany", "col4")
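
A minimal sketch of that per-column loop, assuming the handleBias shown above is in scope and that "TARGET" is the target column:

// sketch only: fold handleBias over the feature columns of inputDf
// assumes handleBias (defined above) compiles in its enclosing class and
// that any columns it references (e.g. cnt_foo_eq_1) are already present
val featureCols = Seq("col1", "col2", "col3TooMany", "col4")
val dfWithBias = featureCols.foldLeft(inputDf)((df, c) => handleBias(df, c, "TARGET"))
dfWithBias.show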

However, my attempt to map the rows to (columnIndex, value) pairs via the RDD API does not work correctly:

val rdd1_inputDf = inputDf.rdd.flatMap { x => (0 until x.size).map(idx => (idx, x(idx))) }
rdd1_inputDf.toDF.show

It fails with:

java.lang.ClassNotFoundException: scala.Any 
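
The failure seems to come from the value side of the tuple being typed as Any (Row's apply returns Any), for which Spark cannot derive a schema. A workaround sketch, under the assumption that rendering every value as a String is acceptable (the column names colIdx/value are just illustrative):

// sketch: give the tuple a concrete element type so Spark can derive a schema
// (everything is rendered as a String here, which may lose type information)
val rdd2_inputDf = inputDf.rdd.flatMap { x =>
  (0 until x.size).map(idx => (idx, String.valueOf(x(idx))))
}
rdd2_inputDf.toDF("colIdx", "value").show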

An example of the problem outlined in this question can be found at https://github.com/geoHeil/sparkContrastCoding, specifically in https://github.com/geoHeil/sparkContrastCoding/blob/master/src/main/scala/ColumnParallel.scala.

Answers


When you call DataFrame.rdd you get back an RDD[Row], which is not strongly typed. If you want to be able to map over its elements, you will have to pattern match against Row:

scala> val input = Seq(
    |  (0, "A", "B", "C", "D"), 
    |  (1, "A", "B", "C", "D"), 
    |  (0, "d", "a", "jkl", "d"), 
    |  (0, "d", "g", "C", "D"), 
    |  (1, "A", "d", "t", "k"), 
    |  (1, "d", "c", "C", "D"), 
    |  (1, "c", "B", "C", "D") 
    | ) 
input: Seq[(Int, String, String, String, String)] = List((0,A,B,C,D), (1,A,B,C,D), (0,d,a,jkl,d), (0,d,g,C,D), (1,A,d,t,k), (1,d,c,C,D), (1,c,B,C,D)) 

scala> val inputDf = input.toDF("TARGET", "col1", "col2", "col3TooMany", "col4") 
inputDf: org.apache.spark.sql.DataFrame = [TARGET: int, col1: string ... 3 more fields] 

scala> import org.apache.spark.sql.Row 
import org.apache.spark.sql.Row 

scala> val rowRDD = inputDf.rdd 
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at rdd at <console>:27 

scala> val typedRDD = rowRDD.map{case Row(a: Int, b: String, c: String, d: String, e: String) => (a,b,c,d,e)} 
typedRDD: org.apache.spark.rdd.RDD[(Int, String, String, String, String)] = MapPartitionsRDD[20] at map at <console>:29 

scala> typedRDD.keyBy(_._1).groupByKey.foreach{println} 
(0,CompactBuffer((A,B,C,D), (d,a,jkl,d), (d,g,C,D))) 
(1,CompactBuffer((A,B,C,D), (A,d,t,k), (d,c,C,D), (c,B,C,D))) 

Alternatively, you can use a typed Dataset:

scala> val ds = input.toDS 
ds: org.apache.spark.sql.Dataset[(Int, String, String, String, String)] = [_1: int, _2: string ... 3 more fields] 

scala> ds.rdd 
res2: org.apache.spark.rdd.RDD[(Int, String, String, String, String)] = MapPartitionsRDD[8] at rdd at <console>:30 

scala> ds.rdd.keyBy(_._1).groupByKey.foreach{println} 
(0,CompactBuffer((0,A,B,C,D), (0,d,a,jkl,d), (0,d,g,C,D))) 
(1,CompactBuffer((1,A,B,C,D), (1,A,d,t,k), (1,d,c,C,D), (1,c,B,C,D))) 
+1

Since I want to use this in an ml.Pipeline, and the output of a pipeline step is a DataFrame whose schema is "lost", would I need to use pattern matching there? Is that correct? And with many columns, is there a way to "infer" them (a partial schema)? –

+0

Yes, unfortunately the DF => RDD conversion does not use the schema at all (I don't think there is a good way to force it to). However, take a look at my new Dataset example: there is no need for an intermediate DataFrame at all, and the Dataset infers the types just fine (in Spark 2.0 I believe anything you can do with a DF can also be done with a DS) –
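
A minimal sketch of that DataFrame-to-Dataset conversion in Spark 2.x (the case class Record is hypothetical, chosen here to mirror inputDf's columns):

// sketch: recover typing from an existing DataFrame via as[...] instead of matching on Row
// `Record` is a hypothetical case class whose fields mirror inputDf's schema
case class Record(TARGET: Int, col1: String, col2: String, col3TooMany: String, col4: String)

// import spark.implicits._   // already in scope in spark-shell
val typedDs = inputDf.as[Record]   // Dataset[Record]; analysis fails fast on a schema mismatch
typedDs.rdd.keyBy(_.TARGET).groupByKey.foreach(println)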

+0

@GeorgHeiler (not sure whether you were notified ^^^^) –
