
Does except work correctly on a Spark DataFrame? (using except with DataFrames in Apache Spark 2.1.0)

In the Spark shell I created a simple DataFrame containing three strings: "a", "b", "c". I assigned limit(1) to row1, which correctly produced Array([a]). I then passed row1 as the argument to the except method on the grfDF DataFrame to produce tail1. Shouldn't tail1 be a new DataFrame of Array([b], [c])?

Why does tail1 still contain "a" while "b" has been removed?

scala> grfDF.collect 
res1: Array[org.apache.spark.sql.Row] = Array([a], [b], [c])     

scala> val row1 = grfDF.limit(1) 
row1: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [sub: string] 

scala> row1.collect 
res3: Array[org.apache.spark.sql.Row] = Array([a]) 

scala> val tail1 = grfDF.except(row1).collect 
tail1: Array[org.apache.spark.sql.Row] = Array([c], [a]) 

The DataFrame was created as follows:

case class Grf(sub: String)
def toGrf = (grf: Seq[String]) => Grf(grf(0))   // wrap the first field of each split string in a Grf row
val sourceList = Array("a", "b", "c")
val grfRDD = sc.parallelize(sourceList).map(_.split(",")).map(toGrf(_))
val grfDF = spark.createDataFrame(grfRDD)
grfDF.createOrReplaceTempView("grf")
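
Incidentally, none of the source strings contains a comma, so the split/toGrf step is effectively a pass-through; a simpler equivalent construction (just a sketch, keeping the same single-column Grf schema) would be:

// Sketch: map each string straight to a Grf row; splitting "a" on "," only yields Array("a") anyway
val grfDF2 = spark.createDataFrame(sc.parallelize(sourceList).map(Grf(_)))
grfDF2.collect    // expected: Array([a], [b], [c])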

Then I tried to pop off the first row:

val row1 = grfDF.limit(1)
row1.collect
val tail1 = grfDF.except(row1)
tail1.collect
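
Note that a DataFrame carries no guaranteed row order, so limit(1) is not guaranteed to select the same row each time the plan is evaluated, and except(row1) can therefore remove a different row than the one seen in an earlier collect. A sketch of a more deterministic way to split off the first row (assuming ordering by the sub column is acceptable; this is not part of the original code) is:

// Sketch: fix an explicit ordering so limit(1) always selects the same row
val ordered = grfDF.orderBy("sub")
val head1 = ordered.limit(1)          // with this data, deterministically [a]
val tailDF = ordered.except(head1)    // expected: [b] and [c]
tailDF.collect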
A [minimal, complete, verifiable example](https://stackoverflow.com/help/mcve) is needed. –

I feel like this story starts at chapter two. Could you share how you built 'grfDF'? – Vidya

If you can see '[a]' in 'row1.collect', then with your code 'tail1' should always give you 'Array([c], [b])' – himanshuIIITian

Answer


I tried something similar in the Spark shell. Please run the same code again, because the result I get is Array([b], [c]). See the following code:

scala> val sourceList=Array("a","b","c") 
sourceList: Array[String] = Array(a, b, c) 

scala> val grfRDD = sc.parallelize(sourceList) 
grfRDD: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:29 

scala> val grfDF = grfRDD.toDF() 
grfDF: org.apache.spark.sql.DataFrame = [_1: string] 

scala> grfDF 
res0: org.apache.spark.sql.DataFrame = [_1: string] 

scala> val row1 = grfDF.limit(1) 
row1: org.apache.spark.sql.DataFrame = [_1: string] 

scala> row1 
res1: org.apache.spark.sql.DataFrame = [_1: string] 

scala> row1.collect() 
res2: Array[org.apache.spark.sql.Row] = Array([a]) 

scala> val tail = grfDF.except(row1) 
tail: org.apache.spark.sql.DataFrame = [_1: string] 

scala> tail.collect() 
res6: Array[org.apache.spark.sql.Row] = Array([b], [c]) 
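
As a side note, since the question also registers a temp view named grf, the same comparison can be run through Spark SQL; the following is only a sketch along those lines (the view name and the sub column come from the question's setup, and the order of the returned rows is not guaranteed):

// Sketch: EXCEPT via the registered temp view; the subquery picks one row with LIMIT 1
val tailViaSql = spark.sql(
  "SELECT sub FROM grf EXCEPT SELECT sub FROM (SELECT sub FROM grf LIMIT 1) t")
tailViaSql.collect    // expected: Array([b], [c]) when the LIMIT 1 subquery returns 'a'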