1

I am trying to run a large number of K-means jobs in parallel. I have many rooms with a lot of data each, and I want to compute the clusters for each room separately. So, on Spark 2.0.2, can I nest K-means inside RDDs / nested RDDs, or inside DataFrames or Datasets?

roomsSignals: RDD[(room: String, signals: List[org.apache.spark.mllib.linalg.Vector])]

roomsSignals.map { l =>
  // sc (the driver's SparkContext) is referenced inside a distributed map
  val data = sc.parallelize(l.signals)
  val clusterCenters = 2
  val model = KMeans.train(data, clusterCenters, 5)
  model.clusterCenters.map(_.toJson).mkString(",")
}.collect.foreach(println)

This gives me the error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 33 in stage 18.0 failed 4 times, most recent failure: Lost task 33.3 in stage 18.0 (TID 1284, 192.168.181.122):  java.lang.NullPointerException 
at $anonfun$1.apply(<console>:77) 
at $anonfun$1.apply(<console>:76) 
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) 
at scala.collection.Iterator$class.foreach(Iterator.scala:893) 
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) 
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) 
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) 
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) 
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) 
at scala.collection.AbstractIterator.to(Iterator.scala:1336) 
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) 
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336) 
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) 
at scala.collection.AbstractIterator.toArray(Iterator.scala:1336) 
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:912) 
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:912) 
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1916) 
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1916) 
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) 
at org.apache.spark.scheduler.Task.run(Task.scala:86) 
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 
Driver stacktrace: 
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454) 
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442) 
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441) 
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
+0

Please paste the full stack trace, and include the driver stack trace! –

+0

I pasted part of it; I couldn't include all of it because of the length limit. – Dima

+0

Then just look at where your error is! The driver stack trace gives you the exact line. –

Answer

0

Unfortunately, it is not possible. Spark simply does not support any kind of nesting: you cannot start jobs, create RDDs, or use the SparkContext from inside a transformation running on an executor.

Either train separate distributed models by iterating over roomsSignals.collect, or build the models inside the distributed structure using a local library of your choice; see the sketches below.
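Roughly, both options could look like this. This is only a sketch: the first variant assumes roomsSignals is small enough to collect to the driver, and the second uses Commons Math 3 as one example of a local (non-Spark) k-means library, which means adding commons-math3 to the executors' classpath.

import org.apache.spark.mllib.clustering.KMeans

// Option 1: collect to the driver, then launch one distributed KMeans
// job per room -- sc is only used in driver-side code here.
roomsSignals.collect.foreach { l =>
  val data = sc.parallelize(l.signals)
  val model = KMeans.train(data, 2, 5) // k = 2, 5 iterations
  println(model.clusterCenters.map(_.toJson).mkString(","))
}

import scala.collection.JavaConverters._
import org.apache.commons.math3.ml.clustering.{DoublePoint, KMeansPlusPlusClusterer}

// Option 2: keep the map distributed and cluster each room's signals
// with a purely local k-means -- no SparkContext is touched inside
// the closure.
roomsSignals.map { l =>
  val points = l.signals.map(v => new DoublePoint(v.toArray)).asJava
  val clusterer = new KMeansPlusPlusClusterer[DoublePoint](2, 5)
  clusterer.cluster(points).asScala
    .map(_.getCenter.getPoint.mkString("[", ",", "]"))
    .mkString(",")
}.collect.foreach(println)

The second variant keeps the work distributed, one room per task, which is usually the better fit when there are many rooms and each room's data is small.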

+0

I knew it wasn't possible in Spark 1.3, where I would get a not-serializable SparkContext exception, so I assumed Spark had moved on since then. – Dima

+0

It hasn't, and it won't. There is no way to do anything like this without significantly restricting Spark's functionality. –