Spark standalone mode does not work in a cluster

The Spark installation on my local cluster is not working. I downloaded spark-1.4.0-bin-hadoop2.6.tgz and unpacked it into a directory visible to all of the nodes (the nodes can all be reached over passwordless ssh). I also edited conf/slaves so that it contains the hostnames of the nodes, and then ran sbin/start-all.sh. The web UI on the master became available and the nodes appeared in the Workers section. However, when I start a pyspark session (connecting to the master with the URL shown in the web UI) and try to run this simple example:

a=sc.parallelize([0,1,2,3],2) 
a.collect() 
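
For reference, the setup steps described above amount to roughly the following sketch; the hostnames (node1, node2, master-host) and the install directory are placeholders rather than details taken from the post:

tar -xzf spark-1.4.0-bin-hadoop2.6.tgz -C /shared/apps   # directory visible to all nodes
cd /shared/apps/spark-1.4.0-bin-hadoop2.6
printf 'node1\nnode2\n' > conf/slaves                    # one worker hostname per line
sbin/start-all.sh                                        # starts the master and the workers listed in conf/slaves
bin/pyspark --master spark://master-host:7077            # URL as shown in the master's web UI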

I get this error from the a.collect() call:

15/07/12 19:52:58 ERROR TaskSetManager: Task 1 in stage 0.0 failed 4 times; aborting job 
Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/home/myuser/spark-1.4.0-bin-hadoop2.6/python/pyspark/rdd.py", line 745, in collect 
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) 
    File "/home/myuser/spark-1.4.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__ 
    File "/home/myuser/spark-1.4.0-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, 172.16.1.1): java.io.InvalidClassException: scala.reflect.ClassTag$$anon$1; local class incompatible: stream classdesc serialVersionUID = -4937928798201944954, local class serialVersionUID = -8102093212602380348 
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:604) 
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1601) 
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1750) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1964) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1888) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347) 
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369) 
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69) 
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:95) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) 
    at java.lang.Thread.run(Thread.java:722) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 

Has anyone run into this problem? Thanks in advance.

Answers


This looks like a type-casting exception. You could try typing sc.parallelize(List(1,2,3,4,5,6),2) and running it again.


It is not a casting exception; the first argument to parallelize in my example is already a list. Thanks anyway. – Eduardo


Please check that you are using the right JAVA_HOME. You should set it before starting the Spark job. For example:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera 
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH 
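
As an aside, JAVA_HOME can also be set in conf/spark-env.sh on each node so that the daemons launched by sbin/start-all.sh pick up the same JDK; the path below is only an example and should match your own installation:

# conf/spark-env.sh (on every node); adjust the JDK path to your installation
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera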