
I am trying to get a Spark 1.1.0 program written in Scala working, but I am having a hard time with it. I have a very simple Hive query:

select json, score from data 

When I run the following from spark-shell, everything works (I need MYSQL_CONN on the driver classpath because I am using Hive with a MySQL metastore):

bin/spark-shell --master $SPARK_URL --driver-class-path $MYSQL_CONN 

import org.apache.spark.sql.hive.HiveContext 
val sqlContext = new HiveContext(sc) 
sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println) 

I get the ten rows of json that I want. However, when I run the same thing through spark-submit as follows, I hit a problem:

bin/spark-submit --master $SPARK_URL --class spark.Main --driver-class-path $MYSQL_CONN target/spark-testing-1.0-SNAPSHOT.jar 

Here is my entire Spark program:

package spark 

import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkContext, SparkConf} 

object Main { 
  def main(args: Array[String]) { 
    val sc = new SparkContext(new SparkConf().setAppName("Gathering Data")) 
    val sqlContext = new HiveContext(sc) 
    sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println) 
  } 
} 

and here is the resulting stack trace:

14/12/01 21:30:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, match1hd17.dc1): java.lang.ClassNotFoundException: spark.Main$$anonfun$main$1 
     java.net.URLClassLoader$1.run(URLClassLoader.java:200) 
     java.security.AccessController.doPrivileged(Native Method) 
     java.net.URLClassLoader.findClass(URLClassLoader.java:188) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:307) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:252) 
     java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) 
     java.lang.Class.forName0(Native Method) 
     java.lang.Class.forName(Class.java:247) 
     org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59) 
     java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575) 
     java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.readObject(ObjectInputStream.java:351) 
     org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62) 
     org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87) 
     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57) 
     org.apache.spark.scheduler.Task.run(Task.scala:54) 
     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177) 
     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
     java.lang.Thread.run(Thread.java:619) 
14/12/01 21:30:10 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job 
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, match1hd12.dc1m): java.lang.ClassNotFoundException: spark.Main$$anonfun$main$1 
     java.net.URLClassLoader$1.run(URLClassLoader.java:200) 
     java.security.AccessController.doPrivileged(Native Method) 
     java.net.URLClassLoader.findClass(URLClassLoader.java:188) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:307) 
     java.lang.ClassLoader.loadClass(ClassLoader.java:252) 
     java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) 
     java.lang.Class.forName0(Native Method) 
     java.lang.Class.forName(Class.java:247) 
     org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59) 
     java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575) 
     java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947) 
     java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871) 
     java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753) 
     java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329) 
     java.io.ObjectInputStream.readObject(ObjectInputStream.java:351) 
     org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62) 
     org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87) 
     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57) 
     org.apache.spark.scheduler.Task.run(Task.scala:54) 
     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177) 
     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
     java.lang.Thread.run(Thread.java:619) 
Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391) 
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498) 
    at akka.actor.ActorCell.invoke(ActorCell.scala:456) 
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237) 
    at akka.dispatch.Mailbox.run(Mailbox.scala:219) 
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) 
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 

I have spent several hours on this and I cannot figure out why it only works from spark-shell. I looked at the stderr output on the individual nodes, and they show the same cryptic error message. If anyone can shed some light on why this works from spark-shell but not from spark-submit, that would be great.

Thanks

UPDATE:

I have been playing around with this, and the following program works fine:

package spark 

import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkContext, SparkConf} 

object Main { 
  def main(args: Array[String]) { 
    val sc = new SparkContext(new SparkConf().setAppName("Gathering Data")) 
    val sqlContext = new HiveContext(sc) 
    sqlContext.sql("select json from data").take(10).map(t => t.getString(0)).foreach(println) 
  } 
} 

Obviously this would not work for a large amount of data, but it shows that the problem appears to be in the SchemaRDD.map() function.
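
For what it's worth, here is my own reading of the failure (not verified on this cluster): in the working version, take(10) pulls the rows back to the driver first, so the map runs locally and no closure ever needs to be deserialized on an executor. The failing version ships the t => t.getString(0) closure, compiled as spark.Main$$anonfun$main$1, to the executors, which apparently cannot find the application classes. A minimal sketch that exercises the same closure-shipping path without Hive, useful for checking whether any closure from this jar can run on the executors (ClosureCheck is a hypothetical extra class, not part of the original project):

package spark 

import org.apache.spark.{SparkConf, SparkContext} 

object ClosureCheck { 
  def main(args: Array[String]) { 
    val sc = new SparkContext(new SparkConf().setAppName("Closure Check")) 
    // x => x * 2 is compiled into an anonymous function class that the 
    // executors must load from the application jar, just like the closure 
    // passed to SchemaRDD.map in the real program. 
    sc.parallelize(1 to 10).map(x => x * 2).collect().foreach(println) 
    sc.stop() 
  } 
} 

If this also fails with a ClassNotFoundException for an anonfun class, the issue is jar distribution in general rather than anything specific to SchemaRDD.map().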

Answer


It seems there is a problem with the Spark context initialization.

Please try the code below:

val sparkConf = new SparkConf().setAppName("Gathering Data"); 
val sc = new SparkContext(sparkConf); 

This did not change the error message at all. – Jon 2014-12-02 06:24:18


I ran into a similar error: it executed fine in spark-shell but not from spark-submit. I later found that my Spark context was not configured correctly, whereas in the shell it is initialized with defaults automatically. – 2014-12-03 05:24:03


In the error message I see a ClassNotFoundException, so I am guessing there may be a compilation problem behind it. In any case, I will try the code on my cluster and let you know. – 2014-12-03 05:38:03
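
Following up on the answer's point about context configuration, here is a sketch of my own (not something posted in the thread): one concrete thing the shell does for you is register the relevant jars with the context. In a standalone program this can be made explicit on the SparkConf, so the executors can load classes such as spark.Main$$anonfun$main$1 from the application jar. The jar path below is the build artifact from the question; adjust it to your environment.

package spark 

import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkConf, SparkContext} 

object Main { 
  def main(args: Array[String]) { 
    // Explicitly ship the application jar to the executors. spark-submit 
    // normally does this for the main jar, but listing it here removes any 
    // doubt about what ends up on the executor classpath. 
    val sparkConf = new SparkConf() 
      .setAppName("Gathering Data") 
      .setJars(Seq("target/spark-testing-1.0-SNAPSHOT.jar")) 
    val sc = new SparkContext(sparkConf) 
    val sqlContext = new HiveContext(sc) 
    sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println) 
  } 
} 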