
I want to run the OOTB example of Spark Streaming (version 1.6) standalone, building it myself. I am able to compile and run the example as bundled with the other code samples, i.e.:

./bin/run-example streaming.StatefulNetworkWordCount localhost 9999 

But I am unable to do the same in my own project (same code). Any help?
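My file is essentially the 1.6 example code; condensed, the part that matters (including the mapWithState call that shows up in the stack trace below) is:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

    object StatefulNetworkWordCount {
      def main(args: Array[String]): Unit = {
        val sparkConf = new SparkConf().setAppName("StatefulNetworkWordCount")
        val ssc = new StreamingContext(sparkConf, Seconds(1))
        ssc.checkpoint(".") // mapWithState requires a checkpoint directory

        // (word, 1) pairs from a socket text stream
        val words = ssc.socketTextStream(args(0), args(1).toInt)
          .flatMap(_.split(" "))
          .map(word => (word, 1))

        // running count per word, held in Spark-managed state
        val mappingFunc = (word: String, one: Option[Int], state: State[Int]) => {
          val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
          state.update(sum)
          (word, sum)
        }

        // mapWithState is new in Spark 1.6 -- this is the call that fails at runtime
        val stateDstream = words.mapWithState(StateSpec.function(mappingFunc))
        stateDstream.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }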

build.sbt:

import sbtassembly.AssemblyKeys 
name := "stream-test" 

version := "1.0" 


libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided" 
libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka" % "1.6.0" 
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.6.0" 
libraryDependencies += "org.json4s" %% "json4s-native" % "3.2.10" 

libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.4" % "test" 

assemblyJarName in assembly := "stream_test_" + version.value + ".jar" 

assemblyMergeStrategy in assembly := {
  case PathList("org", "apache", xs @ _*) => MergeStrategy.last
  case PathList("com", "google", xs @ _*) => MergeStrategy.last
  case PathList("com", "esotericsoftware", xs @ _*) => MergeStrategy.last
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}

assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false) 
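Note that only spark-core is marked provided here, so the spark-streaming 1.6 classes get compiled into the assembly. For reference, the usual pattern when deploying through spark-submit is to mark every Spark module provided, so that only the jars shipped with the installed Spark distribution end up on the classpath, e.g.:

    libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.6.0" % "provided"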

It compiles without problems. However, I get an error as soon as I run it (note that I am running this with Spark 1.6):

$ ../../../app/spark-1.6.0-bin-hadoop2.6/bin/spark-submit --jars /app/spark-streaming_2.11-1.6.0.jar --master local[4] --class "StatefulNetworkWordCount" ./target/scala-2.10/stream-test_2.10-1.0.jar localhost 9999 
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
    16/02/10 22:16:30 INFO SparkContext: Running Spark version 1.4.1 
    2016-02-10 22:16:32.451 java[86932:5664316] Unable to load realm info from SCDynamicStore 
    16/02/10 22:16:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
    .. 
    16/02/10 22:16:39 INFO Utils: Successfully started service 'sparkDriver' on port 60720. 
    16/02/10 22:16:40 INFO SparkEnv: Registering MapOutputTracker 
    16/02/10 22:16:40 INFO SparkEnv: Registering BlockManagerMaster 

    16/02/10 22:16:42 INFO SparkUI: Started SparkUI at http://xxx:4040 
    16/02/10 22:16:43 INFO SparkContext: Added JAR file://app/spark-streaming_2.11-1.6.0.jar at http://xxx:60721/jars/spark-streaming_2.11-1.6.0.jar with timestamp 1455171403485 
    16/02/10 22:16:43 INFO SparkContext: Added JAR file:/projects/spark/test/./target/scala-2.10/stream-test_2.10-1.0.jar at http://xxx:60721/jars/stream-test_2.10-1.0.jar with timestamp 1455171403562 
    16/02/10 22:16:44 INFO Executor: Starting executor ID driver on host localhost 
    16/02/10 22:16:44 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 60722. 
    .. 
    16/02/10 22:16:44 INFO BlockManagerMaster: Registered BlockManager 
    Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.streaming.dstream.PairDStreamFunctions.mapWithState(Lorg/apache/spark/streaming/StateSpec;Lscala/reflect/ClassTag;Lscala/reflect/ClassTag;)Lorg/apache/spark/streaming/dstream/MapWithStateDStream; 
     at StatefulNetworkWordCount$.main(StatefulNetworkWordCount.scala:50) 
     at StatefulNetworkWordCount.main(StatefulNetworkWordCount.scala) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:606) 
     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665) 
     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170) 
     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193) 
     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112) 
     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

This method is in the classes inside the JAR, so I don't understand..
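A quick way to double-check with standard JDK tools (using the jar paths from the spark-submit command above):

    # confirm the MapWithStateDStream class is in the streaming jar
    jar tf /app/spark-streaming_2.11-1.6.0.jar | grep MapWithStateDStream

    # confirm PairDStreamFunctions exposes mapWithState in the 1.6 jar
    javap -classpath /app/spark-streaming_2.11-1.6.0.jar \
        org.apache.spark.streaming.dstream.PairDStreamFunctions | grep mapWithState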


Check the version of your Spark cluster installation – eliasah


I am running spark-1.6.0-bin-hadoop2.6, standalone / 1 node. Anything specific I should check? –

Answer


Found the answer.. Although I was launching spark-submit from the 1.6 distribution, my SPARK_HOME was still pointing to the earlier Spark 1.4 installation (note the "Running Spark version 1.4.1" line in the log above). Setting SPARK_HOME to the 1.6 installation and running the same spark-submit command solved the problem.
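Concretely, something like this (the 1.6 path is illustrative; point it at wherever your 1.6 distribution actually lives):

    export SPARK_HOME=/app/spark-1.6.0-bin-hadoop2.6
    $SPARK_HOME/bin/spark-submit --master local[4] \
        --class "StatefulNetworkWordCount" \
        ./target/scala-2.10/stream-test_2.10-1.0.jar localhost 9999

After this, the first INFO line should report "Running Spark version 1.6.0" instead of the 1.4.1 seen in the log above.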