2017-07-15

I'm trying to connect Spark Streaming to HBase. All my code really does is follow the example code, but I'm hitting a strange runtime error: Spark Streaming + HBase: NoClassDefFoundError: org/apache/hadoop/hbase/spark/HBaseContext

Exception in thread "streaming-job-executor-8" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration 
at buri.sparkour.HBaseInteractor.<init>(HBaseInteractor.java:26) 
at buri.sparkour.JavaCustomReceiver.lambda$main$94c29978$1(JavaCustomReceiver.java:104) 
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280) 
at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51) 
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50) 
at scala.util.Try$.apply(Try.scala:192) 
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:256) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:256) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:256) 
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:255) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:748) 
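For context, here is a minimal Scala sketch of the code path the trace points at (the original HBaseInteractor is Java; the constructor body and the saveToHBase helper below are assumptions reconstructed from the trace, not the asker's code). It shows why the error only surfaces once a streaming job runs: the first touch of HBaseConfiguration happens inside foreachRDD, which is when the JVM actually tries to load the HBase classes.

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.streaming.dstream.DStream

class HBaseInteractor {
  // If hbase-common is missing from the runtime classpath, the JVM throws
  // NoClassDefFoundError right here, even though compilation succeeded.
  val conf = HBaseConfiguration.create()
}

// Hypothetical driver-side helper, named for illustration only.
def saveToHBase(lines: DStream[String]): Unit =
  lines.foreachRDD { rdd =>
    // Runs on the "streaming-job-executor" thread seen in the trace.
    val interactor = new HBaseInteractor()
    // ... write the RDD contents to HBase via interactor ...
  }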

There are several questions on Stack Overflow addressing this error, all of which come down to getting the right jar files onto the classpath. I tried building an "uber" jar with SBT and submitting that, but I still get this error.
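For reference, an uber jar build like this normally goes through the sbt-assembly plugin; a typical registration looks like the line below (the plugin version is an assumption for illustration, not taken from the question):

// project/plugins.sbt -- sbt-assembly adds the `assembly` task that builds the uber jar.
// The version shown here is only an example.
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")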

Here is my build.sbt file:

val sparkVersion = "2.1.0"
val hadoopVersion = "2.7.3"
val hbaseVersion = "1.3.1"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.commons" % "commons-csv" % "1.2" % "provided",
  "org.apache.hadoop" % "hadoop-hdfs" % "2.5.2" % "provided",
  "org.apache.hbase" % "hbase-spark" % "2.0.0-alpha-1" % "provided",
  "org.apache.hbase" % "hbase-client" % hbaseVersion,
  "org.apache.hadoop" % "hadoop-common" % hadoopVersion % "provided",
  "org.apache.hbase" % "hbase-common" % hbaseVersion,
  "org.apache.hbase" % "hbase-server" % hbaseVersion % "provided",
  "org.apache.hbase" % "hbase" % hbaseVersion
)

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
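As an aside, here is a hedged variant of that merge strategy (not the asker's setup): discarding everything under META-INF also drops java.util.ServiceLoader registration files, which some Hadoop and HBase components rely on at runtime, so concatenating those entries is slightly safer:

assemblyMergeStrategy in assembly := {
  // Keep ServiceLoader registrations by concatenating them across jars.
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat
  // Discard the rest of META-INF (manifests, signatures).
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}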

Once the uber jar is built, I can see that HBaseContext.class really is inside it, so I don't understand why the class can't be found at runtime.

Any ideas/pointers?

(I also tried defining the class path via spark.driver.extraClassPath and the like, but that didn't work either.)

Answer


Take a look at this post on troubleshooting NoClassDefFoundError. I can't speak to the build.sbt since I use Maven, but the dependencies look fine to me.
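One thing worth double-checking, though (a sketch under the assumption that the cluster supplies Spark and Hadoop but not HBase): sbt-assembly leaves "provided"-scoped dependencies out of the uber jar, so the HBase artifacts marked "provided" in the build.sbt above would be missing at runtime even though the job compiles:

// Hedged sketch: drop "provided" from HBase artifacts that must travel inside
// the uber jar. Spark supplies Hadoop classes at runtime, but not HBase or
// the hbase-spark connector.
libraryDependencies ++= Seq(
  "org.apache.hbase" % "hbase-spark" % "2.0.0-alpha-1",  // was "provided"
  "org.apache.hbase" % "hbase-server" % hbaseVersion     // was "provided"
)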