
I am trying to run the example from here (the Learning Spark book), but I get the following error: Apache Spark: ERROR Executor -> Iterator

16/10/07 01:15:26 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0) 
java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 

I start the job with: $SPARK_HOME/bin/spark-submit --class com.oreilly.learningsparkexamples.mini.java.WordCount ./target/learning-spark-mini-example-0.0.1.jar ./README.md ./wordcounts

Any ideas why this happens?

Full log:

mini-complete-example$ $SPARK_HOME/bin/spark-submit --class com.oreilly.learningsparkexamples.mini.java.WordCount ./target/learning-spark-mini-example-0.0.1.jar ./README.md ./wordcounts 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
16/10/07 01:15:23 INFO SparkContext: Running Spark version 2.0.0 
16/10/07 01:15:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/10/07 01:15:24 INFO SecurityManager: Changing view acls to: eDS 
16/10/07 01:15:24 INFO SecurityManager: Changing modify acls to: eDS 
16/10/07 01:15:24 INFO SecurityManager: Changing view acls groups to: 
16/10/07 01:15:24 INFO SecurityManager: Changing modify acls groups to: 
16/10/07 01:15:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(eDS); groups with view permissions: Set(); users with modify permissions: Set(eDS); groups with modify permissions: Set() 
16/10/07 01:15:24 INFO Utils: Successfully started service 'sparkDriver' on port 63851. 
16/10/07 01:15:24 INFO SparkEnv: Registering MapOutputTracker 
16/10/07 01:15:24 INFO SparkEnv: Registering BlockManagerMaster 
16/10/07 01:15:24 INFO DiskBlockManager: Created local directory at /private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/blockmgr-0fb2af5a-8662-4d78-88c8-8e0608f35ff3 
16/10/07 01:15:24 INFO MemoryStore: MemoryStore started with capacity 366.3 MB 
16/10/07 01:15:24 INFO SparkEnv: Registering OutputCommitCoordinator 
16/10/07 01:15:24 INFO Utils: Successfully started service 'SparkUI' on port 4040. 
16/10/07 01:15:24 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.17:4040 
16/10/07 01:15:24 INFO SparkContext: Added JAR file:/Users/eDS/dev/learning-spark/mini-complete-example/./target/learning-spark-mini-example-0.0.1.jar at spark://192.168.1.17:63851/jars/learning-spark-mini-example-0.0.1.jar with timestamp 1475795724857 
16/10/07 01:15:24 INFO Executor: Starting executor ID driver on host localhost 
16/10/07 01:15:24 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 63852. 
16/10/07 01:15:24 INFO NettyBlockTransferService: Server created on 192.168.1.17:63852 
16/10/07 01:15:24 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.17, 63852) 
16/10/07 01:15:24 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.17:63852 with 366.3 MB RAM, BlockManagerId(driver, 192.168.1.17, 63852) 
16/10/07 01:15:24 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.17, 63852) 
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 145.5 KB, free 366.2 MB) 
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 16.3 KB, free 366.1 MB) 
16/10/07 01:15:25 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.17:63852 (size: 16.3 KB, free: 366.3 MB) 
16/10/07 01:15:25 INFO SparkContext: Created broadcast 0 from textFile at WordCount.java:31 
16/10/07 01:15:25 INFO FileInputFormat: Total input paths to process : 1 
16/10/07 01:15:25 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id 
16/10/07 01:15:25 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id 
16/10/07 01:15:25 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap 
16/10/07 01:15:25 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition 
16/10/07 01:15:25 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id 
16/10/07 01:15:25 INFO FileOutputCommitter: File Output Committer Algorithm version is 1 
16/10/07 01:15:25 INFO SparkContext: Starting job: saveAsTextFile at WordCount.java:46 
16/10/07 01:15:25 INFO DAGScheduler: Registering RDD 3 (mapToPair at WordCount.java:39) 
16/10/07 01:15:25 INFO DAGScheduler: Got job 0 (saveAsTextFile at WordCount.java:46) with 2 output partitions 
16/10/07 01:15:25 INFO DAGScheduler: Final stage: ResultStage 1 (saveAsTextFile at WordCount.java:46) 
16/10/07 01:15:25 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0) 
16/10/07 01:15:25 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0) 
16/10/07 01:15:25 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at WordCount.java:39), which has no missing parents 
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.9 KB, free 366.1 MB) 
16/10/07 01:15:25 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.7 KB, free 366.1 MB) 
16/10/07 01:15:25 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.17:63852 (size: 2.7 KB, free: 366.3 MB) 
16/10/07 01:15:25 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1012 
16/10/07 01:15:25 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at WordCount.java:39) 
16/10/07 01:15:25 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks 
16/10/07 01:15:25 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0, PROCESS_LOCAL, 5479 bytes) 
16/10/07 01:15:25 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1, PROCESS_LOCAL, 5479 bytes) 
16/10/07 01:15:26 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 
16/10/07 01:15:26 INFO Executor: Running task 1.0 in stage 0.0 (TID 1) 
16/10/07 01:15:26 INFO Executor: Fetching spark://192.168.1.17:63851/jars/learning-spark-mini-example-0.0.1.jar with timestamp 1475795724857 
16/10/07 01:15:26 INFO TransportClientFactory: Successfully created connection to /192.168.1.17:63851 after 73 ms (0 ms spent in bootstraps) 
16/10/07 01:15:26 INFO Utils: Fetching spark://192.168.1.17:63851/jars/learning-spark-mini-example-0.0.1.jar to /private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/spark-5adda737-293c-483e-bdbe-f8fa7c171211/userFiles-2a2cf77d-5794-4006-a125-5df94550cbf8/fetchFileTemp6160431711224595115.tmp 
16/10/07 01:15:26 INFO Executor: Adding file:/private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/spark-5adda737-293c-483e-bdbe-f8fa7c171211/userFiles-2a2cf77d-5794-4006-a125-5df94550cbf8/learning-spark-mini-example-0.0.1.jar to class loader 
16/10/07 01:15:26 INFO HadoopRDD: Input split: file:/Users/eDS/dev/learning-spark/mini-complete-example/README.md:66+66 
16/10/07 01:15:26 INFO HadoopRDD: Input split: file:/Users/eDS/dev/learning-spark/mini-complete-example/README.md:0+66 
16/10/07 01:15:26 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0) 
java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192) 
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
16/10/07 01:15:26 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1385 bytes result sent to driver 
16/10/07 01:15:26 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 307 ms on localhost (1/2) 
16/10/07 01:15:26 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main] 
java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192) 
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
16/10/07 01:15:26 INFO SparkContext: Invoking stop() from shutdown hook 
16/10/07 01:15:26 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192) 
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

16/10/07 01:15:26 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job 
16/10/07 01:15:26 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/10/07 01:15:26 INFO TaskSchedulerImpl: Cancelling stage 0 
16/10/07 01:15:26 INFO SparkUI: Stopped Spark web UI at http://192.168.1.17:4040 
16/10/07 01:15:26 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at WordCount.java:39) failed in 0.364 s 
16/10/07 01:15:26 INFO DAGScheduler: Job 0 failed: saveAsTextFile at WordCount.java:46, took 0.448155 s 
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192) 
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) 
    at scala.Option.foreach(Option.scala:257) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1904) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1219) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1161) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1161) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1064) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1030) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1030) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1030) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:956) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:956) 
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:956) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:955) 
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1440) 
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1419) 
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1419) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358) 
    at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1419) 
    at org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:549) 
    at org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45) 
    at com.oreilly.learningsparkexamples.mini.java.WordCount.main(WordCount.java:46) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
Caused by: java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192) 
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
16/10/07 01:15:26 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(0,1475795726327,JobFailed(org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: com.oreilly.learningsparkexamples.mini.java.WordCount$1.call(Ljava/lang/Object;)Ljava/util/Iterator; 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124) 
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434) 
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) 
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) 
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192) 
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) 
    at org.apache.spark.scheduler.Task.run(Task.scala:85) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace:)) 
16/10/07 01:15:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 
16/10/07 01:15:26 INFO MemoryStore: MemoryStore cleared 
16/10/07 01:15:26 INFO BlockManager: BlockManager stopped 
16/10/07 01:15:26 INFO BlockManagerMaster: BlockManagerMaster stopped 
16/10/07 01:15:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 
16/10/07 01:15:26 INFO SparkContext: Successfully stopped SparkContext 
16/10/07 01:15:26 INFO ShutdownHookManager: Shutdown hook called 
16/10/07 01:15:26 INFO ShutdownHookManager: Deleting directory /private/var/folders/yw/zl5hc321387g3sz2fg3l01980000gq/T/spark-5adda737-293c-483e-bdbe-f8fa7c171211 
mini-complete-example$ 

Where are you running this? Locally? Which version of Spark is installed? –


Have you placed the jar file at this location? /target/learning-spark-mini-example-0.0.1.jar –


Sorry, yes, locally. Spark 2.0.0, the build for Hadoop 2.7.2. – miro

Answers


You get this error because the version of the Spark binaries in $SPARK_HOME is not the same as the version in your POM. If you follow the pom.xml given in the book, you end up with this dependency:

<dependency> <!-- Spark dependency --> 
    <groupId>org.apache.spark</groupId> 
    <artifactId>spark-core_2.10</artifactId> 
    <version>1.2.0</version> 
    <scope>provided</scope> 
</dependency> 

That is version 1.2.0, while your $SPARK_HOME contains the 2.0.0 binaries. Update the version in your pom.xml to match the installed Spark and rebuild the jar.
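
Concretely, the AbstractMethodError comes from an API change between those versions: in Spark 1.x, FlatMapFunction.call() returned an Iterable, whereas in Spark 2.x it must return an Iterator, so a class compiled against 1.2.0 no longer implements the abstract method the 2.0.0 runtime invokes. The sketch below shows a Spark 2.x-compatible version of the word-splitting step; it is a minimal reconstruction, and the class and variable names are illustrative rather than the book's exact source:

import java.util.Arrays; 
import java.util.Iterator; 

import org.apache.spark.SparkConf; 
import org.apache.spark.api.java.JavaRDD; 
import org.apache.spark.api.java.JavaSparkContext; 
import org.apache.spark.api.java.function.FlatMapFunction; 

public class WordCountFix { 
    public static void main(String[] args) { 
        SparkConf conf = new SparkConf().setAppName("wordCount"); 
        JavaSparkContext sc = new JavaSparkContext(conf); 
        JavaRDD<String> input = sc.textFile(args[0]); 

        // Spark 2.x signature: call() must return Iterator<String>. 
        // Under Spark 1.x this method returned Iterable<String>, 
        // which is exactly the mismatch behind the AbstractMethodError. 
        JavaRDD<String> words = input.flatMap( 
            new FlatMapFunction<String, String>() { 
                @Override 
                public Iterator<String> call(String line) { 
                    return Arrays.asList(line.split(" ")).iterator(); 
                } 
            }); 

        words.countByValue() 
             .forEach((word, count) -> System.out.println(word + ": " + count)); 
        sc.stop(); 
    } 
} 

After aligning the pom.xml version with the installed Spark and adjusting call() accordingly, rebuild the jar (e.g. with mvn clean package) before re-running spark-submit.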


I ran into the same error:

ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)

java.lang.AbstractMethodError: ...


First, check the Spark version in the following path:

%SPARK_HOME%/jars/spark-core_x.xx-y.y.y.jar

Then modify the pom.xml file accordingly, as follows:

<dependency> <!-- Spark dependency --> 
    <groupId>org.apache.spark</groupId> 
    <artifactId>spark-core_x.xx</artifactId> 
    <version>y.y.y</version> 
    <scope>provided</scope> 
</dependency> 
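
For example, with the questioner's setup (Spark 2.0.0, which by default ships with Scala 2.11), the filled-in dependency would presumably read:

<dependency> <!-- Spark dependency --> 
    <groupId>org.apache.spark</groupId> 
    <artifactId>spark-core_2.11</artifactId> 
    <version>2.0.0</version> 
    <scope>provided</scope> 
</dependency> 

Note that matching the version alone may not be enough here: as the sketch after the first answer shows, the FlatMapFunction signature also changed in 2.x, so the example code must be updated before it will compile against the new dependency.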

Hope this helps someone in the same situation as me.