
Spark 1.6.2 with Hadoop 2.7.3 throws an IllegalStateException when running the Spark wordcount example in standalone cluster mode.

When I run the Spark wordcount example with the command:

spark-submit --class org.apache.spark.examples.JavaWordCount --master spark://IP:7077 spark-examples-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar file.txt output 

I get the following error:

INFO cluster.SparkDeploySchedulerBackend: Executor app-20161125052710-0012/10 removed: java.io.IOException: Failed to create directory /usr/hdp/2.5.0.0-1245/spark/work/app-20161125052710-0012/10  
ERROR spark.SparkContext: Error initializing SparkContext. 
    java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext. 
    This stopped SparkContext was created at: 

    org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) 
    org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:44) 
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    java.lang.reflect.Method.invoke(Method.java:606) 
    org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
    org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
    org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
    org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
    org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

    The currently active SparkContext was created at: 

    (No active SparkContext.) 

     at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:106) 
     at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1602) 
     at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2203) 
     at org.apache.spark.SparkContext.<init>(SparkContext.scala:579) 
     at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) 
     at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:44) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:606) 
     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
    16/11/25 04:24:48 INFO spark.SparkContext: SparkContext already stopped. 

In the Spark master web UI, I can see two workers in the ALIVE state.


Can you restart the Spark cluster? Does that solve the problem? – Bhavesh


It looks like 'Failed to create directory /usr/hdp/2.5.0.0-1245/spark/work' is the root cause. After granting permissions on the work path, it works fine. –

Answer


It seems that Failed to create directory /usr/hdp/2.5.0.0-1245/spark/work is the root cause: the workers cannot write their executor directories, so every executor fails to launch, and Spark eventually stops the SparkContext, which produces the IllegalStateException. After granting write permission on the /usr/hdp/2.5.0.0-1245/spark/work path, it works correctly.
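As a minimal sketch, one way to grant that permission is to make the work directory writable by the account the worker daemon runs as. The user name spark below is an assumption; replace it with whatever account actually runs the workers on your nodes:

# Run on every worker node; assumes the worker daemon runs as the (hypothetical) user "spark"
sudo mkdir -p /usr/hdp/2.5.0.0-1245/spark/work
sudo chown -R spark:spark /usr/hdp/2.5.0.0-1245/spark/work
sudo chmod -R 775 /usr/hdp/2.5.0.0-1245/spark/work

After changing the ownership, restart the workers so newly launched executors pick up the now-writable directory.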