
I can see the Spark cluster at http://ec2-54-186-47-36.us-west-2.compute.amazonaws.com:8080/, which shows that I have two worker nodes and one master node. Running the jps command on my 2 workers and 1 master, I can see that all the services are up. Below is the script I use to initialize the SparkR session (SparkR on a YARN cluster):

if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
    Sys.setenv(SPARK_HOME = "/home/ubuntu/spark")
}
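
For context, sparkR.session is only available after loading the SparkR package that ships with the installation; a minimal sketch, assuming the standard Spark distribution layout under SPARK_HOME:

# Load the SparkR package bundled with the Spark installation
# (assumes the standard layout: $SPARK_HOME/R/lib)
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))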

But whenever I try to initialize the session from RStudio, it fails with the error below. Please advise; without this I cannot get the real benefit of the cluster.

sparkR.session(master = "yarn", deployMode = "cluster",
    sparkConfig = list(spark.driver.memory = "2g"),
    sparkPackages = "com.databricks:spark-csv_2.11:1.1.0")

Launching java with spark-submit command /home/ubuntu/spark/bin/spark-submit --packages com.databricks:spark-csv_2.11:1.1.0 --driver-memory "2g" "--packages" "com.databricks:spark-csv_2.11:1.1.0" "sparkr-shell" /tmp/RtmpkSWHWX/backend_port29310cbc7c6
Ivy Default Cache set to: /home/rstudio/.ivy2/cache
The jars for the packages stored in: /home/rstudio/.ivy2/jars
:: loading settings :: url = jar:file:/home/ubuntu/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
        confs: [default]
        found com.databricks#spark-csv_2.11;1.1.0 in central
        found org.apache.commons#commons-csv;1.1 in central
        found com.univocity#univocity-parsers;1.5.1 in central
:: resolution report :: resolve 441ms :: artifacts dl 24ms
        :: modules in use:
        com.databricks#spark-csv_2.11;1.1.0 from central in [default]
        com.univocity#univocity-parsers;1.5.1 from central in [default]
        org.apache.commons#commons-csv;1.1 from central in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   3   |   0   |   0   |   0   ||   3   |   0   |
        ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
        confs: [default]
        0 artifacts copied, 3 already retrieved (0kB/18ms)


Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
17/09/24 23:15:34 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/24 23:15:42 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:129)
    at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141) 
    at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:86) 
    at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38) 
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) 
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) 
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) 
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) 
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) 
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) 
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244) 
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) 
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) 
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) 
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) 
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) 
    at java.lang.Thread.run(Thread.java:748) 
17/09/24 23:15:42 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered! 
17/09/24 23:15:42 WARN MetricsSystem: Stopping a MetricsSystem that is not running 
17/09/24 23:15:42 ERROR RBackendHandler: createSparkContext on org.apache.spark.api.r.RRDD failed 
Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
    org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master. 
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85) 
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62) 
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149) 
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:500) 
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58) 
    at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:129) 
    at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Metho 

Answer


Interactive Spark shells and sessions, such as RStudio (for R) or Jupyter notebooks, cannot run in cluster mode; you should use deployMode = "client" instead.
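
A minimal sketch of the corrected call, reusing the driver memory and spark-csv package settings from the question:

# deployMode = "client": the driver runs locally (inside the RStudio
# session), while executors still run on the YARN cluster
sparkR.session(master = "yarn", deployMode = "client",
    sparkConfig = list(spark.driver.memory = "2g"),
    sparkPackages = "com.databricks:spark-csv_2.11:1.1.0")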

Here is what happens when trying to run the SparkR shell with --deploy-mode cluster (the situation is effectively the same with RStudio):

$ ./sparkR --master yarn --deploy-mode cluster 
R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch" 
[...] 
Error: Cluster deploy mode is not applicable to Spark shells. 

See this answer for the PySpark case.

Doing so does not mean that you lose the distributed benefits of Spark (i.e. cluster computing) in these sessions; from the docs:

There are two deploy modes that can be used to launch Spark applications on YARN. In cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
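
In other words, in client mode any Spark DataFrame you create is still partitioned across the cluster's executors; only the driver sits on the client. A quick sanity check after the session starts, sketched here with the built-in faithful dataset, is to run a small distributed computation and watch the job appear in the Spark UI:

# Distribute a local R data.frame across the executors
df <- as.DataFrame(faithful)
# Run an aggregation; this executes as a distributed Spark job
head(summarize(groupBy(df, df$waiting), count = n(df$waiting)))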