
I'm having trouble starting a Spark cluster with a master and a worker: I can't get the Apache Spark standalone cluster up. I downloaded and installed Hadoop 2.7.3 and Spark 2.0.0 on Ubuntu 16.04 LTS. I created a conf/slaves file containing the IPs of my slaves, and this is my spark-env.sh:

#!/usr/bin/env bash

export SPARK_DIST_CLASSPATH=$(hadoop classpath) 


export SPARK_WORKER_CORES=2 

export SPARK_MASTER_IP=192.168.1.6 
export SPARK_LOCAL_IP=192.168.1.6 

export SPARK_YARN_USER_ENV="JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre" 
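For reference, Spark's conf/slaves file simply lists one worker hostname or IP per line. The address below is a placeholder standing in for the worker VM, since its IP is not given in the question:

# conf/slaves -- one worker host per line
# 192.168.1.7 is a hypothetical worker address; use the real worker VM's IP
192.168.1.7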

I start the master with start-master.sh and everything works fine. The problems appear when I try to start the worker.

I tried:

(1) - start-slave.sh spark://192.168.1.6:7077 (from worker) 
(2) - start-slaves.sh (from master) 
(3) - ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.1.6:7077 (from worker) 

With (1) and (2) the worker apparently starts, but it does not appear in the master's web UI at :8080. With (3) it throws this exception:

16/08/31 14:17:03 INFO worker.Worker: Connecting to master master:7077... 
16/08/31 14:17:03 WARN worker.Worker: Failed to connect to master master:7077 
org.apache.spark.SparkException: Exception thrown in awaitResult 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75) 
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167) 
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83) 
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88) 
    at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:96) 
    at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1$$anon$1.run(Worker.scala:216) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.IOException: Failed to connect to master/192.168.1.6:7077 
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228) 
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179) 
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197) 
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191) 
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187) 
    ... 4 more 
Caused by: java.net.ConnectException: Connection refused: master/192.168.1.6:7077 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224) 
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) 
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) 
    ... 1 more 
16/08/31 14:17:40 ERROR worker.Worker: All masters are unresponsive! Giving up. 

The master and the worker run on VMware virtual machines with bridged networking, both hosted on the same Windows 10 machine.

I have also turned off the firewall.

What should I do?

Thanks in advance.


Check whether your master can reach your worker and vice versa. – Ravikumar
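A rough way to run that check, assuming the usual ping, netcat (nc) and iproute2 (ss) tools are available on the VMs; the IP and port come from the question, the rest is only a sketch:

# from the worker VM: is the master reachable, and is port 7077 accepting connections?
ping -c 3 192.168.1.6
nc -zv 192.168.1.6 7077

# on the master VM: is the Spark master actually listening on 192.168.1.6:7077?
ss -ltn | grep 7077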

Answer


In the log:

16/08/31 14:17:03 INFO worker.Worker: Connecting to master master:7077... 

you can see that the worker is trying to connect to master:7077.

Make sure the hostname master resolves to the expected IP (192.168.1.6).

You can check the hostname mapping in the /etc/hosts file.
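As a rough sketch, assuming a worker entry (the hostname worker1 and the IP 192.168.1.7 are placeholders), the mapping on both VMs might look like this:

# /etc/hosts (on the master and on the worker)
127.0.0.1     localhost
192.168.1.6   master      # must map to the real NIC address, not a loopback entry
192.168.1.7   worker1     # hypothetical worker entry

# verify what the name actually resolves to
getent hosts master

If master instead resolves to a loopback address such as 127.0.1.1 on the master VM (a common Ubuntu default), the Spark master may bind only to loopback, and remote workers will then see exactly the Connection refused shown in the log.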


The hostname resolves to the correct IP. I have already tried both the hostname and the IP. Thanks for your answer. –
