2016-01-06

When I run start-dfs.sh I get the error below. It looks like I need to tell Hadoop to use a different SSH port, since that is the port I need when connecting to localhost. In other words, the following works: ssh -p 2020 localhost. How do I configure Hadoop to use the non-default port? The error is: "0.0.0.0: ssh: connect to host 0.0.0.0 port 22: Connection refused"

[Wed Jan 06 16:57:34 [email protected]~]# start-dfs.sh 
16/01/06 16:57:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
Starting namenodes on [localhost] 
localhost: namenode running as process 85236. Stop it first. 
localhost: datanode running as process 85397. Stop it first. 
Starting secondary namenodes [0.0.0.0] 
0.0.0.0: ssh: connect to host 0.0.0.0 port 22: Connection refused 
16/01/06 16:57:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 

core-site.xml:

<configuration> 
    <property> 
     <name>fs.default.name</name> 
      <value>hdfs://localhost:9000</value> 
    </property> 
</configuration> 

hdfs-site.xml:

<configuration> 
    <property> 
     <name>dfs.replication</name> 
      <value>1</value> 
    </property> 

    <property> 
     <name>dfs.namenode.name.dir</name> 
     <value>file:///hadoop/hdfs/namenode</value> 
    </property> 

    <property> 
     <name>dfs.datanode.data.dir</name> 
     <value>file:///hadoop/hdfs/datanode</value> 
    </property> 
</configuration> 

Answer

If the sshd running on your Hadoop cluster nodes listens on a non-standard port, you can tell the Hadoop scripts to open their ssh connections on that port. In fact, you can customize any option passed to the ssh command.

This is controlled by an environment variable named HADOOP_SSH_OPTS. You can edit your hadoop-env.sh file and define it there. (By default, this environment variable is undefined.)

For example:

export HADOOP_SSH_OPTS="-p 2020" 
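A minimal sketch of applying the change (the config path is an assumption; adjust HADOOP_CONF_DIR to wherever your install keeps hadoop-env.sh, e.g. etc/hadoop under the Hadoop home):

```shell
# Assumed config location -- adjust to your installation.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$HOME/hadoop/etc/hadoop}
mkdir -p "$HADOOP_CONF_DIR"

# Make every ssh launched by start-dfs.sh / slaves.sh use port 2020.
echo 'export HADOOP_SSH_OPTS="-p 2020"' >> "$HADOOP_CONF_DIR/hadoop-env.sh"

# Verify the variable is defined once the file is sourced,
# the same way the Hadoop startup scripts source it.
. "$HADOOP_CONF_DIR/hadoop-env.sh"
echo "$HADOOP_SSH_OPTS"
```

After this, rerunning start-dfs.sh should connect to localhost (and 0.0.0.0 for the secondary namenode) on port 2020 instead of 22.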