Adding a data node to a Hadoop cluster
When I start hadoopnode1 using start-all.sh, it successfully starts the services on the master and the slave (see the jps command output for the slave). But when I look at the live nodes in the admin screen, the slave node does not show up. Even the hadoop fs -ls / command runs perfectly from the master, but from the slave it shows:
@hadoopnode2:~/hadoop-0.20.2/conf$ hadoop fs -ls /
12/05/28 01:14:20 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 0 time(s).
12/05/28 01:14:21 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 1 time(s).
12/05/28 01:14:22 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 2 time(s).
12/05/28 01:14:23 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 3 time(s).
.
.
.
12/05/28 01:14:29 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 10 time(s).
From the error messages it looks like the slave (hadoopnode2) is not able to find / connect to the master node (hadoopnode1).
What am I missing?
Here are the settings of the master and slave nodes. P.S. - The master and slave run the same versions of Linux, Hadoop and SSH, and SSH is working fine, since I can start the slave from the master node.
core-site.xml, hdfs-site.xml and mapred-site.xml are also set identically on the master (hadoopnode1) and the slave (hadoopnode2).
OS - Ubuntu 10. Hadoop version -
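(A minimal sketch of checks that would rule out basic connectivity problems between the slave and the master; these are standard Linux and Hadoop 0.20 commands, not output from this cluster, and the hostname and 8020 port are taken from the config shown below:)

# On the slave (hadoopnode2): can the master's hostname be resolved and reached?
ping -c 3 hadoopnode1
# Is anything answering on the NameNode RPC port?
telnet hadoopnode1 8020
# On the master (hadoopnode1): which DataNodes does the NameNode actually see?
hadoop dfsadmin -report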
~/hadoop-0.20.2/conf$ hadoop version
Hadoop 0.20.2
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707
Compiled by chrisdo on Fri Feb 19 08:07:34 UTC 2010
- Master (hadoopnode1)
@hadoopnode1:~/hadoop-0.20.2/conf$ uname -a
Linux hadoopnode1 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:35:26 UTC 2012 i686 GNU/Linux
@hadoopnode1:~/hadoop-0.20.2/conf$ jps
9923 Jps
7555 NameNode
8133 TaskTracker
7897 SecondaryNameNode
7728 DataNode
7971 JobTracker
masters -> hadoopnode1
slaves -> hadoopnode1
hadoopnode2
- Slave (hadoopnode2)
@hadoopnode2:~/hadoop-0.20.2/conf$ uname -a
Linux hadoopnode2 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:35:26 UTC 2012 i686 GNU/Linux
@hadoopnode2:~/hadoop-0.20.2/conf$ jps
1959 DataNode
2631 Jps
2108 TaskTracker
masters -> hadoopnode1
core-site.xml
~/hadoop-0.20.2/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/tmp/hadoop/hadoop-${user.name}</value>
<description>A base for other temp directories</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoopnode1:8020</value>
<description>The name of the default file system</description>
</property>
</configuration>
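(One thing worth verifying against this core-site.xml, as a suggestion rather than something from the original setup: the NameNode must actually be listening on the address the slave resolves for hadoopnode1, i.e. 192.168.1.120, and not only on a loopback address.)

# On the master (hadoopnode1): check which address the NameNode bound port 8020 to.
# 0.0.0.0:8020 or 192.168.1.120:8020 is fine; 127.0.0.1:8020 or 127.0.1.1:8020 means
# remote DataNodes cannot connect.
sudo netstat -tlnp | grep 8020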
~/hadoop-0.20.2/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hadoopnode1:8021</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map</description>
</property>
</configuration>
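(Similarly, the TaskTracker on the slave needs to reach the JobTracker port configured above; a quick check, assumed rather than taken from the cluster:)

# On the slave (hadoopnode2): verify the JobTracker port is reachable.
telnet hadoopnode1 8021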
~/hadoop-0.20.2/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Default block replication</description>
</property>
</configuration>
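(Beyond the config files, the DataNode log on the slave normally states exactly why it cannot register with the NameNode. A sketch of where to look; the file name assumes the usual hadoop-<user>-datanode-<host>.log naming of Hadoop 0.20, so adjust it to the actual setup:)

# On the slave (hadoopnode2): inspect the DataNode log for connection/registration errors.
tail -n 100 ~/hadoop-0.20.2/logs/hadoop-*-datanode-hadoopnode2.log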
Looks like the problem is in name resolution; please see my comment about which of your services to check. Best of luck with your work. Thanks – Sandeep
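(If name resolution is indeed the culprit, the usual suspect on Ubuntu is a 127.0.1.1 entry for the node's own hostname in /etc/hosts, which makes the NameNode bind to loopback. A rough sketch of what /etc/hosts should look like on both nodes; 192.168.1.120 is taken from the log above, while the slave's address is only an example:)

# /etc/hosts on both hadoopnode1 and hadoopnode2
# (remove or comment out any "127.0.1.1 hadoopnodeX" line)
127.0.0.1      localhost
192.168.1.120  hadoopnode1
192.168.1.121  hadoopnode2   # example address; use the slave's real IP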