2012-05-28 95 views
4

When I start hadoopnode1 using start-all.sh, it successfully starts the services on the master and the slave (see the jps command output for the slave below). But when I try to see the live nodes in the admin screen, the slave node does not show up. Also, the hadoop fs -ls / command runs perfectly from the master, but from the slave it fails with the error below. I am trying to add a data node to the hadoop cluster.

@hadoopnode2:~/hadoop-0.20.2/conf$ hadoop fs -ls /
12/05/28 01:14:20 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 0 time(s). 
12/05/28 01:14:21 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 1 time(s). 
12/05/28 01:14:22 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 2 time(s). 
12/05/28 01:14:23 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 3 time(s). 
. 
. 
. 
12/05/28 01:14:29 INFO ipc.Client: Retrying connect to server: hadoopnode1/192.168.1.120:8020. Already tried 10 time(s). 

From the error message it looks like the slave (hadoopnode2) is not able to find/connect to the master node (hadoopnode1).

What am I missing?

Here are the settings from the master and slave nodes. P.S. - The master and slave run the same versions of Linux and Hadoop, and SSH works fine, because I can start the slave from the master node.

core-site.xml, hdfs-site.xml, and mapred-site.xml are also set up identically on the master (hadoopnode1) and the slave (hadoopnode2).

OS - Ubuntu 10; Hadoop version -

[email protected]:~/hadoop-0.20.2/conf$ hadoop version 
Hadoop 0.20.2 
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707 
Compiled by chrisdo on Fri Feb 19 08:07:34 UTC 2010 

-- master (hadoopnode1)

[email protected]:~/hadoop-0.20.2/conf$ uname -a 
Linux hadoopnode1 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:35:26 UTC 2012 i686 GNU/Linux 

[email protected]:~/hadoop-0.20.2/conf$ jps 
9923 Jps 
7555 NameNode 
8133 TaskTracker 
7897 SecondaryNameNode 
7728 DataNode 
7971 JobTracker 

masters -> hadoopnode1 
slaves -> hadoopnode1 
hadoopnode2 

-- slave (hadoopnode2)

[email protected]:~/hadoop-0.20.2/conf$ uname -a 
Linux hadoopnode2 2.6.35-32-generic #67-Ubuntu SMP Mon Mar 5 19:35:26 UTC 2012 i686 GNU/Linux 

[email protected]:~/hadoop-0.20.2/conf$ jps 
1959 DataNode 
2631 Jps 
2108 TaskTracker 

masters -> hadoopnode1 

core-site.xml 
[email protected]:~/hadoop-0.20.2/conf$ cat core-site.xml 
<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 

<!-- Put site-specific property overrides in this file. --> 

<configuration> 
     <property> 
       <name>hadoop.tmp.dir</name> 
       <value>/var/tmp/hadoop/hadoop-${user.name}</value> 
       <description>A base for other temp directories</description> 
     </property> 

     <property> 
       <name>fs.default.name</name> 
       <value>hdfs://hadoopnode1:8020</value> 
       <description>The name of the default file system</description> 
     </property> 

</configuration> 

[email protected]:~/hadoop-0.20.2/conf$ cat mapred-site.xml 
<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 

<!-- Put site-specific property overrides in this file. --> 

<configuration> 
     <property> 
       <name>mapred.job.tracker</name> 
       <value>hadoopnode1:8021</value> 
       <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map</description> 
     </property> 
</configuration> 

[email protected]:~/hadoop-0.20.2/conf$ cat hdfs-site.xml 
<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 

<!-- Put site-specific property overrides in this file. --> 

<configuration> 
     <property> 
       <name>dfs.replication</name> 
       <value>2</value> 
       <description>Default block replication</description> 
     </property> 
</configuration> 

Answers

0

Check the NameNode and DataNode logs (they should be in $HADOOP_HOME/logs/). The most likely problem is that the namenode and datanode namespaceIDs do not match. Delete hadoop.tmp.dir from all nodes, format the namenode again ($HADOOP_HOME/bin/hadoop namenode -format), and then try again.
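A minimal sketch of that recovery, assuming hadoop.tmp.dir is the /var/tmp/hadoop/hadoop-${user.name} path shown in the core-site.xml above; note that this wipes all existing HDFS data:

# stop the cluster from the master
$HADOOP_HOME/bin/stop-all.sh
# on every node, remove the hadoop.tmp.dir contents (path taken from core-site.xml above; adjust if yours differs)
rm -rf /var/tmp/hadoop/hadoop-*
# reformat the namenode (on the master only)
$HADOOP_HOME/bin/hadoop namenode -format
# start everything again and re-check the daemons
$HADOOP_HOME/bin/start-all.sh
jps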

+0

Looks like the problem is with name resolution; please see my answer for what I checked. Thanks – Sandeep

0

I think that on slave 2, it should be talking to the same port 8020, instead of listening on 8021.
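A generic way to double-check which daemon ports are actually reachable (these commands are just a suggestion, not part of the original answer):

# on the master (hadoopnode1): confirm the NameNode (8020) and JobTracker (8021) are listening
sudo netstat -tlnp | grep -E ':8020|:8021'
# from the slave (hadoopnode2): "Connection refused" here means nothing is listening or a firewall is blocking it
telnet hadoopnode1 8020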

+0

I have configured the servers as given in an installation guide. Port 8020 is for HDFS and 8021 is for the JobTracker. – Sandeep

0

Add the new node's hostname to the slaves file, and start the DataNode & TaskTracker on the new node.
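A rough sketch of those two steps (paths assume the $HADOOP_HOME layout already used in this question):

# on the master: make sure the new slave is listed in conf/slaves
echo "hadoopnode2" >> $HADOOP_HOME/conf/slaves
# on the new node: start the daemons individually
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker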

0

There are actually two errors in your case.

can't connect to hadoop master node from slave 

That is a network problem. Test it with: curl 192.168.1.120:8020.

The normal response is: curl: (52) Empty reply from server. In my case I got a "could not resolve host" error, so just check your firewall settings.

data node down: 

That is a Hadoop problem. Raze2dust's approach is fine. Another method, if you see an "Incompatible namespaceIDs" error in the log:

Stop Hadoop, edit the namespaceID value in the datanode's current/VERSION file to match the current namenode's value, and then start Hadoop.

You can check the available data nodes at any time with: hadoop fsck /
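The two checks from this answer, condensed into commands (the IP is the one from the logs above):

# from the slave: "Empty reply from server" means the NameNode answered; "Connection refused" or "No route to host" points to network/firewall
curl 192.168.1.120:8020
# from the master: the fsck report includes the number of live data nodes
hadoop fsck /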

+0

Looks like the problem is with name resolution, please see my answer. Thanks – Sandeep

2

It looks like the problem is not only with the slave, but also with the master node (hadoopnode1). When I check the logs on the master I see the same error: it can't connect to hadoopnode1 from the master node (hadoopnode1) itself.

Here are the logs - I changed the loopback address to 127.0.0.1:

2012-05-30 20:54:31,760 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopnode1/127.0.0.1:8020. Already tried 0 time(s). 
2012-05-30 20:54:32,761 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopnode1/127.0.0.1:8020. Already tried 1 time(s). 
2012-05-30 20:54:33,764 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopnode1/127.0.0.1:8020. Already tried 2 time(s). 
2012-05-30 20:54:34,764 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopnode1/127.0.0.1:8020. Already tried 3 time(s). 
. 
. 
hadoopnode1/127.0.0.1:8020. Already tried 8 time(s). 
2012-05-30 20:54:40,782 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopnode1/127.0.0.1:8020. Already tried 9 time(s). 
2012-05-30 20:54:40,784 INFO org.apache.hadoop.mapred.JobTracker: problem cleaning system directory: null 
java.net.ConnectException: Call to hadoopnode1/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused 
     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767) 

Here is my /etc/hosts file:

192.168.1.120 hadoopnode1  # Added by NetworkManager 
127.0.0.1  localhost.localdomain localhost hadoopnode1 
::1  hadoopnode1  localhost6.localdomain6 localhost6 
192.168.1.121 hadoopnode2 
# The following lines are desirable for IPv6 capable hosts 
::1  localhost ip6-localhost ip6-loopback 
fe00::0 ip6-localnet 
ff00::0 ip6-mcastprefix 
ff02::1 ip6-allnodes 
ff02::2 ip6-allrouters 
ff02::3 ip6-allhosts 

I am really confused about how this is supposed to work. I have been trying to create this cluster for the last 15 days. Any help is appreciated.

@Raze2dust - I have deleted all the .tmp files, but now the problem looks like something else. I think it is more of a name resolution problem.

@William Yao - curl is not installed, but I am able to ping the servers from each other, and SSH is also working.

1

In the web GUI you can see the number of nodes your cluster has. If you don't see what you expect, then make sure the /etc/hosts file on the master contains only the hosts (for a 2-node cluster), like:

192.168.0.1 master 
192.168.0.2 slave 

If you see any 127.0.x.x IPs there, comment them out, because Hadoop will see them as the host(s) first. I had the above problem and I solved it that way. Hope this helps.
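Applied to the cluster in this question, the /etc/hosts on both nodes would look roughly like this (a sketch of the advice above; the important part is that the 127.0.0.1 line no longer contains hadoopnode1):

127.0.0.1     localhost.localdomain localhost
192.168.1.120 hadoopnode1
192.168.1.121 hadoopnode2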

1

If running jps with sudo on the master does not show the daemons it should, here is what you need to do:

1. Restart Hadoop
2. Go to /app/hadoop/tmp/dfs/name/current
3. Open VERSION (i.e. by vim VERSION)
4. Record the namespaceID
5. Go to /app/hadoop/tmp/dfs/data/current
6. Open VERSION (i.e. by vim VERSION)
7. Replace the namespaceID with the namespaceID you recorded in step 4.

This should work. Good luck.
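A minimal shell version of the steps above (the /app/hadoop/tmp paths come from this answer; on the cluster in this question they would live under the hadoop.tmp.dir set in core-site.xml, so adjust accordingly):

# stop the cluster before touching the VERSION files
$HADOOP_HOME/bin/stop-all.sh
# read the namenode's namespaceID
grep namespaceID /app/hadoop/tmp/dfs/name/current/VERSION
# on each datanode, write that same value into its VERSION file
# (replace NEW_ID with the number printed by the grep above)
sed -i 's/^namespaceID=.*/namespaceID=NEW_ID/' /app/hadoop/tmp/dfs/data/current/VERSION
# bring the cluster back up
$HADOOP_HOME/bin/start-all.sh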