2011-03-24 27 views

I'm trying to install Hadoop on a non-Cloudera Ubuntu test image. Everything seems to go fine until I run ./bin/start-all.sh. The namenode never comes up, so I can't even run hadoop fs -ls to connect to the filesystem. How can I fix this Hadoop filesystem installation error?

Here is the NameNode log:

2011-03-24 11:38:00,256 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310 
2011-03-24 11:38:00,257 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:88) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:312) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:293) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:224) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:306) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1006) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015) 

2011-03-24 11:38:00,258 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at Brash/192.168.1.5 
************************************************************/ 

The directory tree has already been chmod -R 755, and I even went as far as creating the directory with mkdir -p to make sure it exists:

hadoop@Brash:/usr/lib/hadoop$ ls -la /usr/local/hadoop-datastore/hadoop-hadoop/dfs/ 
total 16 
drwxr-xr-x 4 hadoop hadoop 4096 2011-03-24 11:41 . 
drwxr-xr-x 4 hadoop hadoop 4096 2011-03-24 11:31 .. 
drwxr-xr-x 2 hadoop hadoop 4096 2011-03-24 11:31 data 
drwxr-xr-x 2 hadoop hadoop 4096 2011-03-24 11:41 name 

Here is my conf/hdfs-site.xml:

hadoop@Brash:/usr/lib/hadoop$ cat conf/hdfs-site.xml 
<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration> 
<property> 
    <name>dfs.replication</name> 
    <value>1</value> 
    <description>Default block replication. 
    The actual number of replications can be specified when the file is created. 
    The default is used if replication is not specified in create time. 
    </description> 
</property> 
</configuration> 
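For reference, the /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name path in the error is normally derived from hadoop.tmp.dir in conf/core-site.xml (the NameNode stores its image under ${hadoop.tmp.dir}/dfs/name by default). That file is not shown in the question, so the following is only a sketch of what a setup producing that path would typically look like:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>hadoop.tmp.dir</name>
    <!-- Assumed value, inferred from the path in the error message;
         ${user.name} expands to "hadoop" for the hadoop user. -->
    <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
    <description>Base for other temporary directories.</description>
</property>
</configuration>
```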

Answer


You should never create that directory yourself; Hadoop will create it on its own. Did you forget to format the namenode? Delete the existing directory, then reformat the namenode (bin/hadoop namenode -format) and try again.
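The steps above can be sketched as the following command sequence. The paths assume the layout from the question (run from /usr/lib/hadoop as the hadoop user); adjust them to your install:

```shell
# Stop any half-started daemons first
./bin/stop-all.sh

# Remove the inconsistent storage directories; the format step recreates them
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name

# Reformat the NameNode (answer "Y" if asked to confirm)
./bin/hadoop namenode -format

# Start the cluster again and verify the NameNode came up
./bin/start-all.sh
jps                # should now list a NameNode process
./bin/hadoop fs -ls /
```

Note that rm -rf here destroys any data already in HDFS, which is fine for a fresh test install but not for a cluster holding real data.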


I had already run that, but I must have messed up the order of operations. This seems to have cleared everything up. Thanks! – buley 2011-03-24 19:08:21


See also http://code.google.com/p/hadoop-clusternet/wiki/TroubleshootingHadoop – 2013-02-14 19:46:02
