2013-03-21

I have Hadoop set up and running on my machine, and I am trying to run the PEGASUS peta-scale graph mining package. When I do, I get a lot of warnings, part of which is shown below, and the Hadoop program cannot find the installed binary:

13/03/21 12:58:42 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hadi_edge/catepillar_star.edge could only be replicated to 0 nodes, instead of 1 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696) 
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:601) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382) 

    at org.apache.hadoop.ipc.Client.call(Client.java:1066) 
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225) 
    at $Proxy1.addBlock(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:601) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59) 
    at $Proxy1.addBlock(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826) 

13/03/21 12:58:42 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null 
13/03/21 12:58:42 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/hadi_edge/catepillar_star.edge" - Aborting... 
put: java.io.IOException: File /user/hadoop/hadi_edge/catepillar_star.edge could only be replicated to 0 nodes, instead of 1 
13/03/21 12:58:42 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/hadi_edge/catepillar_star.edge : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hadi_edge/catepillar_star.edge could only be replicated to 0 nodes, instead of 1 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696) 
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:601) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382) 

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/hadi_edge/catepillar_star.edge could only be replicated to 0 nodes, instead of 1 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696) 
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:601) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382) 

    at org.apache.hadoop.ipc.Client.call(Client.java:1066) 
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225) 
    at $Proxy1.addBlock(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:601) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59) 
    at $Proxy1.addBlock(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586) 
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826) 

I thought this was a permissions issue, so I tried `sudo make`, and then I got the following error:

Hadoop is not installed in the system. 
Please install Hadoop and make sure the hadoop binary is accessible. 
make: *** [demo_hadi] Error 127 

Hadoop is already installed; here is the output of `jps`:

$ jps 
7814 JobTracker 
8061 TaskTracker 
3799 FsShell 
7718 SecondaryNameNode 
9155 FsShell 
8881 RunJar 
7235 NameNode 
6339 RunJar 
9236 Jps 

Thanks for your time; I would really like to know what is wrong!

Answer

As can be seen from the `jps` output, the DataNode is not running on your system (which is also the likely cause of "could only be replicated to 0 nodes, instead of 1"); this happens from time to time.

Workaround: stop all Hadoop daemons and restart them, then check whether all five daemons are running (NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker).

Try running some sample code to verify that the framework is working properly.

If the above does not help, reboot the machine.

If the problem still persists, reformat the NameNode (though this is not good advice; keep it as a last resort).
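To make the "check all five daemons" step concrete, here is a small sketch (a hypothetical helper, not part of PEGASUS or Hadoop) that parses `jps` output and reports which of the five expected daemons of a pseudo-distributed Hadoop 1.x setup are missing; the parsing assumes the `<pid> <ClassName>` format shown in the question:

```python
# Sketch: find which of the five expected Hadoop 1.x daemons are
# missing from `jps` output. Hypothetical helper -- in practice you
# would feed it the output of
# subprocess.run(["jps"], capture_output=True, text=True).stdout
EXPECTED = {"NameNode", "SecondaryNameNode", "DataNode",
            "JobTracker", "TaskTracker"}

def missing_daemons(jps_output: str) -> list:
    """Return the expected daemons absent from `jps` output, sorted."""
    running = {parts[1] for line in jps_output.splitlines()
               if len(parts := line.split()) == 2}
    return sorted(EXPECTED - running)

# The `jps` listing from the question: DataNode is absent.
sample = """7814 JobTracker
8061 TaskTracker
7718 SecondaryNameNode
7235 NameNode
9236 Jps"""
print(missing_daemons(sample))  # -> ['DataNode']
```

Running this against the asker's listing above confirms that only the DataNode is down, which matches the "replicated to 0 nodes" symptom: HDFS has no DataNode to place the block on.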


I always format the namenode before I restart – Harshit 2013-03-21 07:56:01


I tried ./stop-all.sh and then ./start-all.sh, and it produces the same stack trace again – Harshit 2013-03-21 08:05:26


The problem, as I said, is that the DataNode is not up. Please check whether you have installed Hadoop correctly and whether the conf files have been modified correctly. – Gargi 2013-03-21 08:07:43
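On the configuration point: a minimal pseudo-distributed Hadoop 1.x setup looks roughly like the sketch below (port 9000 and a replication factor of 1 are the common single-node choices, but treat the exact values as assumptions to adapt):

```xml
<!-- conf/core-site.xml : where clients find the NameNode -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml : single node, so keep one replica per block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

It is also worth checking the DataNode log under `logs/`: if the NameNode has been reformatted while old DataNode data remains (and the first comment above suggests the asker formats it on every restart), the DataNode typically refuses to start with a namespaceID mismatch error, and clearing the directory configured in `dfs.data.dir` (which deletes the HDFS data on that node) lets it register again.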
