
I have a 6-node cluster running the Cloudera 5.0 beta of Cloudera Hadoop. I am not able to run hadoop fs commands, and at the same time HBase is not able to create a directory on HDFS.

I am not able to view the files and folders of HDFS using the command

sudo -u hdfs hadoop fs -ls /

Its output shows the files and folders of the local Linux directory instead of the HDFS files and folders.

The NameNode web UI, however, does show the files and folders.

Creating a folder on HDFS also fails with an error:

sudo -u hdfs hadoop fs -mkdir /test 
mkdir: `/test': Input/output error 

Because of this error, HBase does not start and shuts down with the following error:

Unhandled exception. Starting shutdown. 
java.io.IOException: Exception in makeDirOnFileSystem 
at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136) 
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:352) 
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:134) 
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:119) 
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:536) 
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:396) 
at java.lang.Thread.run(Thread.java:662) 
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224) 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204) 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4846) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4828) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4802) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3130) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3094) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3075) 
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419) 
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453) 
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748) 
at java.security.AccessController.doPrivileged(Native Method) 
at javax.security.auth.Subject.doAs(Subject.java:396) 
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408) 
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746) 

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:513) 
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) 
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) 
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153) 
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122) 
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545) 
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913) 
at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129) 
... 6 more 
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224) 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204) 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4846) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4828) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4802) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3130) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3094) 
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3075) 
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419) 
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453) 
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752) 
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748) 
at java.security.AccessController.doPrivileged(Native Method) 
at javax.security.auth.Subject.doAs(Subject.java:396) 
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408) 
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746) 

at org.apache.hadoop.ipc.Client.call(Client.java:1238) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) 
at $Proxy27.mkdirs(Unknown Source) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
at java.lang.reflect.Method.invoke(Method.java:597) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) 
at $Proxy27.mkdirs(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:426) 
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2151) 
... 10 more 

Thanks in advance.

Answers

2

It looks like the hadoop fs command is not picking up the NameNode address from your core-site.xml. Hadoop client code generally defaults to the local file system when no NameNode is configured.
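
A quick way to confirm that (a sketch, assuming the hdfs client script is on your PATH) is to ask the client which default file system it actually resolves:

# Print the effective fs.defaultFS as the client sees it.
# If this prints file:/// instead of hdfs://<namenode-host>:8020,
# the client is silently falling back to the local file system,
# which matches the listing behaviour described in the question.
hdfs getconf -confKey fs.defaultFS

If it prints file:///, then the output of hadoop fs -ls / was simply your local root directory.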

If you are running the command from a node in the cluster that is not the NameNode, you may have to tell CM to deploy the client configuration.

If you are running on a machine outside the cluster, you will have to set up the configuration manually and make sure the core-site.xml file can be found somewhere on the Java classpath.
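
For that case, a minimal sketch of the manual setup (the directory below is an assumption; point it at wherever you copied the cluster's client configuration, including core-site.xml and hdfs-site.xml):

# Assumed location of a copy of the cluster's client configuration.
export HADOOP_CONF_DIR=/path/to/cluster-client-conf

# With that configuration on the classpath, the listing should now
# come from HDFS rather than the local file system.
hadoop fs -ls /

Listing does not require the hdfs superuser, so a plain hadoop fs -ls / is enough here; note that sudo normally resets the environment, so an exported HADOOP_CONF_DIR would not be passed along automatically if you run the command via sudo -u hdfs.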

+0

All of the configuration files are managed by Cloudera Manager. Do you think it is a good idea to edit those files by hand just to run hadoop fs commands? –

+0

If the machine you are using is not managed by Cloudera Manager, edit the files yourself. Otherwise, use the deploy command linked above. – climbage

+0

Thanks, it worked. I was able to resolve the error, and the ls command now works fine. –

3

Either change the configuration in core-site.xml and edit the fs.default.name property as follows:

<property> 
    <name>fs.default.name</name> 
    <value>hdfs://target-namenode:54310</value> 
</property> 

Or run the command like this:

For Cloudera:

sudo -u hdfs hadoop fs -ls hdfs://<hadoop-master-ip>:8020/ 

For Apache Hadoop:

bin/hadoop fs -ls hdfs://<hadoop-master-ip>:9000/ 

In the same way, you can run any other hadoop fs command.
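
With the NameNode address spelled out, the failing command from the question can be retried the same way (a sketch, reusing the <hadoop-master-ip> placeholder from above):

# Create the test directory against the NameNode explicitly; run as
# the hdfs superuser because / is owned by hdfs:supergroup.
sudo -u hdfs hadoop fs -mkdir hdfs://<hadoop-master-ip>:8020/test
sudo -u hdfs hadoop fs -ls hdfs://<hadoop-master-ip>:8020/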

+1

Excellent answer. Specifying hdfs://<hadoop-master-ip>:8020 explicitly is very handy when trying to debug, especially in the Cloudera case! –

+0

I am using Apache Hadoop 2.7.2 and the default port is '8020'; it looks like they have adopted the Cloudera convention. –