2015-05-21

Hello, I am trying to set up HBase (hbase-0.98.12-hadoop2) on Hadoop (hadoop-2.7.0).
Hadoop is running at localhost:50070 and works fine, but HBase on Hadoop will not connect in distributed mode.

When I start ./start-hbase.sh I get this error in the log file.

My hbase-site.xml is shown below:

<configuration> 
    <property> 
    <name>hbase.rootdir</name> 
    <value>hdfs://localhost:9000/hbase</value> 
    </property> 

    <property> 
    <name>hbase.cluster.distributed</name> 
    <value>true</value> 
    </property> 

    <property> 
    <name>hbase.zookeeper.quorum</name> 
    <value>localhost</value> 
    </property> 

<!-- <property> 
    <name>dfs.replication</name> 
    <value>1</value> 
    </property>--> 

    <property> 
    <name>hbase.zookeeper.property.clientPort</name> 
    <value>2181</value> 
    </property> 
</configuration> 

The HBase master log shows:

2015-05-22 11:17:30,468 INFO [master:bredgelinux-desktop:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS 
    2015-05-22 11:17:31,021 WARN [Thread-13] hdfs.DFSClient: DataStreamer Exception 
    org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. 
     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482) 
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619) 
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:415) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) 
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033) 

     at org.apache.hadoop.ipc.Client.call(Client.java:1347) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1300) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
     at com.sun.proxy.$Proxy10.addBlock(Unknown Source) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:606) 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
     at com.sun.proxy.$Proxy10.addBlock(Unknown Source) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330) 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226) 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078) 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514) 
    2015-05-22 11:17:31,023 DEBUG [master:bredgelinux-desktop:60000] util.FSUtils: Unable to create version file at hdfs://localhost:9000/hbase, retrying 
    org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. 
     (same stack trace as above) 
    2015-05-22 11:17:41,116 WARN [Thread-16] hdfs.DFSClient: DataStreamer Exception 
    org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. 
     (same stack trace as above) 
Thanks in advance.

I am using OpenJDK. Here is the output of sudo netstat -plten | grep java (alongside the jps result):

bredgelinux@bredgelinux-desktop:~$ sudo netstat -plten | grep java
tcp 0 0 0.0.0.0:8042  0.0.0.0:* LISTEN 0 29563 3356/java
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 0 27575 3063/java
tcp 0 0 0.0.0.0:46766 0.0.0.0:* LISTEN 0 29555 3356/java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 0 25124 2723/java
tcp 0 0 0.0.0.0:8088  0.0.0.0:* LISTEN 0 29579 3224/java
tcp 0 0 0.0.0.0:13562 0.0.0.0:* LISTEN 0 29562 3356/java
tcp 0 0 0.0.0.0:8030  0.0.0.0:* LISTEN 0 31542 3224/java
tcp 0 0 0.0.0.0:8031  0.0.0.0:* LISTEN 0 29571 3224/java
tcp 0 0 0.0.0.0:8032  0.0.0.0:* LISTEN 0 31546 3224/java
tcp 0 0 0.0.0.0:8033  0.0.0.0:* LISTEN 0 29581 3224/java
tcp 0 0 0.0.0.0:8040  0.0.0.0:* LISTEN 0 31536 3356/java
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 0 28260 2723/java

The datanode log file:

2015-05-22 14:21:33,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service 
2015-05-22 14:21:33,985 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting 
2015-05-22 14:21:33,985 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting 
2015-05-22 14:21:35,073 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2015-05-22 14:21:36,073 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2015-05-22 14:21:36,391 INFO org.apache.hadoop.hdfs.server.common.Storage: DataNode version: -56 and NameNode layout version: -60 
2015-05-22 14:21:36,443 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop_store/hdfs/datanode/in_use.lock acquired by nodename [email protected] 
2015-05-22 14:21:36,457 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting. 
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop_store/hdfs/datanode: namenode clusterID = CID-654b4574-5929-4de9-ac12-f47de7f9fd75; datanode clusterID = CID-f70f0a9a-da72-4c70-b453-35227ceca6ce 
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646) 
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320) 
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403) 
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311) 
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276) 
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314) 
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220) 
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828) 
    at java.lang.Thread.run(Thread.java:745) 
2015-05-22 14:21:36,459 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 
2015-05-22 14:21:36,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned) 
2015-05-22 14:21:38,461 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode 
2015-05-22 14:21:38,474 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0 
2015-05-22 14:21:38,476 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down DataNode at bredgelinux-desktop/127.0.1.1 
************************************************************/ 

Answers

0

HBase is now running on Hadoop. The 'datanode' and 'namenode' directories were not accessible, which is probably why Hadoop could not use them. I ran chmod 777 on those directories, formatted the namenode, and restarted the system. My HBase master is now running on port 61000. Thanks to everyone who replied.
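The permissions part of that fix can be sketched as follows. The real paths on this cluster were /usr/local/hadoop_store/hdfs/{namenode,datanode} (taken from the datanode log above); the sketch uses scratch directories, and chmod 777 is the blunt fix this answer used — chown-ing the directories to the user that runs Hadoop would be the tidier alternative.

```shell
# Illustrate the permissions fix on scratch dirs; on the real cluster the
# paths were /usr/local/hadoop_store/hdfs/{namenode,datanode}.
mkdir -p /tmp/hdfs_demo/namenode /tmp/hdfs_demo/datanode
chmod -R 777 /tmp/hdfs_demo               # world-writable, as in the answer
stat -c '%a' /tmp/hdfs_demo/namenode      # prints: 777

# Against the real directories you would then re-create HDFS metadata:
#   hdfs namenode -format
#   start-dfs.sh && start-hbase.sh
```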

1

java.net.ConnectException: Call From bredgelinux-desktop/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused

This error occurs if your hostname resolves to a loopback IP address. Follow these steps to correct it:

Step 1: Remove the line 127.0.1.1 from /etc/hosts

Step 2: Restart your hadoop and hbase processes.
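Step 1 can be done with sed. Sketched here on a scratch copy of the hosts file (the contents mirror the Debian/Ubuntu default layout); apply the same sed to /etc/hosts itself with sudo once the result looks right.

```shell
# Build a scratch hosts file with the usual Debian/Ubuntu loopback alias.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1	localhost
127.0.1.1	bredgelinux-desktop
EOF

# Step 1: delete the 127.0.1.1 loopback-alias line.
sed -i '/^127\.0\.1\.1/d' /tmp/hosts.demo

cat /tmp/hosts.demo      # only the 127.0.0.1 line remains
```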

+0

Try changing 'localhost' to 'hbase-master-hostname' or 'hbase-master-ip' in 'hbase-site.xml' on all nodes, then restart all hadoop and hbase processes. Try to avoid using localhost in the configuration. –
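For instance, both places where localhost appears in the hbase-site.xml above would change along these lines ('hbase-master-hostname' is the placeholder from this comment, standing in for the real master hostname):

```xml
<property> 
    <name>hbase.rootdir</name> 
    <value>hdfs://hbase-master-hostname:9000/hbase</value> 
</property> 

<property> 
    <name>hbase.zookeeper.quorum</name> 
    <value>hbase-master-hostname</value> 
</property> 
```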

+0

I tried replacing localhost with the IP address, but that does not seem to be the problem. –

+0

Post the new logs. Also post the output of 'jps' from all the nodes. –

1

My guess (because I have seen similar errors in datanode logs before) is that you deleted the datanode data directory and then restarted it.

Try bringing HDFS down (datanodes and namenode), deleting the namenode and datanode data directories, then formatting the namenode and starting the cluster.
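A dry-run sketch of that sequence, using the storage directories from the datanode log above (adjust to your own dfs.namenode.name.dir / dfs.datanode.data.dir). Note the rm and format steps destroy all data in HDFS — that is the point here, since the datanode's clusterID no longer matches the namenode's.

```shell
# With RUN=echo every step is only printed; set RUN= (empty) to execute.
RUN=${RUN:-echo}

recover_hdfs() {
    $RUN stop-hbase.sh
    $RUN stop-dfs.sh
    # Wipe the stale storage so namenode and datanode get a matching clusterID
    $RUN rm -rf /usr/local/hadoop_store/hdfs/namenode/*
    $RUN rm -rf /usr/local/hadoop_store/hdfs/datanode/*
    $RUN hdfs namenode -format
    $RUN start-dfs.sh
    $RUN start-hbase.sh
}

recover_hdfs        # dry run: prints the seven steps in order
```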

+0

Thank you. I did that and now it is running fine. –