2015-12-09

I have set up my fully distributed Hadoop cluster, an automatic-failover HA cluster using ZooKeeper and QJM, by following the guide below:

http://hashprompt.blogspot.in/2015/01/fully-distributed-hadoop-cluster.html

hduser@ha-nn01:/opt/zookeeper-3.4.7/bin$ ./zkServer.sh start 
ZooKeeper JMX enabled by default 
Using config: /opt/zookeeper-3.4.7/bin/../conf/zoo.cfg 
Starting zookeeper ... STARTED 

hduser@ha-nn01:/opt/zookeeper-3.4.7/bin$ jps 
2919 Jps 
2895 QuorumPeerMain 

Everything works fine up to this point. But when I run the following command:

hduser@ha-nn01:/opt/hadoop-2.6.0$ bin/hdfs zkfc –formatZK 

I get the following error message, and I have no idea how to fix it.

Error: 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA> 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.13.0-24-generic 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:user.name=hduser 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hduser 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/hadoop-2.6.0 
15/12/09 04:31:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ha-nn01:2181,ha-nn02:2181,ha-nn03:2181 sessionTimeout=5000 watcher[email protected]deea7f 
15/12/09 04:31:07 INFO zookeeper.ClientCnxn: Opening socket connection to server ha-nn01/192.168.71.138:2181. Will not attempt to authenticate using SASL (unknown error) 
15/12/09 04:31:07 INFO zookeeper.ClientCnxn: Socket connection established to ha-nn01/192.168.71.138:2181, initiating session 
15/12/09 04:31:08 INFO zookeeper.ClientCnxn: Session establishment complete on server ha-nn01/192.168.71.138:2181, sessionid = 0x1518612d0000000, negotiated timeout = 5000 
Usage: java zkfc [ -formatZK [-force] [-nonInteractive] ] 

15/12/09 04:31:08 INFO ha.ActiveStandbyElector: Session connected. 
15/12/09 04:31:08 INFO zookeeper.ZooKeeper: Session: 0x1518612d0000000 closed 
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: –formatZK 
    at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:249) 
    at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:212) 
    at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61) 
    at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:170) 
    at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:166) 
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412) 
    at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:166) 
    at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:180) 
15/12/09 04:31:08 INFO zookeeper.ClientCnxn: EventThread shut down 

Setup (hosts):

192.168.71.138 ha-nn01 
192.168.71.139 ha-nn02 
192.168.71.140 ha-nn03 
192.168.71.141 ha-dn01 
192.168.71.142 ha-dn02 
192.168.71.143 ha-dn03  
192.168.71.144 ha-client 

Note:

hdfs-site.xml configuration:

<property> 
    <name>dfs.replication</name> 
    <value>3</value> 
</property> 
<property> 
    <name>dfs.name.dir</name> 
    <value>file:///hdfs/name</value> 
</property> 

<property> 
    <name>dfs.data.dir</name> 
    <value>file:///hdfs/data</value> 
</property> 

<property> 
    <name>dfs.permissions</name> 
    <value>false</value> 
</property> 

<property> 
    <name>dfs.nameservices</name> 
    <value>auto-ha</value> 
</property> 

<property> 
    <name>dfs.ha.namenodes.auto-ha</name> 
    <value>nn01,nn02</value> 
</property> 

<property> 
    <name>dfs.namenode.rpc-address.auto-ha.nn01</name> 
    <value>ha-nn01:8020</value> 
</property> 

<property> 
    <name>dfs.namenode.http-address.auto-ha.nn01</name> 
    <value>ha-nn01:50070</value> 
</property> 

<property> 
    <name>dfs.namenode.rpc-address.auto-ha.nn02</name> 
    <value>ha-nn02:8020</value> 
</property> 

<property> 
    <name>dfs.namenode.http-address.auto-ha.nn02</name> 
    <value>ha-nn02:50070</value> 
</property> 

<property> 
    <name>dfs.namenode.shared.edits.dir</name> 
    <value>qjournal://ha-nn01:8485;ha-nn02:8485;ha-nn03:8485/auto-ha</value> 
</property> 

<property> 
    <name>dfs.journalnode.edits.dir</name> 
    <value>/hdfs/journalnode</value> 
</property> 

<property> 
    <name>dfs.ha.fencing.methods</name> 
    <value>sshfence</value> 
</property> 

<property> 
    <name>dfs.ha.fencing.ssh.private-key-files</name> 
    <value>/home/hduser/.ssh/id_rsa</value> 
</property> 

<property> 
    <name>dfs.ha.automatic-failover.enabled.auto-ha</name> 
    <value>true</value> 
</property> 

<property> 
    <name>ha.zookeeper.quorum</name> 
    <value>ha-nn01.hadoop.lab:2181,ha-nn02.hadoop.lab:2181,ha-nn03.hadoop.lab:2181</value> 
</property> 
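For reference, an auto-failover HA nameservice also relies on a couple of properties not shown above; the following is a sketch based on the standard Hadoop 2.6 HA setup, assuming the same `auto-ha` nameservice name (the `fs.defaultFS` entry belongs in core-site.xml):

```xml
<!-- core-site.xml: clients address the nameservice, not a single NameNode -->
<property> 
    <name>fs.defaultFS</name> 
    <value>hdfs://auto-ha</value> 
</property> 

<!-- hdfs-site.xml: how clients locate the currently active NameNode -->
<property> 
    <name>dfs.client.failover.proxy.provider.auto-ha</name> 
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value> 
</property> 
```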

Answer


If you copied the command `hdfs zkfc -formatZK` from Microsoft Word, the '-' is sometimes converted into a longer dash (an en dash, '–'), which the terminal passes through but Hadoop does not understand. That is exactly what the stack trace shows: `Bad argument: –formatZK`. Retype the flag by hand with a plain ASCII hyphen.
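You can confirm the dash is the culprit by dumping the bytes of the flag (a minimal sketch using the POSIX `od` utility):

```shell
# The stack trace complains about "–formatZK": its first character is
# U+2013 (en dash), not the ASCII hyphen-minus 0x2d that Hadoop expects.
printf '%s' '–formatZK' | od -An -tx1   # starts with e2 80 93 (en dash)
printf '%s' '-formatZK' | od -An -tx1   # starts with 2d (plain hyphen)

# Retyped with a real hyphen, the command is accepted:
# bin/hdfs zkfc -formatZK
```

The same check catches curly quotes and other "smart" punctuation that word processors substitute into commands.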