
Hadoop HA: automatic failover is configured, but the standby NN does not become active until the killed NN is started again

I am using Hadoop 2.6.0-cdh5.6.0 and have configured HA. The cluster shows an active NameNode (NN1) and a standby NameNode (NN2). When I send a kill signal to the active NameNode (NN1), the standby (NN2) does not become active until I start NN1 again; once NN1 is restarted, it comes up in standby state and NN2 becomes active. I have not configured the "ha.zookeeper.session-timeout.ms" parameter, so I assume it defaults to 5 seconds, and I wait at least that long before checking the active and standby NNs.
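
For reference, this is how I check the states, using the nn1/nn2 service IDs defined in my hdfs-site.xml below:

# Query each NameNode's HA state (prints "active" or "standby")
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2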

My core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster/</value>
    </property>
    <property>
        <name>hadoop.proxyuser.mapred.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.mapred.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.17.5.107:2181,172.17.3.88:2181,172.17.5.128:2181</value>
    </property>
</configuration>
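
For automatic failover against this quorum, the HA znode must have been initialized once and a ZKFC daemon must be running next to each NameNode. A minimal check, assuming the CDH packaging's service script names:

# One-time initialization of the HA state znode in ZooKeeper (run on one NameNode)
hdfs zkfc -formatZK
# A ZKFC must run alongside each NameNode, or no automatic failover will ever trigger
service hadoop-hdfs-zkfc start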

My hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/1/dfs/nn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/1/dfs/dn</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>172.17.5.107:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>172.17.3.88:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>172.17.5.107:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>172.17.3.88:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://172.17.5.107:8485;172.17.3.88:8485;172.17.5.128:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/1/dfs/jn</value>
    </property>
</configuration>
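
Note that sshfence only succeeds if the user running the ZKFC can SSH into the other NameNode host with the configured key, and fuser must be installed there (sshfence uses it to kill the old NameNode's RPC port). A quick manual check from NN1, using the addresses above:

# Verify key-based SSH from NN1 to NN2 works non-interactively and fuser is present
ssh -i /root/.ssh/id_rsa root@172.17.3.88 "which fuser"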

My zoo.cfg:

maxClientCnxns=50 
# The number of milliseconds of each tick 
tickTime=2000 
# The number of ticks that the initial 
# synchronization phase can take 
initLimit=10 
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement 
syncLimit=5 
# the directory where the snapshot is stored. 
dataDir=/var/lib/zookeeper 
# the port at which the clients will connect 
clientPort=2181 
# the directory where the transaction logs are stored. 
dataLogDir=/var/lib/zookeeper 
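
To rule out ZooKeeper itself, each server in the quorum can be probed with the four-letter commands, for example:

# Each server should answer "imok", and "stat" reports its mode (leader/follower)
echo ruok | nc 172.17.5.107 2181
echo stat | nc 172.17.5.107 2181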

Answer:

The problem is with sshfence: the user performing the fencing must be able to SSH into the other NameNode. Either grant the hdfs user the required access, or fence as the root user:

<!-- relevant excerpt of hdfs-site.xml -->
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence(root)</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/var/lib/hadoop-hdfs/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/1/dfs/jn</value>
</property>
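
After changing the fencing configuration, restart the ZKFC daemons; the failover path can then be exercised without killing anything, for example:

# Trigger a graceful failover from nn1 to nn2 to confirm fencing works end to end
hdfs haadmin -failover nn1 nn2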