I have set up a Hadoop multi-node cluster with a master and a slave node, and I have also configured SSH for the cluster. I can connect to the master node, but passwordless SSH to the slave does not work: when I run start-dfs.sh on the master, I cannot connect to the slave node and the script stalls at the line shown below.
Log:
[email protected]:~$ start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-namenode-master.out
[email protected]'s password: master: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-datanode-master.out
(I pressed Enter here)
slave: Connection closed by 192.168.0.2
master: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-secondarynamenode-master.out
jobtracker running as process 10396. Stop it first.
[email protected]'s password: master: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-HNname-tasktracker-master.out
slave: Permission denied, please try again.
[email protected]'s password:
When I enter the slave's password, the connection is closed.
I have already tried the following, on both the master & slave nodes, with no result:
- formatting the NameNode
- overriding the default HADOOP_LOG_DIR as described in this post
Yes, that should fix the issue, because passwordless SSH is clearly not set up correctly –
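A minimal sketch of how passwordless SSH is usually set up from the master to the slave, which matches what the comment above suggests is missing. The user name `hadoopnode` and the host name `slave` are placeholders, not values from the question; substitute the actual Hadoop user and slave address (e.g. 192.168.0.2).

```shell
#!/bin/sh
# Placeholders: replace with your actual Hadoop user and slave host.
SLAVE_USER=hadoopnode
SLAVE_HOST=slave

# 1. On the master, generate an RSA key pair with an empty passphrase
#    (-N ""), if one does not exist yet.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q

# 2. Install the public key into the slave's authorized_keys
#    (this asks for the slave's password one last time).
ssh-copy-id "$SLAVE_USER@$SLAVE_HOST"

# 3. Verify: this must print the slave's hostname WITHOUT a password prompt.
#    Only then will start-dfs.sh / start-all.sh be able to reach the slave.
ssh "$SLAVE_USER@$SLAVE_HOST" hostname
```

If step 3 still prompts for a password, check that `~/.ssh` on the slave is mode 700 and `~/.ssh/authorized_keys` is mode 600, since sshd refuses keys with looser permissions.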