2015-09-28
1

When I try to copy a file from my local directory to HDFS, I get the following error: cannot write the file to HDFS because the name node is in safe mode.

[[email protected] ~]$ hadoop fs -copyFromLocal hello.txt /user/cloudera/my_data 


copyFromLocal: Cannot create file/user/cloudera/my_data/hello.txt._COPYING_. Name node is in safe mode. 

Then I ran:

[[email protected] ~]$ su 
Password: 
[[email protected] cloudera]# hdfs dfsadmin -safemode leave 
safemode: Access denied for user root. Superuser privilege is required 

and then ran the copy command again to store the file in HDFS; I got the same error.

I tried once more:

[[email protected] ~]$ su - root 
Password: 
[[email protected] ~]# hdfs dfsadmin -safemode leave 

and I got the same error. I am using the Cloudera distribution of Hadoop.

+0

Thanks Maximillian for correcting the formatting. – user1574688

Answers

0

The NameNode sometimes stays in safe mode after a restart. If you wait a while (how long depends on the number of blocks), the NameNode will leave safe mode automatically.

You can force it out with the hdfs dfsadmin -safemode leave command. Only the HDFS superuser may run this command, so switch to the hdfs user before executing it:

su hdfs
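Putting the steps together, a minimal sequence might look like the following (a sketch, assuming a Cloudera quickstart-style setup where the `hdfs` system user is the HDFS superuser and `sudo` is available; these commands need a running cluster):

```shell
# Check whether the name node is currently in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get

# Force the name node out of safe mode (requires the HDFS superuser)
sudo -u hdfs hdfs dfsadmin -safemode leave

# Retry the copy as the regular user
hadoop fs -copyFromLocal hello.txt /user/cloudera/my_data
```

Note that `su root` does not help here: safe mode is controlled by the HDFS superuser (the user the NameNode runs as, typically `hdfs`), not by the OS root account, which is why the question's attempt failed with "Superuser privilege is required".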

0

Try

hadoop dfsadmin -safemode leave  

That should work...

+0

Please add some explanation rather than just code... – eirikir

2

From the Apache documentation here:

During start up the NameNode loads the file system state from the fsimage and the edits log file. It then waits for DataNodes to report their blocks so that it does not prematurely start replicating the blocks though enough replicas already exist in the cluster. During this time NameNode stays in Safemode. Safemode for the NameNode is essentially a read-only mode for the HDFS cluster, where it does not allow any modifications to file system or blocks. Normally the NameNode leaves Safemode automatically after the DataNodes have reported that most file system blocks are available. If required, HDFS could be placed in Safemode explicitly using bin/hadoop dfsadmin -safemode command.

In most cases this completes within a reasonable time after the HDFS processes are started. However, you can force HDFS out of safe mode with the following command:

hadoop dfsadmin -safemode leave 

It is strongly recommended to run fsck afterwards to recover from any inconsistent state.
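For example, a read-only consistency check of the whole namespace could look like this (a sketch; it assumes a running cluster and a user with permission to read the paths being checked):

```shell
# Report the health of the entire HDFS namespace (does not modify anything)
hdfs fsck /

# List only the files that have corrupt or missing blocks, if any
hdfs fsck / -list-corruptfileblocks
```

Unlike a traditional filesystem fsck, `hdfs fsck` only reports problems; it does not repair blocks by itself.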