I am getting an error when appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0): java.io.IOException: Failed to add a datanode. The sequence that triggers the error is:
- Create a file on the file system (DistributedFileSystem). OK
- Append to the file that was just created. ERROR
OutputStream stream = fileSystem.append(filePath);
stream.write(fileContents);
which then throws:
Exception in thread "main" java.io.IOException: Failed to add a datanode.
User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
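For reference, a minimal self-contained version of the code I am running looks roughly like the following (the NameNode URI, the path, and the payload are placeholders for my actual values):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder NameNode URI

        FileSystem fs = FileSystem.get(conf);
        Path filePath = new Path("/tmp/append-test.txt"); // placeholder path

        // Step 1: create the file -- this succeeds.
        FSDataOutputStream out = fs.create(filePath, true);
        out.write("first line\n".getBytes("UTF-8"));
        out.close();

        // Step 2: append to the same file -- this throws the IOException above.
        FSDataOutputStream appendOut = fs.append(filePath);
        appendOut.write("appended line\n".getBytes("UTF-8"));
        appendOut.close();

        fs.close();
    }
}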
Some relevant HDFS configs:
dfs.replication
set to 2
dfs.client.block.write.replace-datanode-on-failure.enable
set to true
dfs.client.block.write.replace-datanode-on-failure.policy
set to DEFAULT
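For what it's worth, the exception message says the feature can be turned off via dfs.client.block.write.replace-datanode-on-failure.policy. A sketch of what that would look like set programmatically on the client (NEVER is the documented value that skips datanode replacement; whether disabling it is actually the right fix is exactly what I am unsure about):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import java.io.IOException;

public class AppendClientConf {
    static FileSystem open() throws IOException {
        Configuration conf = new Configuration();
        // These are client-side properties, so they have to be set by (or be
        // on the classpath of) the appending client, not just the cluster.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // NEVER skips the "add a replacement datanode" step that fails above;
        // DEFAULT (what I have now) attempts it when the pipeline shrinks.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        return FileSystem.get(conf);
    }
}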
Any ideas? Thanks!