2016-01-24

I am trying to set up a Hadoop multi-node cluster, but it shows 0 live datanodes and my HDFS reports 0 bytes of configured capacity.

However, the NodeManager daemon is running on the datanode.

Hosts:

masterhost1 172.31.100.3 # namenode (also acts as the secondary namenode)

datahost1 172.31.100.4 #datanode
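A quick way to see what the NameNode itself thinks of the cluster (assuming the `hdfs` CLI is on the PATH of the master) is `dfsadmin -report`; with this symptom it reports zero live datanodes and zero configured capacity:

```shell
# Ask the NameNode for its view of the cluster.
# "Live datanodes (0)" confirms no DataNode has registered successfully.
hdfs dfsadmin -report

# Or filter just the relevant lines:
hdfs dfsadmin -report | grep -E 'Live datanodes|Configured Capacity'
```

The same information is shown on the NameNode web UI (port 50070 on Hadoop 2.x) under "Datanodes".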

The datanode log is below:

```
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc865b490b9a6260e9611a5b8633cab885b3d247; compiled by jenkins on 2015-12-18T01:19Z
STARTUP_MSG:   java = 1.8.0_71
************************************************************/
2016-01-24 03:53:28,368 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-01-24 03:53:28,862 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:36,454 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-01-24 03:53:37,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-01-24 03:53:37,132 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is datahost1
2016-01-24 03:53:37,142 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-01-24 03:53:37,195 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwidth is 1048576 bytes/s
2016-01-24 03:53:37,197 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2016-01-24 03:53:47,331 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-01-24 03:53:47,375 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-01-24 03:53:47,395 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-01-24 03:53:47,400 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-01-24 03:53:47,404 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-01-24 03:53:47,405 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-01-24 03:53:47,559 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-01-24 03:53:47,566 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2016-01-24 03:53:47,566 INFO org.mortbay.log: jetty-6.1.26
2016-01-24 03:53:48,565 INFO org.mortbay.log: Started [email protected]:50075
2016-01-24 03:53:49,200 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop
2016-01-24 03:53:49,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = sudo
2016-01-24 03:53:59,319 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-01-24 03:53:59,354 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-01-24 03:53:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-01-24 03:53:59,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-01-24 03:53:59,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices:
2016-01-24 03:53:59,491 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop_tmp/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-01-24 03:53:59,499 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to masterhost1/172.31.100.3:9000 starting to offer service
2016-01-24 03:53:59,503 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-01-24 03:53:59,504 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-01-24 03:54:00,805 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:01,808 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:02,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:03,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-01-24 03:54:04,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: masterhost1/172.31.100.3:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
```
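The WARN lines about the datanode path are unrelated to the registration failure, but they can be silenced by giving the storage directory as a `file://` URI in `hdfs-site.xml` (a sketch using the stock Hadoop 2.x property name and the path from the log):

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
```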


I think I have the same problem as you. I have 3 slave nodes, and when I run a put, it reports that no datanodes are running. –

Answer


The problem is that the NameNode is not receiving the incoming registration information from the datanodes. This is caused by an IPv6 issue: disable IPv6 on the master node, then use netstat to check the listening ports, and you can resolve the above.
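A minimal sketch of that fix, assuming a Linux master with `sysctl` available (the commands need root, and the HDFS daemons must be restarted afterwards):

```shell
# Disable IPv6 system-wide so the NameNode RPC port binds to IPv4.
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Alternatively, make only the Hadoop JVMs prefer IPv4 (add to hadoop-env.sh):
# export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Verify that port 9000 is now listening on the IPv4 address the
# datanodes dial (masterhost1/172.31.100.3:9000 in the log above):
netstat -tlnp | grep 9000
# A tcp6 entry such as ":::9000" means the NameNode is still bound to IPv6.
```

To persist the sysctl settings across reboots, put the same two keys in /etc/sysctl.conf and run `sysctl -p`.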