
When I run a MapReduce job, it jumps from the RUNNING state back to PREP. I have looked through the MapReduce logs and found nothing unusual, so I wonder whether this is a YARN configuration problem. I checked mapred-site.xml [2] and the memory sizes look correct: the machine has 16 cores and 64 GB of RAM, and I have given MapReduce roughly 32 GB (<name>yarn.nodemanager.resource.memory-mb</name> <value>32218</value>). Any suggestions for debugging why the job jumps from RUNNING to PREP? The job status [1] and the relevant part of mapred-site.xml [2] are posted below, each followed by a short note.

[1] Job status

Total jobs:1 
        JobId  State   StartTime  UserName   Queue  Priority  UsedContainers RsvdContainers UsedMem   RsvdMem  NeededMem   AM info 
job_1379101056979_0001  PREP  1379101096477   root   default  NORMAL     0    0  0M    0M 
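
For context, the status listing above matches the output format of the mapred job -list command. Assuming a Hadoop 2.x cluster and using the application id that appears in the NodeManager log further down (adjust it to the job being debugged), the following standard YARN CLI commands show what the ResourceManager and its scheduler currently see:

# List all applications known to the ResourceManager, including finished ones
yarn application -list -appStates ALL

# Show state, final status, tracking URL and diagnostics for one application
yarn application -status application_1476697963637_0001

# List the NodeManagers and the memory/vcores each one reports to the scheduler
yarn node -list -all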

[2] mapred-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration> 
<property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> 
<property> <name>mapreduce.jobhistory.done-dir</name> <value>/root/Programs/hadoop/logs/history/done</value> </property> 
<property> <name>mapreduce.jobhistory.intermediate-done-dir</name> <value>/root/Programs/hadoop/logs/history/intermediate-done-dir</value> </property> 
<property> <name>mapreduce.job.reduces</name> <value>4</value> </property> 

<!-- property> <name>yarn.nodemanager.resource.memory-mb</name> <value>8240</value> </property --> 
<property> <name>yarn.nodemanager.resource.memory-mb</name> <value>24240</value> </property> 
<property> <name>yarn.scheduler.minimum-allocation-mb</name> <value>1024</value> </property> 

<!-- property><name>mapreduce.task.files.preserve.failedtasks</name><value>true</value></property> 
<property><name>mapreduce.task.files.preserve.filepattern</name><value>*</value></property --> 

</configuration> 
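
For reference, yarn.nodemanager.resource.memory-mb and yarn.scheduler.minimum-allocation-mb are YARN properties that are normally read from yarn-site.xml rather than mapred-site.xml, while the per-task container sizes come from mapred-site.xml. A minimal sketch of how these settings usually relate (the values are illustrative only, not a recommendation for this cluster):

<!-- yarn-site.xml : resources each NodeManager offers and the scheduler's allocation bounds -->
<property> <name>yarn.nodemanager.resource.memory-mb</name> <value>32218</value> </property>
<property> <name>yarn.scheduler.minimum-allocation-mb</name> <value>1024</value> </property>
<property> <name>yarn.scheduler.maximum-allocation-mb</name> <value>8192</value> </property>

<!-- mapred-site.xml : how much each task container and the MR ApplicationMaster request -->
<property> <name>mapreduce.map.memory.mb</name> <value>2048</value> </property>
<property> <name>mapreduce.reduce.memory.mb</name> <value>4096</value> </property>
<property> <name>yarn.app.mapreduce.am.resource.mb</name> <value>2048</value> </property>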

I do not know what is happening at this point, so I am posting part of the NodeManager log here, followed by a short note on how to pull the full aggregated logs. I noticed that the containers the job was running in received a CONTAINER_STOP event. Can anyone help me understand what is going on?

2016-10-17 09:57:23,233 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1476697963637_0001_01_000022 
2016-10-17 09:57:23,233 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu  IP=172.30.0.231 OPERATION=Stop Container Request  TARGET=ContainerManageImpl  RESULT=SUCCESS APPID=application_1476697963637_0001 CONTAINERID=container_1476697963637_0001_01_000022 
2016-10-17 09:57:23,263 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000020 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL 
2016-10-17 09:57:23,263 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000022 transitioned from RUNNING to KILLING 
2016-10-17 09:57:23,321 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1476697963637_0001_01_000022 
2016-10-17 09:57:23,341 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /home/ubuntu/tmp/hadoop-temp/nm-local-dir/usercache/ubuntu/appcache/application_1476697963637_0001/container_1476697963637_0001_01_000020 
2016-10-17 09:57:23,404 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27978 for container-id container_1476697963637_0001_01_000042: 263.0 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used 
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu  OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1476697963637_0001 CONTAINERID=container_1476697963637_0001_01_000020 
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000020 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE 
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1476697963637_0001_01_000020 from application application_1476697963637_0001 
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1476697963637_0001_01_000020 for log-aggregation 
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1476697963637_0001 
2016-10-17 09:57:23,570 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1476697963637_0001_01_000022 is : 143 
2016-10-17 09:57:23,571 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000022 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL 
2016-10-17 09:57:23,571 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /home/ubuntu/tmp/hadoop-temp/nm-local-dir/usercache/ubuntu/appcache/application_1476697963637_0001/container_1476697963637_0001_01_000022 
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu  OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1476697963637_0001 CONTAINERID=container_1476697963637_0001_01_000022 
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000022 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE 
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1476697963637_0001_01_000022 from application application_1476697963637_0001 
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1476697963637_0001_01_000022 for log-aggregation 
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1476697963637_0001 
2016-10-17 09:57:23,670 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27820 for container-id container_1476697963637_0001_01_000040: 266.3 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used 
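
A note on reading this excerpt: exit code 143 is 128 + 15, i.e. the container JVM was terminated with SIGTERM, which is what the NodeManager sends when it is asked to stop a container; it points to the kill request rather than to a crash inside the task. Assuming log aggregation is enabled (the AppLogAggregatorImpl lines above suggest the containers are at least considered for it), the full container logs can usually be retrieved after the application finishes with:

# Aggregated logs for the whole application
yarn logs -applicationId application_1476697963637_0001

# Restrict to a single container; on older 2.x releases -nodeAddress <nm-host:port> may also be required
yarn logs -applicationId application_1476697963637_0001 -containerId container_1476697963637_0001_01_000022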

Could you post the part of the log that corresponds to where this transition happens? – abhiieor

Answer

I had this problem too; restarting Cloudera and YARN fixed it.

If restarting does not help, check the ports in job.properties: the namenode and jobtracker entries may be wrong. Make sure the jobtracker port in your job.properties file is correct; a minimal example is sketched below.
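
A minimal sketch of the job.properties entries referred to above, assuming an Oozie-submitted job; the host names are placeholders and the ports are only the usual defaults (8020 for the NameNode RPC port, 8032 for the ResourceManager, which plays the jobTracker role on YARN):

# job.properties (illustrative values - adjust hosts and ports to your cluster)
nameNode=hdfs://namenode-host:8020
jobTracker=resourcemanager-host:8032
queueName=default
oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/my-app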

Also check the MapReduce cluster slots; the job may be running out of them (on YARN, this means the queue running out of container memory or vcores). One way to check is sketched below.
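
A quick way to see whether the queue or cluster is out of capacity, assuming the default ResourceManager web port 8088 (adjust the host and port to your cluster):

# Queue-level scheduling information as seen by the MapReduce client
mapred queue -list

# Cluster-wide memory, vcores and container counts from the ResourceManager REST API
curl http://resourcemanager-host:8088/ws/v1/cluster/metrics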