Job jumps from RUNNING to PREP state

Asked: 2016-09-13 19:48:10

Tags: hadoop mapreduce yarn

When I run a MapReduce job, it jumps from RUNNING to PREP state [1]. I looked through the MapReduce logs but found nothing abnormal, so I wonder whether this is a YARN configuration problem. I have checked the configuration in mapred-site.xml [2], and the memory sizes look correct. I am running on a machine with 16 cores and 64 GB of RAM, although I have configured MapReduce to run with 32 GB (<name>yarn.nodemanager.resource.memory-mb</name> <value>32218</value>). Any suggestions on how to debug this?
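
For reference, the job and cluster state can be inspected with the stock Hadoop CLI; a minimal sketch, using the IDs from the status output in [1]:

# Scheduler-level status of the stuck job
mapred job -status job_1379101056979_0001

# Corresponding YARN application status and diagnostics
yarn application -status application_1379101056979_0001

# NodeManagers currently registered with the ResourceManager
yarn node -list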

[1] Job status

Total jobs:1
                  JobId      State           StartTime      UserName           Queue      Priority       UsedContainers  RsvdContainers  UsedMem         RsvdMem        NeededMem          AM info
 job_1379101056979_0001       PREP       1379101096477          root         default        NORMAL                    0               0       0M              0M 

[2] mapred-site.xml

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property>
 <property> <name>mapreduce.jobhistory.done-dir</name> <value>/root/Programs/hadoop/logs/history/done</value> </property>
 <property> <name>mapreduce.jobhistory.intermediate-done-dir</name> <value>/root/Programs/hadoop/logs/history/intermediate-done-dir</value> </property>
 <property> <name>mapreduce.job.reduces</name> <value>4</value> </property>

 <!-- property> <name>yarn.nodemanager.resource.memory-mb</name> <value>8240</value> </property -->
 <property> <name>yarn.nodemanager.resource.memory-mb</name> <value>24240</value> </property>
 <property> <name>yarn.scheduler.minimum-allocation-mb</name> <value>1024</value> </property>

<!-- property><name>mapreduce.task.files.preserve.failedtasks</name><value>true</value></property>
<property><name>mapreduce.task.files.preserve.filepattern</name><value>*</value></property -->

</configuration>
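
A side note on where these properties live: the yarn.nodemanager.* and yarn.scheduler.* settings are normally read from yarn-site.xml rather than mapred-site.xml, so it may be worth confirming they are actually taking effect. Below is a minimal yarn-site.xml sketch with the same values, plus illustrative per-task sizes for mapred-site.xml (the 2048/4096 values are assumptions for illustration, not values from my setup):

<!-- yarn-site.xml: the usual home of yarn.nodemanager.* and yarn.scheduler.* -->
<configuration>
 <property> <name>yarn.nodemanager.resource.memory-mb</name> <value>24240</value> </property>
 <property> <name>yarn.scheduler.minimum-allocation-mb</name> <value>1024</value> </property>
</configuration>

<!-- mapred-site.xml: illustrative per-task sizes (assumed values) -->
<property> <name>mapreduce.map.memory.mb</name> <value>2048</value> </property>
<property> <name>mapreduce.reduce.memory.mb</name> <value>4096</value> </property>
<property> <name>yarn.app.mapreduce.am.resource.mb</name> <value>2048</value> </property>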

I do not know what is happening, so I am posting a log from the job here. I noticed that the containers running the job receive a CONTAINER_STOP event. Can anyone help me figure out what is going on?

2016-10-17 09:57:23,233 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1476697963637_0001_01_000022
2016-10-17 09:57:23,233 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu       IP=172.30.0.231 OPERATION=Stop Container Request        TARGET=ContainerManageImpl      RESULT=SUCCESS  APPID=application_1476697963637_0001    CONTAINERID=container_1476697963637_0001_01_000022
2016-10-17 09:57:23,263 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000020 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-10-17 09:57:23,263 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000022 transitioned from RUNNING to KILLING
2016-10-17 09:57:23,321 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1476697963637_0001_01_000022
2016-10-17 09:57:23,341 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /home/ubuntu/tmp/hadoop-temp/nm-local-dir/usercache/ubuntu/appcache/application_1476697963637_0001/container_1476697963637_0001_01_000020
2016-10-17 09:57:23,404 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27978 for container-id container_1476697963637_0001_01_000042: 263.0 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu       OPERATION=Container Finished - Killed   TARGET=ContainerImpl    RESULT=SUCCESS  APPID=application_1476697963637_0001    CONTAINERID=container_1476697963637_0001_01_000020
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000020 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1476697963637_0001_01_000020 from application application_1476697963637_0001
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1476697963637_0001_01_000020 for log-aggregation
2016-10-17 09:57:23,559 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1476697963637_0001
2016-10-17 09:57:23,570 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1476697963637_0001_01_000022 is : 143
2016-10-17 09:57:23,571 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000022 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2016-10-17 09:57:23,571 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /home/ubuntu/tmp/hadoop-temp/nm-local-dir/usercache/ubuntu/appcache/application_1476697963637_0001/container_1476697963637_0001_01_000022
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=ubuntu       OPERATION=Container Finished - Killed   TARGET=ContainerImpl    RESULT=SUCCESS  APPID=application_1476697963637_0001    CONTAINERID=container_1476697963637_0001_01_000022
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1476697963637_0001_01_000022 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1476697963637_0001_01_000022 from application application_1476697963637_0001
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl: Considering container container_1476697963637_0001_01_000022 for log-aggregation
2016-10-17 09:57:23,572 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1476697963637_0001
2016-10-17 09:57:23,670 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27820 for container-id container_1476697963637_0001_01_000040: 266.3 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
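
The exit code 143 in the log is 128 plus SIGTERM (15), which means the containers were killed on request rather than crashing on their own. For completeness, the aggregated container logs can be pulled with the standard YARN CLI (assuming log aggregation is enabled; the application ID is the one from the log above):

# Fetch all aggregated container logs for the application in the log above
yarn logs -applicationId application_1476697963637_0001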

1 Answer:

Answer 0 (score: 0)

I had this problem; restarting Cloudera and YARN solved it.

If restarting does not work, try checking the ports in job.properties; there may be a problem with the namenode or jobtracker port. Make sure the namenode and jobtracker entries in job.properties point to the correct hosts and ports.
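
For example, assuming this refers to an Oozie-style job.properties, the entries to verify look like this (host names below are placeholders; 8020 and 8032 are only the common defaults for the NameNode RPC and ResourceManager ports):

# job.properties sketch with placeholder hosts; verify ports against your cluster
nameNode=hdfs://namenode-host:8020
jobTracker=resourcemanager-host:8032
queueName=default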

Also check the cluster's map-reduce slots; it may have run out of slots.
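
To check that, the stock CLI can report per-queue scheduling information and the registered nodes; a quick sketch (the ResourceManager web UI, by default on port 8088, shows the same data):

# Scheduling information for each queue, including current usage
mapred queue -list

# All NodeManagers, their state, and running-container counts
yarn node -list -all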