I have a Hadoop cluster, and I am trying to run a wordcount job from Java code running on another machine, using the REST API. Here is how I do it:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Point the client at the remote cluster.
Configuration conf = new Configuration();
conf.set("yarn.resourcemanager.address", resourceManagerAddress);
conf.set("mapreduce.framework.name", "yarn");
conf.set("fs.default.name", fsDefaultName);

Job job = Job.getInstance(conf, "Rest WC job2");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(inputPath));
FileOutputFormat.setOutputPath(job, new Path(outputPath));

// submit() is asynchronous: it returns as soon as the job is handed to the RM.
job.submit();
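(Side note: since submit() does not wait, the client can instead block and print progress with the standard Job API; a minimal sketch, reusing the job object from above:)

boolean ok = job.waitForCompletion(true); // blocks until the job ends, printing progress
System.exit(ok ? 0 : 1);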
The job is submitted to the cluster and I can see it in the Hadoop UI console, but when I look at the slave's logs I see the following:
2017-11-01 09:03:21,669 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1509459563039_0017_000001 (auth:SIMPLE)
2017-11-01 09:03:21,676 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1509459563039_0017_01_000001 by user root
2017-11-01 09:03:21,677 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1509459563039_0017
2017-11-01 09:03:21,677 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=root IP=10.56.0.93 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1509459563039_0017 CONTAINERID=container_1509459563039_0017_01_000001
2017-11-01 09:03:21,677 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1509459563039_0017 transitioned from NEW to INITING
2017-11-01 09:03:21,677 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Adding container_1509459563039_0017_01_000001 to application application_1509459563039_0017
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1509459563039_0017 transitioned from INITING to RUNNING
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1509459563039_0017_01_000001 transitioned from NEW to LOCALIZING
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1509459563039_0017
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.jar transitioned from INIT to DOWNLOADING
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.splitmetainfo transitioned from INIT to DOWNLOADING
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.split transitioned from INIT to DOWNLOADING
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.xml transitioned from INIT to DOWNLOADING
2017-11-01 09:03:21,678 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1509459563039_0017_01_000001
2017-11-01 09:03:21,680 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-root/nm-local-dir/nmPrivate/container_1509459563039_0017_01_000001.tokens. Credentials list:
2017-11-01 09:03:21,689 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user root
2017-11-01 09:03:21,690 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-root/nm-local-dir/nmPrivate/container_1509459563039_0017_01_000001.tokens to /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017/container_1509459563039_0017_01_000001.tokens
2017-11-01 09:03:21,690 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017 = file:/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017
2017-11-01 09:03:22,055 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.jar(->/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017/filecache/10/job.jar) transitioned from DOWNLOADING to LOCALIZED
2017-11-01 09:03:22,073 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.splitmetainfo(->/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017/filecache/11/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2017-11-01 09:03:22,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.split(->/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017/filecache/12/job.split) transitioned from DOWNLOADING to LOCALIZED
2017-11-01 09:03:22,111 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://<master_ip_address>:9000/tmp/hadoop-yarn/staging/root/.staging/job_1509459563039_0017/job.xml(->/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2017-11-01 09:03:22,111 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1509459563039_0017_01_000001 transitioned from LOCALIZING to LOCALIZED
2017-11-01 09:03:22,131 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1509459563039_0017_01_000001 transitioned from LOCALIZED to RUNNING
2017-11-01 09:03:22,135 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1509459563039_0017/container_1509459563039_0017_01_000001/default_container_executor.sh]
2017-11-01 09:03:23,755 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1509459563039_0017_01_000001
2017-11-01 09:03:23,768 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 135.8 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
2017-11-01 09:03:26,770 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 232.3 MB of 2 GB physical memory used; 1.6 GB of 4.2 GB virtual memory used
2017-11-01 09:03:29,772 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:32,773 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:35,775 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:38,777 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:41,778 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:44,780 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:47,781 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
2017-11-01 09:03:50,784 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 4957 for container-id container_1509459563039_0017_01_000001: 296.7 MB of 2 GB physical memory used; 1.7 GB of 4.2 GB virtual memory used
Note the last few lines. The numbers keep growing, and the job never completes.
In the Hadoop UI I can see:
YarnApplicationState: ACCEPTED: waiting for AM container to be allocated, launched and register with RM.
and the system is stuck in this state.
I can run the wordcount job from the Hadoop master with the hadoop jar ... command, and it completes correctly, so the cluster is configured and working properly.
What could be the problem?
Thanks
UPD. The last lines of the YARN ResourceManager log on the master node:
2017-11-01 11:49:04,630 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1509459563039_0020_000001 State change from ALLOCATED to LAUNCHED
2017-11-01 11:49:05,620 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1509459563039_0020_01_000001 Container Transitioned from ACQUIRED to RUNNING
Answer (score 0):
The job has not even started executing. There are no free resources available in your YARN cluster, so the job cannot start.

It is not an error, but a normal YARN application state transition.
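To verify that, you can ask the ResourceManager how much capacity each NodeManager has left. Below is a minimal sketch using the YarnClient API (getMemory() is the Hadoop 2.x accessor; resourceManagerAddress is the same variable the question assumes):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;

// List each NodeManager's used vs. total memory to see whether
// there is room left for the ApplicationMaster container.
Configuration conf = new Configuration();
conf.set("yarn.resourcemanager.address", resourceManagerAddress); // as in the question

YarnClient yarn = YarnClient.createYarnClient();
yarn.init(conf);
yarn.start();
for (NodeReport n : yarn.getNodeReports(NodeState.RUNNING)) {
    System.out.printf("%s: %d MB used of %d MB%n",
            n.getNodeId(),
            n.getUsed().getMemory(),        // Hadoop 2.x accessor
            n.getCapability().getMemory());
}
yarn.stop();

If every node reports (nearly) all of its memory in use, the scheduler can never allocate the AM container and the application stays in ACCEPTED, exactly as the UI shows. The per-node capacity is set by yarn.nodemanager.resource.memory-mb, and no allocation smaller than yarn.scheduler.minimum-allocation-mb is ever granted.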