Yarn MapReduce approximate-pi example fails with exit code 1 when run as non-hadoop user

Asked: 2015-12-22 20:57:39

Tags: hadoop mapreduce yarn

I am running a small private cluster of Linux machines with Hadoop 2.6.2 and YARN. I launch YARN jobs from a Linux edge node. The canned YARN example that approximates the value of pi works perfectly when run by the hadoop user (the superuser and owner of the cluster), but fails when run from my personal account on the edge node. In both cases (hadoop, me) I run the job exactly like this:

clott@edge: /home/hadoop/hadoop-2.6.2/bin/yarn jar /home/hadoop/hadoop-2.6.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.2.jar pi 2 5

It fails; the full output is below. I think the FileNotFoundException is entirely bogus: something caused the launch of the container to fail, so no output was ever written. What causes container launches to fail, and how can it be debugged?

Because this identical command works perfectly when run by the hadoop user but fails when run by a different account on the same edge node, I suspect a permission or other YARN configuration problem; I do not suspect a missing-jar problem. For what it's worth, my personal account uses the same environment variables as the hadoop account.
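
One sanity check worth stating explicitly, on the assumption that this is a permissions problem: my HDFS home directory has to exist and be owned by my account. A sketch of the check, using my username (clott):

    # run from the edge node; /user/clott should be listed with owner clott
    /home/hadoop/hadoop-2.6.2/bin/hdfs dfs -ls /user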

These questions are similar, but I could not find a solution in them:

https://issues.cloudera.org/browse/DISTRO-577

Running a map reduce job as a different user

Yarn MapReduce Job Issue - AM Container launch error in Hadoop 2.3.0

I have tried these remedies, without any success:

  1. In core-site.xml, set the value of hadoop.tmp.dir to /tmp/temp-${user.name} (the exact stanza is shown after this list).

  2. Add my personal user account to every node in the cluster.

  3. I imagine many installations run with just a single user, but I am trying to let two people work on the cluster together without trashing each other's intermediate results. Am I completely crazy?
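
For reference, remedy 1 was this stanza in core-site.xml (the value is exactly what I tried; I am not claiming it is the right setting):

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/temp-${user.name}</value>
    </property>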

Full output:

    Number of Maps  = 2
    Samples per Map = 5
    Wrote input for Map #0
    Wrote input for Map #1
    Starting Job
    15/12/22 15:29:18 INFO client.RMProxy: Connecting to ResourceManager at ac1.mycompany.com/1.2.3.4:8032
    15/12/22 15:29:18 INFO input.FileInputFormat: Total input paths to process : 2
    15/12/22 15:29:19 INFO mapreduce.JobSubmitter: number of splits:2
    15/12/22 15:29:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450815437271_0002
    15/12/22 15:29:19 INFO impl.YarnClientImpl: Submitted application application_1450815437271_0002
    15/12/22 15:29:19 INFO mapreduce.Job: The url to track the job: http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/
    15/12/22 15:29:19 INFO mapreduce.Job: Running job: job_1450815437271_0002
    15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 running in uber mode : false
    15/12/22 15:29:31 INFO mapreduce.Job:  map 0% reduce 0%
    15/12/22 15:29:31 INFO mapreduce.Job: Job job_1450815437271_0002 failed with state FAILED due to: Application application_1450815437271_0002 failed 2 times due to AM Container for appattempt_1450815437271_0002_000002 exited with  exitCode: 1
    For more detailed output, check application tracking page:http://ac1.mycompany.com:8088/proxy/application_1450815437271_0002/Then, click on links to logs of each attempt.
    Diagnostics: Exception from container-launch.
    Container id: container_1450815437271_0002_02_000001
    Exit code: 1
    Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    
    Container exited with a non-zero exit code 1
    Failing this attempt. Failing the application.
    15/12/22 15:29:31 INFO mapreduce.Job: Counters: 0
    Job Finished in 13.489 seconds
    java.io.FileNotFoundException: File does not exist: hdfs://ac1.mycompany.com/user/clott/QuasiMonteCarlo_1450816156703_163431099/out/reduce-out
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1817)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1841)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    

1 Answer:

Answer 0 (score: 2):

Yes, Manjunath Ballur, you are right: it was a permissions problem! I finally learned how to preserve the YARN application logs, which revealed the problem clearly. Here are the steps:

  1. Edit yarn-site.xml and add a property to delay the deletion of YARN logs:

    <property>
        <name>yarn.nodemanager.delete.debug-delay-sec</name>
        <value>600</value>
    </property>
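
With this property set, the NodeManagers keep each container's local working directory and logs for 600 seconds after an application finishes, instead of deleting them immediately; that is what makes step 4 below possible.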
    
  2. Push yarn-site.xml to all nodes (ARGH, I forgot this for the longest time) and restart the cluster (a sketch follows).
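
A sketch of what step 2 looked like (the hostnames are placeholders; this assumes the same install path on every node and passwordless ssh):

    # copy the edited config to each node in the cluster
    for node in node1 node2 node3; do
        scp /home/hadoop/hadoop-2.6.2/etc/hadoop/yarn-site.xml \
            $node:/home/hadoop/hadoop-2.6.2/etc/hadoop/
    done
    # restart YARN so the NodeManagers pick up the new property
    /home/hadoop/hadoop-2.6.2/sbin/stop-yarn.sh
    /home/hadoop/hadoop-2.6.2/sbin/start-yarn.sh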

  3. Run the YARN pi-estimation example as shown above; it fails. Look at http://namenode:8088/cluster/apps/FAILED to see the failed applications, click the link for the most recent failure, and look at the bottom of the page to see which cluster nodes were used.

  4. Open a window on one of the nodes in the cluster where the application failed. Find the container's working directory, which in my case was:

    ~hadoop/hadoop-2.6.2/logs/userlogs/application_1450815437271_0004/container_1450815437271_0004_01_000001/
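
As an aside: if log aggregation had been enabled (yarn.log-aggregation-enable set to true in yarn-site.xml), the same container logs could have been fetched from the edge node after the application finished, instead of hunting for this directory:

    # requires log aggregation to be on; prints each container's stdout/stderr/syslog
    /home/hadoop/hadoop-2.6.2/bin/yarn logs -applicationId application_1450815437271_0004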
    
  5. Et voilà, I saw the files stdout (only log4j bitching), stderr (nearly empty), and syslog (winner winner chicken dinner). In the syslog file I found this gem:

    2015-12-23 08:31:42,376 INFO [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=clott, access=EXECUTE, inode="/tmp/hadoop-yarn/staging/history":hadoop:supergroup:drwxrwx---
    
  6. So the problem was the permissions on hdfs:///tmp/hadoop-yarn/staging/history. A simple chmod 777 (sketched below) put me right, and I'm not fighting the group perms anymore. Now a non-hadoop, non-superuser user can run a YARN job.
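
The fix, roughly (run as the hadoop superuser; 777 is the blunt instrument I used, not necessarily what you want):

    # open the job-history staging directory to all users
    /home/hadoop/hadoop-2.6.2/bin/hdfs dfs -chmod 777 /tmp/hadoop-yarn/staging/history

Given that the diagnostic shows the directory as hadoop:supergroup:drwxrwx---, adding my personal account to the supergroup group would presumably have worked as well, and would be less drastic than 777.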