Could not find or load main class 256 - YARN cluster

Date: 2016-01-16 00:58:56

Tags: hadoop mapreduce yarn hadoop2 giraph

I am currently running a single-node YARN cluster, and for some reason I cannot even run the examples that ship with MapReduce (grep, wordcount, etc.). I launch grep with this line:

$HADOOP_HOME/bin/yarn jar /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar grep input output2 'dfs[a-z.]+'

This cluster normally runs Giraph programs, but now I need a plain MapReduce application, so I switched it back to pure YARN. I have probably missed something, though.

All failed containers report the same error:

Container: container_1452447718890_0001_01_000002 on localhost_37976
======================================================================
LogType: stderr
LogLength: 45
Log Contents:
Error: Could not find or load main class 256
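
(Aside: with log aggregation enabled on the cluster, the same container output can be pulled after the job finishes with the standard yarn logs CLI; the application ID here is the one embedded in the container ID above:)

$HADOOP_HOME/bin/yarn logs -applicationId application_1452447718890_0001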

Jps output:

7261 SecondaryNameNode
7535 NodeManager
7413 ResourceManager
6928 NameNode
7593 JobHistoryServer
7047 DataNode
7733 QuorumPeerMain
8433 Jps

Client-side log:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to process : 1
16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0001
16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0001
16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0001/
16/01/15 21:53:54 INFO mapreduce.Job: Running job: job_1452905418747_0001
16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running in uber mode : false
16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
16/01/15 21:54:07 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_0, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

16/01/15 21:54:11 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_1, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

16/01/15 21:54:15 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_2, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed with state FAILED due to: Task failed task_1452905418747_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
    Job Counters 
        Failed map tasks=4
        Launched map tasks=4
        Other local map tasks=3
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=15548
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=7774
        Total vcore-seconds taken by all map tasks=7774
        Total megabyte-seconds taken by all map tasks=3980288
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to process : 0
16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0002
16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0002
16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0002/
16/01/15 21:54:22 INFO mapreduce.Job: Running job: job_1452905418747_0002
16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running in uber mode : false
16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
16/01/15 21:54:36 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_0, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

16/01/15 21:54:41 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_1, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

16/01/15 21:54:46 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_2, Status : FAILED
Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException: 
org.apache.hadoop.util.Shell$ExitCodeException: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed with state FAILED due to: Task failed task_1452905418747_0002_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
    Job Counters 
        Failed reduce tasks=4
        Launched reduce tasks=4
        Total time spent by all maps in occupied slots (ms)=0
        Total time spent by all reduces in occupied slots (ms)=11882
        Total time spent by all reduce tasks (ms)=5941
        Total vcore-seconds taken by all reduce tasks=5941
        Total megabyte-seconds taken by all reduce tasks=3041792
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0

1 Answer:

Answer 0 (score: 0):

The problem was in my mapred-site.xml, which looked like this:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <property>
    <name>mapred.job.tracker</name>
    <value>hdnode01:54311</value>
  </property>

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>

  <property>
    <name>mapreduce.job.maps</name>
    <value>4</value>
  </property>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
  </property>

  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>256</value>
  </property>

  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>256</value>
  </property>

</configuration>

The last two properties were the problem. Removing both (or using -Xmx256m instead of 256) solved it for me.
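
For context: the value of mapreduce.map.java.opts / mapreduce.reduce.java.opts is appended verbatim to the java command line that launches each task container, so a bare "256" ends up being read by the JVM as the name of the main class, which produces exactly "Could not find or load main class 256". A minimal sketch of the corrected properties (the 256m heap is just an example value, to be kept below the container sizes configured above):

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx256m</value>
</property>

<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx256m</value>
</property>

With mapreduce.map.memory.mb and mapreduce.reduce.memory.mb set to 512, a 256m heap leaves some headroom for the JVM's non-heap memory inside the container.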