Hadoop MapReduce example sometimes works, sometimes fails. What is happening?

Time: 2018-11-06 08:04:06

Tags: hadoop mapreduce

I ran one of the Hadoop MapReduce examples with the following command:

hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount input output

Sometimes it works:

18/11/06 00:37:06 INFO client.RMProxy: Connecting to ResourceManager at node-0/10.10.1.1:8032
18/11/06 00:37:06 INFO input.FileInputFormat: Total input paths to process : 1
18/11/06 00:37:06 INFO mapreduce.JobSubmitter: number of splits:1
18/11/06 00:37:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541484532513_0006
18/11/06 00:37:06 INFO impl.YarnClientImpl: Submitted application application_1541484532513_0006
18/11/06 00:37:06 INFO mapreduce.Job: The url to track the job: http://node-0:8088/proxy/application_1541484532513_0006/
18/11/06 00:37:06 INFO mapreduce.Job: Running job: job_1541484532513_0006
18/11/06 00:37:11 INFO mapreduce.Job: Job job_1541484532513_0006 running in uber mode : false
18/11/06 00:37:11 INFO mapreduce.Job:  map 0% reduce 0%
18/11/06 00:37:15 INFO mapreduce.Job:  map 100% reduce 0%
18/11/06 00:37:18 INFO mapreduce.Job:  map 100% reduce 100%
18/11/06 00:37:18 INFO mapreduce.Job: Job job_1541484532513_0006 completed successfully
18/11/06 00:37:18 INFO mapreduce.Job: Counters: 44
    File System Counters
        FILE: Number of bytes read=216
        FILE: Number of bytes written=231641
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=1300
        Total time spent by all reduces in occupied slots (ms)=1265
        Total time spent by all map tasks (ms)=1300
        Total time spent by all reduce tasks (ms)=1265
        Total vcore-seconds taken by all map tasks=1300
        Total vcore-seconds taken by all reduce tasks=1265
        Total megabyte-seconds taken by all map tasks=1331200
        Total megabyte-seconds taken by all reduce tasks=1295360
    Map-Reduce Framework
        Map input records=1
        Map output records=2
        Map output bytes=20
        Map output materialized bytes=30
        Input split bytes=135
        Combine input records=2
        Combine output records=2
        Reduce input groups=2
        Reduce shuffle bytes=30
        Reduce input records=2
        Reduce output records=2
        Spilled Records=4
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=14
        CPU time spent (ms)=660
        Physical memory (bytes) snapshot=402006016
        Virtual memory (bytes) snapshot=4040646656
        Total committed heap usage (bytes)=402653184
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=32
    File Output Format Counters 
        Bytes Written=28

Or the log may look like the one below:

18/11/06 00:35:17 INFO mapreduce.Job: Task Id : attempt_1541484532513_0003_m_000000_1, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


18/11/06 00:35:21 INFO mapreduce.Job: Task Id : attempt_1541484532513_0003_m_000000_2, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0003/job.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


18/11/06 00:35:25 INFO mapreduce.Job:  map 100% reduce 0%
18/11/06 00:35:29 INFO mapreduce.Job:  map 100% reduce 100%
18/11/06 00:35:29 INFO mapreduce.Job: Job job_1541484532513_0003 completed successfully
18/11/06 00:35:29 INFO mapreduce.Job: Counters: 46
    File System Counters
        FILE: Number of bytes read=216
        FILE: Number of bytes written=231635
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Job Counters 
        Failed map tasks=3
        Launched map tasks=4
        Launched reduce tasks=1
        Other local map tasks=3
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=6266
        Total time spent by all reduces in occupied slots (ms)=1290
        Total time spent by all map tasks (ms)=6266
        Total time spent by all reduce tasks (ms)=1290
        Total vcore-seconds taken by all map tasks=6266
        Total vcore-seconds taken by all reduce tasks=1290
        Total megabyte-seconds taken by all map tasks=6416384
        Total megabyte-seconds taken by all reduce tasks=1320960
    Map-Reduce Framework
        Map input records=1
        Map output records=2
        Map output bytes=20
        Map output materialized bytes=30
        Input split bytes=135
        Combine input records=2
        Combine output records=2
        Reduce input groups=2
        Reduce shuffle bytes=30
        Reduce input records=2
        Reduce output records=2
        Spilled Records=4
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=14
        CPU time spent (ms)=680
        Physical memory (bytes) snapshot=404619264
        Virtual memory (bytes) snapshot=4036009984
        Total committed heap usage (bytes)=402653184
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=32
    File Output Format Counters 
        Bytes Written=28

This is strange! It still completed successfully even with a log like this, which says job.jar does not exist.

But sometimes it fails while I am doing exactly the same thing:

18/11/06 00:36:41 INFO mapreduce.Job: Task Id : attempt_1541484532513_0005_r_000000_1, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_15414845
18/11/06 00:36:46 INFO mapreduce.Job: Task Id : attempt_1541484532513_0005_r_000000_2, Status : FAILED
File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0005/job.jar does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/suqiang/.staging/job_1541484532513_0005/job.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


18/11/06 00:36:52 INFO mapreduce.Job:  map 100% reduce 100%
18/11/06 00:36:52 INFO mapreduce.Job: Job job_1541484532513_0005 failed with state FAILED due to: Task failed task_1541484532513_0005_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

18/11/06 00:36:52 INFO mapreduce.Job: Counters: 35
    File System Counters
        FILE: Number of bytes read=186
        FILE: Number of bytes written=115831
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Job Counters 
        Failed map tasks=1
        Failed reduce tasks=4
        Launched map tasks=2
        Launched reduce tasks=4
        Other local map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=2217
        Total time spent by all reduces in occupied slots (ms)=8012
        Total time spent by all map tasks (ms)=2217
        Total time spent by all reduce tasks (ms)=8012
        Total vcore-seconds taken by all map tasks=2217
        Total vcore-seconds taken by all reduce tasks=8012
        Total megabyte-seconds taken by all map tasks=2270208
        Total megabyte-seconds taken by all reduce tasks=8204288
    Map-Reduce Framework
        Map input records=1
        Map output records=2
        Map output bytes=20
        Map output materialized bytes=30
        Input split bytes=135
        Combine input records=2
        Combine output records=2
        Spilled Records=2
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=7
        CPU time spent (ms)=250
        Physical memory (bytes) snapshot=252555264
        Virtual memory (bytes) snapshot=2014208000
        Total committed heap usage (bytes)=201326592
    File Input Format Counters 
        Bytes Read=32

What is happening in my experiment? Is it something I am doing wrong, or a problem with the Hadoop example itself? Has anyone run into the same issue? Any advice or solution would be greatly appreciated.

1 answer:

Answer 0 (score: 0)

Since the job fails in uber mode, the issue lies somewhere around the ApplicationMaster not having access to HDFS, or to those folders in HDFS.
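
One quick sanity check for that second possibility (just a sketch; the path below is copied from your error message, and I am assuming the staging directory uses the default location) is to list the staging directory on HDFS as the user who submits the job:

hdfs dfs -ls /tmp/hadoop-yarn/staging
hdfs dfs -ls /tmp/hadoop-yarn/staging/suqiang/.staging

If those folders are missing or not writable by your user, the tasks cannot localize job.jar from there.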

While we figure out the real fix for your problem, you can disable uber mode for your jobs like this:

hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount -D mapreduce.job.ubertask.enable=false input output
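
If you would rather turn uber mode off for the whole cluster instead of per job, a minimal sketch of the corresponding entry in mapred-site.xml (using the standard Hadoop 2.7 property name; whether you want this permanently is your call) looks like this:

<!-- sketch only: disable uber (run-tasks-inside-the-AM) execution for all jobs -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>false</value>
</property>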

To fully resolve this, start by going through your ApplicationMaster (AM) configuration.
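
As a starting point (this is my assumption, not something your logs prove), the file:/ prefix in the missing job.jar error suggests that on at least one node the staging path is being resolved against the local filesystem rather than HDFS. You could compare the relevant settings on every node, for example:

# should print an hdfs:// URI pointing at your NameNode, not file:///
hdfs getconf -confKey fs.defaultFS
# assuming HADOOP_CONF_DIR points at your configuration directory
grep -A1 staging-dir "$HADOOP_CONF_DIR"/*-site.xml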

EDIT: Perhaps your problem is in /etc/hosts. Could you post its contents from both machines? Maybe on the 10.10.1.2 machine you are missing a mapping from localhost to 10.10.1.2.
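
For illustration only (node-0/10.10.1.1 comes from your logs; the second hostname is an assumption), a typical /etc/hosts on the 10.10.1.2 machine would look something like:

127.0.0.1   localhost
10.10.1.1   node-0
10.10.1.2   node-1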