distcp from HDFS to S3 fails

Asked: 2014-09-18 18:06:22

Tags: hadoop amazon-s3 hdfs distcp

I am trying to distcp a directory that contains a few hundred small files with the .avro extension from HDFS to S3.
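For context, the command I am running has this general shape (the namenode address, bucket, and paths below are placeholders, not the real ones):

```shell
# Copy a directory of small .avro files from HDFS to S3.
# Placeholder namenode/bucket/paths -- substitute your own.
hadoop distcp \
  hdfs://namenode:8020/data/searches \
  s3://my-bucket/09/01/01/SEARCHES/
```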

The copy fails for some of the files with the following error:

14/09/18 13:05:19 INFO mapred.JobClient:  map 99% reduce 0%
14/09/18 13:05:22 INFO mapred.JobClient:  map 100% reduce 0%
14/09/18 13:05:24 INFO mapred.JobClient: Task Id : attempt_201408291204_35665_m_000000_0, Status : FAILED
java.io.IOException: Copied: 32 Skipped: 0 Failed: 1
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:584)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

14/09/18 13:05:25 INFO mapred.JobClient:  map 83% reduce 0%
14/09/18 13:05:32 INFO mapred.JobClient:  map 100% reduce 0%
14/09/18 13:05:32 INFO mapred.JobClient: Task Id : attempt_201408291204_35665_m_000005_0, Status : FAILED
java.io.IOException: Copied: 20 Skipped: 0 Failed: 1
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.close(DistCp.java:584)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

14/09/18 13:05:33 INFO mapred.JobClient:  map 83% reduce 0%
14/09/18 13:05:41 INFO mapred.JobClient:  map 93% reduce 0%
14/09/18 13:05:48 INFO mapred.JobClient:  map 100% reduce 0%
14/09/18 13:05:51 INFO mapred.JobClient: Job complete: job_201408291204_35665
14/09/18 13:05:51 INFO mapred.JobClient: Counters: 33
14/09/18 13:05:51 INFO mapred.JobClient:   File System Counters
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of bytes read=0
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of bytes written=1050200
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     FILE: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of bytes read=782797980
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of bytes written=0
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of read operations=88
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     HDFS: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of bytes read=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of bytes written=782775062
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of large read operations=0
14/09/18 13:05:51 INFO mapred.JobClient:     S3: Number of write operations=0
14/09/18 13:05:51 INFO mapred.JobClient:   Job Counters
14/09/18 13:05:51 INFO mapred.JobClient:     Launched map tasks=8
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=454335
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/18 13:05:51 INFO mapred.JobClient:   Map-Reduce Framework
14/09/18 13:05:51 INFO mapred.JobClient:     Map input records=125
14/09/18 13:05:51 INFO mapred.JobClient:     Map output records=53
14/09/18 13:05:51 INFO mapred.JobClient:     Input split bytes=798
14/09/18 13:05:51 INFO mapred.JobClient:     Spilled Records=0
14/09/18 13:05:51 INFO mapred.JobClient:     CPU time spent (ms)=50250
14/09/18 13:05:51 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1930326016
14/09/18 13:05:51 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=9781469184
14/09/18 13:05:51 INFO mapred.JobClient:     Total committed heap usage (bytes)=5631639552
14/09/18 13:05:51 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
14/09/18 13:05:51 INFO mapred.JobClient:     BYTES_READ=22883
14/09/18 13:05:51 INFO mapred.JobClient:   distcp
14/09/18 13:05:51 INFO mapred.JobClient:     Bytes copied=782769559
14/09/18 13:05:51 INFO mapred.JobClient:     Bytes expected=782769559
14/09/18 13:05:51 INFO mapred.JobClient:     Files copied=70
14/09/18 13:05:51 INFO mapred.JobClient:     Files skipped=53

More snippets from the JobTracker UI:

2014-09-18 13:04:24,381 INFO org.apache.hadoop.fs.s3native.NativeS3FileSystem: OutputStream for key '09/01/01/SEARCHES/_distcp_tmp_hrb8ba/part-m-00005.avro' upload complete
2014-09-18 13:04:25,136 INFO org.apache.hadoop.tools.DistCp: FAIL part-m-00005.avro : java.io.IOException: Fail to rename tmp file (=s3://magnetic-test/09/01/01/SEARCHES/_distcp_tmp_hrb8ba/part-m-00005.avro) to destination file (=s3://abc/09/01/01/SEARCHES/part-m-00005.avro)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:494)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:463)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:549)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:316)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.IOException
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.rename(DistCp.java:490)
    ... 11 more

Does anyone know what causes this?

2 Answers:

Answer 0 (score: 2)

Solved this issue by adding -D mapred.task.timeout=60000000 to the distcp command.
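A sketch of a full invocation with that flag (the namenode address, bucket, and paths are placeholders). The idea is that renaming many small tmp files on S3 can be slow enough that a map task exceeds the default task timeout and gets killed mid-copy; raising the timeout gives the renames time to finish:

```shell
# Raise the per-task timeout (milliseconds) so slow S3 renames
# of many small files do not cause the map task to be killed.
# Placeholder namenode/bucket/paths -- substitute your own.
hadoop distcp -D mapred.task.timeout=60000000 \
  hdfs://namenode:8020/data/searches \
  s3://my-bucket/09/01/01/SEARCHES/
```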

Answer 1 (score: 0)

I tried the suggested answer, but had no luck. I ran into this problem while copying many small files (thousands of them, totaling less than half a gigabyte). I could not get the distcp command to work (I got the same error the OP posted), so switching to hadoop fs -cp was my solution. As a side note, on the same cluster, copying other, larger files with distcp worked fine.
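For reference, the fallback I used looks like this (bucket and paths are placeholders). It is a single-process copy, so it is slower than distcp for large datasets, but it avoids the distcp tmp-file rename step that was failing on S3:

```shell
# Sequential copy from HDFS to S3 without a MapReduce job.
# Placeholder namenode/bucket/paths -- substitute your own.
hadoop fs -cp \
  hdfs://namenode:8020/data/searches \
  s3://my-bucket/09/01/01/SEARCHES/
```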