Can distcp be used to copy a directory of files from S3 to HDFS?

Date: 2013-05-23 20:13:32

Tags: hadoop amazon-s3 hdfs

I would like to know whether hadoop distcp can copy multiple files from S3 to HDFS in one run. It seems to work only for individual files with absolute paths; I want to copy an entire directory, or use wildcards.

See also: Hadoop DistCp using wildcards?

I am aware of s3distcp, but for simplicity I would prefer to use plain distcp.

Here is my attempt at copying a directory from S3 to HDFS:

[root@ip-10-147-167-56 ~]# /root/ephemeral-hdfs/bin/hadoop distcp s3n://<key>:<secret>@mybucket/dir hdfs:///input/
13/05/23 19:58:27 INFO tools.DistCp: srcPaths=[s3n://<key>:<secret>@mybucket/dir]
13/05/23 19:58:27 INFO tools.DistCp: destPath=hdfs:/input
13/05/23 19:58:29 INFO tools.DistCp: sourcePathsCount=4
13/05/23 19:58:29 INFO tools.DistCp: filesToCopyCount=3
13/05/23 19:58:29 INFO tools.DistCp: bytesToCopyCount=87.0
13/05/23 19:58:29 INFO mapred.JobClient: Running job: job_201305231521_0005
13/05/23 19:58:30 INFO mapred.JobClient:  map 0% reduce 0%
13/05/23 19:58:45 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_0, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:58:55 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_1, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:04 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_2, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:18 INFO mapred.JobClient: Job complete: job_201305231521_0005
13/05/23 19:59:18 INFO mapred.JobClient: Counters: 6
13/05/23 19:59:18 INFO mapred.JobClient:   Job Counters 
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=38319
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Launched map tasks=4
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/05/23 19:59:18 INFO mapred.JobClient:     Failed map tasks=1
13/05/23 19:59:18 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201305231521_0005_m_000000
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
    at org.apache.hadoop.tools.DistCp.copy(DistCp.java:667)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)

1 Answer:

Answer 0 (score: 1):

You cannot use wildcards in an s3n:// address.
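
A common workaround is to list each source explicitly: the legacy DistCp accepts multiple source URIs before the destination. A minimal sketch, using hypothetical file names under the same bucket:

# All arguments except the last are sources; the last one is the destination.
hadoop distcp \
    s3n://<key>:<secret>@mybucket/dir/file1 \
    s3n://<key>:<secret>@mybucket/dir/file2 \
    hdfs:///input/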

However, copying an entire directory from S3 to HDFS does work. In this case, the NullPointerException was caused by the HDFS destination folder already existing.

Fix: delete the HDFS destination folder first: ./hadoop fs -rmr /input/
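
Putting the fix together with the command from the question, the working sequence looks like this (same placeholder credentials as above):

# Remove the pre-existing destination folder (Hadoop 1.x shell syntax).
/root/ephemeral-hdfs/bin/hadoop fs -rmr /input/
# Re-run the directory copy; distcp now creates /input/ itself.
/root/ephemeral-hdfs/bin/hadoop distcp s3n://<key>:<secret>@mybucket/dir hdfs:///input/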

Note 1: I also tried passing -update and -overwrite, but I still got the NPE.

Note 2: https://hadoop.apache.org/docs/r1.2.1/distcp.html shows how to copy multiple explicit files.
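
For longer file lists, that page also documents a -f option that reads the source URIs from a file instead of the command line. A sketch, assuming the list is staged on HDFS at a hypothetical path:

# srclist contains one fully-qualified s3n:// URI per line (hypothetical contents):
#   s3n://<key>:<secret>@mybucket/dir/file1
#   s3n://<key>:<secret>@mybucket/dir/file2
hadoop fs -put srclist /tmp/srclist          # stage the list on HDFS
hadoop distcp -f hdfs:///tmp/srclist hdfs:///input/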