AWS EMR s3-dist-cp: MapReduce job fails at CopyFilesReducer.cleanup

Time: 2020-10-20 18:19:47

Tags: amazon-web-services hadoop amazon-emr

I have a (learning) AWS EMR cluster, version emr-5.31.0.

Trying to copy a file from S3 to HDFS, I issued this command on the master node:

s3-dist-cp --src=s3://bigdata-xxxxxxxxx/emrdata/orders.tbl.gz --dest=hdfs:/emrdata/orders.tbl.gz

This actually executes a series of map/reduce jobs, and one of the reduce tasks fails:

20/10/20 17:46:29 INFO mapreduce.Job:  map 100% reduce 50%
20/10/20 17:46:31 INFO mapreduce.Job: Task Id : attempt_1603203512239_0014_r_000005_0, Status : FAILED
Error: java.lang.RuntimeException: Reducer task failed to copy 1 files: s3://bigdata-xxxxxxxxx/emrdata/orders.tbl.gz etc
        at com.amazon.elasticmapreduce.s3distcp.CopyFilesReducer.cleanup(CopyFilesReducer.java:67)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:179)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:635)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)

If it helps, I have the full CLI output and the task syslog.

The file is a relatively small archive (400MB).

I am learning the AWS EMR environment, so I may be missing something that is taken for granted.

Cluster info:

Applications: Hive 2.3.7, Pig 0.17.0, Hue 4.7.1, Spark 2.4.6, Tez 0.9.2, Flink 1.11.0, ZooKeeper 3.4.14, Oozie 5.2.0

EC2 instance profile: EMR_EC2_DefaultRole
EMR role: EMR_DefaultRole
Auto Scaling role: EMR_AutoScaling_DefaultRole

I have not been able to determine the root cause of the problem or a workaround.

1 Answer:

Answer 0 (score: 0):

I figured it out. The correct way to use s3-dist-cp is to point --src at the bucket prefix and use the --srcPattern argument:

s3-dist-cp --src=s3://bigdata-xxxxxxxxx/emrdata/ --dest=hdfs:///emrdata/ --srcPattern='orders\.tbl\.gz'
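
A note on why this works (my understanding from the S3DistCp documentation, not something the original answer spells out): s3-dist-cp treats --src and --dest as directory-style paths rather than individual files, which is likely why the single-file form in the question failed, and --srcPattern is a regular expression matched against the S3 paths under --src, so here only orders.tbl.gz is selected for the copy. Assuming the step succeeds, the result can be checked from the master node with a plain HDFS listing:

hdfs dfs -ls hdfs:///emrdata/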