AWS Data Pipeline error

Date: 2017-05-02 13:57:37

Tags: amazon-web-services amazon-data-pipeline

I am getting an error while backing up a DynamoDB table with an AWS Data Pipeline process:

02 May 2017 07:19:04,544 [WARN] (TaskRunnerService-df-0940986HJGYQM1ZJ8BN_@EmrClusterForBackup_2017-04-25T13:31:55-2) df-0940986HJGYQM1ZJ8BN amazonaws.datapipeline.cluster.EmrUtil: EMR job flow named 'df-0940986HJGYQM1ZJ8BN_@EmrClusterForBackup_2017-04-25T13:31:55' with jobFlowId 'j-2SJ0OQOM0BTI' is in status 'RUNNING' because of the step 'df-0940986HJGYQM1ZJ8BN_@TableBackupActivity_2017-04-25T13:31:55_Attempt=2' failures 'null'
02 May 2017 07:19:04,544 [INFO] (TaskRunnerService-df-0940986HJGYQM1ZJ8BN_@EmrClusterForBackup_2017-04-25T13:31:55-2) df-0940986HJGYQM1ZJ8BN amazonaws.datapipeline.cluster.EmrUtil: EMR job '@TableBackupActivity_2017-04-25T13:31:55_Attempt=2' with jobFlowId 'j-2SJ0OQOM0BTI' is in  status 'RUNNING' and reason 'Running step'. Step 'df-0940986HJGYQM1ZJ8BN_@TableBackupActivity_2017-04-25T13:31:55_Attempt=2' is in status 'FAILED' with reason 'null'
02 May 2017 07:19:04,544 [INFO] (TaskRunnerService-df-0940986HJGYQM1ZJ8BN_@EmrClusterForBackup_2017-04-25T13:31:55-2) df-0940986HJGYQM1ZJ8BN amazonaws.datapipeline.cluster.EmrUtil: Collecting steps stderr logs for cluster with AMI 3.9.0
02 May 2017 07:19:04,558 [INFO] (TaskRunnerService-df-0940986HJGYQM1ZJ8BN_@EmrClusterForBackup_2017-04-25T13:31:55-2) df-0940986HJGYQM1ZJ8BN amazonaws.datapipeline.taskrunner.LogMessageUtil: Returning tail errorMsg :    at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
 at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:460)
 at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
 at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
 at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
 at org.apache.hadoop.dynamodb.tools.DynamoDbExport.run(DynamoDbExport.java:79)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.dynamodb.tools.DynamoDbExport.main(DynamoDbExport.java:30)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

There is a large amount of data (6 million items). The pipeline ran for 4 days and then failed with this error, which I cannot figure out.

1 Answer:

Answer 0 (score: 0)

By analyzing the logs, and especially this line...

org.apache.hadoop.dynamodb.tools.DynamoDbExport

...it appears you are running an AWS Data Pipeline created from one of the predefined templates, namely "Export DynamoDB table to S3".

This pipeline takes several input parameters that you can edit in the pipeline designer; the most important ones are:

  1. myDDBTableName - the name of the DynamoDB table to export.
  2. myOutputS3Loc - the full S3 path the MapReduce job should export the data to. The format must be s3://<S3_BUCKET_NAME>/<S3_BUCKET_PREFIX>/. The MR job will then export your data to an S3 location under that prefix, with a datetime-stamped suffix (e.g. s3://<S3_BUCKET_NAME>/<S3_BUCKET_PREFIX>/2019-08-13-15-32-02).
  3. myDDBReadThroughputRatio - the fraction of the DDB table's RCUs the MR job may consume to complete the operation. It is recommended to set your table to provisioned throughput based on your recent metrics plus extra RCUs for the MR job. In other words, do not leave your DDB table in on-demand capacity mode - it will not work. Also, I suggest being generous with the extra RCUs for the MR job: it lets your EMR cluster finish faster, and a few hours of extra RCUs are cheaper than extra EMR compute resources.
  4. myDDBRegion - the region of the DDB table (remember: DDB is a multi-region service, regardless of the Global Tables concept).
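A quick local sanity check of these parameter values can catch obvious mistakes before a multi-day pipeline run fails. The sketch below is illustrative and not part of the template: the parameter names match the list above, but the validation rules (the S3 URI pattern, the ratio bounds, the region pattern) are my assumptions.

```python
import re

# Rough pattern for an S3 bucket name followed by an optional prefix.
# This is an assumption for illustration, not the official bucket-name spec.
S3_PATH = re.compile(r"^s3://[a-z0-9][a-z0-9.-]{1,61}[a-z0-9](/.*)?$")


def validate_params(table_name, output_s3_loc, read_throughput_ratio, region):
    """Return a list of problems found in the pipeline parameter values."""
    problems = []
    if not table_name:
        problems.append("myDDBTableName must not be empty")
    if not S3_PATH.match(output_s3_loc):
        problems.append("myOutputS3Loc must look like s3://<bucket>/<prefix>/")
    # The ratio is a fraction of provisioned RCUs, so it must be in (0, 1].
    if not 0 < float(read_throughput_ratio) <= 1:
        problems.append("myDDBReadThroughputRatio must be in (0, 1]")
    # Loose check for region names like us-east-1.
    if not re.match(r"^[a-z]{2}-[a-z]+-\d$", region):
        problems.append("myDDBRegion does not look like an AWS region name")
    return problems
```

Running such a check before activating the pipeline costs nothing compared to discovering a malformed `myOutputS3Loc` after days of execution.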

Now that we are familiar with the parameters this pipeline needs, let's look at this log line:

02 May 2017 07:19:04,558 [INFO] (TaskRunnerService-df-0940986HJGYQM1ZJ8BN_@EmrClusterForBackup_2017-04-25T13:31:55-2) df-0940986HJGYQM1ZJ8BN amazonaws.datapipeline.taskrunner.LogMessageUtil: Returning tail errorMsg :    at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)

Although it did not bubble up at the ERROR log level, this is an error message from the Hadoop framework saying it could not recognize the output location of the Hadoop job. When your Data Pipeline submitted the task to Hadoop's TaskRunner, it evaluated the output location format and found it unsupported. This can mean several things:

  1. Your pipeline parameter myOutputS3Loc changed to an invalid value between runs.
  2. Your pipeline parameter myOutputS3Loc points to an S3 bucket that has since been deleted.
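The two failure modes above can be told apart with a small diagnostic. This is a hedged sketch: the bucket lookup is injected as a plain set so the logic stays self-contained and runnable without AWS credentials; in a real check you would replace `existing_buckets` with an S3 `HeadBucket` call.

```python
from urllib.parse import urlparse


def diagnose_output_loc(output_s3_loc, existing_buckets):
    """Classify why myOutputS3Loc might make the export job fail.

    existing_buckets stands in for a real S3 lookup (e.g. HeadBucket);
    it is injected here so the sketch runs without AWS access.
    """
    parsed = urlparse(output_s3_loc)
    if parsed.scheme != "s3" or not parsed.netloc:
        return "invalid value"   # failure mode 1: malformed S3 path
    if parsed.netloc not in existing_buckets:
        return "bucket missing"  # failure mode 2: bucket was deleted
    return "ok"
```

Either result other than "ok" would explain a `checkOutputSpecs` failure like the one in the stack trace.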

I suggest you inspect the myOutputS3Loc parameter and the value being passed, to make sure your MR job receives the correct input. You can also verify which parameters were submitted to the EMR task by inspecting the controller logs in the EMR console while the job is running.