Spark Structured Streaming with S3 fails

Date: 2017-10-04 13:26:50

Tags: apache-spark amazon-s3 spark-structured-streaming

I am running a Structured Streaming job on a Spark 2.2 cluster running on AWS, using an S3 bucket in eu-central-1 for checkpointing. Some commit actions on the workers appear to fail at random with the following error:

17/10/04 13:20:34 WARN TaskSetManager: Lost task 62.0 in stage 19.0 (TID 1946, 0.0.0.0, executor 0): java.lang.IllegalStateException: Error committing version 1 into HDFSStateStore[id=(op=0,part=62),dir=s3a://bucket/job/query/state/0/62]
at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.commit(HDFSBackedStateStoreProvider.scala:198)
at org.apache.spark.sql.execution.streaming.StateStoreSaveExec$$anonfun$doExecute$3$$anon$1.hasNext(statefulOperators.scala:230)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:99)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:97)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: XXXXXXXXXXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: abcdef==
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more

The job is submitted with the following options to allow access to the eu-central-1 bucket:

--packages org.apache.hadoop:hadoop-aws:2.7.4
--conf spark.hadoop.fs.s3a.endpoint=s3.eu-central-1.amazonaws.com
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
--conf spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
--conf spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
--conf spark.hadoop.fs.s3a.access.key=xxxxx
--conf spark.hadoop.fs.s3a.secret.key=xxxxx
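
For reference, the same S3A settings can also be applied when building the session; a minimal sketch in Scala, with the endpoint, keys, and app name as placeholders (note that enableV4 is a JVM system property and still has to be passed via extraJavaOptions):

import org.apache.spark.sql.SparkSession

// Minimal sketch of the equivalent programmatic configuration; the key
// values below are placeholders, just like in the submit options above.
val spark = SparkSession.builder()
  .appName("s3a-checkpoint-job")
  .config("spark.hadoop.fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", "xxxxx")
  .config("spark.hadoop.fs.s3a.secret.key", "xxxxx")
  .getOrCreate()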

I have already tried generating access keys without special characters and using instance policies; both had the same effect.

2 Answers:

Answer 0 (score: 2):

This happens often enough that the Hadoop team provide a troubleshooting guide.

But as Yuval says: committing straight into S3 is too dangerous, and it gets slower the more data you create; the risk of listing inconsistencies means that sometimes data gets lost, at least with the S3A in Apache Hadoop versions 2.6-2.8.
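
A common mitigation, sketched here under the assumption that the cluster has HDFS (or another consistent filesystem) available and with all paths as placeholders, is to keep the streaming state and checkpoint directory off S3:

// Sketch: checkpoint/state on HDFS, output data on S3.
// `events` stands for any streaming DataFrame; the paths are illustrative.
val query = events.writeStream
  .format("parquet")
  .option("path", "s3a://bucket/job/output")                 // data on S3
  .option("checkpointLocation", "hdfs:///checkpoints/query") // state on HDFS
  .outputMode("append")
  .start()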

Answer 1 (score: 0):

Your log says:

Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: XXXXXXXXXXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: abcdef==

That error indicates that the credentials are incorrect.
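
One quick way to confirm that outside of Spark is a standalone listing with the AWS Java SDK (the same com.amazonaws client family as in your stack trace); a hedged sketch, assuming aws-java-sdk-s3 1.11.x on the classpath, with the bucket name and keys as placeholders:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import scala.collection.JavaConverters._

// If this listing fails with the same 403 SignatureDoesNotMatch, the
// credentials or the signing region are wrong independently of Spark.
val s3 = AmazonS3ClientBuilder.standard()
  .withRegion("eu-central-1")
  .withCredentials(new AWSStaticCredentialsProvider(
    new BasicAWSCredentials("xxxxx", "xxxxx")))
  .build()

s3.listObjects("bucket").getObjectSummaries.asScala.foreach(o => println(o.getKey))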

For debugging purposes, check that:

1) The access key and secret key are valid

2) The bucket name is correct

3) Turn on logging in the CLI and compare it with the SDK's behaviour

4) Enable SDK logging as described here:

http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-logging.html

You need to provide the log4j jar and a sample log4j.properties file; a minimal example is sketched below.

For obtaining and verifying your AWS keys: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/get-aws-keys.html
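
A minimal example log4j.properties in the spirit of that logging guide (assuming log4j 1.x on the classpath; the wire logger is very verbose):

# Console appender for everything below
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c - %m%n

# AWS SDK request summaries and signing details
log4j.logger.com.amazonaws=DEBUG
log4j.logger.com.amazonaws.request=DEBUG
# Raw HTTP wire logging (very verbose)
log4j.logger.org.apache.http.wire=DEBUG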