S3 bucket missing with spark-redshift

Date: 2016-08-08 23:55:53

Tags: java apache-spark amazon-s3 amazon-redshift

I am trying to read data from Redshift using spark-redshift and I am running into the error below. I created the bucket in S3 and I am able to access it with sufficient credentials.
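For reference, a minimal sketch of the kind of read that triggers this (the JDBC URL, user, password, and table name are placeholders; the tempdir bucket and prefix are the ones that appear in the error context below):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class RedshiftReadSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("redshift-read");
            JavaSparkContext sc = new JavaSparkContext(conf);
            SQLContext sqlContext = new SQLContext(sc);

            // spark-redshift UNLOADs the query result into tempdir and then
            // reads the files back from S3, so this bucket must exist and be
            // reachable from both Redshift and the Spark cluster.
            DataFrame df = sqlContext.read()
                .format("com.databricks.spark.redshift")
                .option("url", "jdbc:redshift://HOST:5439/DB?user=USER&password=PASS")
                .option("dbtable", "MY_TABLE")
                .option("tempdir", "s3n://redshift-spark/s3Redshift")
                .load();

            df.show();
        }
    }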

java.sql.SQLException: Amazon Invalid operation: S3ServiceException:The specified bucket does not exist,Status 404,Error NoSuchBucket,Rid AA6E01BF9BCED7ED,ExtRid 7TQKPoWU5lMdJ9av3E0Ehzdgg+e0yRrNYaB5Q+WCef0JPm134XHeiSNk1mx4cdzp,CanRetry 1
Details:

error: S3ServiceException:The specified bucket does not exist,Status 404,Error NoSuchBucket,Rid AA6E01BF9BCED7ED,ExtRid 7TQKPoWU5lMdJ9av3E0Ehzdgg+e0yRrNYaB5Q+WCef0JPm134XHeiSNk1mx4cdzp,CanRetry 1
code: 8001
context: Listing bucket=redshift-spark.s3.amazonaws.com prefix=s3Redshift/3a312209-7d6d-4d6b-bbd4-c1a70b2e136b/
query: 0
location: s3_unloader.cpp:200
process: padbmaster [pid=4952]
-----------------------------------------------;
at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830)
at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:804)
at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:642)
at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312)
at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1062)
at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1030)
at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2417)
at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2358)
at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1358)
at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370)
at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245)
at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:101)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:101)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:119)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

I did create the bucket in S3.

2 Answers:

Answer 0 (score: 1)

The problem was a version conflict between spark-redshift and the amazonaws-sdk. Updating the pom resolved it.

Updated pom.xml:

    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk</artifactId>
        <version>1.10.22</version>
        <!--<version>1.7.4</version>-->
    </dependency>
    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-redshift_2.10</artifactId>
        <version>0.6.0</version>
    </dependency>
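If it is unclear which SDK version actually ends up on the classpath, the resolved versions can be inspected with Maven itself (a standard Maven command, nothing specific to spark-redshift):

    mvn dependency:tree -Dincludes=com.amazonaws

Any older aws-java-sdk pulled in transitively will show up there and can then be pinned or excluded.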

Answer 1 (score: 0)

Have you seen https://github.com/databricks/spark-redshift/issues/176?

This is most likely because the bucket and the cluster are in different regions.
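One way to check is to compare the bucket's region with the cluster's, e.g. with the standard AWS CLI (the bucket name here is taken from the error context above):

    aws s3api get-bucket-location --bucket redshift-spark

A null LocationConstraint means us-east-1; if the result does not match the Redshift cluster's region, the UNLOAD to S3 can fail.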