PySpark save to S3

Asked: 2017-02-06 14:46:59

Tags: apache-spark amazon-s3 pyspark spark-dataframe

I am trying to save a large file to an Amazon S3 bucket. The following code works fine:

sqlContext.createDataFrame([('1', '4'), ('2', '5'), ('3', '6')], ["A", "B"]).select('A').repartition(1).write \
    .format("text") \
    .mode("overwrite") \
    .option("header", "false") \
    .option("codec", "gzip") \
    .save("s3n://BUCKETNAME/temp.txt")

Saving my full dataframe, however, fails with the following error in my notebook:

Py4JJavaError: An error occurred while calling o1274.save.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:156)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Date
    at org.jets3t.service.model.StorageObject.getLastModifiedDate(StorageObject.java:376)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:176)

In the Spark application UI, the job is reported as successful.

I have the following configuration set:

sc._jsc.hadoopConfiguration().set("fs.s3n.multipart.uploads.enabled", "true")
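
For context, a minimal sketch of how the rest of the s3n setup around that setting might look; the credential keys below are illustrative assumptions, not something shown in the question:

hc = sc._jsc.hadoopConfiguration()
hc.set("fs.s3n.multipart.uploads.enabled", "true")
# Assumed: s3n credentials are usually supplied like this (placeholders)
hc.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
hc.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")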

Attempting to debug, I tried the following, which works as it should...

sqlContext.createDataFrame(full_df.select('columnA').take(5),['columnA']).select('columnA').repartition(1).write \
    .format("text") \
    .mode("overwrite") \
    .option("header", "false") \
    .option("codec", "gzip") \
    .save("s3n://BUCKETNAME/temp.txt")

I found the following link, which seems to be about this issue, but I could not find a working package: Jets3t

Can anyone help with this mysterious error?

1 Answer:

Answer 0 (score: 0)

Switch from s3n to s3a, using the Hadoop 2.7 JARs. S3n is past its prime; it is kept around only to avoid regressions.
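
A minimal sketch of that switch, assuming the matching hadoop-aws and aws-java-sdk JARs are on the Spark classpath; the credential values and bucket name are placeholders:

# Configure the s3a filesystem (keys are placeholders)
hc = sc._jsc.hadoopConfiguration()
hc.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
hc.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

# Same write as before, but through the s3a:// scheme instead of s3n://
full_df.select('columnA').repartition(1).write \
    .format("text") \
    .mode("overwrite") \
    .option("codec", "gzip") \
    .save("s3a://BUCKETNAME/temp.txt")

Because s3a uses the AWS SDK rather than Jets3t, it sidesteps the Jets3t ClassCastException seen in the stack trace above.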