How to correctly read a .csv file from S3 with Spark? - Could not read footer for file

Date: 2018-06-08 13:24:14

Tags: apache-spark amazon-s3 pyspark

We are trying to read a .csv file in S3 with Spark, but we get this error:

py4j.protocol.Py4JJavaError: An error occurred while calling o32.load.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.50.94.133, executor 0): java.io.IOException: Could not read footer for file: FileStatus{path=s3a://edl-dfs-sas-cecl-dev/output/dev/dev10/h2o/extend_subset.csv; isDirectory=false; length=897466691973; replication=0; blocksize=0; modification_time=0; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false}

What can be done to avoid this error?
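One hedged observation: "Could not read footer for file" comes from Spark's Parquet reader, so a likely cause is that the .csv file is being loaded without an explicit format and Spark is treating it as Parquet. A minimal sketch of the fix, assuming an existing `SparkSession` named `spark`; the helper names (`build_csv_read_options`, `read_csv`) are hypothetical and only for illustration:

```python
# Sketch: force Spark's CSV data source instead of the default Parquet
# reader, which is what tries (and fails) to read a file "footer".

def build_csv_read_options(header=True, infer_schema=False):
    """Options for Spark's CSV data source (values as Spark-style strings)."""
    return {
        "header": str(header).lower(),             # first line holds column names
        "inferSchema": str(infer_schema).lower(),  # scan the data to guess types
    }

def read_csv(spark, path, **kwargs):
    """Load `path` with the CSV reader (hypothetical helper).

    `spark` is a live SparkSession; `path` is e.g. an s3a:// URI.
    """
    reader = spark.read.format("csv")  # explicit format: no Parquet footer read
    for key, value in build_csv_read_options(**kwargs).items():
        reader = reader.option(key, value)
    return reader.load(path)

# Usage (with a running SparkSession):
# df = read_csv(spark,
#               "s3a://edl-dfs-sas-cecl-dev/output/dev/dev10/h2o/extend_subset.csv",
#               header=True)
```

The equivalent inline call would be `spark.read.format("csv").option("header", "true").load(path)`; the key point is naming the format explicitly.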

1 Answer:

Answer 0 (score: 0)

I was able to read it without problems from the pyspark shell in Spark 2.2. Check the screenshot.

I could not reproduce this issue.