AWS EMR Spark: error writing to S3 - IllegalArgumentException - Can not create a Path from an empty string

Asked: 2017-07-08 23:08:41

Tags: amazon-web-services apache-spark amazon-s3 amazon-emr

I have been trying to figure this out for a long time and can't see why it happens. FYI, I'm running Spark on an AWS EMR cluster. I debugged and can clearly see the destination path being passed in... something like s3://my-bucket-name/. The Spark job creates ORC files and writes them out after creating partitions that look like date=2017-06-10. Any ideas?

17/07/08 22:48:31 ERROR ApplicationMaster: User class threw exception: java.lang.IllegalArgumentException: Can not create a Path from an empty string
java.lang.IllegalArgumentException: Can not create a Path from an empty string
    at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
    at org.apache.hadoop.fs.Path.<init>(Path.java:134)
    at org.apache.hadoop.fs.Path.<init>(Path.java:93)
    at org.apache.hadoop.fs.Path.suffix(Path.java:361)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.deleteMatchingPartitions(InsertIntoHadoopFsRelationCommand.scala:138)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:82)
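
Reading the trace: Path.suffix appends to the last path component of the output path, and a bucket root like s3://my-bucket-name/ has an empty name component, which seems to be how an empty-string Path ends up being constructed. A minimal sketch of that behavior, assuming Hadoop's Path semantics:

import org.apache.hadoop.fs.Path

val root = new Path("s3://my-bucket-name/")
root.getName    // "" -- a bucket root has no name component
root.suffix("") // throws IllegalArgumentException: Can not create a Path from an empty string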

The code that writes the ORC:

// SaveMode is org.apache.spark.sql.SaveMode; partition, destination,
// and ZLIB are defined elsewhere in the job.
dataframe.write
   .partitionBy(partition)                // yields directories like date=2017-06-10
   .option("compression", ZLIB.toString)  // ORC zlib compression
   .mode(SaveMode.Overwrite)
   .orc(destination)                      // destination is s3://my-bucket-name/

1 Answer:

Answer 0 (score: 1):

I had a similar problem when writing Parquet files to S3. The problem is SaveMode.Overwrite: in that mode Spark first deletes any matching output (the deleteMatchingPartitions call in your stack trace), and that step does not appear to work correctly against S3. Try deleting all the data in the S3 bucket my-bucket-name before writing; your code should then run successfully.

To delete all files in the bucket my-bucket-name, you can use the following PySpark code:

# see https://www.quora.com/How-do-you-overwrite-the-output-directory-when-using-PySpark
# Grab the JVM-side Hadoop classes through the Py4J gateway.
URI = sc._gateway.jvm.java.net.URI
Path = sc._gateway.jvm.org.apache.hadoop.fs.Path
FileSystem = sc._gateway.jvm.org.apache.hadoop.fs.FileSystem

# see http://crazyslate.com/how-to-rename-hadoop-files-using-wildcards-while-patterns/
# Bind a FileSystem to the bucket, then recursively delete every top-level entry.
# WARNING: this removes everything under s3a://my-bucket-name.
fs = FileSystem.get(URI("s3a://my-bucket-name"), sc._jsc.hadoopConfiguration())
file_status = fs.globStatus(Path("/*"))
for status in file_status:
    fs.delete(status.getPath(), True)
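
Since the job in the question is written in Scala rather than PySpark, a rough equivalent of the same cleanup might look like the sketch below (untested; assumes a SparkSession named spark and the same s3a filesystem):

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Bind a FileSystem to the bucket root; `spark` is assumed to be the active SparkSession.
val fs = FileSystem.get(new URI("s3a://my-bucket-name"), spark.sparkContext.hadoopConfiguration)

// globStatus returns null when nothing matches, hence the Option guard.
// WARNING: this recursively deletes every top-level entry in the bucket.
Option(fs.globStatus(new Path("/*"))).getOrElse(Array.empty).foreach { status =>
  fs.delete(status.getPath, true)
}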