Deleted Google Cloud Storage directory shows as "already exists" when calling Spark DataFrame.saveAsParquetFile()

Asked: 2015-07-10 17:30:12

Tags: google-cloud-storage google-hadoop

I deleted a Google Cloud Storage directory through the Google Cloud Console (the directory had been generated by an earlier Spark (ver 1.3.1) job). When I rerun the job, it always fails as if the directory were still there, yet I cannot find the directory with gsutil.

Is this a bug, or am I missing something? Thanks!
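
For reference, this is roughly how I verified that the directory is gone (same path as in the error below); gsutil reports no matching objects:

# hypothetical check: list the output path the job complains about
gsutil ls gs://<my_bucket>/job_dir1/output_1.parquet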

The error I get:

java.lang.RuntimeException: path gs://<my_bucket>/job_dir1/output_1.parquet already exists.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:112)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:240)
at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1196)
at org.apache.spark.sql.DataFrame.saveAsParquetFile(DataFrame.scala:995)
at com.xxx.Job1$.execute(Job1.scala:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

1 Answer:

Answer 0 (score: 1):

It looks like you may be hitting a known bug in the NFS list-consistency cache: https://github.com/GoogleCloudPlatform/bigdata-interop/issues/5

It has been fixed in the latest release, so upgrading by deploying a new cluster with bdutil-1.3.1 (announced here: https://groups.google.com/forum/#!topic/gcp-hadoop-announce/vstNuV0LpDc) should resolve the problem. If you need to upgrade in place, you can try downloading the latest gcs-connector-1.4.1 jar onto your master and worker nodes, replacing the file at /home/hadoop/hadoop-install/lib/gcs-connector-*.jar, and then restarting the Spark daemons:

sudo sudo -u hadoop /home/hadoop/spark-install/sbin/stop-all.sh
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/start-all.sh
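
If you go the in-place route, a quick sanity check (just a sketch, using the jar path mentioned above) is to confirm on the master and each worker that the 1.4.1 jar is the one on disk after swapping it in:

# assumption: the connector jar lives at the path given above
ls -l /home/hadoop/hadoop-install/lib/gcs-connector-*.jar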