I'm trying to save my DataFrame to S3 like this:
myDF.write.format("com.databricks.spark.csv").options(codec="org.apache.hadoop.io.compress.GzipCodec").save("s3n://myPath/myData.csv")
Then I got this error:
<console>:132: error: overloaded method value options with alternatives:
(options: java.util.Map[String,String])org.apache.spark.sql.DataFrameWriter <and>
(options: scala.collection.Map[String,String])org.apache.spark.sql.DataFrameWriter
cannot be applied to (codec: String)
Does anyone know what I'm missing? Thanks!
Answer 0 (score: 5)
Scala is not Python; it has no **kwargs. You have to provide the options as a Map:
myDF.write.format("com.databricks.spark.csv")
.options(Map("codec" -> "org.apache.hadoop.io.compress.GzipCodec"))
.save("s3n://myPath/myData.csv")
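To see why the original call fails, here is a minimal sketch in plain Scala (no Spark dependency) of the same pattern: an `options` method overloaded on `Map`, like `DataFrameWriter.options`, accepts a `Map[String, String]` but not Python-style named arguments. The `Writer` class and `header` option below are hypothetical, for illustration only.

```scala
// A stand-in for DataFrameWriter: options are accumulated as a Map,
// mirroring the overload `options(options: scala.collection.Map[String,String])`.
case class Writer(opts: Map[String, String] = Map.empty) {
  def options(extra: Map[String, String]): Writer = Writer(opts ++ extra)
}

// This compiles: the named settings are wrapped in a Map literal.
val writer = Writer().options(Map(
  "codec"  -> "org.apache.hadoop.io.compress.GzipCodec",
  "header" -> "true" // hypothetical second option, just to show multiple keys
))

println(writer.opts("codec"))
// Writer().options(codec = "...") would NOT compile: there is no parameter
// named `codec`, which is exactly the "cannot be applied to (codec: String)" error.
```

Note that for a single key/value pair, Spark's `DataFrameWriter` also offers `option(key, value)` (singular), which avoids constructing a Map at all.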