I have referred to the following link to learn how to export a Spark SQL dataframe in Python. My code:
df = sqlContext.createDataFrame(routeRDD, ['Consigner', 'AverageScore', 'Trips'])
df.select('Consigner', 'AverageScore', 'Trips').write.format('com.databricks.spark.csv').options(header='true').save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv')
I load the job with spark-submit, passing the following jars on the master URL:
spark-csv_2.11-1.5.0.jar, commons-csv-1.4.jar
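For reference, a spark-submit invocation of this shape would look roughly like the following (the master URL, jar paths, and script name are placeholders, not taken from the original post):

spark-submit --master spark://<master-host>:7077 \
  --jars /path/to/spark-csv_2.11-1.5.0.jar,/path/to/commons-csv-1.4.jar \
  your_job.py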
I get the following error:
df.select('Consigner', 'AverageScore', 'Trips').write.format('com.databricks.spark.csv').options(header='true').save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv')
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 332, in save
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 36, in deco
File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o156.save.
: java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
at com.databricks.spark.csv.util.CompressionCodecs$.<init>(CompressionCodecs.scala:29)
at com.databricks.spark.csv.util.CompressionCodecs$.<clinit>(CompressionCodecs.scala)
at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:198)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:170)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
Answer 0 (score: 3)
Spark version 1.5.0-cdh5.5.1 is built with Scala 2.10, the default Scala version for Spark < 2.0. Your spark-csv, however, is built with Scala 2.11 - spark-csv_2.11-1.5.0.jar.
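If you are unsure which Scala version your Spark build uses, one way to check from PySpark is to ask the JVM through the Py4J gateway. A quick diagnostic sketch, assuming an active SparkContext named sc (the _jvm attribute is a PySpark internal, so treat this as a convenience rather than a stable API):

# Scala version of the JVM backing this SparkContext, e.g. "version 2.10.4"
print(sc._jvm.scala.util.Properties.versionString())
# Spark version, e.g. "1.5.0"
print(sc.version)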
Please update spark-csv to a version built with Scala 2.10, or update Spark to Scala 2.11. You can tell the Scala version from the number after the artifactId: spark-csv_2.10-1.5.0 is built for Scala 2.10.
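With the CDH 5.5.1 build above, that means submitting the Scala 2.10 artifact instead, for example (jar paths and script name are again placeholders):

spark-submit --master spark://<master-host>:7077 \
  --jars /path/to/spark-csv_2.10-1.5.0.jar,/path/to/commons-csv-1.4.jar \
  your_job.py

Alternatively, spark-submit can resolve the dependency from Maven for you with --packages com.databricks:spark-csv_2.10:1.5.0, which avoids downloading jars by hand.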
Answer 1 (score: 1)
I run Spark on Windows and hit a similar problem: I could not write files (CSV or Parquet). After reading further on the Spark site, I found the error was caused by the winutils version I was using. I changed it to the 64-bit build and it worked. Hope this helps someone. Spark Log
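For anyone hitting the winutils issue: the usual fix is to point HADOOP_HOME at a directory whose bin folder contains a winutils.exe matching your JVM's bitness (64-bit for a 64-bit JVM), before the SparkContext is created. A minimal sketch, assuming winutils.exe has been placed under C:\hadoop\bin (that path is an assumption, not from the original answer):

import os

# Directory containing bin\winutils.exe; must be set before SparkContext starts
os.environ["HADOOP_HOME"] = "C:\\hadoop"
# Make winutils.exe itself discoverable on the PATH
os.environ["PATH"] += os.pathsep + "C:\\hadoop\\bin"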