How to write a CSV file in Apache Spark using SparkR?

Date: 2016-06-10 12:59:23

Tags: csv apache-spark sparkr file-writing

I can load data successfully with the following commands:
sc <- sparkR.init(master = 'local', sparkPackages = 'com.databricks:spark-csv_2.11:1.4.0')
sqlContext <- sparkRSQL.init(sc)
ss <- read.df(sqlContext, '/home/anmol/Downloads/Rgraphics/dataSets/states.csv', source = "com.databricks.spark.csv", inferSchema = "true")
head(ss)

I tried the following command:

write.df(ss, '/home/anmol/faithfull.csv', source = 'com.databricks.spark.csv', mode = 'overwrite')

but it fails with the following error:

16/06/10 18:28:26 ERROR RBackendHandler: save on 261 failed
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :
  java.lang.NoClassDefFoundError: Could not initialize class com.databricks.spark.csv.util.CompressionCodecs$
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:189)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
        at org.apache.spark.sql.DataFrame.save(DataFrame.scala:2027)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:86)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:38)
        at io.netty.channel. ...

1 Answer:

Answer 0 (score: 0)

The problem was the Scala version my Apache Spark build was compiled against: it was 2.10, so I used

sc <- sparkR.init(master = 'local', sparkPackages = 'com.databricks:spark-csv_2.10:1.4.0')

You can check yours by launching spark-shell; it prints the Scala version during startup.
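
For completeness, here is a minimal end-to-end sketch of the fixed session (assuming the Spark 1.x SparkR API and the paths from the question). The key point is that the _2.10 suffix in the package coordinate must match the Scala version your Spark build was compiled against:

library(SparkR)

# Start SparkR with a spark-csv artifact built for Scala 2.10
sc <- sparkR.init(master = 'local', sparkPackages = 'com.databricks:spark-csv_2.10:1.4.0')
sqlContext <- sparkRSQL.init(sc)

# Load the CSV through the spark-csv data source
ss <- read.df(sqlContext, '/home/anmol/Downloads/Rgraphics/dataSets/states.csv',
              source = 'com.databricks.spark.csv', inferSchema = 'true')

# Write it back out; mode = 'overwrite' replaces any existing output.
# Note: Spark writes a directory of part files, not a single CSV file.
write.df(ss, path = '/home/anmol/faithfull.csv',
         source = 'com.databricks.spark.csv', mode = 'overwrite')

sparkR.stop()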