How to convert an RDD[case class] to a CSV file using Scala?

Time: 2019-06-28 06:43:42

Tags: scala csv apache-spark

I have an RDD[case class] that I want to convert to a CSV file. I am using Spark 1.6 and Scala 2.10.5.

stationDetails.toDF.coalesce(1).write.format("com.databricks.spark.csv").save("data/myData.csv")

gives the error:

Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:219)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)

I have not been able to add a dependency on "com.databricks.spark.csv" in my build.sbt file.

The dependencies I have added to my build.sbt file are:

libraryDependencies ++= Seq(
  "org.apache.commons" % "commons-csv" % "1.1",
  "com.univocity" % "univocity-parsers" % "1.5.1",
  "org.slf4j" % "slf4j-api" % "1.7.5" % "provided",
  "org.scalatest" %% "scalatest" % "2.2.1" % "test",
  "com.novocode" % "junit-interface" % "0.9" % "test"
)

I have also tried

stationDetails.toDF.coalesce(1).write.csv("data/myData.csv")

but got the error that csv cannot be resolved (the built-in csv writer is only available from Spark 2.0 onwards, so it does not exist in Spark 1.6).

1 Answer:

Answer 0 (score: 0)

Please change your build.sbt to the following:

libraryDependencies ++= Seq(
  "org.apache.commons" % "commons-csv" % "1.1",
  "com.databricks" %% "spark-csv" % "1.4.0",
  "com.univocity" % "univocity-parsers" % "1.5.1",
  "org.slf4j" % "slf4j-api" % "1.7.5" % "provided",
  "org.scalatest" %% "scalatest" % "2.2.1" % "test",
  "com.novocode" % "junit-interface" % "0.9" % "test"
)
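
With the spark-csv dependency on the classpath, the original "com.databricks.spark.csv" data source should resolve. Below is a minimal sketch of the whole flow on Spark 1.6, using a hypothetical Station case class in place of the asker's actual type behind stationDetails:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical case class standing in for the asker's real record type.
case class Station(id: Int, name: String)

object WriteCsvExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WriteCsvExample").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Sample data; in the question this is the existing stationDetails RDD.
    val stationDetails = sc.parallelize(Seq(Station(1, "Alpha"), Station(2, "Beta")))

    // With spark-csv 1.4.0 on the classpath, this data source now resolves.
    stationDetails.toDF()
      .coalesce(1)
      .write
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .save("data/myData.csv")

    sc.stop()
  }
}

Note that the save path is treated as a directory, so Spark writes part files under data/myData.csv; the header option is optional.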