Spark: how to save an array as a two-column CSV?

Asked: 2015-11-16 13:40:34

Tags: arrays csv apache-spark

I have an array of predictions and labels from a logistic regression, which looks like this:

labelAndPreds: org.apache.spark.rdd.RDD[(Double, Double)] =  
MapPartitionsRDD[517] at map at <console>:52

scala> labelAndPreds.collect()
res2: Array[(Double, Double)] = Array((0.004106564139257318, 0.0), 
(0.3641478408865635, 0.0), (0.9999258409695498, 1.0), (0.342287288060...

How can I save it to local disk in CSV format, with two columns (one for the labels and one for the predictions)?

1 answer:

Answer 0 (score: 2)

You can use spark-csv:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Convert the RDD[(Double, Double)] into a DataFrame with named columns
val df = labelAndPreds.toDF("labels", "predictions")

df.write
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .save("labelsAndPreds.csv")
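
If spark-csv is not available, a plain-RDD alternative is to format each pair as a comma-separated line and call saveAsTextFile. This is a sketch, assuming the first tuple element is the label (matching the column order in the answer above); the output path name is illustrative. Note that Spark writes a directory of part files rather than a single CSV file; coalesce(1) collapses the output to one part file at the cost of routing all data through one partition.

// Format each (label, prediction) pair as one CSV line
labelAndPreds
    .map { case (label, pred) => s"$label,$pred" }
    .coalesce(1)  // optional: produce a single part file
    .saveAsTextFile("labelsAndPreds_csv")

The header row would need to be added manually with this approach, which is one reason to prefer the spark-csv route when a header is required.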