I got an array of predictions and labels from a logistic regression, like this:
labelAndPreds: org.apache.spark.rdd.RDD[(Double, Double)] =
MapPartitionsRDD[517] at map at <console>:52
scala> labelAndPreds.collect()
res2: Array[(Double, Double)] = Array((0.004106564139257318, 0.0),
(0.3641478408865635, 0.0), (0.9999258409695498, 1.0), (0.342287288060...
How can I save it to local disk in CSV format, with two columns (one for the labels and one for the predictions)?
Answer (score: 2)
You can use spark-csv:
import org.apache.spark.sql.SQLContext

// The implicits (including rddToDataFrameHolder, which provides toDF)
// live on an SQLContext *instance*, not on the SQLContext object.
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

val df = labelAndPreds.toDF("labels", "predictions")
df.write
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .save("labelsAndPreds.csv")
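If you would rather avoid the spark-csv dependency, you can format each tuple yourself and use `RDD.saveAsTextFile` (note it writes a directory of part files, not a single CSV file). The sketch below shows the formatting step in plain Scala on a local `Seq`, so it runs without a Spark cluster; the object name `CsvSketch` and the helper `toCsvLines` are illustrative, not part of any Spark API. On the real RDD the equivalent would be `labelAndPreds.map { case (pred, label) => s"$label,$pred" }.saveAsTextFile("labelsAndPreds")`.

```scala
object CsvSketch {
  // Turn (prediction, label) pairs into CSV lines, label column first,
  // with a header row prepended.
  def toCsvLines(pairs: Seq[(Double, Double)], header: String): Seq[String] =
    header +: pairs.map { case (pred, label) => s"$label,$pred" }

  def main(args: Array[String]): Unit = {
    // Sample values taken from the collect() output in the question.
    val sample = Seq((0.004106564139257318, 0.0), (0.9999258409695498, 1.0))
    toCsvLines(sample, "labels,predictions").foreach(println)
  }
}
```

Because `saveAsTextFile` produces `part-00000`, `part-00001`, … under the target directory, coalesce to one partition first (`labelAndPreds.coalesce(1)`) if you need a single output file.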