Converting a Spark DataFrame to org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]

Asked: 2017-02-20 16:47:49

Tags: scala apache-spark apache-spark-sql rdd apache-spark-mllib

I am very new to Scala and Spark 2.1. I am trying to compute the correlations between many elements in a dataframe that looks like this:

item_1 | item_2 | item_3 | item_4
     1 |      1 |      4 |      3
     2 |      0 |      2 |      0
     0 |      2 |      0 |      1

Here is what I have tried:

val df = sqlContext.createDataFrame(
  Seq((1, 1, 4, 3),
      (2, 0, 2, 0),
      (0, 2, 0, 1))
).toDF("item_1", "item_2", "item_3", "item_4")


val items = df.select(array(df.columns.map(col(_)): _*)).rdd.map(_.getSeq[Double](0))

and to compute the correlations between the elements:

val correlMatrix: Matrix = Statistics.corr(items, "pearson")

This gives the following error message:

<console>:89: error: type mismatch;
found   : org.apache.spark.rdd.RDD[Seq[Double]]
 required: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]
       val correlMatrix: Matrix = Statistics.corr(items, "pearson")

I don't know how to create an org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] from a dataframe.

This is probably something very easy, but I am struggling with it and would be happy about any suggestions.

2 Answers:

Answer 0 (score: 5)

You can use VectorAssembler, for example. Assemble the vectors and convert to an RDD:

import org.apache.spark.ml.feature.VectorAssembler

val rows = new VectorAssembler().setInputCols(df.columns).setOutputCol("vs")
  .transform(df)
  .select("vs")
  .rdd

Extract the Vector from each Row:

  • Spark 1.x:

    rows.map(_.getAs[org.apache.spark.mllib.linalg.Vector](0))
    
  • Spark 2.x:

    rows
      .map(_.getAs[org.apache.spark.ml.linalg.Vector](0))
      .map(org.apache.spark.mllib.linalg.Vectors.fromML)
    
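Putting these pieces together, a minimal end-to-end sketch for Spark 2.x might look like the following (assuming an existing SparkSession named `spark`; this mirrors the steps above rather than being the only way to do it):

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.mllib.linalg.{Matrix, Vectors}
import org.apache.spark.mllib.stat.Statistics
import spark.implicits._

val df = Seq((1, 1, 4, 3), (2, 0, 2, 0), (0, 2, 0, 1))
  .toDF("item_1", "item_2", "item_3", "item_4")

// Assemble all columns into a single ml Vector column "vs"
val rows = new VectorAssembler()
  .setInputCols(df.columns)
  .setOutputCol("vs")
  .transform(df)
  .select("vs")
  .rdd

// Statistics.corr expects the old mllib Vector type, so convert
// each ml Vector with Vectors.fromML
val items = rows
  .map(_.getAs[org.apache.spark.ml.linalg.Vector](0))
  .map(Vectors.fromML)

val correlMatrix: Matrix = Statistics.corr(items, "pearson")
```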

Regarding your code:

  • Your columns are Integer, not Double
  • The data is not an array, so you cannot use _.getSeq[Double](0)
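If you would rather stay close to the original approach, a sketch that addresses both points is to cast every column to Double first and then build a dense mllib Vector per row directly, without VectorAssembler (assuming `df` is the dataframe from the question):

```scala
import org.apache.spark.mllib.linalg.{Matrix, Vector, Vectors}
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.sql.functions.col

// Cast every column to Double, then turn each Row into a dense Vector
val items: org.apache.spark.rdd.RDD[Vector] = df
  .select(df.columns.map(c => col(c).cast("double")): _*)
  .rdd
  .map(row => Vectors.dense(row.toSeq.map(_.asInstanceOf[Double]).toArray))

val correlMatrix: Matrix = Statistics.corr(items, "pearson")
```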

Answer 1 (score: 2)

If your goal is to perform a Pearson correlation, you don't have to use RDDs and Vectors at all. Here is an example of performing a Pearson correlation directly on DataFrame columns (the columns in question are of type Double).

Code:

import org.apache.spark.sql.{SQLContext, Row, DataFrame}
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType, DoubleType}
import org.apache.spark.sql.functions._


val rb = spark.read
  .option("delimiter", "|")
  .option("header", "false")
  .option("inferSchema", "true")
  .format("csv")
  .load("rb.csv")
  .toDF("name", "beerId", "brewerId", "abv", "style", "appearance",
        "aroma", "palate", "taste", "overall", "time", "reviewer")
  .cache()

rb.agg(
    corr("overall","taste"),
    corr("overall","aroma"),
    corr("overall","palate"),
    corr("overall","appearance"),
    corr("overall","abv")
    ).show()

In this example, I import a dataframe (with a custom delimiter, no header, and inferred data types), then simply call the agg function on the dataframe with multiple correlations inside it.



Output:

+--------------------+--------------------+---------------------+-------------------------+------------------+
|corr(overall, taste)|corr(overall, aroma)|corr(overall, palate)|corr(overall, appearance)|corr(overall, abv)|
+--------------------+--------------------+---------------------+-------------------------+------------------+
|  0.8762432795943761|   0.789023067942876|   0.7008942639550395|       0.5663593891357243|0.3539158620897098|
+--------------------+--------------------+---------------------+-------------------------+------------------+

As you can see from the results, the (overall, taste) columns are highly correlated, while (overall, abv) much less so.
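The same column-wise approach applies to the toy dataframe from the question. As a sketch (assuming `df` with columns item_1..item_4 as defined there; the `corr` SQL function handles the cast from Integer internally):

```scala
import org.apache.spark.sql.functions.corr

df.agg(
  corr("item_1", "item_2"),
  corr("item_1", "item_3"),
  corr("item_1", "item_4")
).show()
```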

Here is a link to the Scala Docs DataFrame page, which documents the aggregation correlation function.