Spark: matching columns of two dataframes

Date: 2016-04-14 06:13:48

Tags: apache-spark dataframe apache-spark-sql

I have a dataframe in the following format:

+---+---+------+---+
| sp|sp2|colour|sp3|
+---+---+------+---+
|  0|  1|     1|  0|
|  1|  0|     0|  1|
|  0|  0|     1|  0|
+---+---+------+---+

Another dataframe contains the coefficients for each column of the first dataframe, for example:

+------+------+---------+------+
| CE_sp|CE_sp2|CE_colour|CE_sp3|
+------+------+---------+------+
|  0.94|  0.31|     0.11|  0.72|
+------+------+---------+------+

Now I want to add a column to the first dataframe whose value is computed by multiplying each column by its coefficient from the second dataframe and summing the results, like this:

+---+---+------+---+-----+
| sp|sp2|colour|sp3|Score|
+---+---+------+---+-----+
|  0|  1|     1|  0| 0.42|
|  1|  0|     0|  1| 1.66|
|  0|  0|     1|  0| 0.11|
+---+---+------+---+-----+

Here r is a row of the first dataframe, and
score = r(0)*CE_sp + r(1)*CE_sp2 + r(2)*CE_colour + r(3)*CE_sp3
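For example, the second row gives score = 1*0.94 + 0*0.31 + 0*0.11 + 1*0.72 = 1.66.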

There can be n columns, and the column order can differ.

Thanks in advance!

3 Answers:

Answer 0 (score: 4)

Quick and simple:

import org.apache.spark.sql.functions.col
// toDF on a local Seq needs the SQL implicits in scope
// (automatic in spark-shell; otherwise import sqlContext.implicits._)

val df = Seq(
  (0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)
).toDF("sp","sp2", "colour", "sp3")

val coefs = Map("sp" -> 0.94, "sp2" -> 0.32, "colour" -> 0.11, "sp3" -> 0.72)
val score = df.columns.map(
  c => col(c) * coefs.getOrElse(c, 0.0)).reduce(_ + _)

df.withColumn("score", score)

The same thing in PySpark:

from pyspark.sql.functions import col

df = sc.parallelize([
    (0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)
]).toDF(["sp","sp2", "colour", "sp3"])

coefs = {"sp": 0.94, "sp2": 0.32, "colour": 0.11, "sp3": 0.72}
df.withColumn("score", sum(col(c) * coefs.get(c, 0) for c in df.columns))
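
The coefficients are hard-coded as a map above; since they actually sit in the one-row second dataframe, the map could also be derived from it. A minimal sketch in Scala, assuming that dataframe is called coeffsDF and its columns follow the CE_<name> convention from the question:

// Hypothetical coeffsDF: the one-row dataframe with columns CE_sp, CE_sp2, CE_colour, CE_sp3
val coefRow = coeffsDF.first()
val coefs = coeffsDF.columns.map { c =>
  c.stripPrefix("CE_") -> coefRow.getAs[Double](c)
}.toMap
// With the question's data: Map(sp -> 0.94, sp2 -> 0.31, colour -> 0.11, sp3 -> 0.72)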

Answer 1 (score: 1)

I believe there are plenty of ways to do what you want. In any case, you don't need that second DataFrame, as I said in the comments.

Here is one way:

import org.apache.spark.ml.feature.{ElementwiseProduct, VectorAssembler}
import org.apache.spark.mllib.linalg.{Vectors,Vector => MLVector}

val df = Seq((0, 1, 1, 0), (1, 0, 0, 1), (0, 0, 1, 0)).toDF("sp", "sp2", "colour", "sp3")

// Your coefficients are represented as a dense Vector
val coeffSp = 0.94
val coeffSp2 = 0.31
val coeffColour = 0.11
val coeffSp3 = 0.72

val weightVectors = Vectors.dense(Array(coeffSp, coeffSp2, coeffColour, coeffSp3))

// You can assemble the features with VectorAssembler
val assembler = new VectorAssembler()
  .setInputCols(df.columns) // since you need to compute on all your columns
  .setOutputCol("features")

// Once these features are assembled, we can perform an element-wise product with the weight vector
val output = assembler.transform(df)
val transformer = new ElementwiseProduct()
  .setScalingVec(weightVectors)
  .setInputCol("features")
  .setOutputCol("weightedFeatures")

// Create a UDF to sum the values of the weighted vector
import org.apache.spark.sql.functions.udf
def score = udf((score: MLVector) => { score.toDense.toArray.sum })

// Apply the UDF on the weightedFeatures
val scores = transformer.transform(output).withColumn("score",score('weightedFeatures))
scores.show
// +---+---+------+---+-----------------+-------------------+-----+
// | sp|sp2|colour|sp3|         features|   weightedFeatures|score|
// +---+---+------+---+-----------------+-------------------+-----+
// |  0|  1|     1|  0|[0.0,1.0,1.0,0.0]|[0.0,0.31,0.11,0.0]| 0.42|
// |  1|  0|     0|  1|[1.0,0.0,0.0,1.0]|[0.94,0.0,0.0,0.72]| 1.66|
// |  0|  0|     1|  0|    (4,[2],[1.0])|     (4,[2],[0.11])| 0.11|
// +---+---+------+---+-----------------+-------------------+-----+
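
To get back to the shape asked for in the question (the original columns plus just the score), the intermediate vector columns can simply be dropped afterwards:

// Keep only the original columns and the computed score
val result = scores.drop("features").drop("weightedFeatures")
result.show()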

I hope this helps. Don't hesitate to ask if you have more questions.

Answer 2 (score: 1)

Here is a simple solution:

scala> df_wght.show
+-----+------+---------+------+
|ce_sp|ce_sp2|ce_colour|ce_sp3|
+-----+------+---------+------+
|    1|     2|        3|     4|
+-----+------+---------+------+

scala> df.show
+---+---+------+---+
| sp|sp2|colour|sp3|
+---+---+------+---+
|  0|  1|     1|  0|
|  1|  0|     0|  1|
|  0|  0|     1|  0|
+---+---+------+---+

Then we can just do a simple cross join and compute the cross product:

val scored = df.join(df_wght).selectExpr("(sp*ce_sp + sp2*ce_sp2 + colour*ce_colour + sp3*ce_sp3) as final_score")

Output:

scala> scored.show
+-----------+                                                                   
|final_score|
+-----------+
|          5|
|          5|
|          3|
+-----------+
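
The hard-coded expression can also be generated from the column names, which covers the "there can be n columns, in any order" requirement and keeps the original columns in the result. A sketch, under the same ce_<name> naming assumption:

// Build "sp*ce_sp + sp2*ce_sp2 + ..." from the first dataframe's columns
val scoreExpr = df.columns.map(c => s"$c*ce_$c").mkString(" + ")
val scoredAll = df.join(df_wght).selectExpr((df.columns :+ s"($scoreExpr) as Score"): _*)
scoredAll.show()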