Summing vector columns in Spark

Asked: 2019-02-14 19:18:21

Tags: scala apache-spark vector breeze

I have a dataframe with multiple columns containing vectors (the number of vector columns is dynamic). I need to create a new column that holds the sum of all the vector columns, and I'm having a hard time doing it. Here is the code that generates the sample dataset I'm testing with.

import org.apache.spark.ml.feature.VectorAssembler

val temp1 = spark.createDataFrame(Seq(
                                    (1,1.0,0.0,4.7,6,0.0),
                                    (2,1.0,0.0,6.8,6,0.0),
                                    (3,1.0,1.0,7.8,5,0.0),
                                    (4,0.0,1.0,4.1,7,0.0),
                                    (5,1.0,0.0,2.8,6,1.0),
                                    (6,1.0,1.0,6.1,5,0.0),
                                    (7,0.0,1.0,4.9,7,1.0),
                                    (8,1.0,0.0,7.3,6,0.0)))
                                    .toDF("id", "f1","f2","f3","f4","label")

val assembler1 = new VectorAssembler()
    .setInputCols(Array("f1","f2","f3"))
    .setOutputCol("vec1")

val temp2 = assembler1.setHandleInvalid("skip").transform(temp1)

val assembler2 = new VectorAssembler()
    .setInputCols(Array("f2","f3", "f4"))
    .setOutputCol("vec2")

val df = assembler2.setHandleInvalid("skip").transform(temp2)

This gives me the following dataset:

+---+---+---+---+---+-----+-------------+-------------+
| id| f1| f2| f3| f4|label|         vec1|         vec2|
+---+---+---+---+---+-----+-------------+-------------+
|  1|1.0|0.0|4.7|  6|  0.0|[1.0,0.0,4.7]|[0.0,4.7,6.0]|
|  2|1.0|0.0|6.8|  6|  0.0|[1.0,0.0,6.8]|[0.0,6.8,6.0]|
|  3|1.0|1.0|7.8|  5|  0.0|[1.0,1.0,7.8]|[1.0,7.8,5.0]|
|  4|0.0|1.0|4.1|  7|  0.0|[0.0,1.0,4.1]|[1.0,4.1,7.0]|
|  5|1.0|0.0|2.8|  6|  1.0|[1.0,0.0,2.8]|[0.0,2.8,6.0]|
|  6|1.0|1.0|6.1|  5|  0.0|[1.0,1.0,6.1]|[1.0,6.1,5.0]|
|  7|0.0|1.0|4.9|  7|  1.0|[0.0,1.0,4.9]|[1.0,4.9,7.0]|
|  8|1.0|0.0|7.3|  6|  0.0|[1.0,0.0,7.3]|[0.0,7.3,6.0]|
+---+---+---+---+---+-----+-------------+-------------+

If I needed to sum regular columns, I could do it with something like:

import org.apache.spark.sql.functions.col

// e.g. val namesOfColumnsToSum = Seq("f1", "f2", "f3")
df.withColumn("sum", namesOfColumnsToSum.map(col).reduce((c1, c2) => c1 + c2))

I know I can easily sum DenseVectors with the `+` operator:

import breeze.linalg._
val v1 = DenseVector(1,2,3)
val v2 = DenseVector(5,6,7)
v1+v2

The code above gives me the expected vector. But I'm not sure how to take the element-wise sum of the vector columns vec1 and vec2.
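For reference, the per-row operation such a sum has to perform can be sketched in plain Scala, without Spark or Breeze types (`sumVectors` is an illustrative helper, not a library API):

```scala
// Element-wise sum of any number of equal-length vectors,
// i.e. the operation a "sum of vector columns" must perform per row.
def sumVectors(vectors: Seq[Array[Double]]): Array[Double] =
  vectors.reduce((a, b) => a.zip(b).map { case (x, y) => x + y })

// Row 1 of the dataset above: vec1 = [1.0,0.0,4.7], vec2 = [0.0,4.7,6.0]
val total = sumVectors(Seq(Array(1.0, 0.0, 4.7), Array(0.0, 4.7, 6.0)))
println(total.mkString(","))
```

Inside Spark, this per-row logic would typically be wrapped in a UDF applied to the vector columns.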

I did try the suggestion mentioned here, but had no luck.

1 Answer:

Answer 0 (score: 1):

Here is my take on it, but coded in PySpark. Someone can probably help translate it to Scala:

from pyspark.ml.linalg import Vectors, VectorUDT
import numpy as np
from pyspark.sql.functions import udf, array

# Sum the list of vectors element-wise and return a Spark DenseVector
def vector_sum(arr):
    return Vectors.dense(np.sum(arr, axis=0))

vector_sum_udf = udf(vector_sum, VectorUDT())

df = df.withColumn('sum', vector_sum_udf(array(['vec1', 'vec2'])))
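The `np.sum(arr, axis=0)` call above collapses a list of row vectors column-by-column. As a hint for a Scala translation, the same axis-0 reduction can be sketched in plain Scala with `transpose` (illustrative only; in Spark this would again live inside a UDF over the vector columns):

```scala
// Mirror np.sum(arr, axis=0): transpose the rows, then sum each column.
val rows = Seq(Array(1.0, 0.0, 4.7), Array(0.0, 4.7, 6.0))
val axis0Sum = rows.transpose.map(_.sum)
println(axis0Sum.mkString(","))
```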