Appending a value to a DenseVector in PySpark

Date: 2017-10-04 03:34:51

Tags: python vector pyspark type-conversion

I have a DataFrame that I have already processed:

+---------+------+
| inputs  | temp |
+---------+------+
| [1,0,0] |  12  |
| [0,1,0] |  10  |
+---------+------+
...

inputs is a column of DenseVectors. temp is a column of values. I want to append these values to the DenseVectors and create a new column, but I don't know where to start. Any hints toward this desired output:

+--------------+
| inputsMerged |
+--------------+
| [1,0,0,12]   |
| [0,1,0,10]   |
+--------------+
...

Edit: I have tried the VectorAssembler approach, but the array it produces is not what I expected.

1 answer:

Answer 0 (score: 2)

You could do it like this:

df.show()
+-------------+----+
|       inputs|temp|
+-------------+----+
|[1.0,0.0,0.0]|  12|
|[0.0,1.0,0.0]|  10|
+-------------+----+

df.printSchema()
root
 |-- inputs: vector (nullable = true)
 |-- temp: long (nullable = true)

Imports:

import pyspark.sql.functions as F
from pyspark.ml.linalg import Vectors, VectorUDT

Create a udf to merge the Vector and the element:

concat = F.udf(lambda v, e: Vectors.dense(list(v) + [e]), VectorUDT())
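Per row, this udf just converts the DenseVector to a Python list, appends the scalar, and rebuilds a dense vector. Stripped of the Spark types, the row-level logic is a one-liner (the helper name `concat_row` here is only for illustration):

```python
def concat_row(vector_values, element):
    # Mimics Vectors.dense(list(v) + [e]): append the scalar to the
    # vector's values, coercing everything to float as DenseVector does.
    return [float(x) for x in vector_values] + [float(element)]

print(concat_row([1.0, 0.0, 0.0], 12))  # [1.0, 0.0, 0.0, 12.0]
```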

Apply the udf to the inputs and temp columns:

merged_df = df.select(concat(df.inputs, df.temp).alias('inputsMerged'))

merged_df.show()
+------------------+
|      inputsMerged|
+------------------+
|[1.0,0.0,0.0,12.0]|
|[0.0,1.0,0.0,10.0]|
+------------------+

merged_df.printSchema()
root
 |-- inputsMerged: vector (nullable = true)