How to split Vector into columns - using PySpark

Asked: 2016-07-14 21:12:04

Tags: python apache-spark pyspark apache-spark-sql apache-spark-ml

Context: I have a DataFrame with two columns, word and vector, where the type of the "vector" column is VectorUDT.

An example:

word    |  vector
assert  | [435,323,324,212...]

This is what I would like to get:

word   |  v1 | v2  | v3 | v4 | v5 | v6 ......
assert | 435 | 5435| 698| 356|....

Question:

How can I split a column that contains a vector into multiple columns, one per dimension, using PySpark?

Thanks in advance

5 answers:

Answer 0 (score: 48):

One possible approach is to convert to and from an RDD:

from pyspark.ml.linalg import Vectors

df = sc.parallelize([
    ("assert", Vectors.dense([1, 2, 3])),
    ("require", Vectors.sparse(3, {1: 2}))
]).toDF(["word", "vector"])

def extract(row):
    return (row.word, ) + tuple(row.vector.toArray().tolist())

df.rdd.map(extract).toDF(["word"])  # Vector values will be named _2, _3, ...

## +-------+---+---+---+
## |   word| _2| _3| _4|
## +-------+---+---+---+
## | assert|1.0|2.0|3.0|
## |require|0.0|2.0|0.0|
## +-------+---+---+---+
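
If you prefer explicit column names over the auto-generated _2, _3, ..., a minimal variant (assuming the vector length is known, here 3) is to pass the full list of names to toDF:

# same extract approach as above, just naming every column (vector length assumed to be 3)
df.rdd.map(extract).toDF(["word", "v1", "v2", "v3"])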

An alternative solution would be to create a UDF:

from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, DoubleType

def to_array(col):
    def to_array_(v):
        return v.toArray().tolist()
    return udf(to_array_, ArrayType(DoubleType()))(col)

(df
    .withColumn("xs", to_array(col("vector")))
    .select(["word"] + [col("xs")[i] for i in range(3)]))

## +-------+-----+-----+-----+
## |   word|xs[0]|xs[1]|xs[2]|
## +-------+-----+-----+-----+
## | assert|  1.0|  2.0|  3.0|
## |require|  0.0|  2.0|  0.0|
## +-------+-----+-----+-----+
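
The example above hardcodes the vector length in range(3). If the length is not known in advance, one possible sketch (assuming the DataFrame is non-empty and every vector has the same length) is to read it from the first row before building the select list:

# read the vector length from the first row, then expand as before
n = len(df.first()["vector"])
df.withColumn("xs", to_array(col("vector"))) \
  .select(["word"] + [col("xs")[i].alias("v{}".format(i + 1)) for i in range(n)])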

For the Scala equivalent, see Spark Scala: How to convert Dataframe[vector] to DataFrame[f1:Double, ..., fn: Double)]

Answer 1 (score: 0):

from pyspark.sql.types import DoubleType

def splitVecotr(df, new_features=['f1', 'f2']):
    schema = df.schema
    cols = df.columns

    # new_features should have one name per dimension of the vector column
    for col in new_features:
        schema = schema.add(col, DoubleType(), True)

    # keep the original columns and append the unpacked vector values
    # (assumes the vector column is named "features" and holds dense vectors)
    return spark.createDataFrame(
        df.rdd.map(lambda row: [row[i] for i in cols] + row.features.tolist()),
        schema)

This function converts the feature vector column into separate columns.
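
A hedged usage sketch, assuming a live SparkContext sc and SparkSession spark as in the other answers; the column name features and the names f1, f2 are just the function's defaults, and the vectors are assumed to be dense:

# hypothetical input with a dense "features" column of length 2
from pyspark.ml.linalg import Vectors

df2 = sc.parallelize([
    ("assert", Vectors.dense([4.0, 5.0])),
    ("require", Vectors.dense([6.0, 7.0])),
]).toDF(["word", "features"])

splitVecotr(df2, new_features=["f1", "f2"]).show()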

Answer 2 (score: 0):

It is much faster to use the i_th udf from how-to-access-element-of-a-vectorudt-column-in-a-spark-dataframe.

The extract function given in zero323's solution above uses toList, which creates a Python list object, populates it with Python float objects, finds the desired element by traversing the list, and the element then needs to be converted back to a java double; all of this is repeated for every row. Using the rdd is much slower than the to_array udf, which also calls toList, but both are much slower than a udf that lets SparkSQL handle most of the work.

Timing code comparing the rdd extract and the to_array udf proposed here with the i_th udf from 3955864:

from pyspark.context import SparkContext
from pyspark.sql import Row, SQLContext, SparkSession
from pyspark.sql.functions import lit, udf, col
from pyspark.sql.types import ArrayType, DoubleType
import pyspark.sql.dataframe
from pyspark.sql.functions import pandas_udf, PandasUDFType

sc = SparkContext('local[4]', 'FlatTestTime')

spark = SparkSession(sc)
spark.conf.set("spark.sql.execution.arrow.enabled", True)

from pyspark.ml.linalg import Vectors

# copy the two rows in the test dataframe a bunch of times,
# make this small enough for testing, or go for "big data" and be prepared to wait
REPS = 20000

df = sc.parallelize([
    ("assert", Vectors.dense([1, 2, 3]), 1, Vectors.dense([4.1, 5.1])),
    ("require", Vectors.sparse(3, {1: 2}), 2, Vectors.dense([6.2, 7.2])),
] * REPS).toDF(["word", "vector", "more", "vorpal"])

def extract(row):
    return (row.word, ) + tuple(row.vector.toArray().tolist(),) + (row.more,) + tuple(row.vorpal.toArray().tolist(),)

def test_extract():
    return df.rdd.map(extract).toDF(['word', 'vector__0', 'vector__1', 'vector__2', 'more', 'vorpal__0', 'vorpal__1'])

def to_array(col):
    def to_array_(v):
        return v.toArray().tolist()
    return udf(to_array_, ArrayType(DoubleType()))(col)

def test_to_array():
    df_to_array = df.withColumn("xs", to_array(col("vector"))) \
        .select(["word"] + [col("xs")[i] for i in range(3)] + ["more", "vorpal"]) \
        .withColumn("xx", to_array(col("vorpal"))) \
        .select(["word"] + ["xs[{}]".format(i) for i in range(3)] + ["more"] + [col("xx")[i] for i in range(2)])
    return df_to_array

# pack up to_array into a tidy function
def flatten(df, vector, vlen):
    fieldNames = df.schema.fieldNames()
    if vector in fieldNames:
        names = []
        for fieldname in fieldNames:
            if fieldname == vector:
                names.extend([col(vector)[i] for i in range(vlen)])
            else:
                names.append(col(fieldname))
        return df.withColumn(vector, to_array(col(vector)))\
                 .select(names)
    else:
        return df

def test_flatten():
    dflat = flatten(df, "vector", 3)
    dflat2 = flatten(dflat, "vorpal", 2)
    return dflat2

def ith_(v, i):
    try:
        return float(v[i])
    except ValueError:
        return None

ith = udf(ith_, DoubleType())

select = ["word"]
select.extend([ith("vector", lit(i)) for i in range(3)])
select.append("more")
select.extend([ith("vorpal", lit(i)) for i in range(2)])

# %% timeit ...
def test_ith():
    return df.select(select)

if __name__ == '__main__':
    import timeit

    # make sure these work as intended
    test_ith().show(4)
    test_flatten().show(4)
    test_to_array().show(4)
    test_extract().show(4)

    print("i_th\t\t",
          timeit.timeit("test_ith()",
                       setup="from __main__ import test_ith",
                       number=7)
         )
    print("flatten\t\t",
          timeit.timeit("test_flatten()",
                       setup="from __main__ import test_flatten",
                       number=7)
         )
    print("to_array\t",
          timeit.timeit("test_to_array()",
                       setup="from __main__ import test_to_array",
                       number=7)
         )
    print("extract\t\t",
          timeit.timeit("test_extract()",
                       setup="from __main__ import test_extract",
                       number=7)
         )

Results:

i_th         0.05964796099999958
flatten      0.4842299350000001
to_array     0.42978780299999997
extract      2.9254476840000017
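
Outside the timing harness, the ith udf defined in the snippet above can also be used on its own; a minimal sketch:

# select individual vector elements directly with the ith udf
df.select("word",
          ith("vector", lit(0)).alias("v1"),
          ith("vector", lit(1)).alias("v2"),
          ith("vector", lit(2)).alias("v3")).show(4)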

Answer 3 (score: 0):

For anyone trying to split the rawPrediction or probability columns generated after training a PySpark ML model into Pandas columns, you can split them like this:

your_pandas_df['probability'].apply(lambda x: pd.Series(x.toArray()))
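
The apply call above returns a new DataFrame with one unnamed column per vector element; a hedged follow-up sketch (your_pandas_df and the prob_ prefix are placeholder names) that names those columns and joins them back onto the original frame:

# expand the vector column, name the new columns, and attach them to the original frame
import pandas as pd

prob_cols = your_pandas_df['probability'].apply(lambda x: pd.Series(x.toArray()))
prob_cols.columns = ['prob_{}'.format(i) for i in range(prob_cols.shape[1])]
your_pandas_df = pd.concat([your_pandas_df, prob_cols], axis=1)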

Answer 4 (score: 0):

To split the rawPrediction or probability columns generated after training a PySpark ML model into Pandas columns, you can split them like this:

your_pandas_df['probability'].apply(lambda x: pd.Series(x.toArray()))