Flat-mapping collect_set in a PySpark DataFrame

Asked: 2017-01-12 13:23:56

Tags: apache-spark pyspark

I have two DataFrames, and after a groupby I use collect_set() inside agg. What is the best way to flat-map the resulting arrays after the aggregation?

from pyspark.sql.functions import collect_set

schema = ['col1', 'col2', 'col3', 'col4']

a = [[1, [23, 32], [11, 22], [9989]]]
df1 = spark.createDataFrame(a, schema=schema)

b = [[1, [34], [43, 22], [888, 777]]]
df2 = spark.createDataFrame(b, schema=schema)

df = df1.union(
        df2
    ).groupby(
        'col1'
    ).agg(
        collect_set('col2').alias('col2'),
        collect_set('col3').alias('col3'),
        collect_set('col4').alias('col4')
    )

df.collect()

This is the output I get:

[Row(col1=1, col2=[[34], [23, 32]], col3=[[11, 22], [43, 22]], col4=[[9989], [888, 777]])]

However, I would like this as the output:

[Row(col1=1, col2=[23, 32, 34], col3=[11, 22, 43], col4=[9989, 888, 777])]

1 Answer:

Answer 0 (score: 2):

You can use a udf:

from itertools import chain
from pyspark.sql.types import ArrayType, IntegerType
from pyspark.sql.functions import udf

# Concatenate the nested arrays produced by collect_set into one flat array
flatten = udf(lambda x: list(chain.from_iterable(x)), ArrayType(IntegerType()))

df = df.withColumn('col2_flat', flatten('col2'))
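Note that chain.from_iterable keeps duplicate elements, so col3 would come out as [11, 22, 43, 22] rather than the [11, 22, 43] shown above. Here is a minimal sketch that also deduplicates and applies the same transformation to all three columns (the flatten_distinct name is illustrative, not from the original answer):

from itertools import chain
from pyspark.sql.types import ArrayType, IntegerType
from pyspark.sql.functions import udf

# Flatten the nested arrays, then drop duplicates with set();
# note that set() does not preserve element order
flatten_distinct = udf(lambda x: list(set(chain.from_iterable(x))),
                       ArrayType(IntegerType()))

for c in ['col2', 'col3', 'col4']:
    df = df.withColumn(c, flatten_distinct(c))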
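As an aside, on Spark 2.4+ (released after this question was asked) the udf can be avoided entirely with the built-in flatten and array_distinct functions:

from pyspark.sql.functions import flatten, array_distinct

# array_distinct(flatten(...)) flattens the array of arrays and removes duplicates
for c in ['col2', 'col3', 'col4']:
    df = df.withColumn(c, array_distinct(flatten(c)))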