Using collect_set after exploding a grouped object in PySpark

Date: 2018-09-11 18:45:16

Tags: pandas pyspark aggregate-functions user-defined-functions pandas-groupby

I have a DataFrame with the following schema:

root
 |-- docId: string (nullable = true)
 |-- field_a: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- field_b: array (nullable = true)
 |    |-- element: string (containsNull = true)

I want to do a groupBy on field_a and use collect_set to aggregate all the distinct values found in field_b (essentially the inner values of the lists). I don't want to add a new column by exploding field_b and then run the collect_set aggregation on it.

How can I achieve this with a UDAF or a pandas UDF?

For example:

+---------------------+----------------+------------+
|docId                |field_b         |field_a     |
+---------------------+----------------+------------+
|k&;+B8ROh\\NmetBg=DiR|[IDN,SGP]       |[F]         |
|k&;.]^nX7HRdjIO`>S1+ |[IND,KWT]       |[M]         |
|k&;h&)8Sd\\JrDVL%VH>N|[IDN,IND]       |[M]         |
|k&<8nTqjrYNE8taji^$u |[IND,BHR]       |[F]         |
|k&=$M5Hmd6Y>&@'co-^1 |[IND,AUS]       |[M]         |
|k&>pIZ)g^!L/ht!T\'/"f|[IDN,KWT]       |[M]         |
|k&@ZX>Ph%rPdZ[,Pqsc. |[IND,MYS]       |[F]         |
|k&A]C>dmDXVN$hiVEUk/ |[IND,PHL]       |[F]         |
|k&BX1eGhumSQ6`7A8<Zd |[IND,SAU]       |[M]         |
|k&J)2Vo(k*[^c"Mg*f%) |[IND,SGP]       |[F]         |
+---------------------+----------------+------------+

The output I am looking for is:

+------------+--------------------------------+
|field_a     |collect_set(field_b)            |
+------------+--------------------------------+
|[F]         |[IDN,IND,SGP,BHR,MYS,PHL]       |
|[M]         |[IND,KWT,IDN,AUS,SAU]           |
+------------+--------------------------------+

1 Answer:

Answer 0 (score: 0):

I wrote a solution to your problem using a pandas UDF. I don't understand why your field_a column (representing gender?) is a list, so I converted it to a plain string, but you can make it a list of strings if you need to. Here it is:

(1) Create a dummy df in pandas and generate the Spark DataFrame:

import pandas as pd
import random
from pyspark.sql.functions import pandas_udf, PandasUDFType

# Build a small dummy dataset: each row gets a random docId, a gender,
# and a pair of country codes (repeats across rows are possible).
a_list = ['F', 'M']
b_list = ['IDN', 'IND', 'SGP', 'BHR', 'MYS', 'PHL', 'AUS', 'SAU', 'KWT']
size = 10
dummy_df = pd.DataFrame({'docId': [random.randint(0, 100) for _ in range(size)],
                         'field_b': [[random.choice(b_list), random.choice(b_list)] for _ in range(size)],
                         'field_a': [random.choice(a_list) for _ in range(size)]})

# `spark` is the active SparkSession (e.g. in the pyspark shell or a notebook)
df = spark.createDataFrame(dummy_df)

Producing:

+-----+-------+----------+
|docId|field_a|   field_b|
+-----+-------+----------+
|   23|      F|[SAU, SGP]|
|   36|      F|[IDN, PHL]|
|   82|      M|[BHR, SAU]|
|   30|      F|[AUS, IDN]|
|   75|      F|[AUS, MYS]|
|   46|      F|[SAU, IDN]|
|   11|      F|[SAU, BHR]|
|   71|      M|[KWT, IDN]|
|   50|      F|[IND, SGP]|
|   78|      F|[IND, SGP]|
+-----+-------+----------+

(2) Then define the pandas UDF, group and apply:

# GROUPED_MAP pandas UDF: receives the full pandas DataFrame for one field_a
# group and must return a pandas DataFrame matching the declared schema.
@pandas_udf('field_a string, set_field_b array<string>', PandasUDFType.GROUPED_MAP)
def my_pandas_udf(df):
    # Flatten the field_b lists into one stacked Series and keep the distinct values
    unique_values = pd.DataFrame(df['field_b'].values.tolist()).stack().unique().tolist()
    return pd.DataFrame({'field_a': df['field_a'].iloc[0], 'set_field_b': [unique_values]})

result = df.groupby('field_a').apply(my_pandas_udf)

to obtain the final result:

+-------+--------------------+
|field_a|         set_field_b|
+-------+--------------------+
|      F|[SAU, SGP, IDN, P...|
|      M|[BHR, SAU, KWT, IDN]|
+-------+--------------------+

I'm not a big fan of the pandas values / tolist / stack / unique approach; maybe there is a better way, but handling lists inside a pandas DataFrame is usually not straightforward.
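
One possible alternative, just as a sketch and not part of the original answer, is to flatten the lists with Python's itertools inside the UDF instead of stacking a temporary DataFrame (my_pandas_udf_chain below is a hypothetical name):

from itertools import chain

# Hypothetical variant of the UDF body: flatten all field_b lists with
# chain.from_iterable and deduplicate with dict.fromkeys (keeps insertion order).
@pandas_udf('field_a string, set_field_b array<string>', PandasUDFType.GROUPED_MAP)
def my_pandas_udf_chain(df):
    unique_values = list(dict.fromkeys(chain.from_iterable(df['field_b'])))
    return pd.DataFrame({'field_a': df['field_a'].iloc[0], 'set_field_b': [unique_values]})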

Now you will have to compare the performance against the explode + groupby + collect_set approach; I'm not sure which one will be faster. Let us know when you find out!
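
For reference, that explode + groupBy + collect_set baseline would look roughly like the following sketch (using the standard pyspark.sql.functions API on the dummy df above):

from pyspark.sql import functions as F

# Baseline for comparison: explode field_b into one row per element,
# then collect the distinct elements per field_a group.
baseline = (df
            .withColumn('b', F.explode('field_b'))
            .groupBy('field_a')
            .agg(F.collect_set('b').alias('set_field_b')))
baseline.show(truncate=False)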