Pyspark dataframe - convert tuple data to rows

Asked: 2019-07-05 16:03:22

Tags: apache-spark dataframe pyspark apache-spark-sql tuples

I want to convert the tuple data in a pyspark dataframe into rows, based on two keys. The raw data and the expected output are given below.

Schema:

    root
     |-- key_1: string (nullable = true)
     |-- key_2: string (nullable = true)
     |-- prod: string (nullable = true)

Raw data:

    key_1|key_2|prod
    cust1|order1|(p1,p2,)
    cust2|order2|(p1,p2,p3)
    cust3|order3|(p1,)

Expected output:

    key_1|key_2|prod
    cust1|order1|p1
    cust1|order1|p2
    cust1|order1|
    cust2|order2|p1
    cust2|order2|p2
    cust2|order2|p3
    cust3|order3|p1
    cust3|order3|

1 Answer:

Answer 0 (score: 1)

Spark has a function called explode that breaks a list/array in a single row out into multiple rows, which does exactly what you want.
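
For example, on a column that is already an array type, explode works like this (a minimal illustration, not from the original answer):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode

    spark = SparkSession.builder.getOrCreate()

    # explode() emits one output row per element of the array column
    demo = spark.createDataFrame([('order1', ['p1', 'p2'])], ['key', 'prod_list'])
    demo.select('key', explode('prod_list').alias('prod')).show()
    # +------+----+
    # |   key|prod|
    # +------+----+
    # |order1|  p1|
    # |order1|  p2|
    # +------+----+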

However, based on your schema, we have to add one more step to convert the prod string column into an array type.

Sample code for the type conversion:

    from pyspark.sql.functions import explode
    from pyspark.sql.functions import udf
    from pyspark.sql.types import ArrayType, StringType

    def squared(s):
        # udf function: convert a string like '(p1,p2,p3)' to an array ['p1', 'p2', 'p3']
        # (strip the parentheses and any trailing comma; please double-check this parsing against your real data)
        items = s.strip('()').rstrip(',')
        return items.split(',')

    # Register the udf
    squared_udf = udf(squared, ArrayType(StringType()))

    # Apply the udf to convert the prod string into a real array
    df_2 = df.withColumn('prod_list', squared_udf('prod'))

    # Explode prod_list into one row per product
    df_2.select(df_2.key_1, df_2.key_2, explode(df_2.prod_list)).show()

I have tested it, and the result is:

    +-----+------+---+
    |key_1| key_2|col|
    +-----+------+---+
    |cust1|order1| p1|
    |cust1|order1| p2|
    |cust2|order2| p1|
    |cust2|order2| p2|
    |cust2|order2| p3|
    |cust3|order3| p1|
    +-----+------+---+

with the following sample data:

    data = [
        {'key_1': 'cust1', 'key_2': 'order1', 'prod': '(p1,p2,)'},
        {'key_1': 'cust2', 'key_2': 'order2', 'prod': '(p1,p2,p3,)'},
        {'key_1': 'cust3', 'key_2': 'order3', 'prod': '(p1,)'},
    ]
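
For completeness, the DataFrame used above can presumably be built from that sample data along these lines (assuming an existing SparkSession named spark; a sketch, not part of the original answer):

    # Build the test DataFrame from the sample data above
    df = spark.createDataFrame(data)
    df.show(truncate=False)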
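
As a side note (not part of the original answer), the same string-to-array conversion can also be done with Spark's built-in regexp_replace and split functions, which avoids the Python UDF; a minimal sketch under the same assumption about the prod format:

    from pyspark.sql.functions import explode, regexp_replace, split

    # Strip the surrounding parentheses and any trailing comma, then split on ','
    df_3 = df.withColumn('prod_list', split(regexp_replace('prod', r'^\(|,?\)$', ''), ','))
    df_3.select('key_1', 'key_2', explode('prod_list').alias('prod')).show()

Built-in functions are generally preferable to a Python UDF here, since they avoid serializing rows out to the Python worker.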