I have a DataFrame similar to the following:
new_df = spark.createDataFrame([
([['hello', 'productcode'], ['red','color']], 7),
([['hi', 'productcode'], ['blue', 'color']], 8),
([['hoi', 'productcode'], ['black','color']], 7)
], ["items", "frequency"])
new_df.show(3, False)
# +------------------------------------------------------------+---------+
# |items |frequency|
# +------------------------------------------------------------+---------+
# |[WrappedArray(hello, productcode), WrappedArray(red, color)]|7 |
# |[WrappedArray(hi, productcode), WrappedArray(blue, color)] |8 |
# |[WrappedArray(hoi, productcode), WrappedArray(black, color)]|7 |
# +------------------------------------------------------------+---------+
I need to produce a new DataFrame like this:
# +-----------+-----+---------+
# |productcode|color|frequency|
# +-----------+-----+---------+
# |hello      |red  |7        |
# |hi         |blue |8        |
# |hoi        |black|7        |
# +-----------+-----+---------+
Answer (score: 4):
You can convert items into a map:
from pyspark.sql.functions import col, udf

# Each inner array is a [value, key] pair, so swap the order
# when building the dict.
@udf("map<string, string>")
def as_map(vks):
    return {k: v for v, k in vks}

remapped = new_df.select("frequency", as_map("items").alias("items"))
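On Spark 2.4+ you could also skip the Python UDF entirely; here is a minimal sketch using the built-in map_from_entries and transform functions (remapped_native is an illustrative name, and this assumes every inner array really is a [value, key] pair):

from pyspark.sql.functions import expr

# Sketch, Spark 2.4+ only: flip each [value, key] pair into a
# (key, value) struct and build the map natively.
remapped_native = new_df.select(
    "frequency",
    expr("map_from_entries(transform(items, x -> struct(x[1], x[0])))")
        .alias("items"))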
Collect the keys:
keys = remapped.select("items").rdd \
    .flatMap(lambda x: x[0].keys()).distinct().collect()
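If you prefer to stay in the DataFrame API rather than dropping to the RDD, a sketch of the same key collection using explode and map_keys (available since Spark 2.3):

from pyspark.sql.functions import explode, map_keys

# Explode the map keys and collect the distinct ones.
keys = [row.key for row in
        remapped.select(explode(map_keys("items")).alias("key"))
                .distinct().collect()]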
Then select:
remapped.select([col("items")[key] for key in keys] + ["frequency"]).show()
# +------------+------------------+---------+
# |items[color]|items[productcode]|frequency|
# +------------+------------------+---------+
# |         red|             hello|        7|
# |        blue|                hi|        8|
# |       black|               hoi|        7|
# +------------+------------------+---------+
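To match the column names in the requested output (productcode and color instead of items[productcode] and items[color]), a small variation is to alias each lookup by its key; result is an illustrative name:

# Alias each map lookup so the columns are named after the keys.
result = remapped.select(
    [col("items")[key].alias(key) for key in keys] + ["frequency"])
result.show()

Note that the column order follows whatever order keys were collected in, so it may vary between runs.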