Suppose I have a DataFrame like this in PySpark:
import pandas
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode
spark = SparkSession \
    .builder \
    .appName('stackoverflow') \
    .getOrCreate()
data = {
    'location_id': [1, 2, 3],
    'product_model_features': [
        [{'key': 'A', 'value': 'B'}, {'key': 'C', 'value': 'D'}, {'key': 'E', 'value': 'F'}],
        [{'key': 'A', 'value': 'H'}, {'key': 'E', 'value': 'J'}],
        [{'key': 'C', 'value': 'N'}, {'key': 'E', 'value': 'P'}]
    ]
}
df = pandas.DataFrame(data)
df = spark.createDataFrame(df)
df = df.withColumn('p', explode('product_model_features')) \
    .select('location_id', 'p.key', 'p.value')
df.show()
I want to turn the values of the key column into separate columns that hold the corresponding values. Below you can see the current output of df.show(). If you have any ideas on how to do this in PySpark, please let me know.
+-----------+---+-----+
|location_id|key|value|
+-----------+---+-----+
| 1| A| B|
| 1| C| D|
| 1| E| F|
| 2| A| H|
| 2| E| J|
| 3| C| N|
| 3| E| P|
+-----------+---+-----+
Answer 0 (score: 1)
You are looking for the pivot() function to reshape the DataFrame.
import pandas
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, col, first
spark = SparkSession \
    .builder \
    .appName('stackoverflow') \
    .getOrCreate()
data = {
    'location_id': [1, 2, 3],
    'product_model_features': [
        [{'key': 'A', 'value': 'B'}, {'key': 'C', 'value': 'D'}, {'key': 'E', 'value': 'F'}],
        [{'key': 'A', 'value': 'H'}, {'key': 'E', 'value': 'J'}],
        [{'key': 'C', 'value': 'N'}, {'key': 'E', 'value': 'P'}]
    ]
}
df = pandas.DataFrame(data)
df = spark.createDataFrame(df)
df = df \
    .withColumn('p', explode('product_model_features')) \
    .select('location_id', 'p.key', 'p.value')
df = df \
    .groupby('location_id') \
    .pivot('key') \
    .agg(first('value')) \
    .sort('location_id')
df.show()
Output:
+-----------+----+----+---+
|location_id| A| C| E|
+-----------+----+----+---+
| 1| B| D| F|
| 2| H|null| J|
| 3|null| N| P|
+-----------+----+----+---+