I want to convert the values of one column into multiple columns of a DataFrame in PySpark on Databricks.
For example:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.sparkContext.parallelize([["dapd", "shop", "retail"],
["dapd", "shop", "on-line"],
["dapd", "payment", "credit"],
["wrfr", "shop", "supermarket"],
["wrfr", "shop", "brand store"],
["wrfr", "payment", "cash"]]).toDF(["id", "value1", "value2"])
I need to transform it into:
id    shop                     payment
dapd  retail|on-line           credit
wrfr  supermarket|brand store  cash
I am not sure how to do this in PySpark.
Thanks
Answer 0 (score: 1)
You are looking for a combination of pivot and an aggregation function such as collect_list() or collect_set(). The available aggregation functions are listed here: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg#module-pyspark.sql.functions.
Here is a code example:
from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

df = spark.sparkContext.parallelize([
    ["dapd", "shop", "retail"],
    ["dapd", "shop", "on-line"],
    ["dapd", "payment", "credit"],
    ["wrfr", "shop", "supermarket"],
    ["wrfr", "shop", "brand store"],
    ["wrfr", "payment", "cash"]]
).toDF(["id", "value1", "value2"])

df.show()
+----+-------+-----------+
| id| value1| value2|
+----+-------+-----------+
|dapd| shop| retail|
|dapd| shop| on-line|
|dapd|payment| credit|
|wrfr| shop|supermarket|
|wrfr| shop|brand store|
|wrfr|payment| cash|
+----+-------+-----------+
df.groupBy('id').pivot('value1').agg(f.collect_list("value2")).show(truncate=False)
+----+--------+--------------------------+
|id |payment |shop |
+----+--------+--------------------------+
|dapd|[credit]|[retail, on-line] |
|wrfr|[cash] |[supermarket, brand store]|
+----+--------+--------------------------+
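If you want the pipe-delimited strings from the question rather than arrays, a minimal follow-up sketch on top of the pivot above is to join each collected array afterwards (concat_ws accepts array columns as well as string columns; the shop and payment column names come from the pivot output, and the variable names here are only illustrative):

import pyspark.sql.functions as f

pivoted = df.groupBy('id').pivot('value1').agg(f.collect_list('value2'))
# concat_ws joins the elements of each array column into one '|'-separated string
result = pivoted.select(
    'id',
    f.concat_ws('|', 'shop').alias('shop'),
    f.concat_ws('|', 'payment').alias('payment'))
result.show(truncate=False)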
Answer 1 (score: 0)
You can do something like this:
import pyspark.sql.functions as func

newdf = df.groupby('id').pivot('value1').agg(func.collect_list(func.col('value2')))
# Join the two collected shop values with '|' (indexing assumes exactly two entries)
newdf = newdf.withColumn('shop', func.concat_ws('|', func.col('shop')[0], func.col('shop')[1]))
# Each id has a single payment value, so take the first array element
newdf = newdf.withColumn('payment', func.col('payment')[0])
newdf.show(20, False)
+----+-------+-----------------------+
|id |payment|shop |
+----+-------+-----------------------+
|dapd|credit |retail|on-line |
|wrfr|cash |brand store|supermarket|
+----+-------+-----------------------+
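Note that the order of collect_list is not guaranteed after a shuffle, which is why the row above reads brand store|supermarket rather than supermarket|brand store. If a deterministic (alphabetically sorted) and deduplicated result is acceptable, a variant sketch using collect_set and sort_array could look like this:

import pyspark.sql.functions as func

# collect_set removes duplicates; sort_array makes the element order deterministic
newdf = (df.groupby('id')
           .pivot('value1')
           .agg(func.sort_array(func.collect_set(func.col('value2')))))
# concat_ws on an array column joins all its elements, however many there are
newdf = newdf.withColumn('shop', func.concat_ws('|', func.col('shop')))
newdf = newdf.withColumn('payment', func.concat_ws('|', func.col('payment')))
newdf.show(20, False)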