PySpark agg function to "explode" rows into columns

Date: 2019-09-23 11:59:07

Tags: apache-spark pyspark

Basically, I have a dataframe that looks like this:

+----+-------+------+------+
| id | index | col1 | col2 |
+----+-------+------+------+
| 1  | a     | a11  | a12  |
+----+-------+------+------+
| 1  | b     | b11  | b12  |
+----+-------+------+------+
| 2  | a     | a21  | a22  |
+----+-------+------+------+
| 2  | b     | b21  | b22  |
+----+-------+------+------+

The output I want is this:

+----+--------+--------+--------+--------+
| id | col1_a | col1_b | col2_a | col2_b |
+----+--------+--------+--------+--------+
| 1  | a11    | b11    | a12    | b12    |
+----+--------+--------+--------+--------+
| 2  | a21    | b21    | a22    | b22    |
+----+--------+--------+--------+--------+

So, basically, I want to "explode" the index column into new columns after grouping by id. By the way, the row count is the same for each id, and every id has the same set of index values. I'm using pyspark.

1 Answer:

Answer 0 (score: 1)

You can achieve the desired output using pivot.

from pyspark.sql import functions as F
# Build the sample dataframe
df = spark.createDataFrame([[1,"a","a11","a12"],[1,"b","b11","b12"],
                            [2,"a","a21","a22"],[2,"b","b21","b22"]],
                           ["id","index","col1","col2"])
df.show()
+---+-----+----+----+                                                           
| id|index|col1|col2|
+---+-----+----+----+
|  1|    a| a11| a12|
|  1|    b| b11| b12|
|  2|    a| a21| a22|
|  2|    b| b21| b22|
+---+-----+----+----+

Use pivot:

# Pivot on index; with two aggregations the generated columns come out
# grouped by pivot value (both "a" columns first, then both "b" columns)
df3 = df.groupBy("id").pivot("index").agg(F.first(F.col("col1")), F.first(F.col("col2")))

collist = ["id", "col1_a", "col2_a", "col1_b", "col2_b"]

Rename the columns:

df3.toDF(*collist).show()
+---+------+------+------+------+
| id|col1_a|col2_a|col1_b|col2_b|
+---+------+------+------+------+
|  1|   a11|   a12|   b11|   b12|
|  2|   a21|   a22|   b21|   b22|
+---+------+------+------+------+
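
As a variation not in the original answer, you can alias the aggregations so the pivot emits predictable names (a_col1, a_col2, ...) instead of auto-generated ones; the rename helper below is a hypothetical sketch:

# Alias the aggregations so pivot produces names like a_col1, b_col2
df4 = df.groupBy("id").pivot("index").agg(
    F.first("col1").alias("col1"),
    F.first("col2").alias("col2"))
# Hypothetical helper: swap "a_col1" -> "col1_a" for every non-id column
renamed = [c if c == "id" else "_".join(reversed(c.split("_", 1))) for c in df4.columns]
df4.toDF(*renamed).show()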

Note: rearrange the columns to match the order in your question, as shown below.
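
A minimal sketch of that reordering with select, using the column names from the question:

# Reorder to match the requested layout: id, col1_a, col1_b, col2_a, col2_b
df3.toDF(*collist).select("id", "col1_a", "col1_b", "col2_a", "col2_b").show()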