Pivoting a DataFrame in PySpark

Date: 2020-06-23 14:52:13

Tags: python pyspark apache-spark-sql

My test DataFrame contains the following columns:

Type  Name  Country      Year    Value
1     Rec      US        2018      8
2     fg       UK        2019      2
5     vd      India      2020      1
7     se       US        2021      3

I want to pivot it, and I tried the following expression:

pivotdata=spark.sql("select * from test").groupby("Country").pivot("Year").sum("Value").show()

I am getting output, but only a few of the columns are populated; the remaining two columns are missing:

Country  2018  2019  2020  2021
US        -     -
UK        -      -
India     -      -
US        -      -

What if I want all the columns?

1 Answer:

Answer 0 (score: 2):

If I understood your requirement correctly, then you have to provide the other columns inside sum() as well. Consider the example below:

from pyspark.sql import functions as F  # F.sum is used in the agg example further down
tst=sqlContext.createDataFrame([('2020-04-23',1,2,"india"),('2020-04-24',1,3,"india"),('2020-04-23',1,4,"china"),('2020-04-24',1,5,"china"),('2020-04-23',1,7,"germany"),('2020-04-24',1,9,"germany")],schema=('date','quantity','value','country'))
tst.show()
+----------+--------+-----+-------+
|      date|quantity|value|country|
+----------+--------+-----+-------+
|2020-04-23|       1|    2|  india|
|2020-04-24|       1|    3|  india|
|2020-04-23|       1|    4|  china|
|2020-04-24|       1|    5|  china|
|2020-04-23|       1|    7|germany|
|2020-04-24|       1|    9|germany|
+----------+--------+-----+-------+
df_pivot=tst.groupby('country').pivot('date').sum('quantity','value')
df_pivot.show()
+-------+------------------------+---------------------+------------------------+---------------------+
|country|2020-04-23_sum(quantity)|2020-04-23_sum(value)|2020-04-24_sum(quantity)|2020-04-24_sum(value)|
+-------+------------------------+---------------------+------------------------+---------------------+
|germany|                       1|                    7|                       1|                    9|
|  china|                       1|                    4|                       1|                    5|
|  india|                       1|                    2|                       1|                    3|
+-------+------------------------+---------------------+------------------------+---------------------+

If you do not like the funny column names, you can use the agg function to define your own suffix for the pivoted column names:

tst_res=tst.groupby('country').pivot('date').agg(F.sum('quantity').alias('sum_quantity'),F.sum('value').alias('sum_value'))
tst_res.show()
+-------+-----------------------+--------------------+-----------------------+--------------------+
|country|2020-04-23_sum_quantity|2020-04-23_sum_value|2020-04-24_sum_quantity|2020-04-24_sum_value|
+-------+-----------------------+--------------------+-----------------------+--------------------+
|germany|                      1|                   7|                      1|                   9|
|  china|                      1|                   4|                      1|                   5|
|  india|                      1|                   2|                      1|                   3|
+-------+-----------------------+--------------------+-----------------------+--------------------+
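
Applying the same idea back to the question's test table, a minimal sketch could look like the following. The recreated sample rows, the explicit list of pivot values, and the count aggregation are assumptions for illustration, not part of the original answer.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical recreation of the question's sample data.
df = spark.createDataFrame(
    [(1, "Rec", "US", 2018, 8),
     (2, "fg", "UK", 2019, 2),
     (5, "vd", "India", 2020, 1),
     (7, "se", "US", 2021, 3)],
    schema=("Type", "Name", "Country", "Year", "Value"))

# Passing the expected Year values to pivot() avoids the extra job that
# computes the distinct pivot values and fixes the column order.
pivoted = (df.groupby("Country")
             .pivot("Year", [2018, 2019, 2020, 2021])
             .agg(F.sum("Value").alias("sum_value"),
                  F.count("Name").alias("n_rows")))
pivoted.show()

Because more than one aggregation is passed to agg, each pivoted column name gets the corresponding alias as a suffix (for example 2018_sum_value and 2018_n_rows), just as in the output above.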