PySpark - How to transpose a DataFrame

Date: 2018-11-06 11:30:23

Tags: python apache-spark dataframe pyspark transpose

I want to transpose a DataFrame. Below is a small excerpt of my original DataFrame -

valuesCol = [('22','ABC Ltd','U.K.','class 1',102),('22','ABC Ltd','U.K.','class 2',73),('22','ABC Ltd','U.K.','class 3',92),
             ('51','Eric AB','Sweden','class 1',52),('51','Eric AB','Sweden','class 2',34),('51','Eric AB','Sweden','class 3',11)]
df = sqlContext.createDataFrame(valuesCol,['ID','Firm','Country','Class','Revenue'])
df.show()
+---+-------+-------+-------+-------+
| ID|   Firm|Country|  Class|Revenue|
+---+-------+-------+-------+-------+
| 22|ABC Ltd|   U.K.|class 1|    102|
| 22|ABC Ltd|   U.K.|class 2|     73|
| 22|ABC Ltd|   U.K.|class 3|     92|
| 51|Eric AB| Sweden|class 1|     52|
| 51|Eric AB| Sweden|class 2|     34|
| 51|Eric AB| Sweden|class 3|     11|
+---+-------+-------+-------+-------+

There is no such transpose function in PySpark. One way to achieve the desired result is to split the DataFrame into three DataFrames (one each for class 1, class 2, and class 3) and then left-join them back together. However, that may shuffle data across the network via the hash partitioner, which is expensive. I'm sure there must be a more elegant and simple way.

Expected output:

+---+-------+-------+-------+-------+-------+
| ID|   Firm|Country| Class1| Class2| Class3|
+---+-------+-------+-------+-------+-------+
| 22|ABC Ltd|   U.K.|    102|     73|     92|
| 51|Eric AB| Sweden|     52|     34|     11|
+---+-------+-------+-------+-------+-------+

1 Answer:

Answer 0 (score: 0)

Thanks to this link. You must use an aggregate function when pivoting, because pivoting is always tied to an aggregation. The aggregate function can be sum, count, mean, min, or max, depending on the desired output -

df = df.groupBy(["ID","Firm","Country"]).pivot("Class").sum("Revenue")
df.show()
+---+-------+-------+-------+-------+-------+
| ID|   Firm|Country|class 1|class 2|class 3|
+---+-------+-------+-------+-------+-------+
| 51|Eric AB| Sweden|     52|     34|     11|
| 22|ABC Ltd|   U.K.|    102|     73|     92|
+---+-------+-------+-------+-------+-------+