How to convert a long DataFrame to a wide DataFrame in PySpark

Asked: 2019-10-01 11:17:10

Tags: apache-spark pyspark apache-spark-sql

My dataframe looks like this:

df = sqlContext.createDataFrame(
    [("count", "doc_3", 3), ("count", "doc_2", 6), ("type", "doc_1", 9),
     ("type", "doc_2", 6), ("one", "doc_2", 10)]
).withColumnRenamed("_1", "word") \
 .withColumnRenamed("_2", "document") \
 .withColumnRenamed("_3", "occurences")

From this, I need to create a matrix like the following:

+---------+-----+----+----+
|document |count|type|one |
+---------+-----+----+----+
|doc_1    |  0  |  9 | 0  |
|doc_2    |  6  |  6 | 10 |
|doc_3    |  3  |  0 | 0  |
+---------+-----+----+----+

So I tried:

print df.crosstab("document").show()

but it did not give me what I want. Thanks for any help.

1 Answer:

Answer 0 (score: 1):

You are looking for pivot:

df = sqlContext.createDataFrame(
    [("count", "doc_3", 3), ("count", "doc_2", 6), ("type", "doc_1", 9),
     ("type", "doc_2", 6), ("one", "doc_2", 10)],
    ["word", "document", "occurences"])
# document is the column you want to keep as the row key
# word is the column whose values should become the new columns
# all other columns will be used as values for the new dataframe
# an aggregate function like max() is required because Spark needs to know
# what to do if two rows share the same value for document and word
df = df.groupby('document').pivot('word').max()
df = df.fillna(0)
df.show()

Output:

+--------+-----+---+----+ 
|document|count|one|type| 
+--------+-----+---+----+ 
|   doc_1|    0|  0|   9| 
|   doc_3|    3|  0|   0| 
|   doc_2|    6| 10|   6| 
+--------+-----+---+----+
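
If you already know the distinct values of word, you can pass them to pivot, which spares Spark the extra job it otherwise runs just to collect them, and you can name the aggregation explicitly. A minimal sketch, assuming df is still the original long dataframe from the question and using sum over occurences in place of max():

from pyspark.sql import functions as F

# df is the original long dataframe with columns (word, document, occurences)
wide = (df.groupby('document')
          .pivot('word', ['count', 'type', 'one'])  # explicit pivot values: no extra pass to discover them
          .agg(F.sum('occurences'))                 # explicit aggregation instead of max() over all numeric columns
          .fillna(0)
          .orderBy('document'))
wide.show()

Listing the values also fixes the column order of the result, and orderBy makes the row order deterministic, which a plain pivot does not guarantee.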