How to convert pandas DataFrame code to PySpark DataFrame code?

Asked: 2020-03-24 15:06:06

Tags: python pandas dataframe pyspark

The pandas DataFrame code is:

Data = data[data['ObservationDate'] == max(data['ObservationDate'])].reset_index()
Data_world = Data.groupby(["ObservationDate"])[["Confirmed", "Active_case", "Recovered", "Deaths"]].sum().reset_index()
Data_world

The DataFrame structure is this:

SNo     ObservationDate     Province/State  Country/Region  Last Update     Confirmed   Deaths  Recovered   Active_case
0   1   01/22/2020  Anhui   China   1/22/2020 17:00     1   0   0   1
1   2   01/22/2020  Beijing     China   1/22/2020 17:00     14  0   0   14
2   3   01/22/2020  Chongqing   China   1/22/2020 17:00     6   0   0   6
3   4   01/22/2020  Fujian  China   1/22/2020 17:00     1   0   0   1
4   5   01/22/2020  Gansu   China   1/22/2020 17:00     0   0   0   0

and I want output like this:

ObservationDate     Confirmed   Active_case     Recovered   Deaths
0   03/22/2020  335957  223441  97882   14634

How do I filter for the max date? This is what I have tried so far:

from pyspark.sql import functions as F

max_date = df.select(F.max("ObservationDate")).first()  # returns a Row, not a plain value
group_data = df.groupBy("ObservationDate")              # the max-date filter is still missing here
group_data.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum', 'Active_case': 'sum'}).show()

1 Answer:

Answer 0 (score: 1)

I think this is what you want. You can collect the max date first, then use it in a filter before the groupBy and aggregate:

from pyspark.sql import functions as F

# Collect the max ObservationDate as a plain Python value,
# then keep only the rows for that date before grouping and summing.
max_date = df.select(F.max("ObservationDate")).collect()[0][0]

df.filter(F.col("ObservationDate") == max_date)\
  .groupBy("ObservationDate")\
  .agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum', 'Active_case': 'sum'})\
  .show()
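
As a side note, here is a variant of the same approach (a sketch against the same df; the df2 and ObsDate names are just illustrative) that uses explicit F.sum aliases so the output columns come out named Confirmed, Active_case, Recovered and Deaths, matching the desired output, instead of sum(Confirmed) and so on. It also parses ObservationDate (a MM/dd/yyyy string) with to_date first, since a max over strings compares lexicographically and is only safe while all rows fall in the same year:

from pyspark.sql import functions as F

# Parse the MM/dd/yyyy string into a date so max() compares
# chronologically rather than lexicographically.
df2 = df.withColumn("ObsDate", F.to_date("ObservationDate", "MM/dd/yyyy"))
max_date = df2.select(F.max("ObsDate")).collect()[0][0]

(df2.filter(F.col("ObsDate") == max_date)
     .groupBy("ObservationDate")
     .agg(F.sum("Confirmed").alias("Confirmed"),
          F.sum("Active_case").alias("Active_case"),
          F.sum("Recovered").alias("Recovered"),
          F.sum("Deaths").alias("Deaths"))
     .show())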