Computing a cumulative sum in PySpark with a window function

Asked: 2017-10-27 16:31:04

Tags: python pyspark

I have the following sample DataFrame:

rdd = sc.parallelize([(1,20), (2,30), (3,30)])
df2 = spark.createDataFrame(rdd, ["id", "duration"])
df2.show()

+---+--------+
| id|duration|
+---+--------+
|  1|      20|
|  2|      30|
|  3|      30|
+---+--------+

I want to sort this DataFrame in descending order of duration and add a new column holding the cumulative sum of duration. So I did the following:

from pyspark.sql import Window
from pyspark.sql.functions import sum  # Spark's aggregate sum, not Python's built-in

windowSpec = Window.orderBy(df2['duration'].desc())

df_cum_sum = df2.withColumn("duration_cum_sum", sum('duration').over(windowSpec))

df_cum_sum.show()

+---+--------+----------------+
| id|duration|duration_cum_sum|
+---+--------+----------------+
|  2|      30|              60|
|  3|      30|              60|
|  1|      20|              80|
+---+--------+----------------+

The output I want is:

+---+--------+----------------+
| id|duration|duration_cum_sum|
+---+--------+----------------+
|  2|      30|              30| 
|  3|      30|              60| 
|  1|      20|              80|
+---+--------+----------------+

How can I get this?

Here is the breakdown:

+--------+----------------+
|duration|duration_cum_sum|
+--------+----------------+
|      30|              30| #First value
|      30|              60| #Current duration + previous cum sum value
|      20|              80| #Current duration + previous cum sum value     
+--------+----------------+
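As a quick sanity check of this breakdown, here is a minimal plain-Python sketch (the durations are simply hard-coded in the descending order shown above):

from itertools import accumulate

# Durations already sorted in descending order, as in the desired output.
durations = [30, 30, 20]

# Running total: 30, 30 + 30 = 60, 60 + 20 = 80.
print(list(accumulate(durations)))  # [30, 60, 80]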

2 Answers:

Answer 0 (score: 1)

You can introduce row_number to break the ties; written in SQL:

df2.selectExpr(
    "id", "duration", 
    "sum(duration) over (order by row_number() over (order by duration desc)) as duration_cum_sum"
).show()

+---+--------+----------------+
| id|duration|duration_cum_sum|
+---+--------+----------------+
|  2|      30|              30|
|  3|      30|              60|
|  1|      20|              80|
+---+--------+----------------+
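If you would rather stay in the DataFrame API than use selectExpr, a minimal sketch of the same tie-breaking idea could look like this (the column name rn and the two window specs are illustrative choices, not part of the original answer):

from pyspark.sql import Window
from pyspark.sql import functions as F

# Assign a unique row number in descending duration order to break ties,
# then take the cumulative sum ordered by that unique row number.
row_window = Window.orderBy(F.col("duration").desc())
cum_window = Window.orderBy("rn")

df_cum_sum = (
    df2.withColumn("rn", F.row_number().over(row_window))
       .withColumn("duration_cum_sum", F.sum("duration").over(cum_window))
       .drop("rn")
)
df_cum_sum.show()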

Answer 1 (score: 0)

Here you can check: when a window only specifies ORDER BY, Spark uses a RANGE frame by default (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), so rows with the same duration fall into the same frame and get the same sum. Switching to an explicit ROWS frame with rowsBetween gives a row-by-row running sum:

from pyspark.sql import Window
from pyspark.sql import functions as F

df2.withColumn(
    'cumu',
    F.sum('duration').over(
        Window.orderBy(F.col('duration').desc())
              .rowsBetween(Window.unboundedPreceding, 0)
    )
).show()
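With the ROWS frame each row only sums itself and the rows before it, so this should produce the running sum the question asks for; note that the relative order of the two tied duration-30 rows (id 2 and id 3) is not guaranteed.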