I have a table that looks like this:
+----+----+----+-----+
|time|val1|val2|class|
+----+----+----+-----+
|   1|   3|   2|    b|
|   2|   3|   1|    b|
|   1|   2|   4|    a|
|   2|   2|   5|    a|
|   3|   1|   5|    a|
+----+----+----+-----+
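For reproducibility, here is a minimal sketch that builds this DataFrame; the SparkSession variable spark is an assumption, not part of the original question:

# Hypothetical reproduction of the sample data; assumes an active SparkSession `spark`.
my_df = spark.createDataFrame(
    [(1, 3, 2, 'b'), (2, 3, 1, 'b'), (1, 2, 4, 'a'), (2, 2, 5, 'a'), (3, 1, 5, 'a')],
    ['time', 'val1', 'val2', 'class'])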
Now I want to compute a cumulative sum over the val1 and val2 columns, so I created a window spec:
from pyspark.sql import Window
from pyspark.sql import functions as F

windowval = (Window.partitionBy('class').orderBy('time')
             .rangeBetween(Window.unboundedPreceding, 0))
new_df = (my_df.withColumn('cum_sum1', F.sum('val1').over(windowval))
               .withColumn('cum_sum2', F.sum('val2').over(windowval)))
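With the sample data above, the result should look like this (a hand-computed sketch; the row order of show() can vary unless you sort explicitly):

new_df.orderBy('class', 'time').show()
# +----+----+----+-----+--------+--------+
# |time|val1|val2|class|cum_sum1|cum_sum2|
# +----+----+----+-----+--------+--------+
# |   1|   2|   4|    a|       2|       4|
# |   2|   2|   5|    a|       4|       9|
# |   3|   1|   5|    a|       5|      14|
# |   1|   3|   2|    b|       3|       2|
# |   2|   3|   1|    b|       6|       3|
# +----+----+----+-----+--------+--------+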
However, I assume Spark will apply the window function to the original table twice, which seems inefficient. Since the problem is very simple, is there a way to apply the window function only once and take the cumulative sum over both columns?
Answer (score: 1)
"However, I assume Spark will apply the window function to the original table twice, which seems inefficient."

Your assumption is incorrect. It is enough to take a look at the optimized logical plan:
== Optimized Logical Plan ==
Window [sum(val1#1L) windowspecdefinition(class#3, time#0L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS cum_sum1#9L, sum(val2#2L) windowspecdefinition(class#3, time#0L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS cum_sum2#16L], [class#3], [time#0L ASC NULLS FIRST]
+- LogicalRDD [time#0L, val1#1L, val2#2L, class#3], false
or the physical plan:
== Physical Plan ==
Window [sum(val1#1L) windowspecdefinition(class#3, time#0L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS cum_sum1#9L, sum(val2#2L) windowspecdefinition(class#3, time#0L ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS cum_sum2#16L], [class#3], [time#0L ASC NULLS FIRST]
+- *(1) Sort [class#3 ASC NULLS FIRST, time#0L ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(class#3, 200)
+- Scan ExistingRDD[time#0L,val1#1L,val2#2L,class#3]
Both clearly show that the Window is applied only once.
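You can verify this yourself by printing the plans; a minimal sketch (passing True requests the extended output, which includes the optimized logical plan shown above):

# Prints the parsed, analyzed, and optimized logical plans plus the physical plan.
new_df.explain(True)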