PySpark window function over the entire dataframe

Asked: 2020-02-26 16:25:10

Tags: dataframe apache-spark pyspark apache-spark-sql window-functions

Consider a PySpark dataframe. I want to summarize the entire dataframe, per column, and append the result to every row.

+-----+----------+-----------+
|index|      col1|       col2|
+-----+----------+-----------+
|  0.0|0.58734024|0.085703015|
|  1.0|0.67304325| 0.17850411|
+-----+----------+-----------+

Expected result

+-----+----------+-----------+--------+---------+--------+---------+
|index|      col1|       col2|col1_min|col1_mean|col2_min|col2_mean|
+-----+----------+-----------+--------+---------+--------+---------+
|  0.0|0.58734024|0.085703015|      -5|      2.3|      -2|      1.4|
|  1.0|0.67304325| 0.17850411|      -5|      2.3|      -2|      1.4|
+-----+----------+-----------+--------+---------+--------+---------+

As far as I understand, I need a window function with the whole dataframe as the window, so that the result is kept on every row (instead of, say, computing the statistics separately and then joining them back to replicate them on each row).

My questions are:

  1. How do I write a Window without any partitioning or ordering?

I know there is the standard Window with partitionBy and orderBy, but not one that treats everything as a single partition:

w = Window.partitionBy("col1", "col2").orderBy(desc("col1"))
df = df.withColumn("col1_mean", mean("col1").over(w))

How can I write a Window that treats everything as one partition?

  2. Is there any way to write this dynamically for all columns?

Let's say I have 500 columns; writing the statement over and over does not look good:

df = df.withColumn("col1_mean", mean("col1").over(w)).withColumn("col1_min", min("col1").over(w)).withColumn("col2_mean", mean("col2").over(w)).....

Let's assume I want multiple statistics for each column, so every colx will spawn colx_min, colx_max, colx_mean.

2 Answers:

Answer 0 (score: 2)

Instead of using a window, you can achieve the same result with a custom aggregation combined with a cross join:

import pyspark.sql.functions as F
from pyspark.sql.functions import broadcast
from itertools import chain

df = spark.createDataFrame([
  [1, 2.3, 1],
  [2, 5.3, 2],
  [3, 2.1, 4],
  [4, 1.5, 5]
], ["index", "col1", "col2"])

agg_cols = [(F.min(c).alias("min_" + c),
             F.max(c).alias("max_" + c),
             F.mean(c).alias("mean_" + c))
            for c in df.columns if c.startswith('col')]

stats_df = df.agg(*list(chain(*agg_cols)))

# there is no performance impact from crossJoin since we have only one row on the right table which we broadcast (most likely Spark will broadcast it anyway)
df.crossJoin(broadcast(stats_df)).show() 

# +-----+----+----+--------+--------+---------+--------+--------+---------+
# |index|col1|col2|min_col1|max_col1|mean_col1|min_col2|max_col2|mean_col2|
# +-----+----+----+--------+--------+---------+--------+--------+---------+
# |    1| 2.3|   1|     1.5|     5.3|      2.8|       1|       5|      3.0|
# |    2| 5.3|   2|     1.5|     5.3|      2.8|       1|       5|      3.0|
# |    3| 2.1|   4|     1.5|     5.3|      2.8|       1|       5|      3.0|
# |    4| 1.5|   5|     1.5|     5.3|      2.8|       1|       5|      3.0|
# +-----+----+----+--------+--------+---------+--------+--------+---------+

Note 1: By using broadcast we avoid a shuffle, since the broadcast df is shipped to every executor.

Note 2: chain(*agg_cols) flattens the list of tuples created in the previous step.
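
As a quick illustration (not from the original answer, with made-up names), the flattening behaves like this on a toy list of tuples:

from itertools import chain

pairs = [("a_min", "a_max"), ("b_min", "b_max")]  # stand-in for the aliased aggregate columns
flat = list(chain(*pairs))
print(flat)  # ['a_min', 'a_max', 'b_min', 'b_max'] -- ready to be unpacked into df.agg(*flat)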

Update:

Here is the execution plan of the program above:

== Physical Plan ==
*(3) BroadcastNestedLoopJoin BuildRight, Cross
:- *(3) Scan ExistingRDD[index#196L,col1#197,col2#198L]
+- BroadcastExchange IdentityBroadcastMode, [id=#274]
   +- *(2) HashAggregate(keys=[], functions=[finalmerge_min(merge min#233) AS min(col1#197)#202, finalmerge_max(merge max#235) AS max(col1#197)#204, finalmerge_avg(merge sum#238, count#239L) AS avg(col1#197)#206, finalmerge_min(merge min#241L) AS min(col2#198L)#208L, finalmerge_max(merge max#243L) AS max(col2#198L)#210L, finalmerge_avg(merge sum#246, count#247L) AS avg(col2#198L)#212])
      +- Exchange SinglePartition, [id=#270]
         +- *(1) HashAggregate(keys=[], functions=[partial_min(col1#197) AS min#233, partial_max(col1#197) AS max#235, partial_avg(col1#197) AS (sum#238, count#239L), partial_min(col2#198L) AS min#241L, partial_max(col2#198L) AS max#243L, partial_avg(col2#198L) AS (sum#246, count#247L)])
            +- *(1) Project [col1#197, col2#198L]
               +- *(1) Scan ExistingRDD[index#196L,col1#197,col2#198L]

Here we can see that the SinglePartition inside the BroadcastExchange is broadcasting a single row, since stats_df fits into one partition. So the only data being moved here is that one row, the smallest possible amount.
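
For reference, a plan like the one above can be printed with DataFrame.explain(); a minimal sketch against the df and stats_df defined earlier:

# prints the physical plan for the cross join; pass extended=True to also see the logical plans
df.crossJoin(broadcast(stats_df)).explain()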

Answer 1 (score: 1)

We can also use min("<col_name>").over() with an empty window spec, i.e. without any partitionBy or orderBy clause:

Example:

//sample data
val df=Seq((1,2,3),(4,5,6)).toDF("i","j","k")

val df1=df.columns.foldLeft(df)((df, c) => {
  df.withColumn(s"${c}_min",min(col(s"${c}")).over()).
  withColumn(s"${c}_max",max(col(s"${c}")).over()).
  withColumn(s"${c}_mean",mean(col(s"${c}")).over())
})

df1.show()
//+---+---+---+-----+-----+------+-----+-----+------+-----+-----+------+
//|  i|  j|  k|i_min|i_max|i_mean|j_min|j_max|j_mean|k_min|k_max|k_mean|
//+---+---+---+-----+-----+------+-----+-----+------+-----+-----+------+
//|  1|  2|  3|    1|    4|   2.5|    2|    5|   3.5|    3|    6|   4.5|
//|  4|  5|  6|    1|    4|   2.5|    2|    5|   3.5|    3|    6|   4.5|
//+---+---+---+-----+-----+------+-----+-----+------+-----+-----+------+
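
For a PySpark version of the same idea (a sketch, not part of the original answer): the Python Column.over requires an explicit WindowSpec, so an empty Window.partitionBy() plays the role of the argument-less over() above, and functools.reduce stands in for Scala's foldLeft. It assumes an existing SparkSession named spark:

from functools import reduce
import pyspark.sql.functions as F
from pyspark.sql import Window

df = spark.createDataFrame([(1, 2, 3), (4, 5, 6)], ["i", "j", "k"])

# no partition columns, no ordering -> Spark moves all rows into one global partition
w = Window.partitionBy()

# fold over the column names, adding min/max/mean columns for each one
df1 = reduce(
    lambda acc, c: (acc
                    .withColumn(f"{c}_min", F.min(c).over(w))
                    .withColumn(f"{c}_max", F.max(c).over(w))
                    .withColumn(f"{c}_mean", F.mean(c).over(w))),
    df.columns,
    df,
)
df1.show()

Note that Spark will warn about "No Partition Defined for Window operation", since all data ends up in a single partition, which is exactly the trade-off this approach accepts.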