How can I apply a window function to a column in PySpark?

Asked: 2019-07-26 04:58:38

Tags: python dataframe pyspark

I have the DataFrame below, which captures the number of records in each pipeline run:

[screenshot of the input DataFrame with per-run record counts]

For the same table name, I want to overwrite the existing record and keep only the latest one from that run. For example, when I ran the pipeline on July 26, two new records were added, def and lmn; since def already exists, I want to put the 666 onto the existing def record itself, like this:

[screenshot of the expected output]

How can I achieve this? I tried a window function, but it does not solve the problem:

window = Window.partitionBy("tbl_name").orderBy(F.col("updated_on").desc())
a = a.withColumn('2019_07_26', F.first('2019_07_26').over(window))
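
For context, here is a minimal runnable sketch of that attempt (the sample rows below are hypothetical stand-ins for the data in the screenshots, and the column names follow the snippet above). It illustrates why first() over the window is not enough on its own: the latest 2019_07_26 value is copied into every row of the partition, but the older duplicate rows are still kept.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical stand-in for the DataFrame in the screenshots: two runs for "def".
a = spark.createDataFrame(
    [("def", 100, "2019-07-20"), ("def", 666, "2019-07-26"), ("lmn", 222, "2019-07-26")],
    ["tbl_name", "2019_07_26", "updated_on"],
)

window = Window.partitionBy("tbl_name").orderBy(F.col("updated_on").desc())
# first() only propagates the latest 2019_07_26 value across each partition;
# both "def" rows are still present afterwards, so nothing gets overwritten.
a = a.withColumn("2019_07_26", F.first("2019_07_26").over(window))
a.show()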

1 Answer:

Answer 0 (score: 0)

You can achieve this using dense_rank; see the following example.


First, create a sample DataFrame:

from datetime import datetime
from pyspark.sql.window import Window
import pyspark.sql.functions as F

# Sample data: two records for "def" with complementary non-null columns,
# and one record each for "ab" and "test".
data = [
  ("def", None, 20, datetime(2017, 3, 12, 3, 19, 58)),
  ("ab", None, 20, datetime(2017, 3, 12, 3, 21, 30)),
  ("test", 20, None, datetime(2017, 3, 13, 3, 29, 40)),
  ("def", 20, None, datetime(2017, 3, 13, 3, 31, 23))
]
# sqlContext is the pre-existing SQLContext; on Spark 2+ spark.createDataFrame works as well.
df = sqlContext.createDataFrame(data, ["tbl_name", "2019", "2020", "updated_on"])
df.show()
+--------+----+----+-------------------+
|tbl_name|2019|2020|         updated_on|
+--------+----+----+-------------------+
|     def|null|  20|2017-03-12 03:19:58|
|      ab|null|  20|2017-03-12 03:21:30|
|    test|  20|null|2017-03-13 03:29:40|
|     def|  20|null|2017-03-13 03:31:23|
+--------+----+----+-------------------+

Then apply the dense rank and keep only one record per tbl_name:

# Ascending window to rank the records per table; descending window to pull
# the latest non-null value of each column.
wd = Window.partitionBy("tbl_name").orderBy(F.col("updated_on").asc())
wa = Window.partitionBy("tbl_name").orderBy(F.col("updated_on").desc())

df2 = (df.select("tbl_name",
                 F.first("2019", ignorenulls=True).over(wa).alias("2019"),
                 F.first("2020", ignorenulls=True).over(wa).alias("2020"),
                 "updated_on",
                 F.dense_rank().over(wd).alias("rank"))
         # Keep only one row per tbl_name.
         .filter(F.col("rank") == 1)
         .drop("rank"))
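
For reference, calling df2.show() on the sample data above should print something like the following (row order may vary, and the exact values depend on Spark's default window frame): the nulls are filled in from the other record with the same tbl_name, and only a single row per table name remains.

df2.show()
+--------+----+----+-------------------+
|tbl_name|2019|2020|         updated_on|
+--------+----+----+-------------------+
|      ab|null|  20|2017-03-12 03:21:30|
|     def|  20|  20|2017-03-12 03:19:58|
|    test|  20|null|2017-03-13 03:29:40|
+--------+----+----+-------------------+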