I have a large PySpark dataframe containing user interaction data spanning several years. There are many columns, but the three relevant to this question are userid, interaction_date, and interaction_timestamp. Assume the table contains multiple entries for a given user.
I need to write a function that adds a column indicating, for a given customer, the number of days prior to that customer's most recently observed interaction in the table. For example, for the input table I would like to add a column that counts back from the user's most recent interaction date (the most recent interaction date is 1, the next most recent is 2, and so on).
Can anyone point me toward the right approach?
Answer 0: (Score: 1)
You can achieve this with a window function such as dense_rank. Take a look at the comments below:
from pyspark.sql.window import Window
import pyspark.sql.functions as F
cols = ['userid', 'interaction_timestamp']
data = [('1', '2018-01-02'),
        ('2', '2018-01-03'),
        ('1', '2018-01-03'),
        ('1', '2018-01-04'),
        ('2', '2018-01-02'),
        ('3', '2018-01-03'),
        ('4', '2018-01-03')]
df = spark.createDataFrame(data, cols)
df = df.withColumn('interaction_timestamp', F.to_date('interaction_timestamp', 'yyyy-MM-dd'))
#rows with the same userid become part of the same partition
#these partitions will be ordered descending by interaction_timestamp
w = Window.partitionBy('userid').orderBy(F.desc('interaction_timestamp'))
#dense_rank will assign a number to each row according to the defined order
df.withColumn("interaction_date_order", F.dense_rank().over(w)).show()
Output:
+------+---------------------+----------------------+
|userid|interaction_timestamp|interaction_date_order|
+------+---------------------+----------------------+
| 3| 2018-01-03| 1|
| 1| 2018-01-04| 1|
| 1| 2018-01-03| 2|
| 1| 2018-01-02| 3|
| 4| 2018-01-03| 1|
| 2| 2018-01-03| 1|
| 2| 2018-01-02| 2|
+------+---------------------+----------------------+
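Note that the question wording also asks for the number of days before each user's most recent interaction, rather than a rank. Here is a minimal sketch of that variant, assuming the same df and column names as above; the latest_interaction and days_before_latest column names are made up for illustration:
from pyspark.sql.window import Window
import pyspark.sql.functions as F

# partition by user, no ordering needed for an aggregate like max
w_user = Window.partitionBy('userid')

# each user's most recent interaction date, attached to every row of that user
df_days = df.withColumn('latest_interaction', F.max('interaction_timestamp').over(w_user))

# days between this row's interaction and the user's latest one (0 for the latest row itself)
df_days = df_days.withColumn('days_before_latest', F.datediff('latest_interaction', 'interaction_timestamp'))
df_days.show()
If you need the 1, 2, 3, ... ordering from the example instead, the dense_rank approach above already gives you that.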