Insert missing date rows and fill the new rows with the old value in PySpark

Posted: 2019-12-08 21:58:16

Tags: pyspark

I have a DataFrame that contains a person, a weight, and a timestamp:

+-----------+-------------------+------+
|     person|          timestamp|weight|
+-----------+-------------------+------+
|          1|2019-12-02 14:54:17| 49.94|
|          1|2019-12-03 08:58:39| 50.49|
|          1|2019-12-06 10:44:01| 50.24|
|          2|2019-12-02 08:58:39| 62.32|
|          2|2019-12-04 10:44:01| 65.64|
+-----------+-------------------+------+

I want to fill this in so that every person has an entry for every date, meaning the above should become:

+-----------+-------------------+------+
|     person|          timestamp|weight|
+-----------+-------------------+------+
|          1|2019-12-02 14:54:17| 49.94|
|          1|2019-12-03 08:58:39| 50.49|
|          1|2019-12-04 00:00:01| 50.49|
|          1|2019-12-05 00:00:01| 50.49|
|          1|2019-12-06 10:44:01| 50.24|
|          1|2019-12-07 00:00:01| 50.24|
|          1|2019-12-08 00:00:01| 50.24|
|          2|2019-12-02 08:58:39| 62.32|
|          2|2019-12-03 00:00:01| 62.32|
|          2|2019-12-04 10:44:01| 65.64|
|          2|2019-12-05 00:00:01| 65.64|
|          2|2019-12-06 00:00:01| 65.64|
|          2|2019-12-07 00:00:01| 65.64|
|          2|2019-12-08 00:00:01| 65.64|
+-----------+-------------------+------+

I defined a new table that contains every date between the minimum and maximum date, using datediff:

from pyspark.sql.functions import min, max, datediff, expr, posexplode

min_max_date = df_person_weights.select(min("timestamp"), max("timestamp")) \
        .withColumnRenamed("min(timestamp)", "min_date") \
        .withColumnRenamed("max(timestamp)", "max_date")

min_max_date = min_max_date.withColumn("datediff", datediff("max_date", "min_date")) \
        .withColumn("repeat", expr("split(repeat(',', datediff), ',')")) \
        .select("*", posexplode("repeat").alias("date", "val")) \
        .withColumn("date", expr("date_add(min_date, date)"))

This gives me a new DataFrame with the following dates:

+----------+
|      date|
+----------+
|2019-12-02|
|2019-12-03|
|2019-12-04|
|2019-12-05|
|2019-12-06|
|2019-12-07|
|2019-12-08|
+----------+

I have tried different joins, for example:

min_max_date.join(df_price_history, min_max_date.date != df_price_history.date, "leftouter")

But I don't get the result I need. Can someone help? How can I combine the information I now have?

1 Answer:

Answer 0 (score: 2):

You want to forward-fill the dataset. This gets a bit more involved because you need to do it per category (person).

One way to do it is this: create a new DataFrame that contains, for every person, all the dates on which that person should have a value (see below; this is simply dates_by_person).

Then left-join the original DataFrame onto this one, so that you start creating the missing rows.

Next, use a window function to find, within each group of person, the last non-null weight ordered by date. If there can be more than one entry per date (so a person can have several records on a single date), you also have to order by the timestamp column.

Finally, coalesce the columns so that any null field is replaced by the intended value.

from datetime import datetime, timedelta
from itertools import product

import pyspark.sql.functions as psf
from pyspark.sql import Window

data = (  # recreate the DataFrame
    (1, datetime(2019, 12, 2, 14, 54, 17), 49.94),
    (1, datetime(2019, 12, 3, 8, 58, 39), 50.49),
    (1, datetime(2019, 12, 6, 10, 44, 1), 50.24),
    (2, datetime(2019, 12, 2, 8, 58, 39), 62.32),
    (2, datetime(2019, 12, 4, 10, 44, 1), 65.64))
df = spark.createDataFrame(data, schema=("person", "timestamp", "weight"))

min_max_timestamps = df.agg(psf.min(df.timestamp), psf.max(df.timestamp)).head()
first_date, last_date = [ts.date() for ts in min_max_timestamps]
all_days_in_range = [first_date + timedelta(days=d)
                     for d in range((last_date - first_date).days + 1)]
people = [row.person for row in df.select("person").distinct().collect()]
dates_by_person = spark.createDataFrame(product(people, all_days_in_range),
                                        schema=("person", "date"))

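# Left-join the original data onto the per-person calendar;
# dates without a measurement end up with null timestamp and weight.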
df2 = (dates_by_person.join(df,
                            (psf.to_date(df.timestamp) == dates_by_person.date)
                            & (dates_by_person.person == df.person),
                            how="left")
       .drop(df.person)
       )
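# Window over each person's strictly earlier dates,
# used to look up the last known weight.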
wind = (Window
        .partitionBy("person")
        .rangeBetween(Window.unboundedPreceding, -1)
        .orderBy(psf.unix_timestamp("date"))
        )
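# Forward-fill: for every row, pick the most recent non-null weight from earlier dates.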
df3 = df2.withColumn("last_weight",
                     psf.last("weight", ignorenulls=True).over(wind))
df4 = df3.select(
    df3.person,
    psf.coalesce(df3.timestamp, psf.to_timestamp(df3.date)).alias("timestamp"),
    psf.coalesce(df3.weight, df3.last_weight).alias("weight"))
df4.show()
# +------+-------------------+------+
# |person|          timestamp|weight|
# +------+-------------------+------+
# |     1|2019-12-02 14:54:17| 49.94|
# |     1|2019-12-03 08:58:39| 50.49|
# |     1|2019-12-04 00:00:00| 50.49|
# |     1|2019-12-05 00:00:00| 50.49|
# |     1|2019-12-06 10:44:01| 50.24|
# |     2|2019-12-02 08:58:39| 62.32|
# |     2|2019-12-03 00:00:00| 62.32|
# |     2|2019-12-04 10:44:01| 65.64|
# |     2|2019-12-05 00:00:00| 65.64|
# |     2|2019-12-06 00:00:00| 65.64|
# +------+-------------------+------+
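As noted above, if one person can have several records on the same date, the window should also be ordered by the actual timestamp. A minimal sketch under that assumption (using the same df2 as above; falling back to midnight of the generated date for the filler rows is an illustrative choice, not part of the original):

wind_ts = (Window
           .partitionBy("person")
           .orderBy(psf.coalesce(df2.timestamp, psf.to_timestamp(df2.date)))
           .rowsBetween(Window.unboundedPreceding, Window.currentRow))
df3_ts = df2.withColumn("last_weight",
                        psf.last("weight", ignorenulls=True).over(wind_ts))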

Edit: following David's suggestion in the comments, if you have a very large number of people, the construction of dates_by_person does not need to bring everything to the driver. In this example we are talking about a handful of integers, so it is no big deal, but if it gets large, try:

dates = spark.createDataFrame(((d,) for d in all_days_in_range),
                              schema=("date",))
people = df.select("person").distinct()
dates_by_person = dates.crossJoin(people)
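
If even the Python list of dates is a concern, the calendar itself can be built on the cluster. A sketch assuming Spark 2.4 or later (sequence/explode; the lo/hi aliases are illustrative only):

# compute the date bounds and expand them into one row per day, all on the cluster
bounds = df.agg(psf.min("timestamp").alias("lo"), psf.max("timestamp").alias("hi"))
dates = bounds.select(
    psf.explode(psf.sequence(psf.to_date("lo"), psf.to_date("hi"))).alias("date"))
dates_by_person = dates.crossJoin(df.select("person").distinct())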