Merge per group to fill in a time series

Time: 2018-07-04 10:19:32

Tags: python apache-spark pyspark

I am trying to merge two dataframes per group so that the time series is filled in for each user. Consider the following pyspark dataframes,

df = sqlContext.createDataFrame(
    [
        ('2018-03-01 00:00:00', 'A', 5),
        ('2018-03-01 03:00:00', 'A', 7),
        ('2018-03-01 02:00:00', 'B', 3),
        ('2018-03-01 04:00:00', 'B', 2)
     ],
     ('datetime', 'username', 'count')
)

and

df1 = sqlContext.createDataFrame(
    [
        ('2018-03-01 00:00:00', 1),
        ('2018-03-01 01:00:00', 2),
        ('2018-03-01 02:00:00', 2),
        ('2018-03-01 03:00:00', 3),
        ('2018-03-01 04:00:00', 1),
        ('2018-03-01 05:00:00', 5)
    ],
    ('datetime', 'val')
)

which gives

+-------------------+--------+-----+
|           datetime|username|count|
+-------------------+--------+-----+
|2018-03-01 00:00:00|       A|    5|
|2018-03-01 03:00:00|       A|    7|
|2018-03-01 02:00:00|       B|    3|
|2018-03-01 04:00:00|       B|    2|
+-------------------+--------+-----+

and

+-------------------+---+
|           datetime|val|
+-------------------+---+
|2018-03-01 00:00:00|  1|
|2018-03-01 01:00:00|  2|
|2018-03-01 02:00:00|  2|
|2018-03-01 03:00:00|  3|
|2018-03-01 04:00:00|  1|
|2018-03-01 05:00:00|  5|
+-------------------+---+

The column val in df1 is irrelevant and not needed in the final result, so we can drop it. Finally, the expected result would be

+-------------------+--------+-----+
|           datetime|username|count|
+-------------------+--------+-----+
|2018-03-01 00:00:00|       A|    5|
|2018-03-01 01:00:00|       A|    0|
|2018-03-01 02:00:00|       A|    0|
|2018-03-01 03:00:00|       A|    7|
|2018-03-01 04:00:00|       A|    0|
|2018-03-01 05:00:00|       A|    0|
|2018-03-01 00:00:00|       B|    0|
|2018-03-01 01:00:00|       B|    0|
|2018-03-01 02:00:00|       B|    3|
|2018-03-01 03:00:00|       B|    0|
|2018-03-01 04:00:00|       B|    2|
|2018-03-01 05:00:00|       B|    0|
+-------------------+--------+-----+

I have tried groupBy() with a join, but that did not work. I also tried creating a function and registering it as a pandas_udf(), but it still does not work, i.e.

df.groupBy('usernames').join(df1, 'datetime', 'right')

@pandas_udf('datetime string, username string, count double', F.PandasUDFType.GROUPED_MAP)
def fill_time(df):
    return df.merge(df1, on = 'cdatetime', how = 'right')

Any suggestions?

1 Answer:

Answer 0 (score: 3)

Simply cross join the distinct timestamps with the distinct usernames, then outer join the result with the data:

from pyspark.sql.functions import broadcast

# build every (datetime, username) combination, then left join the counts
(broadcast(df1.select("datetime").distinct())
    .crossJoin(df.select("username").distinct())
    .join(df, ["datetime", "username"], "leftouter")
    .na.fill(0))
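
For display, the result can be sorted so the rows line up with the expected table above (a minimal usage sketch; the variable name result is an assumption, not part of the original answer):

result = (broadcast(df1.select("datetime").distinct())
    .crossJoin(df.select("username").distinct())
    .join(df, ["datetime", "username"], "leftouter")
    .na.fill(0))

# order rows per user and timestamp before showing them
result.orderBy("username", "datetime").show()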

To use a pandas_udf you need a local object as reference:

from pyspark.sql.functions import PandasUDFType, pandas_udf

def fill_time(df1):
    # df1 is a local pandas DataFrame holding the distinct datetimes;
    # it is captured in the closure of the grouped-map UDF below
    @pandas_udf('datetime string, username string, count double', PandasUDFType.GROUPED_MAP)
    def _(df):
        # right join so every datetime appears for the current group,
        # then propagate the group's username into the newly added rows
        df_ = df.merge(df1, on='datetime', how='right')
        df_["username"] = df_["username"].ffill().bfill()
        return df_
    return _

(df.groupBy("username")
    .apply(fill_time(
        df1.select("datetime").distinct().toPandas()
    ))
    .na.fill(0))

But it will be slower than the SQL-only solution, since each group has to be serialized to pandas and back.
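
As a quick sanity check for either approach (a sketch under the assumption that the filled DataFrame is bound to a hypothetical name filled), the result should contain one row per (username, datetime) pair:

# hypothetical: `filled` holds the output of either solution above
n_users = df.select("username").distinct().count()
n_times = df1.select("datetime").distinct().count()
assert filled.count() == n_users * n_times  # 2 users x 6 timestamps = 12 rows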