How to count the number of rows between given time-interval windows

Time: 2019-08-07 12:36:37

Tags: dataframe pyspark pyspark-sql

I have a DataFrame with two columns, event-time (timestamp) and color (string), and I want to count the number of rows that fall within each time-interval window.

  event-time              color
  2019-08-01 00:00:00    orange
  2019-08-01 00:00:20    orange
  2019-08-01 00:00:44    yellow
  2019-08-01 00:01:00    pink
  2019-08-01 00:01:20    pink
  2019-08-01 00:02:00    black
      ....               ...
  2019-08-07 00:01:00    pink

I want something like this:

    event-time            count
    2019-08-01 00:00:00   3
    2019-08-01 00:01:00   2
    2019-08-01 00:02:00   1
         ...              ...

I tried using the window function, but did not get the expected output.

3 answers:

Answer 0 (score: 0)

You can create a range variable and use it for grouping and counting. Something like the below should help:

import pyspark.sql.functions as F

seconds = 1
# floor each timestamp to the start of its `seconds`-sized bucket
seconds_window = F.from_unixtime(F.unix_timestamp('event-time')
       - F.unix_timestamp('event-time') % seconds)
df = df.withColumn('1sec_window', seconds_window)

Answer 1 (score: 0)

You can use the window function here.

First create a DataFrame; if event-time is of StringType, convert it to TimestampType:

import pyspark.sql.functions as F

df = df.withColumn('time', F.to_timestamp(df['event-time'], 'yyyy-MM-dd HH:mm:ss'))
df.show()

This is the DataFrame we have:

+-------------------+------+-------------------+
|         event-time| color|               time|
+-------------------+------+-------------------+
|2019-08-01 00:00:00|orange|2019-08-01 00:00:00|
|2019-08-01 00:00:20|orange|2019-08-01 00:00:20|
|2019-08-01 00:00:44|yellow|2019-08-01 00:00:44|
|2019-08-01 00:01:00|  pink|2019-08-01 00:01:00|
|2019-08-01 00:01:20|  pink|2019-08-01 00:01:20|
|2019-08-01 00:02:00| black|2019-08-01 00:02:00|
+-------------------+------+-------------------+

Next, group event-time into 1 minute windows, and use agg with count:

w = df.groupBy(F.window("time", "1 minute")).agg(F.count("event-time").alias("count"))
w.orderBy('window').show()
w.select(w.window.start.cast("string").alias("start"),
         w.window.end.cast("string").alias("end"),
         "count").orderBy('start').show()

This is what you end up with:

+--------------------+-----+
|              window|count|
+--------------------+-----+
|[2019-08-01 00:00...|    3|
|[2019-08-01 00:01...|    2|
|[2019-08-01 00:02...|    1|
+--------------------+-----+


+-------------------+-------------------+-----+
|              start|                end|count|
+-------------------+-------------------+-----+
|2019-08-01 00:00:00|2019-08-01 00:01:00|    3|
|2019-08-01 00:01:00|2019-08-01 00:02:00|    2|
|2019-08-01 00:02:00|2019-08-01 00:03:00|    1|
+-------------------+-------------------+-----+

You can replace 1 minute with another time interval, e.g. 1 second, 1 day 12 hours, 2 minutes.

See the documentation here.

Answer 2 (score: 0)

IIUC, you want to group the event time by minute; you can try pyspark.sql.functions.date_trunc (Spark 2.3+):

>>> from pyspark.sql.functions import date_trunc, to_timestamp

>>> df.show()
+-------------------+------+
|         event-time| color|
+-------------------+------+
|2019-08-01 00:00:00|orange|
|2019-08-01 00:00:20|orange|
|2019-08-01 00:00:44|yellow|
|2019-08-01 00:01:00|  pink|
|2019-08-01 00:01:20|  pink|
|2019-08-01 00:02:00| black|
+-------------------+------+

>>> df.withColumn('event-time', date_trunc('minute', to_timestamp('event-time'))).show()
+-------------------+------+
|         event-time| color|
+-------------------+------+
|2019-08-01 00:00:00|orange|
|2019-08-01 00:00:00|orange|
|2019-08-01 00:00:00|yellow|
|2019-08-01 00:01:00|  pink|
|2019-08-01 00:01:00|  pink|
|2019-08-01 00:02:00| black|
+-------------------+------+

Then group by the updated event-time and count the rows:

>>> df.withColumn('event-time', date_trunc('minute', to_timestamp('event-time'))) \
  .groupBy('event-time') \
  .count() \
  .show()
+-------------------+-----+
|         event-time|count|
+-------------------+-----+
|2019-08-01 00:01:00|    2|
|2019-08-01 00:00:00|    3|
|2019-08-01 00:02:00|    1|
+-------------------+-----+

Note: if event-time is already of TimestampType, skip the to_timestamp() function and use the event-time field directly.