Finding non-overlapping windows in a pyspark dataframe

Asked: 2019-07-18 20:30:26

Tags: apache-spark pyspark apache-spark-sql pyspark-sql

Say I have a pyspark dataframe with an id column and a time column (t) in seconds. For each id, I'd like to group the rows so that each group contains all entries that fall within 5 seconds of that group's start time. For example, if the table is:

+---+--+
|id |t |
+---+--+
|1  |0 |
|1  |1 |
|1  |3 |
|1  |8 |
|1  |14|
|1  |18|
|2  |0 |
|2  |20|
|2  |21|
|2  |50|
+---+--+

then the result should be:

+---+--+---------+-------------+-------+
|id |t |subgroup |window_start |offset |
+---+--+---------+-------------+-------+
|1  |0 |1        |0            |0      |
|1  |1 |1        |0            |1      |
|1  |3 |1        |0            |3      |
|1  |8 |2        |8            |0      |
|1  |14|3        |14           |0      |
|1  |18|3        |14           |4      |
|2  |0 |1        |0            |0      |
|2  |20|2        |20           |0      |
|2  |21|2        |20           |1      |
|2  |50|3        |50           |0      |
+---+--+---------+-------------+-------+

I don't need the subgroup numbers to be consecutive. A solution using a custom UDAF in Scala is fine with me, as long as it's efficient.

Computing (cumsum(t)-(cumsum(t)%5))/5 within each group can be used to identify the first window, but not the windows after it. Essentially, the problem is that once the first window is found, the cumulative sum needs to reset to 0. I could apply this cumulative-sum approach recursively, but that is far too inefficient on a large dataset.
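For reference, a rough PySpark rendering of that cumulative-sum expression (a sketch only; the window spec and the window_id column name are my own assumptions, and as described it only labels the first window per id correctly because the running sum never resets):

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Running sum of t within each id, ordered by t.
w = Window.partitionBy('id').orderBy('t')
cumsum = F.sum('t').over(w)

# (cumsum(t) - cumsum(t) % 5) / 5: constant over the first window,
# but the sum keeps growing instead of resetting at each new window.
labeled = df.withColumn('window_id', (cumsum - cumsum % 5) / 5)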

The following works and is more efficient than calling cumsum recursively, but it is still so slow as to be unusable on large dataframes:

import numpy
import pyspark.sql.functions
import pyspark.sql.types

# `spark` is assumed to be an existing SparkSession (e.g. from the pyspark shell).
d = [[int(x[0]),float(x[1])] for x in [[1,0],[1,1],[1,4],[1,7],[1,14],[1,18],[2,5],[2,20],[2,21],[3,0],[3,1],[3,1.5],[3,2],[3,3.5],[3,4],[3,6],[3,6.5],[3,7],[3,11],[3,14],[3,18],[3,20],[3,24],[4,0],[4,1],[4,2],[4,6],[4,7]]]

schema = pyspark.sql.types.StructType(
  [
    pyspark.sql.types.StructField('id',pyspark.sql.types.LongType(),False),
    pyspark.sql.types.StructField('t',pyspark.sql.types.DoubleType(),False)
  ]
)
df = spark.createDataFrame(
  [pyspark.sql.Row(*x) for x in d],
  schema
)

def getSubgroup(ts):
  # Assign each time a subgroup index that increments whenever the current
  # time is 5 or more seconds past the start of the current window.
  result = []
  total = 0
  ts = sorted(ts)
  tdiffs = numpy.array(ts)
  tdiffs = tdiffs[1:]-tdiffs[:-1]
  tdiffs = numpy.concatenate([[0],tdiffs])
  subgroup = 0
  for k in range(len(tdiffs)):
    t = ts[k]
    tdiff = tdiffs[k]
    total = total+tdiff
    if total >= 5:
      total = 0
      subgroup += 1
    result.append([t,float(subgroup)])
  return result

getSubgroupUDF = pyspark.sql.functions.udf(getSubgroup,pyspark.sql.types.ArrayType(pyspark.sql.types.ArrayType(pyspark.sql.types.DoubleType())))

# Compute (t, subgroup) pairs per id with the UDF, then explode back into rows.
subgroups = df.select('id','t').distinct().groupBy(
  'id'
).agg(
  pyspark.sql.functions.collect_list('t').alias('ts')
).withColumn(
  't_and_subgroup',
  pyspark.sql.functions.explode(getSubgroupUDF('ts'))
).withColumn(
  't',
  pyspark.sql.functions.col('t_and_subgroup').getItem(0)
).withColumn(
  'subgroup',
  pyspark.sql.functions.col('t_and_subgroup').getItem(1).cast(pyspark.sql.types.IntegerType())
).drop(
  't_and_subgroup','ts'
)

df = df.join(
  subgroups,
  on=['id','t'],
  how='inner'
)

df.orderBy(
  pyspark.sql.functions.asc('id'),pyspark.sql.functions.asc('t')
).show()

1 Answer:

Answer 0 (score: 0)

The subgroup column is equivalent to partitioning by id, window_start, so perhaps you don't need to create it.
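If an explicit subgroup number is still wanted once window_start exists, a dense_rank over window_start within each id would produce one (a sketch; this is my own addition rather than part of the answer):

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Number the distinct window_start values 1, 2, 3, ... within each id.
df = df.withColumn(
  'subgroup',
  F.dense_rank().over(Window.partitionBy('id').orderBy('window_start'))
)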

To create window_start, I think this will do it: .withColumn("window_start", min("t").over(Window.partitionBy("id").orderBy(asc("t")).rangeBetween(0, 5)))

I'm not sure about the behavior of rangeBetween.

To create offset it's just .withColumn("offset", col("t") - col("window_start")).
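Put together, the suggestion would look roughly like this (a sketch assuming the imports below; whether rangeBetween(0, 5) actually gives the intended non-overlapping windows is exactly the open question above):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Frame over rows whose t lies between the current row's t and t + 5.
w = Window.partitionBy('id').orderBy(F.asc('t')).rangeBetween(0, 5)

result = (
  df
  .withColumn('window_start', F.min('t').over(w))
  .withColumn('offset', F.col('t') - F.col('window_start'))
)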

Let me know how it goes.