Apache Beam per-user session windows are not merged

Time: 2019-03-18 10:43:36

Tags: python google-cloud-dataflow apache-beam

We have an app with users; each user session in our app lasts roughly 10-40 minutes, and I would like to analyze the sequence of events per session, based on the specific events that occurred (e.g. "this user converted", "this user had a problem in their last session", "this user's last session was successful").

(After that, I would like to count these higher-level events per day, but that is a separate question.)

For this I have been looking into session windows; but all the docs seem to be geared towards global session windows, while I want to create them per user (which is also a natural partitioning of the data).

I have not been able to find documentation on how to do this (Python preferred). Could you point me in the right direction?

In other words: how do I create per-user, per-session windows that output more structured (enriched) events?

What I have

import logging
from itertools import groupby

import apache_beam as beam

class DebugPrinter(beam.DoFn):
  """Just prints the element with logging"""
  def process(self, element, window=beam.DoFn.WindowParam):
    _, x = element
    logging.info(">>> Received %s %s with window=%s", x['jsonPayload']['value'], x['timestamp'], window)
    yield element

def sum_by_event_type(user_session_events):
  logging.debug("Received %i events: %s", len(user_session_events), user_session_events)
  d = {}
  for key, group in groupby(user_session_events, lambda e: e['jsonPayload']['value']):
    d[key] = len(list(group))
  logging.info("After counting: %s", d)
  return d

# ...

by_user = valid \
  | 'keyed_on_user_id'      >> beam.Map(lambda x: (x['jsonPayload']['userId'], x))

session_gap = 5 * 60 # [s]; 5 minutes

user_sessions = by_user \
  | 'user_session_window'   >> beam.WindowInto(beam.window.Sessions(session_gap),
                                               timestamp_combiner=beam.window.TimestampCombiner.OUTPUT_AT_EOW) \
  | 'debug_printer'         >> beam.ParDo(DebugPrinter()) \
  | beam.CombinePerKey(sum_by_event_type)

Output

INFO:root:>>> Received event_1 2019-03-12T08:54:29.200Z with window=[1552380869.2, 1552381169.2)
INFO:root:>>> Received event_2 2019-03-12T08:54:29.200Z with window=[1552380869.2, 1552381169.2)
INFO:root:>>> Received event_3 2019-03-12T08:54:30.400Z with window=[1552380870.4, 1552381170.4)
INFO:root:>>> Received event_4 2019-03-12T08:54:36.300Z with window=[1552380876.3, 1552381176.3)
INFO:root:>>> Received event_5 2019-03-12T08:54:38.100Z with window=[1552380878.1, 1552381178.1)

As you can see, the Sessions() windowing does not expand the window; it only groups together events that are very close to each other... What is going wrong?

1 answer:

Answer 0 (score: 0)

You can make this work by adding a GroupByKey transform after the windowing. You have assigned keys to the records, but you never actually group them by key, so the session windowing (which operates per key) does not know that these events need to be merged together.
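To see why grouping matters: Sessions first assigns each element its own proto-window [timestamp, timestamp + gap), and overlapping windows are only merged when a per-key grouping/combining step runs. A minimal pure-Python sketch of that merging semantics (an illustration, not Beam's actual implementation):

```python
def merge_sessions(timestamps, gap):
    """Merge per-element proto-windows [t, t + gap) into sessions,
    mimicking what Beam's Sessions windowing does at GroupByKey time."""
    windows = sorted((t, t + gap) for t in timestamps)
    merged = [list(windows[0])]
    for start, end in windows[1:]:
        if start < merged[-1][1]:              # overlaps current session: extend it
            merged[-1][1] = max(merged[-1][1], end)
        else:                                  # gap exceeded: start a new session
            merged.append([start, end])
    return [tuple(w) for w in merged]

# Event offsets 1, 2, 4, 8, 16 s with a 5 s gap: the first four merge,
# the last one lands in its own session.
print(merge_sessions([1, 2, 4, 8, 16], 5))  # [(1, 13), (16, 21)]
```

Without the grouping step, each element just keeps its own [t, t + gap) window, which is exactly the behavior in the question's log.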

To confirm this, I put together a reproducible example with some in-memory dummy data (to isolate Pub/Sub from the problem and be able to test it more quickly). All five events have the same key, or user_id, but they "arrive" sequentially 1, 2, 4 and 8 seconds apart from each other. Since I use a session_gap of 5 seconds, I expect the first 4 elements to be merged into the same session. The 5th event arrives 8 seconds after the 4th one, so it has to be relegated to the next session (gap over 5 s). The data is created like this:

data = [{'user_id': 'Thanos', 'value': 'event_{}'.format(event), 'timestamp': time.time() + 2**event} for event in range(5)]
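Note that time.time() + 2**event produces offsets of 1, 2, 4, 8 and 16 seconds from "now", so consecutive events are indeed 1, 2, 4 and 8 seconds apart:

```python
# Offsets generated by 2**event for the five events, and the resulting gaps
offsets = [2 ** event for event in range(5)]          # [1, 2, 4, 8, 16]
gaps = [b - a for a, b in zip(offsets, offsets[1:])]  # [1, 2, 4, 8]
```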

We initialize the pipeline with beam.Create(data) and assign the "fake" timestamps with beam.window.TimestampedValue. Again, this just simulates streaming behavior. After that, we create the key-value pairs using the user_id field, we window into window.Sessions, and then we add the missing beam.GroupByKey() step. Finally, we log the results with a slightly modified version of DebugPrinter. The pipeline now looks like this:

events = (p
  | 'Create Events' >> beam.Create(data)
  | 'Add Timestamps' >> beam.Map(lambda x: beam.window.TimestampedValue(x, x['timestamp']))
  | 'keyed_on_user_id'      >> beam.Map(lambda x: (x['user_id'], x))
  | 'user_session_window'   >> beam.WindowInto(window.Sessions(session_gap),
                                               timestamp_combiner=window.TimestampCombiner.OUTPUT_AT_EOW)
  | 'Group' >> beam.GroupByKey()
  | 'debug_printer'         >> beam.ParDo(DebugPrinter()))

where DebugPrinter is:

class DebugPrinter(beam.DoFn):
  """Just prints the element with logging"""
  def process(self, element, window=beam.DoFn.WindowParam):
    for x in element[1]:
      logging.info(">>> Received %s %s with window=%s", x['value'], x['timestamp'], window)

    yield element

If we test this without grouping by key, we get the same behavior:

INFO:root:>>> Received event_0 1554117323.0 with window=[1554117323.0, 1554117328.0)
INFO:root:>>> Received event_1 1554117324.0 with window=[1554117324.0, 1554117329.0)
INFO:root:>>> Received event_2 1554117326.0 with window=[1554117326.0, 1554117331.0)
INFO:root:>>> Received event_3 1554117330.0 with window=[1554117330.0, 1554117335.0)
INFO:root:>>> Received event_4 1554117338.0 with window=[1554117338.0, 1554117343.0)

But after adding it, the windows work as expected. Events 0 to 3 are merged together into an extended 12-second session window. Event 4 belongs to a separate 5-second session:

INFO:root:>>> Received event_0 1554118377.37 with window=[1554118377.37, 1554118389.37)
INFO:root:>>> Received event_1 1554118378.37 with window=[1554118377.37, 1554118389.37)
INFO:root:>>> Received event_3 1554118384.37 with window=[1554118377.37, 1554118389.37)
INFO:root:>>> Received event_2 1554118380.37 with window=[1554118377.37, 1554118389.37)
INFO:root:>>> Received event_4 1554118392.37 with window=[1554118392.37, 1554118397.37)
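The width of the merged window checks out: a session starts at the first event and ends gap seconds after the last merged event, so its length is the 7-second spread of events 0-3 plus the 5-second gap:

```python
session_gap = 5
event_offsets = [1, 2, 4, 8]  # event_0..event_3, seconds after the base timestamp
# Merged session length = spread of merged events + session gap
window_length = (max(event_offsets) - min(event_offsets)) + session_gap
print(window_length)  # 12, matching window=[1554118377.37, 1554118389.37)
```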

Full code here

Two additional things are worth mentioning. The first is that, even when running this locally on a single machine with the DirectRunner, records can arrive out of order (in my case, event_3 was processed before event_2). This is done on purpose to simulate distributed processing, as explained here.

The last one is that if you get a stack trace like this:

TypeError: Cannot convert GlobalWindow to apache_beam.utils.windowed_value._IntervalWindowBase [while running 'Write Results/Write/WriteImpl/WriteBundles']

downgrade from the 2.10.0/2.11.0 SDK to 2.9.0. See, for example, this answer.
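A version pin like the following (a sketch assuming a pip-based install; whether you need the [gcp] extra depends on your setup) keeps the SDK at 2.9.0:

```shell
pip install "apache-beam[gcp]==2.9.0"
```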