Kafka Streams DSL: aggregate, enrich, and send

Date: 2017-02-10 18:30:18

Tags: apache-kafka-streams

I'm trying to solve the following problem with Kafka Streams:

1 - Receive messages. Each message is marked with an eventId (the event the message updates) and a correlationId (unique for every message).

2 - Aggregate some state from the message (based on its eventId) and append it to the state already present in a local store.

3 - Enrich the message with the fully aggregated state for that event and send it to an output topic.

The crucial point is that we cannot really lose a single message, and every incoming message must always be enriched with the latest aggregated state (the state as actually evaluated while that message is being processed).
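For concreteness, here is a minimal sketch of the message shape described above; everything beyond eventId and correlationId is a hypothetical illustration, not the actual classes from the code below:

case class Market(name: String, odds: Double) // hypothetical payload item
case class StateMessage(
  eventId: String,       // identifies the event this message updates
  correlationId: String, // unique per message
  markets: Seq[Market])  // hypothetical payload to be folded into the aggregate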

From what I have seen so far, we cannot just use a simple aggregation (something like this):

stateMessageStream
  .map((k, v) => new KeyValue[String, StateMessage](k, v))
  // Pair each message with the markets extracted from it
  .mapValues[StateMessageWithMarkets](sm => StateMessageWithMarkets(Some(sm), extract(sm)))
  .groupBy((k, _) => k, stringSerde, marketAggregatorSerde)
  // Fold each incoming message into the aggregated state held in the local store
  .aggregate[StateMessageWithMarkets](() => StateMessageWithMarkets(), (_, v, aggregatedState) => aggregatedState.updateModelMarketsWith(v), marketAggregatorSerde, kafkaStoreName)
  .to(stringSerde, marketAggregatorSerde, kafkaOutTopic)

because the aggregation emits new records only at intervals, meaning that for two incoming messages we might generate just a single aggregated output message (and so we would lose a message).

My second attempt at implementing this was essentially with two streams: one for the aggregation and a second one for the plain messages. At the end we can merge the two streams back together with a join operation, using the correlationId as the key, so that we can match the right state with the right message:

val aggregatedStream : KStream[String, MarketAggregator] = stateMessageStream
  .map((k, v) => new KeyValue[String, StateMessage](k, v))
  .mapValues[StateMessage](v => {
    log.debug("Received State Message, gameId: " + v.metadata().gtpId() + ", correlationId: " + v.correlationId)
    v
  })
  // Turn each message into an aggregator seeded with the markets extracted from it
  .mapValues[MarketAggregator](sm => MarketAggregator(sm.correlationId, extract(sm)))
  .groupBy((k, v) => k, stringSerde, marketAggregatorSerde)
  // Fold each message into the aggregated state held in the local store
  .aggregate[MarketAggregator](() => MarketAggregator(), (_, v, aggregatedState) => aggregatedState.updateModelMarketsWith(v), marketAggregatorSerde, kafkaStoreName)
  // Re-key the aggregates by correlationId so they can be joined back to the original messages
  .toStream((k, v) => v.correlationId)

stateMessageStream
  // Re-key the original messages by correlationId to match the aggregated stream
  .selectKey[String]((k, v) => v.correlationId)
  // Join each message with its aggregated state inside a 10-second window
  .leftJoin[MarketAggregator, StateMessageWithMarkets](aggregatedStream,
      (stateMessage : StateMessage, aggregatedState : MarketAggregator) =>
        StateMessageWithMarkets(Some(stateMessage), aggregatedState.modelMarkets, stateMessage.correlationId),
      JoinWindows.of(10000),
      stringSerde, stateMessageSerde, marketAggregatorSerde)
  .mapValues[StateMessageWithMarkets](v => {
    log.debug("Producing aggregated State Message, gameId: " + v.stateMessage.map(_.metadata().gtpId()).getOrElse("unknown") +
      ", correlationId: " + v.stateMessage.map(_.correlationId).getOrElse("unknown"))
    v
  })
  .to(stringSerde, stateMessageWithMarketsSerde, kafkaOutTopic)

However, this does not seem to work either: for two incoming messages, I still get only a single message on the output topic, carrying the latest aggregated state.

Can someone explain why this happens and what the correct solution is?

1 Answer:

Answer 0 (score: 3):

You can use your first approach and get one output message per input message by disabling caching: in your StreamsConfig, simply set the value of StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG to zero.
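A minimal sketch of that configuration, assuming the 0.10.x-era API used in the question (the application id and bootstrap servers are placeholders):

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "state-aggregator")  // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder
// A cache size of zero disables record caching, so every input record
// produces a downstream update instead of being coalesced with later ones.
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, "0")

With caching disabled, the first (simpler) topology already emits one enriched record per input record, so the two-stream join is not needed.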

For more details, see http://docs.confluent.io/current/streams/developer-guide.html#memory-management