How to unit test a Kafka Streams application that uses session windows

Date: 2019-08-13 15:24:38

Tags: java unit-testing apache-kafka-streams windowing

I am using Kafka Streams 2.1.

I am trying to write some tests for a stream application that aggregates events by their key (i.e. a correlation ID) using a session window with an inactivity gap of 300 ms.

Here is the aggregation implementation, represented as a method:

    private static final int INACTIVITY_GAP = 300;

    public KStream<String, AggregatedCustomObject> aggregate(KStream<String, CustomObject> source) {

        return source
                // group by key (i.e by correlation ID)
                .groupByKey(Grouped.with(Serdes.String(), new CustomSerde()))
                // Define a session window with an inactivity gap of 300 ms
                .windowedBy(SessionWindows.with(Duration.ofMillis(INACTIVITY_GAP)).grace(Duration.ofMillis(INACTIVITY_GAP)))
                .aggregate(
                        // initializer
                        () -> new AggregatedCustomObject(),
                        // aggregates records in same session
                        (s, customObject, aggCustomObject) -> {
                            // ...
                            return aggCustomObject;
                        },
                        // merge sessions
                        (s, aggCustomObject1, aggCustomObject2) -> {
                            // ...
                            return aggCustomObject2;
                        },
                        Materialized.with(Serdes.String(), new AggCustomObjectSerde())
                )
                .suppress(Suppressed.untilWindowCloses(unbounded()))
                .toStream()
                .selectKey((stringWindowed, aggCustomObject) -> "someKey");
    }

This stream processing works as expected. For unit testing, however, it is a completely different story.

My test stream configuration looks like this:

        // ...

        props.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        props.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        props.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, myCustomObjectSerde.getClass());
        // disable cache
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        // commit ASAP
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 0);


        StreamsBuilder builder = new StreamsBuilder();
        aggregate(builder.stream(INPUT_TOPIC))
                .to(OUTPUT_TOPIC, Produced.with(Serdes.String(), new AggCustomObjectSerde()));

        Topology topology = builder.build();
        TopologyTestDriver testDriver = new TopologyTestDriver(topology, props);
        ConsumerRecordFactory<String, MyCustomObject> factory = new ConsumerRecordFactory<>(INPUT_TOPIC, new StringSerializer(), myCustomSerializer);

        // ...

A test looks like this:

    List<ConsumerRecord<byte[], byte[]>> records = myCustomMessages.stream()
            .map(myCustomMessage -> factory.create(INPUT_TOPIC, myCustomMessage.correlationId, myCustomMessage))
            .collect(Collectors.toList());
    testDriver.pipeInput(records);

    ProducerRecord<String, AggregatedCustomObject> record = testDriver.readOutput(OUTPUT_TOPIC, new StringDeserializer(), myAggregatedCustomObjectSerde.deserializer());

The problem is that record is always null. I have tried many things:

  • reading in a loop with a timeout
  • changing the commit interval in the configuration so that results are committed as soon as possible
  • sending another record with a different key right afterwards (to trigger the window closing, since in Kafka Streams event time is based on record timestamps)
  • calling the test driver's advanceWallClockTime method (these attempts are sketched just after this list)
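
For context, here is roughly what those attempts looked like. This is only a sketch: the key, record values, and the 5 s timeout are placeholders, not the actual test code.

    // attempt 1: poll the output topic in a loop with a timeout
    ProducerRecord<String, AggregatedCustomObject> out = null;
    long deadline = System.currentTimeMillis() + 5_000; // hypothetical 5 s timeout
    while (out == null && System.currentTimeMillis() < deadline) {
        out = testDriver.readOutput(OUTPUT_TOPIC, new StringDeserializer(), myAggregatedCustomObjectSerde.deserializer());
    }

    // attempt 2: pipe a record with a different key right afterwards, hoping it closes the session
    testDriver.pipeInput(factory.create(INPUT_TOPIC, "someOtherCorrelationId", someOtherCustomMessage));

    // attempt 3: advance wall-clock time on the test driver
    // (this only triggers wall-clock punctuators; it does not advance stream time)
    testDriver.advanceWallClockTime(1_000);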

Well, nothing helped. Can someone tell me what I am missing, and how I should test a session-window based stream application?

Thanks a lot

1 answer:

Answer 0: (score: 1)

SessionWindows work with event time, not wall-clock time. Try setting the records' event timestamps properly to simulate the inactivity gap. Something like this:

    testDriver.pipeInput(factory.create(INPUT_TOPIC, key1, record1, eventTimeMs));
    testDriver.pipeInput(factory.create(INPUT_TOPIC, key2, record2, eventTimeMs + inactivityGapMs));

But first, you need a custom TimestampExtractor, for example:

 public static class RecordTimestampExtractor implements TimestampExtractor {

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
      return record.timestamp();
    }
  }

It must be registered as:

 streamProperties.setProperty(
        StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        RecordTimestampExtractor.class.getName()
    );