I created a Spring Boot application that sends messages to a Kafka topic. I am using spring-integration-kafka: a KafkaProducerMessageHandler<String,String> is subscribed to a channel (SubscribableChannel) and pushes every message it receives to a topic.
The application works fine. I can see the messages arriving in Kafka through the console consumer (local Kafka).
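For context, a minimal sketch of what such a configuration could look like is shown below; the bean names, the broker address and the topic name "myTopic" are illustrative assumptions, not the actual project code.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.SubscribableChannel;

@Configuration
public class KafkaOutboundConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    // The channel the application writes its messages to.
    @Bean
    public SubscribableChannel toKafkaChannel() {
        return new DirectChannel();
    }

    // The handler is subscribed to the channel and forwards every message to the topic.
    @Bean
    @ServiceActivator(inputChannel = "toKafkaChannel")
    public MessageHandler kafkaOutboundHandler(ProducerFactory<String, String> producerFactory) {
        KafkaProducerMessageHandler<String, String> handler =
                new KafkaProducerMessageHandler<>(new KafkaTemplate<>(producerFactory));
        handler.setTopicExpression(new LiteralExpression("myTopic")); // assumed topic name
        return handler;
    }
}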
I also created an integration test that uses KafkaEmbedded. I verify the expected messages by subscribing to the channel in the test, and that works fine. But I would also like the test to check the messages that actually end up in Kafka. Sadly, Kafka's JavaDoc is not the best. What I have tried so far:
@ClassRule
public static KafkaEmbedded kafkaEmbedded = new KafkaEmbedded(1, true, "myTopic");

//...

@Before
public void init() throws Exception {
    mockConsumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
    kafkaEmbedded.consumeFromAnEmbeddedTopic(mockConsumer, "sikom");
}

//...

@Test
public void endToEnd() throws Exception {
    // ...
    ConsumerRecords<String, String> records = mockConsumer.poll(10000);
    StreamSupport.stream(records.spliterator(), false)
            .forEach(record -> log.debug("record: " + record.value()));
}
The problem is that I don't see any records. I am not sure whether my KafkaEmbedded setup is correct, but the channel does receive the messages.
Answer 0 (score: 6)
This works for me. Give it a try:
@RunWith(SpringRunner.class)
@SpringBootTest
public class KafkaEmbeddedTest {

    private static String SENDER_TOPIC = "testTopic";

    @ClassRule
    // By default it creates two partitions.
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);

    @Test
    public void testSend() throws InterruptedException, ExecutionException {
        Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);

        // If you wish to send to partitions other than 0 and 1,
        // you need to specify the number of partitions in the declaration above.
        KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);
        producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 0, "message00")).get();
        producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 1, "message01")).get();
        producer.send(new ProducerRecord<>(SENDER_TOPIC, 1, 0, "message10")).get();

        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sampleRawConsumer", "false", embeddedKafka);
        // Make sure the offset reset is "earliest", because by the time the
        // consumer starts, the producer might already have sent all messages.
        consumerProps.put("auto.offset.reset", "earliest");

        final List<String> receivedMessages = Lists.newArrayList();
        final CountDownLatch latch = new CountDownLatch(3);
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        executorService.execute(() -> {
            KafkaConsumer<Integer, String> kafkaConsumer = new KafkaConsumer<>(consumerProps);
            kafkaConsumer.subscribe(Collections.singletonList(SENDER_TOPIC));
            try {
                while (true) {
                    ConsumerRecords<Integer, String> records = kafkaConsumer.poll(100);
                    records.iterator().forEachRemaining(record -> {
                        receivedMessages.add(record.value());
                        latch.countDown();
                    });
                }
            } finally {
                kafkaConsumer.close();
            }
        });

        latch.await(10, TimeUnit.SECONDS);
        assertTrue(receivedMessages.containsAll(Arrays.asList("message00", "message01", "message10")));
    }
}
I am using a countdown latch because producer.send(..) is an asynchronous operation. So what I do here is wait in an infinite loop that polls Kafka every 100 milliseconds for new records; if there are any, they are added to a List for the later assertion and the latch is counted down. To be safe, I wait at most 10 seconds in total.
You can also use a simple loop and exit after a few minutes (if you don't want the CountDownLatch and ExecutorService machinery), as sketched below.
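If you go with that simple-loop variant, the consumer part of testSend() above could look roughly like this. It reuses embeddedKafka and SENDER_TOPIC from the test above; the 10-second deadline and the expected record count of 3 are assumptions matching that example.

// Poll on the test thread until all expected records arrived or the deadline passed,
// then assert on what was collected.
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sampleRawConsumer", "false", embeddedKafka);
consumerProps.put("auto.offset.reset", "earliest");

List<String> receivedMessages = new ArrayList<>();
try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps)) {
    consumer.subscribe(Collections.singletonList(SENDER_TOPIC));
    long deadline = System.currentTimeMillis() + 10_000L; // assumed 10-second budget
    while (receivedMessages.size() < 3 && System.currentTimeMillis() < deadline) {
        ConsumerRecords<Integer, String> records = consumer.poll(100);
        records.forEach(record -> receivedMessages.add(record.value()));
    }
}
assertTrue(receivedMessages.containsAll(Arrays.asList("message00", "message01", "message10")));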