I am writing an integration test for a Kafka consumer implementation.
I use the wurstmeister/kafka Docker image and the Apache Kafka consumer.
My scenario is sending an "unexpected" message to the topic. kafkaConsumer.poll(POLLING_TIMEOUT)
then seems to enter an infinite loop when run normally, but when I debug, it works as soon as I pause and resume.
The problem does not occur with expected messages (ones that do not throw an exception during deserialization).
Here is the docker-compose configuration for Kafka:
kafka:
  image: wurstmeister/kafka
  links:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: localhost
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_CREATE_TOPICS: "ProductLocation:1:1,ProductInformation:1:1,InventoryAvailableToSell:1:1"
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
The generic Java consumer:
@Override
public Collection<T> consume() {
    String eventToBePublishedName = ERROR_WHILE_RESETTING_OFFSET;
    boolean success = false;
    try {
        kafkaConsumer.resume(kafkaAssignments);
        if (isPollingTypeFull) {
            // Dummy poll, because it's needed before resetting the offset.
            // https://stackoverflow.com/questions/41008610/kafkaconsumer-0-10-java-api-error-message-no-current-assignment-for-partition
            kafkaConsumer.poll(POLLING_TIMEOUT);
            resetOffset();
        } else if (!offsetGotResetFirstTime) {
            resetOffset();
            offsetGotResetFirstTime = true;
        }
        eventToBePublishedName = ERROR_WHILE_POLLING;
        ConsumerRecords<Object, T> records;
        List<T> output = new ArrayList<>();
        do {
            records = kafkaConsumer.poll(POLLING_TIMEOUT);
            records.forEach(cr -> {
                T val = cr.value();
                if (val != null) {
                    output.add(val);
                }
            });
        } while (records.count() > 0);
        eventToBePublishedName = CONSUMING;
        success = true;
        kafkaConsumer.pause(kafkaAssignments);
        return output;
    } finally {
        applicationEventPublisher.publishEvent(
            new OperationResultApplicationEvent(
                this, OperationType.ConsumingOfMessages, eventToBePublishedName, success));
    }
}
The deserializer:
public T deserialize(String topic, byte[] data) {
    try {
        JsonNode jsonNode = mapper.readTree(data);
        JavaType javaType = mapper.getTypeFactory().constructType(getValueClass());
        JsonNode value = jsonNode.get("value");
        // Returns null on failure, so the consumer above silently skips the record.
        return mapper.readValue(value.toString(), javaType);
    } catch (IllegalArgumentException | IOException | SerializationException e) {
        LOGGER.error("Can't deserialize data [" + Arrays.toString(data)
            + "] from topic [" + topic + "]", e);
        return null;
    }
}
In my integration tests I create a new topic for each test by appending a timestamp to the topic name. Each test therefore gets a fresh topic, which keeps the tests stateless.
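A minimal sketch of such a per-test topic name generator (the helper and class names are hypothetical, not from the actual test suite):

```java
public class TopicNames {
    // Hypothetical helper: appends a timestamp so each test run
    // gets a fresh, empty topic (e.g. "ProductLocation-1700000000000").
    static String uniqueTopic(String base) {
        return base + "-" + System.currentTimeMillis();
    }

    public static void main(String[] args) {
        System.out.println(uniqueTopic("ProductLocation"));
    }
}
```

Note that System.currentTimeMillis() can collide for two topics created within the same millisecond; a counter or UUID suffix would avoid that.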
This is how I configure the Kafka consumer:
Properties properties = new Properties();
properties.put("bootstrap.servers", kafkaConfiguration.getServer());
properties.put("group.id", kafkaConfiguration.getGroupId());
properties.put("key.deserializer", kafkaConfiguration.getKeyDeserializer().getName());
properties.put("value.deserializer", kafkaConfiguration.getValueDeserializer().getName());
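For completeness, the snippet above can be written as a small self-contained factory; the argument values below are placeholders standing in for what the (real) kafkaConfiguration object supplies:

```java
import java.util.Properties;

public class ConsumerConfigFactory {
    // Mirrors the configuration above; in the real test the four values
    // come from kafkaConfiguration.
    static Properties consumerProperties(String server, String groupId,
                                         String keyDeserializer, String valueDeserializer) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", server);
        properties.put("group.id", groupId);
        properties.put("key.deserializer", keyDeserializer);
        properties.put("value.deserializer", valueDeserializer);
        return properties;
    }

    public static void main(String[] args) {
        Properties p = consumerProperties("localhost:9092", "integration-test",
                "org.apache.kafka.common.serialization.StringDeserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        System.out.println(p.getProperty("bootstrap.servers"));
    }
}
```

The resulting Properties object is then passed to new KafkaConsumer<>(properties).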
Answer 0 (score: 0)
If you run into this problem, close the consumer after use, or pause it after use and resume it before you start consuming again.
Answer 1 (score: 0)
Catch the exception and commit an offset one past the failing record (offset + 1) to skip the "poison pill" message.
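A minimal, self-contained sketch of that skip logic, with plain values standing in for records and nulls standing in for deserialization failures (matching the deserializer above, which returns null on error). On a real consumer with kafka-clients 2.8+ the equivalent would be catching RecordDeserializationException around poll() and calling consumer.seek(e.topicPartition(), e.offset() + 1):

```java
import java.util.ArrayList;
import java.util.List;

public class PoisonPillSkip {
    // Simulates advancing the consumer offset past records that failed
    // to deserialize, instead of re-reading them forever.
    static List<String> consume(List<String> deserializedValues) {
        List<String> output = new ArrayList<>();
        long offset = 0;
        for (String value : deserializedValues) {
            offset++;              // "commit offset + 1" in Kafka terms
            if (value == null) {
                continue;          // skip the poison pill
            }
            output.add(value);
        }
        return output;
    }

    public static void main(String[] args) {
        List<String> values = new ArrayList<>();
        values.add("ok-1");
        values.add(null);          // poison pill
        values.add("ok-2");
        System.out.println(consume(values)); // [ok-1, ok-2]
    }
}
```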