Kafka consumer stops abruptly after an exception

Posted: 2019-11-20 15:19:01

Tags: spring-kafka

The relevant code snippets are below:

KafkaConsumerConfig class

public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumerGroupId");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 10000);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60000);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 1000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    ConsumerFactory<String, String> config = consumerFactory();
    factory.setConsumerFactory(config);
    factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.INFO);
    factory.setConcurrency(kafka.getConcurrency());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setSyncCommits(true);
    factory.getContainerProperties().setPollTimeout(0);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setConsumerRebalanceListener(new RebalanceListener());
    return factory;
}

RebalanceListener class

public class RebalanceListener implements ConsumerAwareRebalanceListener {

    private Map<TopicPartition, Long> partitionToUncommittedOffsetMap;

    public void setPartitionToUncommittedOffsetMap(Map<TopicPartition, Long> partitionToUncommittedOffsetMap) {
        this.partitionToUncommittedOffsetMap = partitionToUncommittedOffsetMap;
    }

    private void commitOffsets(Map<TopicPartition, Long> partitionToOffsetMap, Consumer<?, ?> consumer) {
        if (partitionToOffsetMap != null && !partitionToOffsetMap.isEmpty()) {
            Map<TopicPartition, OffsetAndMetadata> partitionToMetadataMap = new HashMap<>();
            for (Map.Entry<TopicPartition, Long> e : partitionToOffsetMap.entrySet()) {
                log.info("Adding partition & offset for topic {}", e.getKey());
                // Kafka expects the offset of the NEXT record to consume, hence +1
                partitionToMetadataMap.put(e.getKey(), new OffsetAndMetadata(e.getValue() + 1));
            }
            log.info("Consumer : {}, committing the offsets : {}", consumer, partitionToMetadataMap);
            consumer.commitSync(partitionToMetadataMap);
            partitionToOffsetMap.clear();
        }
    }

    @Override
    public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        log.info("Consumer is going to commit the offsets {}", consumer);
        commitOffsets(partitionToUncommittedOffsetMap, consumer);
        log.info("Committed offsets {}", consumer);
    }
}
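The `+ 1` in `commitOffsets` matters: Kafka's committed offset is the position of the *next* record the consumer should read, not the last record processed. A minimal plain-Java sketch of that mapping (no broker needed; the class and method names here are illustrative, not part of the original code):

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetMathDemo {
    // Mirrors the commitOffsets logic: the offset to commit is
    // lastProcessedOffset + 1, i.e. where the consumer resumes after a restart.
    static Map<String, Long> toCommitMap(Map<String, Long> lastProcessed) {
        Map<String, Long> commit = new HashMap<>();
        for (Map.Entry<String, Long> e : lastProcessed.entrySet()) {
            commit.put(e.getKey(), e.getValue() + 1);
        }
        return commit;
    }

    public static void main(String[] args) {
        Map<String, Long> lastProcessed = new HashMap<>();
        lastProcessed.put("topic-0", 41L); // last record actually processed
        System.out.println(toCommitMap(lastProcessed)); // prints {topic-0=42}
    }
}
```

Committing the raw last-processed offset instead would cause one record to be reprocessed after every rebalance or restart.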

KafkaListener class

@KafkaListener(topics = "#{'${dimebox.kafka.topicName}'.split('" + COMMA + "')}", groupId = "${dimebox.kafka.consumerGroupId}")
public void receive(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.OFFSET) long offset, Acknowledgment acknowledgment, final String payload) {
    TopicPartition tp = new TopicPartition(topic, partition);
    Map<TopicPartition, Long> partitionToUncommittedOffsetMap = new ConcurrentHashMap<>();
    partitionToUncommittedOffsetMap.put(tp, offset);
    ((RebalanceListener) consumerConfig.kafkaListenerContainerFactory(new ApplicationProperties())
            .getContainerProperties().getConsumerRebalanceListener())
            .setPartitionToUncommittedOffsetMap(partitionToUncommittedOffsetMap);
    LOGGER.info("Insert Message Received from offset : {} ", offset);
    importerService.importer(payload);
    acknowledgment.acknowledge();
}

With this configuration in place, the Kafka consumer stops abruptly. We process messages from Kafka, a downstream API returns an error, and an exception is thrown; after that, the normal message-processing flow never resumes. Note that the exception is caught and logged at the application level. Do we need to use one of the specific error-handling mechanisms provided by the spring-kafka library?
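On the closing question: spring-kafka does provide container-level error handling, and with `AckMode.MANUAL_IMMEDIATE` a listener exception means `acknowledgment.acknowledge()` is never reached, so the failed record's offset is never committed. A sketch of one option, assuming spring-kafka 2.3+ (the `SeekToCurrentErrorHandler` constructor taking a recoverer and a `BackOff` exists from that version; the retry counts and log message here are illustrative, not from the original post):

```java
// Sketch: retry a failed record with a fixed back-off, then hand it to a
// recoverer instead of stopping the flow (assumes spring-kafka 2.3+).
SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(
        // recoverer invoked after retries are exhausted; here it just logs
        (record, exception) -> log.error("Giving up on record {}", record, exception),
        new FixedBackOff(1000L, 3L)); // 1 s between attempts, 3 retries
factory.setErrorHandler(errorHandler);
```

The error handler re-seeks the unprocessed partitions so the failed record is redelivered on the next poll rather than silently skipped; letting the exception propagate from the listener (instead of swallowing it at the application level) is what hands control to this handler.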

0 answers:

No answers yet