How do we use DeadLetterPublishingRecoverer with RetryTemplate?

Time: 2019-08-01 12:24:41

Tags: kafka-consumer-api spring-kafka

I want to use a RetryTemplate together with the DeadLetterPublishingRecoverer.

How can I configure this so that the retry count and retryInterval are taken from the RetryTemplate, and the record is moved to the DLQ after the retries are exhausted?

@Bean
public RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
    simpleRetryPolicy.setMaxAttempts(retryMaxAttempts);
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(retryInterval);
    retryTemplate.setRetryPolicy(simpleRetryPolicy);
    retryTemplate.setBackOffPolicy(backOffPolicy);
    return retryTemplate;
}


@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(ChainedKafkaTransactionManager<String, String> chainedTM) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(concurrency);
    factory.getContainerProperties().setPollTimeout(pollTimeout);
    factory.getContainerProperties().setSyncCommits(true);
    factory.setRetryTemplate(retryTemplate());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setTransactionManager(chainedTM);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template), 1));
    return factory;
}

1 Answer:

Answer 0 (score: 0)

You should do the recovery (publishing) within the retry logic, not in the error handler. See this answer.

        factory.setRecoveryCallback(context -> {
            recoverer.accept((ConsumerRecord<?, ?>) context.getAttribute("record"),
                    (Exception) context.getLastThrowable());
            return null;
        });

Where recoverer is the DeadLetterPublishingRecoverer.
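
Applied to the factory from the question, a minimal sketch might look like this (it assumes the KafkaTemplate template, consumerFactory(), and the concurrency, pollTimeout, retryMaxAttempts, and retryInterval fields exist as in the question; the DeadLetterPublishingRecoverer moves out of the error handler and into the recovery callback):

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(ChainedKafkaTransactionManager<String, String> chainedTM) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(concurrency);
    factory.getContainerProperties().setPollTimeout(pollTimeout);
    factory.getContainerProperties().setSyncCommits(true);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setTransactionManager(chainedTM);
    // Retries are driven by the RetryTemplate (maxAttempts, backOffPeriod).
    factory.setRetryTemplate(retryTemplate());
    // Once the retries are exhausted, the recovery callback publishes the
    // failed record to the DLT; the error handler no longer does this.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    factory.setRecoveryCallback(context -> {
        recoverer.accept((ConsumerRecord<?, ?>) context.getAttribute("record"),
                (Exception) context.getLastThrowable());
        return null;
    });
    return factory;
}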

EDIT

/**
 * Create an instance with the provided template and destination resolving function,
 * that receives the failed consumer record and the exception and returns a
 * {@link TopicPartition}. If the partition in the {@link TopicPartition} is less than
 * 0, no partition is set when publishing to the topic.
 * @param template the {@link KafkaTemplate} to use for publishing.
 * @param destinationResolver the resolving function.
 */
public DeadLetterPublishingRecoverer(KafkaTemplate<? extends Object, ? extends Object> template,
        BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> destinationResolver) {
    this(Collections.singletonMap(Object.class, template), destinationResolver);
}

If the DLT doesn't have at least as many partitions as the original topic, you need a custom destination resolver:

(record, exception) -> new TopicPartition("my.DLT", -1)

For a negative partition, Kafka will choose the partition; the default resolver uses the same partition as the original record:

DEFAULT_DESTINATION_RESOLVER = (cr, e) -> new TopicPartition(cr.topic() + ".DLT", cr.partition());
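
For example, constructing the recoverer with the custom resolver might look like this (a sketch; "my.DLT" is the illustrative topic name from above):

DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
        // A negative partition lets Kafka choose the partition on the DLT.
        (record, exception) -> new TopicPartition("my.DLT", -1));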

This is explained in the documentation: https://docs.spring.io/spring-kafka/docs/2.2.7.RELEASE/reference/html/#dead-letters

You can also, optionally, configure it with a BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition>, which is called to resolve the destination topic and partition. By default, the dead-letter record is sent to a topic named <originalTopic>.DLT (the original topic name suffixed with .DLT) and to the same partition as the original record. Therefore, when you use the default resolver, the dead-letter topic must have at least as many partitions as the original topic. If the returned TopicPartition has a negative partition, the partition is not set in the ProducerRecord, so the partition is selected by Kafka.
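
Because the resolver receives both the failed record and the exception, it can also route different failures to different topics. A hypothetical sketch (the .bad.DLT suffix is purely illustrative, not part of the API):

DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
        (record, exception) -> {
            // Illustrative routing: records that can never succeed go to a
            // separate topic; -1 lets Kafka choose the partition.
            if (exception.getCause() instanceof IllegalArgumentException) {
                return new TopicPartition(record.topic() + ".bad.DLT", -1);
            }
            // Everything else follows the default convention: <topic>.DLT,
            // same partition as the original record.
            return new TopicPartition(record.topic() + ".DLT", record.partition());
        });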