I am trying to implement a Spring Boot based Kafka consumer with some very strong message delivery guarantees, even when errors occur.
Our current implementation meets these requirements:
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setRetryTemplate(retryTemplate());

    final ContainerProperties containerProperties = factory.getContainerProperties();
    containerProperties.setAckMode(AckMode.MANUAL_IMMEDIATE);
    containerProperties.setErrorHandler(errorHandler());
    return factory;
}

@Bean
public RetryTemplate retryTemplate() {
    final ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMultiplier(1.5);

    final RetryTemplate template = new RetryTemplate();
    template.setRetryPolicy(new AlwaysRetryPolicy());
    template.setBackOffPolicy(backOffPolicy);
    return template;
}

@Bean
public ErrorHandler errorHandler() {
    return new SeekToCurrentErrorHandler();
}
However, with this setup the record is effectively locked by the consumer forever. At some point the processing time will exceed max.poll.interval.ms, the broker will reassign the partition to another consumer, and duplicates will be created.
Assuming max.poll.interval.ms is 5 minutes (the default) and the failure lasts 30 minutes, this causes the message to be processed ca. 6 times.
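For reference, max.poll.interval.ms is an ordinary Kafka consumer property. A minimal sketch of how it could be set next to the other consumer settings (the values and the consumerConfigs bean shown here are illustrative, not part of the original setup):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
    // Manual acknowledgments (matches AckMode.MANUAL_IMMEDIATE above).
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    // 5 minutes, the default; processing a poll for longer than this triggers a rebalance.
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return props;
}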
Another possibility is to return the message to the queue after N retries (e.g. 3 attempts) by using a SimpleRetryPolicy. The message would then be replayed (thanks to the SeekToCurrentErrorHandler) and processing would start from scratch, again with up to 5 attempts. The delays then form a series like
10 secs -> 30 secs -> 90 secs -> 10 secs -> 30 secs -> 90 secs -> ...
which is less desirable than a constantly rising one :)
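For comparison, a minimal sketch of that bounded-retry variant (the attempt count, the delays, and the bean name boundedRetryTemplate are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Bean
public RetryTemplate boundedRetryTemplate() {
    // Give up after 3 attempts instead of retrying forever.
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3);

    // In-memory back-off starting at 10s and tripling on each retry (10s, 30s, 90s, ...).
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(10_000);
    backOffPolicy.setMultiplier(3.0);

    RetryTemplate template = new RetryTemplate();
    template.setRetryPolicy(retryPolicy);
    template.setBackOffPolicy(backOffPolicy);
    return template;
}

Once those attempts are exhausted, the SeekToCurrentErrorHandler replays the record and the whole sequence starts again, which is exactly the resetting delay series described above.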
Is there a third option that would keep the delays forming an ascending series while not creating the duplicates described above?
Answer 0: (score: 6)
It can be done with stateful retry - in that case, the exception is thrown after each retry attempt, but the state is maintained in a retry state object, so the next delivery of that message will use the next delay, and so on.
This requires something in the message (such as a header) that uniquely identifies each message. Fortunately, with Kafka, the topic, partition and offset provide that unique key for the state.
However, the RetryingMessageListenerAdapter does not currently support stateful retry.
You could disable retry in the listener container factory and instead use a stateful RetryTemplate in your listener, calling one of the execute methods that take a RetryState argument, as the test case below demonstrates.
Feel free to open a GitHub issue for the framework to support stateful retry; contributions are welcome! - pull request issued.
EDIT
I just wrote a test case to demonstrate using a @KafkaListener ...
/*
 * Copyright 2018 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.kafka.annotation;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.retry.RetryState;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.support.DefaultRetryState;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;

/**
 * @author Gary Russell
 * @since 5.0
 *
 */
@RunWith(SpringRunner.class)
@DirtiesContext
public class StatefulRetryTests {

    private static final String DEFAULT_TEST_GROUP_ID = "statefulRetry";

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 1, "sr1");

    @Autowired
    private Config config;

    @Autowired
    private KafkaTemplate<Integer, String> template;

    @Test
    public void testStatefulRetry() throws Exception {
        this.template.send("sr1", "foo");
        assertThat(this.config.listener1().latch1.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(this.config.listener1().latch2.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(this.config.listener1().result).isTrue();
    }

    @Configuration
    @EnableKafka
    public static class Config {

        @Bean
        public KafkaListenerContainerFactory<?> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
            return factory;
        }

        @Bean
        public DefaultKafkaConsumerFactory<Integer, String> consumerFactory() {
            return new DefaultKafkaConsumerFactory<>(consumerConfigs());
        }

        @Bean
        public Map<String, Object> consumerConfigs() {
            Map<String, Object> consumerProps =
                    KafkaTestUtils.consumerProps(DEFAULT_TEST_GROUP_ID, "false", embeddedKafka);
            consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            return consumerProps;
        }

        @Bean
        public KafkaTemplate<Integer, String> template() {
            KafkaTemplate<Integer, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
            return kafkaTemplate;
        }

        @Bean
        public ProducerFactory<Integer, String> producerFactory() {
            return new DefaultKafkaProducerFactory<>(producerConfigs());
        }

        @Bean
        public Map<String, Object> producerConfigs() {
            return KafkaTestUtils.producerProps(embeddedKafka);
        }

        @Bean
        public Listener listener1() {
            return new Listener();
        }

    }

    public static class Listener {

        private static final RetryTemplate retryTemplate = new RetryTemplate();

        private static final ConcurrentMap<String, RetryState> states = new ConcurrentHashMap<>();

        static {
            ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
            retryTemplate.setBackOffPolicy(backOff);
        }

        private final CountDownLatch latch1 = new CountDownLatch(3);

        private final CountDownLatch latch2 = new CountDownLatch(1);

        private volatile boolean result;

        @KafkaListener(topics = "sr1", groupId = "sr1")
        public void listen1(final String in, @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                @Header(KafkaHeaders.OFFSET) long offset) {
            String recordKey = topic + partition + offset;
            RetryState retryState = states.get(recordKey);
            if (retryState == null) {
                retryState = new DefaultRetryState(recordKey);
                states.put(recordKey, retryState);
            }
            this.result = retryTemplate.execute(c -> {
                // do your work here
                this.latch1.countDown();
                throw new RuntimeException("retry");
            }, c -> {
                latch2.countDown();
                return true;
            }, retryState);
            states.remove(recordKey);
        }

    }

}
and
Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void org.springframework.kafka.annotation.StatefulRetryTests$Listener.listen1(java.lang.String,java.lang.String,int,long)' threw exception; nested exception is java.lang.RuntimeException: retry
after each delivery attempt.
In this case I added a recoverer to handle the message after the retries are exhausted. You could do something else instead, such as stopping the container (but do that on a separate thread, like we do in the ContainerStoppingErrorHandler).
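If stopping the container is the reaction you prefer, a minimal sketch of wiring that in (assuming a spring-kafka version that ships ContainerStoppingErrorHandler; the stoppingContainerFactory bean is illustrative, and consumerFactory() is the same bean as in the examples above):

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.listener.ContainerStoppingErrorHandler;

@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> stoppingContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    // Stops the listener container when the listener throws; the handler performs the
    // stop on a separate thread so the consumer thread is not blocked.
    factory.getContainerProperties().setErrorHandler(new ContainerStoppingErrorHandler());
    return factory;
}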