Kafka - how to skip a bad message at an offset and consume the rest

Date: 2018-04-26 12:52:12

Tags: java apache-kafka

I'm using Kafka with Avro for serializing/deserializing events. When bad data that doesn't conform to the Avro schema arrives, a SerializationException is thrown and the same error keeps being logged for the same offset. Is it possible to just skip that offset and continue reading the following offsets, and skip again if the same thing happens?

Consumer log:

org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer - Container exception
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition EventProcessor-0 at offset 2845. If needed, please seek past the record to continue consumption.

Listener:

@KafkaListener(topics = "EventProcessor", containerFactory = "eventProcessorListenerContainerFactory")
public void listen(Event payLoad) {
    System.out.println("Received message ===> " + payLoad);
}
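The exception message itself points at the fix: seek past the bad record. A minimal sketch (the class, method, and regex below are my own illustration, not from the original post) that parses the partition and offset out of the SerializationException message so the consumer can be positioned at the next offset:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SeekPastBadRecord {

    // The client error message looks like:
    //   "Error deserializing key/value for partition EventProcessor-0 at offset 2845. ..."
    private static final Pattern LOCATION =
            Pattern.compile("partition (\\S+)-(\\d+) at offset (\\d+)");

    /**
     * Parses {topic, partition, nextOffset} out of a SerializationException
     * message; returns null when no location can be found.
     */
    public static String[] parseNextPosition(String message) {
        if (message == null) {
            return null;
        }
        Matcher m = LOCATION.matcher(message);
        if (!m.find()) {
            return null;
        }
        long nextOffset = Long.parseLong(m.group(3)) + 1; // skip the bad record
        return new String[] { m.group(1), m.group(2), Long.toString(nextOffset) };
    }

    // With a raw KafkaConsumer, the poll loop would then look roughly like:
    //
    //   try {
    //       records = consumer.poll(100);
    //   } catch (SerializationException e) {
    //       String[] pos = parseNextPosition(e.getMessage());
    //       if (pos != null) {
    //           consumer.seek(new TopicPartition(pos[0], Integer.parseInt(pos[1])),
    //                         Long.parseLong(pos[2]));
    //       }
    //   }
}
```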

1 Answer

Answer 0 (score: 2)

Try tuning your retry policy, as @Poppy suggested:
SimpleRetryPolicy policy = new SimpleRetryPolicy();
// Set the max retry attempts
policy.setMaxAttempts(5);
// Retry on all exceptions (this is the default)
policy.setRetryableExceptions(new Class[] {Exception.class});
// ... but never retry SerializationException
policy.setFatalExceptions(new Class[] {SerializationException.class}); //<-- here

// Use the policy...
RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(policy);
template.execute(new RetryCallback<Foo>() {
    public Foo doWithRetry(RetryContext context) {
        // business logic here; must return the Foo result
    }
});

Taken from: https://docs.spring.io/spring-batch/3.0.x/reference/html/retry.html
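For this to take effect with the @KafkaListener in the question, the RetryTemplate has to be attached to the listener container factory. A hedged sketch of that factory bean (the bean name comes from the question's containerFactory attribute; the Event generic type and the setRetryTemplate call are assumptions based on the spring-kafka factory API of that era):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Event> eventProcessorListenerContainerFactory(
        ConsumerFactory<String, Event> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Event> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setRetryTemplate(template); // the RetryTemplate configured above
    return factory;
}
```

Note that the retry template only wraps the listener invocation; a deserialization failure inside poll() happens before the listener runs, so seeking past the bad offset (as the exception message advises) is still needed for that case.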