How to avoid an infinite loop in a Spring Kafka error handler

Asked: 2019-04-23 13:38:58

Tags: java spring apache-kafka spring-kafka

I hope someone can give me a hint — hopefully I'm just doing something wrong here. I wrote a custom error handler for a batch listener that is supposed to seek past the received records and send them to a DLQ. I have tried a lot, but nothing worked. My current implementation gets stuck in an endless loop, receiving the same records again and again. Here is the error handler code:

@Service("consumerAwareListenerErrorHandlerImpl")
public class ConsumerAwareListenerErrorHandlerImpl implements ConsumerAwareListenerErrorHandler {


    private final Executor executor;

    private final KafkaListenerEndpointRegistry registry;

    private final TaskScheduler scheduler;


    @Autowired
    public ConsumerAwareListenerErrorHandlerImpl(KafkaListenerEndpointRegistry registry, TaskScheduler scheduler) {
        this.scheduler = scheduler;
        this.executor = new SimpleAsyncTaskExecutor();
        this.registry = registry;
    }


    @Override
    public Object handleError(Message<?> message, ListenerExecutionFailedException exception, Consumer<?, ?> consumer) {

        MessageHeaders headers = message.getHeaders();
        List<String> topics = headers.get(KafkaHeaders.RECEIVED_TOPIC, List.class);
        List<Integer> partitions = headers.get(KafkaHeaders.RECEIVED_PARTITION_ID, List.class);
        List<Long> offsets = headers.get(KafkaHeaders.OFFSET, List.class);
        Acknowledgment acknowledgment = headers.get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);


        Map<TopicPartition, Long> offsetsToReset = new HashMap<>();

        for (int i = 0; i < topics.size(); i++) {
            int index = i;
            offsetsToReset.compute(new TopicPartition(topics.get(i), partitions.get(i)),
                    (k, v) -> (v == null) ? offsets.get(index) : Math.max(v, offsets.get(index)));
        }
        offsetsToReset.forEach((k, v) -> consumer.seek(k, v));

        if (!(exception.getCause() instanceof DeserializationException)) {
            //pauseAndRestartContainer();
        }

        acknowledgment.acknowledge();
        consumer.commitSync();

        return null;
    }

1 Answer:

Answer 0 (score: 1):

You have to seek to offset + 1 to get "past" the failed record. Seeking to offset itself means that record will be replayed, which is why you are stuck in the loop.
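For illustration, here is a minimal, dependency-free sketch of the corrected per-partition seek computation. It mirrors the `offsetsToReset` loop from the question but adds the `+ 1`; the `String` key and the `offsetsToSeek` method name are stand-ins for `TopicPartition` and your own code, used here only to keep the sketch runnable without Kafka on the classpath:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SeekPastFailed {

    // For each (topic, partition) pair, compute the offset to seek to:
    // the highest failed offset in the batch PLUS ONE, so the consumer
    // resumes past the failed records instead of replaying them.
    static Map<String, Long> offsetsToSeek(List<String> topics,
                                           List<Integer> partitions,
                                           List<Long> offsets) {
        Map<String, Long> result = new HashMap<>();
        for (int i = 0; i < topics.size(); i++) {
            String key = topics.get(i) + "-" + partitions.get(i);
            long next = offsets.get(i) + 1; // +1: seek PAST the failed record
            result.merge(key, next, Math::max);
        }
        return result;
    }

    public static void main(String[] args) {
        // A batch spanning two partitions of a hypothetical "orders" topic.
        Map<String, Long> seeks = offsetsToSeek(
                Arrays.asList("orders", "orders", "orders"),
                Arrays.asList(0, 0, 1),
                Arrays.asList(41L, 42L, 7L));
        System.out.println(seeks.get("orders-0")); // 43 (max 42, plus one)
        System.out.println(seeks.get("orders-1")); // 8
    }
}
```

In the original handler, the equivalent change is adding `+ 1` inside the `compute` lambda (or to the value passed to `consumer.seek`), so the consumer position moves beyond the last failed offset for each partition.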