Fetch offset 5705 is out of range for partition, resetting offset

Date: 2020-07-04 15:09:15

Tags: java spring-boot apache-kafka kafka-consumer-api spring-kafka

I keep getting the following INFO messages in my Kafka consumer:

2020-07-04 14:54:27.640  INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer   : beginning to consume batch messages , Message Count :11
2020-07-04 14:54:27.809  INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer   : Execution Time :169
2020-07-04 14:54:27.809  INFO 1 --- [istener-0-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {nbi.cm.changes.mo.test23-1=OffsetAndMetadata{offset=5705, leaderEpoch=null, metadata=''}}
2020-07-04 14:54:27.812  INFO 1 --- [istener-0-0-C-1] c.n.o.c.h.p.n.PersistenceKafkaConsumer   : Acknowledgment Success
2020-07-04 14:54:27.813  INFO 1 --- [istener-0-0-C-1] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Fetch offset 5705 is out of range for partition nbi.cm.changes.mo.test23-1, resetting offset
2020-07-04 14:54:27.820  INFO 1 --- [istener-0-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Resetting offset for partition nbi.cm.changes.mo.test23-1 to offset 666703.

The debug logs show an OFFSET_OUT_OF_RANGE error, and the offset is then reset to a position that does not actually correspond to the messages received on the consumer console.
But I had committed that offset just before this happened, the offset exists in Kafka, and the log retention policy is 24 hours, so the records should not have been deleted from Kafka.

In the debug logs I get the following messages:

beginning to consume batch messages , Message Count :710
2020-07-02 04:58:31.486 DEBUG 1 --- [ce-notification] o.a.kafka.clients.FetchSessionHandler    : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Node 1002 sent an incremental fetch response for session 253529272 with 1 response partition(s)
2020-07-02 04:58:31.486 DEBUG 1 --- [ce-notification] o.a.k.c.consumer.internals.Fetcher       : [Consumer clientId=consumer-1, groupId=cm-persistence-notification] Fetch READ_UNCOMMITTED at offset 11372 for partition nbi.cm.changes.mo.test12-1 returned fetch data (error=OFFSET_OUT_OF_RANGE, highWaterMark=-1, lastStableOffset = -1, logStartOffset = -1, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)

Whenever this happens, we get OFFSET_OUT_OF_RANGE.

Listener class:

@KafkaListener( id = "batch-listener-0", topics = "topic1", groupId = "test", containerFactory = KafkaConsumerConfiguration.CONTAINER_FACTORY_NAME )
public void receive(
    @Payload List<String> messages,
    @Header( KafkaHeaders.RECEIVED_MESSAGE_KEY ) List<String> keys,
    @Header( KafkaHeaders.RECEIVED_PARTITION_ID ) List<Integer> partitions,
    @Header( KafkaHeaders.RECEIVED_TOPIC ) List<String> topics,
    @Header( KafkaHeaders.OFFSET ) List<Long> offsets,
    Acknowledgment ack )
{
    long startTime = System.currentTimeMillis();

    handleNotifications( messages ); // will take more than 5s to process all messages

    long timeElapsed = System.currentTimeMillis() - startTime;

    LOGGER.info( "Execution Time :{}", timeElapsed );

    ack.acknowledge();
    LOGGER.info( "Acknowledgment Success" );
}
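Since `handleNotifications()` takes more than 5 seconds per batch, one thing worth checking is that `max.poll.interval.ms` comfortably exceeds the worst-case batch processing time; if a `poll()` does not happen within that interval, the broker considers the consumer dead and triggers a rebalance. A minimal sketch of the consumer factory configuration (the bootstrap server and the specific property values are illustrative assumptions, not taken from the question):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

public class KafkaConsumerConfiguration {

    public static final String CONTAINER_FACTORY_NAME = "batchContainerFactory";

    public DefaultKafkaConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Allow up to 10 minutes between poll() calls before the consumer is
        // considered failed and its partitions are rebalanced away.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000);
        // Smaller batches shorten the time spent in handleNotifications() per poll.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}
```

This is only a sketch of the relevant knobs; the actual factory in the question is not shown, so the class shape here is assumed.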

Do I need to close the consumer here? I thought spring-kafka takes care of that automatically. If not, please tell me how to close it in spring-kafka, and also how to check whether a rebalance has happened, because I cannot see any rebalance-related entries in the debug logs.

1 Answer:

Answer 0 (score: 1)

I think your consumer may be rebalancing, because you are not calling consumer.close() at the end of your process.

This is a guess, but if the retention policy is not kicking in (and the logs are not being deleted), that is the only reason I can think of for this behavior.
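To confirm whether a rebalance is actually happening (the question notes that no rebalance-related logs are visible), one option is to register a rebalance listener on the container and log revocations and assignments explicitly. A hedged sketch using spring-kafka's `ConsumerAwareRebalanceListener` (the class name and log messages are illustrative):

```java
import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;

public class LoggingRebalanceListener implements ConsumerAwareRebalanceListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(LoggingRebalanceListener.class);

    @Override
    public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // Fires when this consumer is about to lose partitions in a rebalance.
        LOGGER.info("Rebalance: partitions revoked (before commit): {}", partitions);
    }

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // Fires after a rebalance completes and partitions are (re)assigned.
        LOGGER.info("Rebalance: partitions assigned: {}", partitions);
    }
}
```

It can be attached via the container factory, e.g. `factory.getContainerProperties().setConsumerRebalanceListener(new LoggingRebalanceListener())`. If rebalance log lines then appear around the time of the offset reset, that would support the rebalancing theory.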

Update:

Since you set them up as @KafkaListeners, you can just call stop() on the KafkaListenerEndpointRegistry: kafkaListenerEndpointRegistry.stop()
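Stopping the containers through the registry might look like the following sketch (the wrapper class is hypothetical; the listener id matches the one declared in the question):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class ListenerLifecycle {

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    /** Stop every @KafkaListener container managed by Spring. */
    public void stopAll() {
        kafkaListenerEndpointRegistry.stop();
    }

    /** Stop only the listener declared with id = "batch-listener-0". */
    public void stopBatchListener() {
        kafkaListenerEndpointRegistry.getListenerContainer("batch-listener-0").stop();
    }
}
```

Spring then closes the underlying KafkaConsumer for each stopped container, so no manual consumer.close() call is needed.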