Spring Boot Kafka session.timeout

Time: 2019-10-26 06:04:52

Tags: spring spring-boot apache-kafka spring-kafka

In my Spring Boot project, I use Kafka to listen for messages, as follows:

@KafkaListener(topics = Constants.ARTICLE_TOPIC, groupId = "articleConsumer")
private void receiveArticle(String content) {
    try {
        if (null != content) {
            messageHandler.handleMessage(content);
        }
    } catch (Exception e) {
        logger.error("===Kafka[Article]Consumer error===", e);
    }
}

@KafkaListener(topics = Constants.BUSINESS_TOPIC, groupId = "businessConsumer")
private void receiveBusiness(String content) {}

@KafkaListener(topics = Constants.RULE_TOPIC, groupId = "ruleConsumer")
private void receiveRule(String content) {}

The handleMessage method is:

public void handleMessage(String msg) throws Exception {
    logger.error("---------handle message-------------");
    // ... code that handles the message ...
}

I have a breakpoint on the logger.error("---------handle message-------------"); line.

When I receive a msg, the program stops at logger.error("---------handle message-------------");. After 10s I press F9 (and remove the breakpoint at the same time) so the program continues running.

Processing the msg takes 20s-30s.

What confuses me is that when I sit at the breakpoint for more than 10s, I get Kafka errors. Does the breakpoint stop Kafka from sending heartbeats?

The errors are:

2019-10-26 14:34:49,710 ERROR [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] internals.ConsumerCoordinator (:) - [Consumer clientId=consumer-4, groupId=businessConsumer] Offset commit failed on partition medium_business_status-0 at offset 0: The coordinator is not aware of this member.
2019-10-26 14:34:49,710 ERROR [org.springframework.kafka.KafkaListenerEndpointContainer#2-0-C-1] internals.ConsumerCoordinator (:) - [Consumer clientId=consumer-2, groupId=ruleConsumer] Offset commit failed on partition rules-0 at offset 0: The coordinator is not aware of this member.
2019-10-26 14:34:49,710 WARN  [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] internals.ConsumerCoordinator (:) - [Consumer clientId=consumer-4, groupId=businessConsumer] Asynchronous auto-commit of offsets {medium_business_status-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
2019-10-26 14:34:49,710 WARN  [org.springframework.kafka.KafkaListenerEndpointContainer#2-0-C-1] internals.ConsumerCoordinator (:) - [Consumer clientId=consumer-2, groupId=ruleConsumer] Asynchronous auto-commit of offsets {rules-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
2019-10-26 14:34:49,711 WARN  [org.springframework.kafka.KafkaListenerEndpointContainer#2-0-C-1] internals.ConsumerCoordinator (:) - [Consumer clientId=consumer-2, groupId=ruleConsumer] Synchronous auto-commit of offsets {rules-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
2019-10-26 14:34:49,711 WARN  [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] internals.ConsumerCoordinator (:) - [Consumer clientId=consumer-4, groupId=businessConsumer] Synchronous auto-commit of offsets {medium_business_status-0=OffsetAndMetadata{offset=0, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

businessConsumer and ruleConsumer are not involved in processing the msg, and they have no messages, yet they also report errors. The articleConsumer error is:

2019-10-26 14:34:50,237 INFO  [kafka-coordinator-heartbeat-thread | articleConsumer] internals.AbstractCoordinator (:) - [Consumer clientId=consumer-6, groupId=articleConsumer] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
2019-10-26 14:34:53,500 INFO  [kafka-coordinator-heartbeat-thread | articleConsumer] internals.AbstractCoordinator (:) - [Consumer clientId=consumer-6, groupId=articleConsumer] Attempt to heartbeat failed for since member id consumer-6-944e8b2d-7519-461c-99e6-e957f9eb97f6 is not valid.

I know these Kafka properties:

session.timeout.ms = 10000
max.poll.interval.ms = 300000

session.timeout.ms is used to judge whether the consumer is still alive;

max.poll.interval.ms is the maximum time between two poll() calls.
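
For reference, these consumer properties can be overridden in spring-kafka, for example through a ConsumerFactory bean. The sketch below is only illustrative (the class name, bootstrap server address, and the raised timeout values are assumptions, not taken from this project):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // session.timeout.ms: how long the broker waits without heartbeats
        // before it considers the consumer dead (default 10000 here).
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // max.poll.interval.ms: maximum allowed time between two poll() calls
        // before the consumer is removed from the group (default 300000).
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}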

The Spring Boot version is <version>2.1.6.RELEASE</version>.

0 Answers:

No answers yet.