org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception

Date: 2018-12-06 08:46:24

Tags: apache-kafka

I am hitting this exception when consuming small batches from Kafka and committing with commitAsync:

couldn't ack 17 messages
org.apache.kafka.clients.consumer.RetriableCommitFailedException: Offset commit failed with a retriable exception. You should retry committing the latest consumed offsets.
Caused by: org.apache.kafka.common.errors.DisconnectException

It looks like the __consumer_offsets topic could not be fully replicated within 5 seconds (the default for offsets.commit.timeout.ms).
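If offsets-topic replication genuinely cannot finish in time, the broker-side timeout can be raised; a minimal sketch for server.properties (the value 10000 is an arbitrary example, not a recommendation):

# Broker-side setting (server.properties). Offset commits are delayed until
# all replicas of __consumer_offsets acknowledge, or until this timeout
# (default 5000 ms) expires.
offsets.commit.timeout.ms=10000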

Other consumers in the same application commit larger batches to Kafka, and I do not see this error there. The consumer configuration:

config.put("client.id", InetAddress.getLocalHost().getHostAddress() + "_" + clientId + "_" + Thread.currentThread());
        config.put("group.id", "some-id");
        config.put("bootstrap.servers", clusterUrl);
        config.put("auto.offset.reset", "latest");
        config.put("heartbeat.interval.ms", 3000);
        config.put("session.timeout.ms", 60000);
        config.put("request.timeout.ms", 60000 + 5000);
        config.put("enable.auto.commit", "false");
        config.put("key.deserializer", StringDeserializer.class.getName());
        config.put("value.deserializer", StringDeserializer.class.getName());
        config.put("fetch.min.bytes", 1000000);
        config.put("max.partition.fetch.bytes", 1000000);
        config.put("fetch.max.wait.ms", 50);

What could be causing this?

1 answer:

Answer 0 (score: 0)

This is a Kafka Connect concept: when a retriable exception is thrown, the consumer offset commit does not happen, and the same batch of records is retried.

The batch is retried up to 10 times, with a 3-second interval between attempts; see the config sketch after the link below.

https://docs.confluent.io/current/connect/kafka-connect-jdbc/sink-connector/sink_config_options.html#retries
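
A minimal sketch of the corresponding JDBC sink connector settings, assuming the max.retries and retry.backoff.ms options documented at the link above (the connector name, topic, and connection URL are hypothetical placeholders):

# Hypothetical JDBC sink connector config; only the last two settings
# control the retry behaviour described above.
name=my-jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=some-topic
connection.url=jdbc:postgresql://localhost:5432/mydb
# Retry a failed batch up to 10 times (default)
max.retries=10
# Wait 3 seconds between retry attempts (default)
retry.backoff.ms=3000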