I have a problem with a Kafka consumer that sometimes throws an exception:
ERROR [*KafkaConsumerWorker] (Thread-125) [] Kafka Consumer thread 235604751 Exception while polling Kafka.: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:820) [kafka-clients-2.3.0.jar:]
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:692) [kafka-clients-2.3.0.jar:]
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1368) [kafka-clients-2.3.0.jar:]
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1330) [kafka-clients-2.3.0.jar:]
at *.kafka.KafkaConsumerWorker.run(KafkaConsumerWorker.java:64) [classes:]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_51]
I cannot find the cause, because at the moment this exception occurs the consumer is not processing any messages. The exceptions happen 2-3 times a day. Part of my consumer configuration is as follows:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [*]
check.crcs = true
client.dns.lookup = default
client.id = 52c94040-05d9-4b57-8006-afcc862f9b62
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = TEST
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 10
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
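
The consumer itself is created by a small factory class (used in the implementation below). As a point of reference, here is a minimal sketch of what KafkaConsumerClient.createConsumer() might look like with these settings; the Message value type, its deserializer, and the topic name are placeholders, since they are not shown in the question:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerClient {

    public static Consumer<String, Message> createConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // redacted in the question
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "TEST");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");     // offsets committed manually
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "10");
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000");  // 5 minutes between poll() calls
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MessageDeserializer.class.getName()); // placeholder

        Consumer<String, Message> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test-topic")); // placeholder topic
        return consumer;
    }
}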
Implementation:
public void run() {
    logger.info("Kafka Consumer thread {} start", hashCode());
    Consumer<String, Message> consumer = null;
    try {
        consumer = KafkaConsumerClient.createConsumer();
        while (start) {
            try {
                // Blocks for up to 300000 ms waiting for records -- the same
                // value as max.poll.interval.ms.
                ConsumerRecords<String, Message> notifications = consumer.poll(300000);
                if (!notifications.isEmpty()) {
                    //processing.....
                }
                // Manual commit, since enable.auto.commit = false.
                consumer.commitSync();
            } catch (Exception e) {
                logger.error("Kafka Consumer thread {} Exception while polling Kafka.", hashCode(), e);
            }
        }
        logger.info("Kafka Consumer thread {} exit", hashCode());
    } finally {
        if (consumer != null) {
            logger.info("Kafka Consumer thread {} closing consumer.", hashCode());
            consumer.close();
        }
    }
}
I know that in this version of the kafka client, heartbeats are sent from a separate thread, which I thought ruled out the case where the consumer spends too much time processing (there is not even any processing happening here). I suspect this is related to the configured timeouts, but I cannot find the exact value.
Answer 0 (score: 1)
Assuming you need to process records in order, you should add events from the consumer loop to an in-memory queue, then hand that queue object over to a brand-new dequeuing thread that runs the processing..... logic, as sketched below.
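
A minimal sketch of that hand-off, reusing the question's Message value type and its //processing..... placeholder; the class name, queue capacity, and poll timeout here are illustrative, not part of the original answer:

import java.time.Duration;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class DecoupledConsumerWorker implements Runnable {

    private final Consumer<String, Message> consumer;
    // Bounded queue: put() blocks when full, which gives natural backpressure.
    private final BlockingQueue<ConsumerRecord<String, Message>> queue =
            new LinkedBlockingQueue<>(1000);
    private volatile boolean start = true;

    public DecoupledConsumerWorker(Consumer<String, Message> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void run() {
        // A dedicated thread drains the queue, so slow processing can no longer
        // delay the next poll() and exceed max.poll.interval.ms.
        new Thread(this::processLoop, "processor").start();
        try {
            while (start) {
                ConsumerRecords<String, Message> notifications = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, Message> notification : notifications) {
                    queue.put(notification); // blocks if the processor falls behind
                }
                // Caveat: this commits records that may still be sitting in the
                // queue; strict at-least-once delivery would require tracking
                // processed offsets before committing.
                consumer.commitSync();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            consumer.close();
        }
    }

    private void processLoop() {
        try {
            while (true) {
                ConsumerRecord<String, Message> notification = queue.take();
                //processing.....
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}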
The error shows that whatever you are doing in that loop is enough to block the consumer and trigger a rebalance.
I would also suggest using a higher-level library that can handle backpressure, such as the Connect/Streams API, Vertx, Smallrye Messaging, or Akka Streams.
Answer 1 (score: 0)
You should set the Duration passed to Consumer#poll(Duration) lower than max.poll.interval.ms, which is the maximum time the Consumer can stay idle before fetching more records. From the Kafka documentation:
If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member
By the time you commit the offsets, the consumer has already been considered failed, its partitions have already been revoked, and the group is rebalancing.
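
For example, a sketch adapting the question's loop (reusing its consumer, start, and Message types), so the consumer re-enters poll() every few seconds instead of sitting in one blocking call for the full 300000 ms; the 5-second value is illustrative:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;

while (start) {
    // Re-enter poll() every 5 seconds -- far below max.poll.interval.ms
    // (300000 ms) -- so an idle consumer never looks failed to the group.
    ConsumerRecords<String, Message> notifications = consumer.poll(Duration.ofSeconds(5));
    if (!notifications.isEmpty()) {
        //processing.....
        consumer.commitSync(); // commit only after records were actually fetched
    }
}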