I have Kafka 0.10.2.1 deployed on a 3-node cluster, mostly with default settings. The producer is configured as follows:
"bootstrap.servers", 3
"retry.backoff.ms", "1000"
"reconnect.backoff.ms", "1000"
"max.request.size", "5242880"
"key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer"
"value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer"
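For context, here is the same configuration written out as Java `Properties` (a minimal sketch: `buildProducerProps` is a hypothetical helper, and the three broker addresses are placeholders, since the post does not list the actual hostnames):

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Hypothetical helper reproducing the settings from the question.
    // The broker addresses below are placeholders for the 3-node cluster.
    static Properties buildProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node1:9092,node2:9092,node3:9092");
        props.put("retry.backoff.ms", "1000");
        props.put("reconnect.backoff.ms", "1000");
        props.put("max.request.size", "5242880");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProducerProps();
        // Everything else (acks, retries, request.timeout.ms, etc.)
        // is left at the client defaults, as stated above.
        System.out.println(props.getProperty("max.request.size"));
    }
}
```

Note that settings such as `acks`, `retries`, and `request.timeout.ms` are not set here and therefore take their defaults, which is relevant to how the producer behaves when a broker is unreachable.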
What I observe is that when one node in the cluster goes down, I can no longer publish messages to Kafka. When I try, I get the following exception:
05-Apr-2018 22:29:33,362 PDT ERROR [vm18] [KafkaMessageBroker] (default task-43) |default| Failed to publish message for topic deviceConfigRequest:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for deviceConfigRequest-1: 30967 ms has passed since batch creation plus linger time
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:70) [kafka-clients-0.10.2.1.jar:]
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:65) [kafka-clients-0.10.2.1.jar:]
	at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25) [kafka-clients-0.10.2.1.jar:]
	at x.y.x.KafkaMessageBroker.publishMessage(KafkaMessageBroker.java:151) [classes:]
What am I missing?