How to resolve a RecordTooLargeException in the Kafka producer?

Date: 2017-04-07 03:51:54

Tags: apache-kafka kafka-producer-api flink-streaming

I am using FlinkKafkaProducer08 to send records to Kafka. But sometimes I get the following exception, even though the record printed in the error message is small, around 0.02 MB in size.

java.lang.RuntimeException: Could not forward element to next operator
Caused by: java.lang.RuntimeException: Could not forward element to next operator
Caused by: java.lang.Exception: Failed to send data to Kafka: The message is 1513657 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1513657 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration

I tried changing max.request.size on the producer, but it seems this requires a broker-side change and a broker restart.
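For what it's worth, `max.request.size` itself is a producer-side setting and can be raised in the properties passed to the producer without touching the broker. A minimal sketch (the bootstrap server address is a placeholder, and the 2 MB limit is illustrative; it must still stay at or below the broker's `message.max.bytes`):

```java
import java.util.Properties;

public class ProducerSizeConfig {

    /** Builds producer properties with a raised max.request.size. */
    public static Properties producerProps() {
        Properties props = new Properties();
        // Placeholder broker address; replace with your cluster.
        props.setProperty("bootstrap.servers", "localhost:9092");
        // Allow serialized requests up to 2 MB on the producer side.
        // Note: the broker's message.max.bytes must also permit this size.
        props.setProperty("max.request.size", String.valueOf(2 * 1024 * 1024));
        return props;
    }

    public static void main(String[] args) {
        // prints 2097152
        System.out.println(producerProps().getProperty("max.request.size"));
    }
}
```

These same properties can be handed to the FlinkKafkaProducer08 constructor, which forwards them to the underlying Kafka producer.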

1 Answer:

Answer 0 (score: 2)

There is also a broker setting for the maximum message size:

message.max.bytes : 1,000,000 (default)

So you will need to restart your brokers, but that should not be a problem: Kafka is designed to handle broker bounces robustly.
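As a sketch, the corresponding broker-side change would go in `server.properties` (the 2 MB value is illustrative, chosen to exceed the ~1.5 MB message from the error above):

```properties
# server.properties (broker config)
# Raise the per-message size limit from the 1,000,000-byte default.
message.max.bytes=2097152
# Replicas must be able to fetch messages of this size as well,
# so keep replica.fetch.max.bytes >= message.max.bytes.
replica.fetch.max.bytes=2097152
```

Consumers reading the topic may also need their fetch size raised accordingly, or they will be unable to consume the larger messages.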

Cf. http://kafka.apache.org/082/documentation.html#producerconfigs