org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 30662099 larger than 30662028)

Date: 2017-03-21 13:17:56

Tags: apache-kafka devops elastic-stack flume-ng bigdata

I am trying to push data from a Flume channel to a Kafka cluster using the Kafka sink. I can see the data arriving on the relevant topic, but at the same time I observe the exception trace below far too frequently in the Kafka broker logs:

[2017-03-21 16:47:56,250] WARN Unexpected error from /10.X.X.X; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 30662099 larger than 30662028)
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:91)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
        at kafka.network.Processor.run(SocketServer.scala:413)
        at java.lang.Thread.run(Thread.java:745)  
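For context, a minimal sketch of the kind of Flume agent configuration involved (agent, channel, topic, and broker address below are placeholders, not my real values; property keys follow the Flume 1.7 KafkaSink documentation):

    # Flume Kafka sink (sketch; a1/k1/c1, topic, and address are placeholders)
    a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
    a1.sinks.k1.channel = c1
    a1.sinks.k1.kafka.bootstrap.servers = 10.X.X.X:9092
    a1.sinks.k1.kafka.topic = my-topic
    # Producer properties are passed through with the kafka.producer.* prefix
    a1.sinks.k1.kafka.producer.max.request.size = 30662028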

Initial analysis pointed me to my Flume logs, where I observed this exception trace:

21 Mar 2017 16:25:32,560 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.SinkRunner$PollingRunner.run:158)  - Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to publish events
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:252)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:56)
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:43)
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:229)
        ... 3 more

From the first stack trace it looks like Flume is trying to push a request of 30662099 bytes, while the receiving Kafka broker enforces a limit of 30662028 bytes.
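As far as I can tell, the limit in that check is the broker's socket.request.max.bytes (the largest single request the socket server will read), which is separate from the per-message limit message.max.bytes. A sketch of the server.properties entries involved, assuming the 30662028 figure was set there:

    # Kafka broker server.properties (sketch; values assumed, not confirmed)
    # Largest request the socket server will read; this is the 30662028
    # appearing in the InvalidReceiveException above
    socket.request.max.bytes=30662028
    # Largest record batch the broker will accept
    message.max.bytes=30662028
    # Replicas must be able to fetch at least message.max.bytes
    replica.fetch.max.bytes=30662028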

I have kept the message send and receive sizes identical on the producer (Flume) and on the broker (Kafka), i.e. 30662028 bytes. My worry is: if Flume sends at most 30662028 bytes, where do the extra 71 bytes (30662099 - 30662028) come from? Are they protocol overhead accumulated on top of my producer's payload, forming a final request of 30662099 bytes and causing this message to be lost?
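If that is what is happening, I assume the fix is to keep the producer's request ceiling strictly below the broker's socket limit, leaving headroom for request-level overhead (request header, topic and partition metadata), roughly along these lines (values are illustrative only):

    # Producer-side sizing (sketch): leave headroom between the producer's
    # request ceiling and the broker's socket limit so overhead cannot tip
    # a full-size request over the broker's check
    a1.sinks.k1.kafka.producer.max.request.size = 30000000
    # Broker side must then satisfy:
    #   socket.request.max.bytes >= max.request.size + overhead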

Any help would be greatly appreciated!!

0 Answers:

No answers yet