Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available

Asked: 2017-03-06 19:18:13

Tags: elasticsearch apache-kafka logstash apache-zookeeper logstash-configuration

I downloaded the zip file onto a virtual machine with a fresh Ubuntu install, so I have Logstash version 5.2.2.

I created a sample config file, logstash-sample.conf, with the following entries:

input{
        stdin{ }
}
output{
        stdout{ }
}

Running the command $ bin/logstash -f logstash-sample.conf worked perfectly.

Then, on the same Ubuntu machine, I installed Kafka following exactly the procedure described here: https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04, up through step 7.

I then modified the logstash-sample.conf file to contain the following:

input {
        kafka{
                bootstrap_servers => "localhost:9092"
                topics => ["TutorialTopic"]
        }
}
output {
        stdout { codec => rubydebug }
}

This time I got the following error:

sample@sample-VirtualBox:~/Downloads/logstash-5.2.2$ bin/logstash -f logstash-sample.conf

Sending Logstash's logs to /home/rs-switch/Downloads/logstash-5.2.2/logs which is now configured via log4j2.properties
[2017-03-07T00:26:25,629][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-03-07T00:26:25,650][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-07T00:26:26,039][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Ruby-0-Thread-14: /home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:229" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available
        at org.apache.kafka.common.protocol.types.Schema.read(org/apache/kafka/common/protocol/types/Schema.java:73)
        at org.apache.kafka.clients.NetworkClient.parseResponse(org/apache/kafka/clients/NetworkClient.java:380)
        at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(org/apache/kafka/clients/NetworkClient.java:449)
        at org.apache.kafka.clients.NetworkClient.poll(org/apache/kafka/clients/NetworkClient.java:269)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:360)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:224)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:192)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:163)
        at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java:179)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:974)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:938)
        at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
        at RUBY.thread_runner(/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:239)
        at java.lang.Thread.run(java/lang/Thread.java:745)
[2017-03-07T00:26:28,742][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

Can anyone help me fix this issue? I have been struggling to set up ELK for the past few weeks without success.

1 Answer:

Answer 0 (score: 4):

You are most likely running into a version conflict, which causes this problem. Check the compatibility matrix in the Logstash Kafka input plugin documentation.
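You can read the Kafka client generation straight out of the stack trace: the gem path embeds the plugin version (logstash-input-kafka-5.1.6 here), which is what you compare against the compatibility matrix. A small shell sketch extracting it from the path shown in the error above:

```shell
# Path copied verbatim from the stack trace in the question.
gem_path="/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb"

plugin_dir="${gem_path#*gems/}"           # drop everything up to gems/
plugin_dir="${plugin_dir%%/*}"            # keep just the gem directory name
plugin_version="${plugin_dir##*-kafka-}"  # strip the gem name prefix

echo "plugin version: $plugin_version"
```

On a live install, `ls vendor/bundle/jruby/1.9/gems | grep logstash-input-kafka` inside the Logstash directory shows the same thing.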

The link you followed to install Kafka installs version 0.8.2.1, which does not work with a Kafka 0.10 client. Kafka does version checking and maintains backward compatibility, but only when the broker is newer than the client, which is not the case here. I recommend installing a current version of Kafka; there have been many improvements since 0.8 that you would miss out on if you instead tried to downgrade Logstash.
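The odd-looking error message is itself a symptom of the mismatch: the 0.10 client parses the 0.8 broker's response against the wrong schema, so arbitrary response bytes get interpreted as an array length. A shell sketch illustrating this, plus a way to confirm the installed broker version from the jar name in Kafka's libs/ directory (the exact filename below is an assumption matching the tutorial's 0.8.2.1 download):

```shell
# 873589 is just the hex bytes 0x000D5475 read as a 32-bit big-endian
# integer -- garbage interpreted as a length field.
printf 'bogus array size: %d\n' 0x000D5475

# The broker jar filename encodes the Kafka version after the last '-'.
# On a live install: ls ~/kafka/libs/ | grep '^kafka_'
broker_jar="kafka_2.11-0.8.2.1.jar"      # hypothetical name per the tutorial
broker_version="${broker_jar##*-}"       # strip through the last '-'
broker_version="${broker_version%.jar}"  # drop the .jar suffix

echo "broker version: $broker_version"
```

If that prints anything in the 0.8.x range while the plugin bundles a 0.10 client, you have confirmed the conflict.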

Have a look at the Confluent Platform Quickstart for an easy way to get started.