Logstash Kafka input performance / configuration tuning

Date: 2017-01-09 19:35:22

Tags: elasticsearch apache-kafka logstash apache-kafka-connect

I am using Logstash to move data from Kafka into Elasticsearch, and I am getting the following warning:

WARN org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Auto offset commit failed for group kafka-es-sink: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

I have tried tuning the session timeout (to 30000) and the max poll records (to 250).

The topic produces 1000 events per second in Avro format. It has 10 partitions (across 2 brokers), and there are two Logstash instances with 5 consumer threads each.
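For reference, the setup described above could look roughly like the following Logstash pipeline. This is a minimal sketch assuming Logstash 5.x with the logstash-input-kafka plugin; the broker addresses, topic name, group id, and Elasticsearch host are placeholders, and `session_timeout_ms` / `max_poll_records` are set to the values mentioned above:

```
input {
  kafka {
    bootstrap_servers   => "kafka1:9092,kafka2:9092"  # placeholder brokers
    topics              => ["my-avro-topic"]          # placeholder topic
    group_id            => "kafka-es-sink"
    consumer_threads    => 5                          # 5 threads per instance, 2 instances = 10 partitions
    session_timeout_ms  => "30000"
    max_poll_records    => "250"
    codec               => avro { schema_uri => "/path/to/schema.avsc" }  # placeholder schema
  }
}

output {
  elasticsearch {
    hosts => ["es-host:9200"]  # placeholder host
    index => "my-index-%{+YYYY.MM.dd}"
  }
}
```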

I have no such problems with other topics that produce around 100-300 events per second.

I think it must be a configuration issue, because a second connector between Kafka and Elasticsearch on the same topic works fine (Confluent's kafka-connect-elasticsearch).
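The working Kafka Connect sink mentioned above would typically be configured with a properties file along these lines. This is a sketch of a standalone kafka-connect-elasticsearch configuration; the connector name, topic, and connection URL are placeholders:

```
name=kafka-es-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=10
topics=my-avro-topic
connection.url=http://es-host:9200
type.name=kafka-connect
key.ignore=true
```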

The main goal is to compare Kafka Connect and Logstash as connectors. Does anyone have experience with this?

0 Answers

There are no answers yet.