I created a docker-compose file that spins up 10 containers for a log-ingestion pipeline. They work as expected, but the Kafka and Zookeeper containers keep emitting warnings and exceptions. The container stack is: local volume with log files -> Filebeat -> Kafka/Zookeeper cluster (3 Kafka and 3 Zookeeper containers) -> Logstash -> Elasticsearch -> Kibana.
It works as expected: if I drop a new file into the local volume or append new log lines to an existing file, I can see the results in Kibana.
Nevertheless, I keep getting the errors mentioned in the title of this question. Today I am running everything on my laptop, since this is just a PoC, but soon I will have to split this docker-compose into several files, one per OpenShift VM where each container will be deployed. Before moving to the next level, I want to understand/fix these messages (the full logs and files are below).
I found this discussion (issues with SSL), which is somewhat close to my case and points to SSL as the cause. Well, as far as I can tell I am not using SSL at all, although I will have some SSL requirements when moving to production.
So my question is: why am I getting the ZOOKEEPER message "...Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244..." and the KAFKA message "...Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)...", while all connections still appear to work?
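One thing I noticed while digging (my own decoding sketch, not something from the logs themselves): both Zookeeper and Kafka read the first 4 bytes of an incoming connection as a big-endian length prefix, and the suspicious size value looks like ASCII when decoded that way. Kafka's "larger than 104857600" limit also matches its default socket.request.max.bytes of 100 MB:

```python
import struct

# The "Len" that Zookeeper reports is the first 4 bytes of the packet,
# interpreted as a big-endian 32-bit length prefix. Decoding it back:
size = 1212498244
print(struct.pack(">i", size))  # b'HEAD' -- reads like the start of an HTTP request

# The limit in Kafka's InvalidReceiveException is its default
# socket.request.max.bytes of 100 MB:
assert 104857600 == 100 * 1024 * 1024
```

This makes me suspect that some HTTP client on the docker host (172.18.0.1 is the bridge gateway) is probing ports 2181/9092, but I have not confirmed that.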
In case you want to reproduce the environment, here is the GitHub repo with all the yml files: all files
docker-compose.yml
version: '3.2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.2
    volumes:
      - "./kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "./esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.2
    volumes:
      - "./logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
      - kafka2
      - kafka3
  kafka1:
    image: wurstmeister/kafka
    command: [start-kafka.sh]
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9093:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9092
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    links:
      - zoo1
      - zoo2
      - zoo3
    ports:
      - "9094:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9092
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_CREATE_TOPICS: "log:3:3"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2181:2181"
  zoo2:
    image: elevy/zookeeper:latest
    environment:
      MYID: 2
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2182:2181"
  zoo3:
    image: elevy/zookeeper:latest
    environment:
      MYID: 3
      SERVERS: zoo1,zoo2,zoo3
    ports:
      - "2183:2181"
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.5.2
    volumes:
      - "./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "./sample-logs:/sample-logs"
    links:
      - kafka1
      - kafka2
      - kafka3
    depends_on:
      - kafka1
      - kafka2
      - kafka3
logstash.conf
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    client_id => "logstash"
    group_id => "logstash"
    consumer_threads => 3
    topics => ["log"]
    codec => "json"
    tags => ["log", "kafka_source"]
    type => "log"
  }
}
filter {
  if [type] == "request-sample" {
    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
      remove_field => ["timestamp"]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{[type]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
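For context on the kafka input's codec => "json": Filebeat publishes each event to the topic as a JSON document, and the json codec expands its fields into the Logstash event. A rough sketch of what arrives on the "log" topic (the exact field layout is an assumption on my part, based on what I see in Kibana, not taken from any spec):

```python
import json

# Hypothetical Filebeat event as published to the "log" Kafka topic;
# the json codec in the kafka input turns each key into an event field.
raw = ('{"@timestamp":"2020-02-13T16:37:13.000Z",'
       '"message":"example log line",'
       '"log":{"file":{"path":"/sample-logs/app.log"}}}')
event = json.loads(raw)
print(event["message"])            # example log line
print(event["log"]["file"]["path"])  # /sample-logs/app.log
```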
Zookeeper 1 (the log is the same in all the Zookeeper containers)
2020-02-13 16:37:13,202 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1044] - Closed socket connection for client /172.18.0.1:35522 (no session established for client)
2020-02-13 16:37:13,553 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.1:35546
2020-02-13 16:37:13,555 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@373] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244
2020-02-13 16:37:13,556 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1044] - Closed socket connection for client /172.18.0.1:35546 (no session established for client)
.....
2020-02-13 16:38:00,988 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@373] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244
2020-02-13 16:38:00,988 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1044] - Closed socket connection for client /172.18.0.1:36722 (no session established for client)
2020-02-13 16:38:01,783 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.1:36730
2020-02-13 16:38:01,786 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@373] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1212498244
2020-02-13 16:38:01,787 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1044] - Closed socket connection for client /172.18.0.1:36730 (no session established for client)
Kafka 1 (the same messages show up in all three Kafka containers)
        at java.lang.Thread.run(Thread.java:748)
[2020-02-13 16:35:45,777] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1212498244 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:104)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
        at kafka.network.Processor.poll(SocketServer.scala:890)
        at kafka.network.Processor.run(SocketServer.scala:789)
[2020-02-13 16:35:46,794] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
[2020-02-13 16:35:47,822] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
[2020-02-13 16:35:53,296] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
[2020-02-13 16:35:54,350] WARN [SocketServer brokerId
...