I have three different nodes, each running Docker with Ubuntu on it. I want to build a Kafka cluster out of these three nodes; in fact, I installed Docker on each node and loaded an Ubuntu image into it. For node 150.20.11.157, I configured "zookeeper.properties" inside the Docker environment as follows:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=0.0.0.0:2888:3888
server.2=150.20.11.134:2888:3888
server.3=150.20.11.137:2888:3888
clientPort=2186
For node 150.20.11.134, the "zookeeper.properties" file in the Docker environment looks like this:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=150.20.11.157:2888:3888
server.2=0.0.0.0:2888:3888
server.3=150.20.11.137:2888:3888
clientPort=2186
For node 150.20.11.137, the "zookeeper.properties" file in the Docker environment is:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=150.20.11.157:2888:3888
server.2=150.20.11.134:2888:3888
server.3=0.0.0.0:2888:3888
clientPort=2186
In addition, I set "server.properties" for node 150.20.11.157:
broker.id=0
port=9092
listeners = PLAINTEXT://150.20.11.157:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
The "server.properties" for node 150.20.11.134 is:
broker.id=1
port=9092
listeners = PLAINTEXT://150.20.11.134:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
The "server.properties" for node 150.20.11.137 is:
broker.id=2
port=9092
listeners = PLAINTEXT://150.20.11.137:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
Also, each node has a "myid" file in "/tmp/zookeeper/data" of its Docker environment that contains the server ID.
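For reference, a myid file like this can be created with a couple of shell commands. This is only a sketch that assumes the dataDir above; the number has to match the server.N entry for that node:

# Inside the Docker environment of node 150.20.11.157 (server.1):
mkdir -p /tmp/zookeeper/data
echo 1 > /tmp/zookeeper/data/myid
# Use "echo 2" on 150.20.11.134 (server.2) and "echo 3" on 150.20.11.137 (server.3).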
To build a Kafka cluster out of these three nodes, I made a "docker-compose.yaml" file and a Dockerfile.
Here is my docker-compose file:
version: '3.7'
services:
  zookeeper:
    build: .
    command: /root/kafka_2.11-2.0.1/bin/zookeeper-server-start.sh /root/kafka_2.11-2.0.1/config/zookeeper.properties
    ports:
      - 2186:2186
  kafka1:
    build:
      context: .
      args:
        brokerId: 0
    command: /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties
    depends_on:
      - zookeeper
  kafka2:
    build:
      context: .
      args:
        brokerId: 1
    command: /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties
    depends_on:
      - zookeeper
  kafka3:
    build:
      context: .
      args:
        brokerId: 2
    command: /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties
    depends_on:
      - zookeeper
  producer:
    build: .
    command: bash -c "sleep 4 && /root/kafka_2.11-2.0.1/bin/kafka-topics.sh --create --zookeeper zookeeper:2186 --replication-factor 2 --partitions 3 --topic dates && while true; do date | /kafka_2.11-2.0.1/bin/kafka-console-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic dates; sleep 1; done"
    depends_on:
      - zookeeper
      - kafka1
      - kafka2
      - kafka3
  consumer:
    build: .
    command: bash -c "sleep 6 && /root/kafka_2.11-2.0.1/bin/kafka-console-consumer.sh localhost:9092 --topic dates --bootstrap-server kafka1:9092,kafka2:9092,kafka3:9092"
    depends_on:
      - zookeeper
      - kafka1
      - kafka2
      - kafka3
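The Dockerfile itself is not shown here. As an illustration only, a minimal hypothetical Dockerfile that would match the brokerId build argument above could look like the sketch below (the base image, the Java package, and the COPY source are assumptions):

# Hypothetical sketch only -- the actual Dockerfile is not shown in this question.
FROM ubuntu:18.04
# Java is needed to run ZooKeeper and Kafka.
RUN apt-get update && apt-get install -y openjdk-8-jre-headless && rm -rf /var/lib/apt/lists/*
# Assume the Kafka distribution sits next to the Dockerfile and is copied into /root.
COPY kafka_2.11-2.0.1 /root/kafka_2.11-2.0.1
# The brokerId build argument from docker-compose.yaml is written into server.properties.
ARG brokerId=0
RUN sed -i "s/^broker\.id=.*/broker.id=${brokerId}/" /root/kafka_2.11-2.0.1/config/server.properties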
The problem shows up after the Docker build: when I run "sudo docker-compose up" on each node, it does not come up completely. Some of my logs are as follows:
zookeeper_1 | [2019-01-17 16:09:27,197] INFO Reading configuration from: /root/kafka_2.11-2.0.1/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka3_1 | [2019-01-17 16:09:29,426] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka3_1 | [2019-01-17 16:09:29,702] INFO starting (kafka.server.KafkaServer)
kafka3_1 | [2019-01-17 16:09:29,702] INFO Connecting to zookeeper on 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186 (kafka.server.KafkaServer)
kafka1_1 | [2019-01-17 16:09:30,012] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
zookeeper_1 | [2019-01-17 16:09:27,240] INFO Resolved hostname: 150.20.11.137 to address: /150.20.11.137 (org.apache.zookeeper.server.quorum.QuorumPeer)
kafka1_1 | [2019-01-17 16:09:30,486] INFO starting (kafka.server.KafkaServer)
kafka3_1 | [2019-01-17 16:09:29,715] INFO [ZooKeeperClient] Initializing a new session to 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186. (kafka.zookeeper.ZooKeeperClient)
zookeeper_1 | [2019-01-17 16:09:27,241] INFO Resolved hostname: 150.20.11.134 to address: /150.20.11.134 (org.apache.zookeeper.server.quorum.QuorumPeer)
zookeeper_1 | [2019-01-17 16:09:27,241] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
kafka3_1 | [2019-01-17 16:09:29,720] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2019-01-17 16:09:27,241] INFO Defaulting to majority quorums (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
kafka3_1 | [2019-01-17 16:09:29,721] INFO Client environment:host.name=be08b050be4c (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2019-01-17 16:09:27,242] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /root/kafka_2.11-2.0.1/config/zookeeper.properties
zookeeper_1 |     at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:156)
zookeeper_1 |     at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:104)
zookeeper_1 |     at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
zookeeper_1 | Caused by: java.lang.IllegalArgumentException: /tmp/zookeeper/data/myid file is missing
zookeeper_1 |     at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:408)
zookeeper_1 |     at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:152)
zookeeper_1 |     ... 2 more
kafka1_1 | [2019-01-17 16:09:30,487] INFO Connecting to zookeeper on 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186 (kafka.server.KafkaServer)
zookeeper_1 | Invalid config, exiting abnormally
In fact, I had already configured this Kafka cluster on the three nodes without Docker, and I could run the Zookeeper and Kafka servers there without any problem.
Could you please tell me what I am doing wrong in configuring this cluster?
Any help would be appreciated.
Answer 0 (score: 1):
I changed the docker-compose file and solved the problem. Zookeeper and the Kafka servers now run without issues, the topic is created, and the producer and consumer work with the topic across the three nodes. The docker-compose file for one of my nodes looks like this:
version: '3.7'
services:
  zookeeper:
    image: ubuntu_mesos
    command: /root/kafka_2.11-2.0.1/bin/zookeeper-server-start.sh /root/kafka_2.11-2.0.1/config/zookeeper.properties
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2186
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 10
      ZOOKEEPER_SYNC_LIMIT: 5
      ZOOKEEPER_SERVERS: 0.0.0.0:2888:3888;150.20.11.134:2888:3888;150.20.11.137:2888:3888
    network_mode: host
    expose:
      - 2186
      - 2888
      - 3888
    ports:
      - 2186:2186
      - 2888:2888
      - 3888:3888
  kafka:
    image: ubuntu_mesos
    command: bash -c "sleep 20; /root/kafka_2.11-2.0.1/bin/kafka-server-start.sh /root/kafka_2.11-2.0.1/config/server.properties"
    network_mode: host
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: 150.20.11.157:2186,150.20.11.134:2186,150.20.11.137:2186
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://150.20.11.157:9092
    expose:
      - 9092
    ports:
      - 9092:9092
  producer:
    image: ubuntu_mesos
    command: bash -c "sleep 40; /root/kafka_2.11-2.0.1/bin/kafka-topics.sh --create --zookeeper 150.20.11.157:2186 --replication-factor 2 --partitions 3 --topic testFlink -- /root/kafka_2.11-2.0.1/bin/kafka-console-producer.sh --broker-list 150.20.11.157:9092 --topic testFlink"
    depends_on:
      - zookeeper
      - kafka
  consumer:
    image: ubuntu_mesos
    command: bash -c "sleep 44; /root/kafka_2.11-2.0.1/bin/kafka-console-consumer.sh --bootstrap-server 150.20.11.157:9092 --topic testFlink --from-beginning"
    depends_on:
      - zookeeper
      - kafka
The other two nodes have a docker-compose file like the one above. Hope it helps others.
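One possible way to check the cluster, after bringing the stack up with "sudo docker-compose up" on each node, is to inspect the topic from the kafka service. This is only a sketch that reuses the paths and addresses from the compose file above:

# Confirm the topic has 3 partitions and replication factor 2
# (run from the directory containing docker-compose.yaml on any node):
sudo docker-compose exec kafka /root/kafka_2.11-2.0.1/bin/kafka-topics.sh --describe --zookeeper 150.20.11.157:2186 --topic testFlink

# Read a few records to confirm the producer is writing:
sudo docker-compose exec kafka /root/kafka_2.11-2.0.1/bin/kafka-console-consumer.sh --bootstrap-server 150.20.11.157:9092 --topic testFlink --from-beginning --max-messages 5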