Unable to access Kafka from outside Docker

Date: 2018-11-07 06:05:41

Tags: docker apache-kafka docker-compose

I have a Confluent Kafka setup based on this docker-compose.yml file:

https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/kafka-cluster/docker-compose.yml

I edited the file and added the ports option to the compose file so that the brokers can be reached from outside the host.

---
version: '2' 
services: 
  zookeeper-1: 
    image: confluentinc/cp-zookeeper:latest 
    environment: 
      ZOOKEEPER_SERVER_ID: 1 
      ZOOKEEPER_CLIENT_PORT: 22181 
      ZOOKEEPER_TICK_TIME: 2000 
      ZOOKEEPER_INIT_LIMIT: 5 
      ZOOKEEPER_SYNC_LIMIT: 2 
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888 
    network_mode: "host" 

  zookeeper-2: 
    image: confluentinc/cp-zookeeper:latest 
    environment: 
      ZOOKEEPER_SERVER_ID: 2 
      ZOOKEEPER_CLIENT_PORT: 32181 
      ZOOKEEPER_TICK_TIME: 2000 
      ZOOKEEPER_INIT_LIMIT: 5 
      ZOOKEEPER_SYNC_LIMIT: 2 
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888 
    network_mode: "host" 

  zookeeper-3: 
    image: confluentinc/cp-zookeeper:latest 
    environment: 
      ZOOKEEPER_SERVER_ID: 3 
      ZOOKEEPER_CLIENT_PORT: 42181 
      ZOOKEEPER_TICK_TIME: 2000 
      ZOOKEEPER_INIT_LIMIT: 5 
      ZOOKEEPER_SYNC_LIMIT: 2 
      ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888 
    network_mode: "host" 

  kafka-1: 
    image: confluentinc/cp-kafka:latest 
    depends_on: 
      - zookeeper-1 
      - zookeeper-2 
      - zookeeper-3 
    ports: 
      - "19092" 
    environment: 
      KAFKA_BROKER_ID: 1 
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181 
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092 
    network_mode: "host" 

  kafka-2: 
    image: confluentinc/cp-kafka:latest 
    depends_on: 
      - zookeeper-1 
      - zookeeper-2 
      - zookeeper-3 
    ports: 
      - "29092" 
    environment: 
      KAFKA_BROKER_ID: 2 
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181 
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092 
    network_mode: "host" 

  kafka-3: 
    image: confluentinc/cp-kafka:latest 
    depends_on: 
      - zookeeper-1 
      - zookeeper-2 
      - zookeeper-3 
    ports: 
      - "39092" 
    environment: 
      KAFKA_BROKER_ID: 3 
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181 
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092 
    network_mode: "host" 

I ran the following commands:

docker-compose up --no-start
docker-compose start zookeeper-1
docker-compose start zookeeper-2
docker-compose start zookeeper-3

docker-compose run -d --service-ports kafka-1
docker-compose run -d --service-ports kafka-2
docker-compose run -d --service-ports kafka-3

zookeeper-1, 2 and 3 start, but kafka-1 fails with exit code 0. When I run docker-compose start kafka-1, the ports are not exposed, but the service does start.

Output after running the commands above:

docker ps

CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS               NAMES
7abec60edd7a        confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   3 minutes ago       Up 3 minutes                            kafka-cluster_zookeeper-2_1_c9f58ba3fbc8
68ec403740d6        confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   3 minutes ago       Up 3 minutes                            kafka-cluster_zookeeper-3_1_31e4762a61bb
69d6645487aa        confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   3 minutes ago       Up 3 minutes                            kafka-cluster_zookeeper-1_1_8bbd729b09d8

docker logs for kafka-1, kafka-2 and kafka-3 (all identical):

[main-SendThread(localhost:42181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:42181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(localhost:42181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:42181, initiating session
[main-SendThread(localhost:42181)] INFO org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
[main-SendThread(localhost:32181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:32181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(localhost:32181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/0:0:0:0:0:0:0:1:32181, initiating session
[main-SendThread(localhost:32181)] INFO org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
[main-SendThread(localhost:22181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:22181. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(localhost:22181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/0:0:0:0:0:0:0:1:22181, initiating session
[main-SendThread(localhost:22181)] INFO org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
[main] ERROR io.confluent.admin.utils.ClusterStatus - Timed out waiting for connection to Zookeeper server [localhost:22181,localhost:32181,localhost:42181].
[main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x0 closed

docker-compose ps

                  Name                              Command            State    Ports
-------------------------------------------------------------------------------------
kafka-cluster_kafka-1_1_c79e5ef5d397       /etc/confluent/docker/run   Exit 0        
kafka-cluster_kafka-2_1_d4399ed0a670       /etc/confluent/docker/run   Exit 0        
kafka-cluster_kafka-3_1_2df6f47759c0       /etc/confluent/docker/run   Exit 0        
kafka-cluster_zookeeper-1_1_8bbd729b09d8   /etc/confluent/docker/run   Up            
kafka-cluster_zookeeper-2_1_c9f58ba3fbc8   /etc/confluent/docker/run   Up            
kafka-cluster_zookeeper-3_1_31e4762a61bb   /etc/confluent/docker/run   Up            

2 Answers:

Answer 0 (score: 0)

The problem here is that the address in KAFKA_ADVERTISED_LISTENERS must be set to the host machine's IP address. That way, when you run a client on another machine, that machine can reach Kafka using this IP address.

      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.1.1.1:39092 
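
Once the advertised listener carries the host's address, a client on another machine connects using that same address. For example (this assumes the Kafka command-line tools are installed on that machine and that a topic named test already exists; both are assumptions, not part of the original answer):

kafka-console-producer --broker-list 10.1.1.1:39092 --topic test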

The next point, as @Ntwobike noted, is to remove the -p / ports option. It does not break anything, but it is redundant when --net=host (network_mode: "host") is set.
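
With network_mode: "host" the broker already binds directly on the host's network stack, so the service definition from the question can simply drop the ports section. A trimmed sketch of kafka-1 (depends_on omitted for brevity, 10.1.1.1 is a placeholder for the host IP):

  kafka-1:
    image: confluentinc/cp-kafka:latest
    network_mode: "host"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.1.1.1:19092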

Most importantly, check the firewall. Inspect the iptables rules and set the FORWARD chain policy to ACCEPT (in my case this was already done, since it is a development host).

If you are on CentOS or RHEL, you can try stopping firewalld to check whether the firewall is what is causing the problem:

systemctl stop firewalld

On other distributions (Ubuntu, Mint) it is ufw:

systemctl stop ufw

Alternatively, you can run iptables -F, but before doing so make sure you back up the iptables rules:

iptables-save > /home/iptables_rules_bak

Then, after flushing the rules, you can run:

iptables -P FORWARD ACCEPT
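
If the flush breaks something, the saved rules can be restored from that backup:

iptables-restore < /home/iptables_rules_bak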

Answer 1 (score: 0)

If you look at the all-in-one compose example that Confluent publishes, everything there is already set up so that the broker is reachable from everywhere, without the network_mode: "host" hack (which only works on Linux).

Quoting:

      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092

From outside the Docker network you connect on port 29092, while inside the Docker network you use 9092.
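
Applied to one of the brokers from the question, that pattern looks roughly like this. This is only a sketch, not the exact Confluent file: it assumes the services run on the default compose network (no network_mode: "host"), that the Zookeeper services are reachable by their service names, and that host port 29092 is free:

  kafka-1:
    image: confluentinc/cp-kafka:latest
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT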


More importantly, the Zookeeper connection strings should really point at the other Zookeeper nodes, not at localhost.
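
With the service names from the question's compose file that looks roughly like this (a sketch; each Zookeeper service gets the same ZOOKEEPER_SERVERS string and its own ZOOKEEPER_SERVER_ID, and it again assumes the containers share a compose network instead of network_mode: "host"):

  zookeeper-1:
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888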

I would also point out that running multiple brokers on a single machine does not buy you much, and if you want persistent data (so you do not lose everything when the machine or Docker restarts), you will need volume mounts.
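
A sketch of such mounts, assuming the default data directories of the Confluent images and arbitrary host paths:

  kafka-1:
    volumes:
      - /data/kafka-1:/var/lib/kafka/data

  zookeeper-1:
    volumes:
      - /data/zookeeper-1/data:/var/lib/zookeeper/data
      - /data/zookeeper-1/log:/var/lib/zookeeper/log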