Cannot connect from a Logstash Docker container to a Kafka Docker container

Date: 2019-10-11 17:28:27

Tags: docker apache-kafka docker-compose logstash

I am trying to connect from a Logstash Docker container to a Kafka Docker container, but I always get the following message:

 Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

My docker-compose.yml file is:

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
    depends_on:
      - kafka

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    links:
      - kafka
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  zookeeper:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    container_name: zookeeper
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    networks:
      - elk
    environment:
      LOG_DIR: /tmp/logs

  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

My logstash.conf file is:

input {
    kafka{
        bootstrap_servers => "kafka:9092"
        topics => ["logs"]
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}

All of my containers are running normally, and I can send messages to the Kafka topic from outside the containers.
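As an illustration of what "reachable from the host but not from the container" means, a small TCP probe like the one below (my own sketch, not part of the original setup) succeeds on the host against localhost:9092, while the same probe inside the Logstash container dials the container's own loopback instead of the broker:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the host this checks the published broker port; inside the Logstash
# container the same call checks the *container's* loopback, which has
# nothing listening on 9092 -- hence "Broker may not be available".
print(can_connect("127.0.0.1", 9092))
```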

3 Answers:

Answer 0 (score: 1)

You need to define your listeners based on hostnames that can be resolved by the clients. If the listener is localhost, then the client (Logstash) will try to resolve it as localhost from within its own container, hence the error.
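The key detail is that a Kafka client first contacts a bootstrap server, receives the broker's *advertised* listener in the metadata response, and then resolves that hostname in its own network namespace. A minimal sketch of that resolution step (no real broker; the Docker bridge IPs below are made up for illustration):

```python
# Simulate how a client resolves the hostname the broker advertised.
def resolve_broker(advertised_host: str, client_dns: dict) -> str:
    """Resolve the advertised hostname the way the client's resolver would."""
    return client_dns.get(advertised_host, "unresolvable")

# What name resolution looks like from inside the logstash container
# (illustrative addresses on the Docker bridge network):
logstash_dns = {"kafka": "172.18.0.5", "localhost": "127.0.0.1"}

# Broker advertises localhost:9092 -> logstash dials its own loopback, not Kafka.
print(resolve_broker("localhost", logstash_dns))  # 127.0.0.1 (the wrong box)
# Broker advertises kafka:29092 -> Docker's DNS points at the kafka container.
print(resolve_broker("kafka", logstash_dns))      # 172.18.0.5
```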

I've covered this in detail here, but essentially you need this (note that each Kafka listener must have a unique name, mapped to a protocol via listener.security.protocol.map):

KAFKA_LISTENERS: DOCKER://0.0.0.0:29092,HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka:29092,HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER

Then any container on the Docker network uses kafka:29092 to reach the broker, so the Logstash config becomes:

bootstrap_servers => "kafka:29092"

Any client on the host itself continues to use localhost:9092.
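One wrinkle with the strimzi image used in the question: its `command` passes every broker property as a `--override` flag, so the extra listener properties have to be threaded through there as well. A sketch of how the kafka service could look (the listener names DOCKER/HOST are my own choice, not required values):

```yaml
  kafka:
    image: strimzi/kafka:0.11.3-kafka-2.1.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override listener.security.protocol.map=$${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP} --override inter.broker.listener.name=DOCKER --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    networks:
      - elk
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_LISTENERS: DOCKER://0.0.0.0:29092,HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka:29092,HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

With a single broker, inter.broker.listener.name only matters for which listener the broker registers for replication traffic, but Kafka still requires it to name one of the configured listeners once the default PLAINTEXT name is gone.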

You can see this done with Docker Compose here: https://github.com/confluentinc/demo-scene/blob/master/build-a-streaming-pipeline/docker-compose.yml#L40

Answer 1 (score: 0)

The Kafka advertised listeners should be defined like this:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_LISTENERS: PLAINTEXT://kafka:9092

Answer 2 (score: 0)

You can use the HOST machine's IP address for the Kafka advertised listener, so that both your Docker services and services running outside the Docker network can access it:

KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${HOST_IP}:9092
KAFKA_LISTENERS: PLAINTEXT://${HOST_IP}:9092

For reference, you can read this article: https://rmoff.net/2018/08/02/kafka-listeners-explained/
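For this approach, HOST_IP has to be exported in the shell before `docker-compose up` so that Compose can substitute it. One way that could look on Linux (a sketch; `hostname -I` is Linux-specific — on macOS something like `ipconfig getifaddr en0` would be used instead):

```shell
# Detect the host's primary IP and export it for docker-compose substitution.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}   # fall back to loopback if detection fails
export HOST_IP
echo "HOST_IP=$HOST_IP"
# docker-compose up -d   # compose substitutes ${HOST_IP} in the kafka service
```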