Kafka producer says "UNKNOWN_TOPIC_OR_PARTITION"

Asked: 2018-11-05 16:21:02

Tags: docker apache-kafka docker-compose kafka-producer-api ruby-kafka

I have been struggling for days to get kafka-docker working, and I can't figure out what I'm doing wrong. Right now I can't access any topic with the ruby-kafka client, because the node "doesn't exist". This is my docker-compose.yml file:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9092:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9093:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka:0.9.0.1
    ports:
      - "9094:9092"
    links:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

I set `KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'` because I want to create topics manually, so I went into the first broker's container and ran the following:

./kafka-topics.sh --create --zookeeper 172.19.0.2:2181 --topic test1 --partitions 4 --replication-factor 3

Everything seemed fine:

./kafka-topics.sh --list --zookeeper 172.19.0.2:2181
test1
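To confirm that the topic's partitions actually have leaders assigned (and not just that the topic name is registered in ZooKeeper), it can also help to describe the topic. This is a sketch using the same ZooKeeper address as the commands above:

```shell
# Describe the topic: shows each partition's leader, replicas, and ISR.
# A partition with "Leader: -1" (or "none") means no broker owns it, which
# typically produces UNKNOWN_TOPIC_OR_PARTITION errors on the producer side.
./kafka-topics.sh --describe --zookeeper 172.19.0.2:2181 --topic test1
```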

But when I try this:

./kafka-console-producer.sh --broker-list localhost:9092 --topic test1

it says:

WARN Error while fetching metadata with correlation id 24 : {test1=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

If I try to create the topic again, it says the topic already exists, so I don't know what's going on.
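For reference, the failing ruby-kafka call looks roughly like this. This is a sketch; the broker address assumes the `KAFKA_ADVERTISED_HOST_NAME` from the compose file above, and the client ID is arbitrary:

```ruby
require "kafka"

# The seed brokers must be addresses the client can actually reach from
# outside Docker, i.e. the advertised host/port pairs, not the
# container-internal ones.
kafka = Kafka.new(["192.168.99.100:9092"], client_id: "test-producer")

# Raises Kafka::UnknownTopicOrPartition when the broker cannot resolve
# the topic for this client.
kafka.deliver_message("hello", topic: "test1")
```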

2 Answers:

Answer 0 (score: 0)

You need to get the network configuration right: Kafka clients connect across host boundaries, and every broker must advertise an address the client can actually reach.

This post explains it in detail.

You may also want to look at https://github.com/confluentinc/cp-docker-images/blob/5.0.0-post/examples/cp-all-in-one/docker-compose.yml for an example of a working Docker Compose file.
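As an illustration of the listener split that post describes, a single broker can declare one listener for traffic inside the Docker network and another for host-side clients. This is a minimal sketch, assuming the host IP 192.168.99.100 from the question and an internal port chosen arbitrarily here (19092):

```yaml
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # INSIDE is what other containers use; OUTSIDE is what host clients use.
    KAFKA_LISTENERS: INSIDE://0.0.0.0:19092,OUTSIDE://0.0.0.0:9092
    KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:19092,OUTSIDE://192.168.99.100:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

With this split, the broker tells external clients to reconnect to 192.168.99.100:9092 (reachable from the host) rather than to a container-internal address, which is the usual cause of metadata errors like the one in the question.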

Answer 1 (score: 0)

We ran into this issue while using Kafka Connect. There are multiple solutions: prune all the Docker images, or change the group ID in the Connect image's configuration, like this:

    image: debezium/connect:1.1
    ports:
      - 8083:8083
    links:
      - schema-registry
    environment:
      - BOOTSTRAP_SERVERS=kafkaanalytics-mgmt.fptsinternal.com:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_connect_statuses
      - INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter