IllegalStateException: No entry found for connection 1001 (Kafka on Kubernetes)

Date: 2019-02-04 10:36:42

Tags: kubernetes apache-kafka kafka-producer-api

I am trying to set up a basic Kafka deployment on Kubernetes. However, every time my data-producing application tries to connect to the Kafka service in K8s, I get this exception in the Kafka log:

2019-02-04 12:11:28 ERROR Sender:235 kafka-producer-network-thread | avro_data - [Producer clientId=avro_data] Uncaught error in kafka producer I/O thread: 
java.lang.IllegalStateException: No entry found for connection 1001
    at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
    at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
    at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:67)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1086)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:971)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:533)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:309)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:233)
    at java.lang.Thread.run(Thread.java:748)

Here is the producer log:

[Producer clientId=avro_data] Initialize connection to node 192.168.99.100:32092 (id: -1 rack: null) for sending metadata request
Updated cluster metadata version 2 to Cluster(id = MpP-9JVnQ4a78VTtCzTm3Q, nodes = [kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null)], partitions = [Partition(topic = avro_topic, partition = 0, leader = 1001, replicas = [1001], isr = [1001], offlineReplicas = [])], controller = kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null))
[Producer clientId=avro_data] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
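
The metadata response above shows that node 1001 is advertised as kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 rather than the NodePort address used for bootstrap. As a hedged diagnostic sketch (not part of the original question), the same advertised metadata can be printed from the client side with the Kafka AdminClient:

import java.util.Properties
import org.apache.kafka.clients.admin.AdminClient

val adminProps = new Properties()
adminProps.put("bootstrap.servers", "192.168.99.100:32092")

val admin = AdminClient.create(adminProps)
// describeCluster() returns the node list from the broker metadata, i.e. the
// advertised listener addresses, not the bootstrap address that was dialed.
admin.describeCluster().nodes().get().forEach { n =>
  println(s"id=${n.id} host=${n.host} port=${n.port}")
}
admin.close()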

What is wrong with the Kafka setup or the application's connection?

I am trying to connect to the Kafka NodePort service:

  props.put("bootstrap.servers", "192.168.99.100:32092")
    props.put("client.id", "avro_data")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
    props.put("schema.registry.url", "http://192.168.99.100:32081")

The Kafka setup is as follows:

apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  ports:
    - port: 9092
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-np
spec:
  ports:
    - port: 32092
      protocol: TCP
      targetPort: 9092
      nodePort: 32092
  selector:
    app: kafka
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka-broker
spec:
  serviceName: kafka-headless
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.0.1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-headless:2181
            - name: MINIKUBE_IP
              value: 192.168.99.100
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-broker-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://192.168.99.100:32092
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
          ports:
            - containerPort: 9092

1 Answer:

Answer 0 (score: 1):

I ran into this problem while using the bitnami Kafka and ZooKeeper images; switching to the Confluent images (version 4.0.0) solved it for me. Even though you are already using a Confluent image, try the following images/versions in your docker-compose.yml to rule out a bug in the version you are currently running:

confluentinc/cp-zookeeper:4.0.0
confluentinc/cp-kafka:4.0.0
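
A minimal docker-compose.yml sketch using those images; the listener address, port mapping, and replication-factor override are assumptions for a local single-broker setup, not taken from the question:

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.0.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:4.0.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # Single broker, so the internal topics cannot use the default replication factor of 3.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1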

https://hub.docker.com/r/confluentinc/cp-kafka

https://hub.docker.com/r/confluentinc/cp-zookeeper