TimeoutException: Timeout expired while fetching topic metadata (Kafka)

Asked: 2019-01-18 13:09:20

Tags: kubernetes apache-kafka apache-zookeeper confluent-schema-registry

I have been trying to deploy Kafka with a Schema Registry locally using Kubernetes. However, the logs of the Schema Registry pod show this error message:

ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

What could be causing this behavior? To run Kubernetes locally, I am using Minikube v0.32.0 with Kubernetes v1.13.0.

My Kafka configuration:

apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
    - name: client
      port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
        - name: kafka-data
          emptyDir: {}
      containers:
        - name: server
          image: confluent/kafka:0.10.0.0-cp1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-1:2181
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-1
            - name: KAFKA_BROKER_ID
              value: "1"
          ports:
            - containerPort: 9092
          volumeMounts:
            - mountPath: /var/lib/kafka
              name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
    - name: client
      port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
        - name: kafka-schema-registry
          image: confluent/schema-registry:3.0.0
          env:
            - name: SR_KAFKASTORE_CONNECTION_URL
              value: zookeeper-1:2181
            - name: SR_KAFKASTORE_TOPIC
              value: "_schema_registry"
            - name: SR_LISTENERS
              value: "http://0.0.0.0:8081"
          ports:
            - containerPort: 8081

ZooKeeper configuration:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: elevy/zookeeper:v3.4.7
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper-1"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: /zookeeper/data
              name: data
            - mountPath: /zookeeper/wal
              name: wal
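Assuming these manifests are applied as-is, one way to narrow down such a timeout is to check name resolution and port reachability from inside the schema-registry pod. A sketch (service and deployment names taken from the manifests above; `nslookup` and `nc` must exist in the container image, otherwise swap in `getent hosts` or a separate busybox debug pod):

```shell
# Resolve the ZooKeeper and Kafka service names from inside the cluster
kubectl exec deploy/kafka-schema-registry -- nslookup zookeeper-1
kubectl exec deploy/kafka-schema-registry -- nslookup kafka-1

# Check that the broker and ZooKeeper ports actually accept connections
kubectl exec deploy/kafka-schema-registry -- nc -zv kafka-1 9092
kubectl exec deploy/kafka-schema-registry -- nc -zv zookeeper-1 2181
```

If the hostname does not resolve or the port is closed, the metadata fetch will time out regardless of the broker configuration.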

5 Answers:

Answer 0 (score: 1)

There are two common reasons why Kafka fails to fetch topic metadata:

Reason 1: The bootstrap server is not accepting your connection, which can be caused by a proxy issue such as a VPN or a server-level security group.

Reason 2: A security-protocol mismatch, e.g. the broker expects SASL_SSL while the client actually uses SSL, or the other way around; the same applies to PLAINTEXT.
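For the second case, a sketch of the matching client-side settings in a producer/consumer properties file (all values here are placeholders, not taken from the question):

```properties
# Must match the protocol of the broker listener you connect to
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
```

The `security.protocol` value on the client has to agree with the listener the broker advertises; if they disagree, the handshake silently stalls and surfaces as this metadata timeout.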

Answer 1 (score: 0)

org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
can occur when trying to connect to a broker that expects SSL connections while the client configuration is missing

security.protocol=SSL 

Answer 2 (score: 0)

I once fixed this by restarting my machine, but when it happened again I did not want to reboot, so I fixed it with this property in the server.properties file:

advertised.listeners=PLAINTEXT://localhost:9092
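In a Kubernetes setup like the one in the question, `localhost` would not work across pods; the advertised listener would instead point at the Kafka Service name. A sketch (service name `kafka-1` taken from the manifests in the question):

```properties
# What the broker binds to inside the pod
listeners=PLAINTEXT://0.0.0.0:9092
# What clients are told to connect back to; must be resolvable by those clients
advertised.listeners=PLAINTEXT://kafka-1:9092
```

If `advertised.listeners` points at an address the client cannot reach, the initial bootstrap connection succeeds but the subsequent metadata-driven connections time out, producing exactly this exception.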

Answer 3 (score: 0)

For anyone else who runs into this: it can also happen because the topic was never created on the Kafka broker. So make sure to create the appropriate topics on the server as described in your code base.
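For example, the schema topic from the question (`_schema_registry`) could be created manually with the Kafka CLI. A sketch, assuming the topic tools are on the PATH inside the broker container and connect via the `zookeeper-1` service from the manifests:

```shell
kubectl exec deploy/kafka-1 -- kafka-topics --create \
  --zookeeper zookeeper-1:2181 \
  --topic _schema_registry \
  --partitions 1 \
  --replication-factor 1

# Verify the topic exists
kubectl exec deploy/kafka-1 -- kafka-topics --list \
  --zookeeper zookeeper-1:2181
```

(On newer Kafka versions the `--zookeeper` flag is replaced by `--bootstrap-server kafka-1:9092`.)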

Answer 4 (score: 0)

I hit the same issue even though all the SSL configuration and topics were in place. After a long investigation I enabled Spring debug logging; the underlying error turned out to be org.springframework.jdbc.CannotGetJdbcConnectionException. Other threads mentioned that mismatched Spring Boot and Kafka dependency versions can cause this timeout exception, so I upgraded Spring Boot from 2.1.3 to 2.2.4. Now there are no errors and the Kafka connection succeeds. This may be useful to someone.
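If the project uses Maven, the version bump described above amounts to updating the Spring Boot parent so that spring-kafka and kafka-clients line up again; a sketch of the relevant pom.xml fragment:

```xml
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.2.4.RELEASE</version>
</parent>
```

Letting the Spring Boot BOM manage the spring-kafka and kafka-clients versions (rather than pinning them separately) avoids this class of mismatch.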