Unable to connect to Kafka broker

Date: 2020-04-08 16:20:31

Tags: kubernetes apache-kafka confluent-platform

I have deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on an on-prem k8s cluster. I am trying to expose it using the TCP services of the nginx ingress controller.

My nginx TCP ConfigMap looks like

data:
  "<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092

and I have made the corresponding entries in my nginx ingress controller:

  - name: <zookeper-tcp-port>-tcp
    port: <zookeper-tcp-port>
    protocol: TCP
    targetPort: <zookeper-tcp-port>-tcp
  - name: <kafka-tcp-port>-tcp
    port: <kafka-tcp-port>
    protocol: TCP
    targetPort: <kafka-tcp-port>-tcp

Now I am trying to connect to my Kafka instance. When I try to connect to the IP and port using Kafka Tool, I get the error message

Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]

After entering what I assume to be the correct broker address (I have tried them all...), I get a timeout. There are no logs from the nginx controller except

[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001

From the kafka-zookeeper-0 pod I get loads of
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port>  (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)

though I am not sure whether these are related?

Any ideas on what I am doing wrong? Thanks in advance.

1 Answer:

Answer 0 (score: 1):

TL;DR:

  • Change the value of nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
  • Change the service name and port in your TCP NGINX ConfigMap and in the ingress controller.
  • On your Kafka tool, set bootstrap-server to <Cluster_External_IP>:31090

Explanation:

A headless service is created alongside the StatefulSet. The created service is not given a clusterIP; instead, it simply contains a list of Endpoints. These Endpoints are then used to generate instance-specific DNS records of the form <StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local

It creates a DNS name for each pod, e.g.:

[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
  • This is what allows these services to connect to each other inside the cluster.
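
For reference, a minimal sketch of what such a headless service looks like (the name matches the nslookup above; the selector label is an assumption, as the chart's actual labels may differ):

apiVersion: v1
kind: Service
metadata:
  name: my-confluent-cp-kafka-headless
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly, no virtual IP
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: cp-kafka          # assumed label; must match the StatefulSet's pod labels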

I went through a lot of trial and error until I realized how it is supposed to work. Based on your TCP nginx ConfigMap, I believe you are running into the same issue.

  • The nginx ConfigMap expects: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>"
  • I realized you don't need to expose Zookeeper, since it is an internal service and handled by the Kafka brokers.
  • I also realized you were trying to expose cp-kafka:9092, which is the headless service, also only used internally, as explained above.
  • In order to get external access you have to set the parameter nodeport.enabled to true as stated here: External Access Parameters.
  • It adds one service for each kafka-N pod during chart deployment.
  • Then you change your ConfigMap to map to one of them:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090

Note that the created service has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0, which is how the service identifies the pod it is intended to connect to.
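
For illustration, this is roughly what that generated service looks like (a sketch inferred from the kubectl get svc output further down and the Zookeeper service model at the end; the exact labels may differ):

apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - name: external-broker
    nodePort: 31090        # port opened on every cluster node
    port: 19092            # nodeport.servicePort from values.yaml
    protocol: TCP
    targetPort: 31090      # the broker's external listener port
  selector:
    app: cp-kafka
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0   # pins the service to broker 0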

  • Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
  • Point your Kafka tool to <Cluster_External_IP>:31090

Reproduction:
  • Snippet edited in cp-kafka/values.yaml:

nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
  • Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
demo-cp-control-center-6d79ddd776-ktggw    1/1     Running   3          113s
demo-cp-kafka-0                            2/2     Running   1          113s
demo-cp-kafka-1                            2/2     Running   0          94s
demo-cp-kafka-2                            2/2     Running   0          84s
demo-cp-kafka-connect-79689c5c6c-947c4     2/2     Running   2          113s
demo-cp-kafka-rest-56dfdd8d94-79kpx        2/2     Running   1          113s
demo-cp-ksql-server-c498c9755-jc6bt        2/2     Running   2          113s
demo-cp-schema-registry-5f45c498c4-dh965   2/2     Running   3          113s
demo-cp-zookeeper-0                        2/2     Running   0          112s
demo-cp-zookeeper-1                        2/2     Running   0          93s
demo-cp-zookeeper-2                        2/2     Running   0          74s

$ kubectl get svc
NAME                         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
demo-cp-control-center       ClusterIP   10.0.13.134   <none>        9021/TCP            50m
demo-cp-kafka                ClusterIP   10.0.15.71    <none>        9092/TCP            50m
demo-cp-kafka-0-nodeport     NodePort    10.0.7.101    <none>        19092:31090/TCP     50m
demo-cp-kafka-1-nodeport     NodePort    10.0.4.234    <none>        19092:31091/TCP     50m
demo-cp-kafka-2-nodeport     NodePort    10.0.3.194    <none>        19092:31092/TCP     50m
demo-cp-kafka-connect        ClusterIP   10.0.3.217    <none>        8083/TCP            50m
demo-cp-kafka-headless       ClusterIP   None          <none>        9092/TCP            50m
demo-cp-kafka-rest           ClusterIP   10.0.14.27    <none>        8082/TCP            50m
demo-cp-ksql-server          ClusterIP   10.0.7.150    <none>        8088/TCP            50m
demo-cp-schema-registry      ClusterIP   10.0.7.84     <none>        8081/TCP            50m
demo-cp-zookeeper            ClusterIP   10.0.9.119    <none>        2181/TCP            50m
demo-cp-zookeeper-headless   ClusterIP   None          <none>        2888/TCP,3888/TCP   50m
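
Note how each broker gets its own NodePort service, starting at firstListenerPort and increasing with the pod ordinal (31090, 31091, 31092). If you want to read a single broker's external port programmatically, something like this should work (a hedged example using standard kubectl jsonpath):

$ kubectl get svc demo-cp-kafka-0-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
31090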
  • Create the TCP ConfigMap:
$ cat nginx-tcp-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  31090: "default/demo-cp-kafka-0-nodeport:31090"

$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
  • Edit the nginx ingress controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system

$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
        ports:
        - containerPort: 31090
          hostPort: 31090
          protocol: TCP
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
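
One caveat: the controller only watches the tcp-services ConfigMap if it was started with the --tcp-services-configmap flag. If your deployment is missing it, add it to the container args (a sketch, assuming the ConfigMap name and namespace used above):

        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=kube-system/tcp-services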
  • My ingress is on IP 35.226.189.123. Now let's try to connect from outside the cluster. For that I'll connect to another VM where I have minikube, so I can test with a kafka-client pod:
user@minikube:~$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
kafka-client   1/1     Running   0          17h

user@minikube:~$ kubectl exec kafka-client -it -- bin/bash

root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/# 

As you can see, I was able to reach Kafka from outside the cluster.
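
In case you also want to test writes from outside, the message consumed above can be produced the same way; a quick sketch from the same kafka-client pod (same topic as above):

root@kafka-client:/# kafka-console-producer --broker-list 35.226.189.123:31090 --topic demo-topic
>hello from outside the cluster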

  • In case you also need external access to Zookeeper, I'll leave you the service model:

zookeeper-external-0.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
  • It will create a service for it:
NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
demo-cp-zookeeper-0-nodeport   NodePort    10.0.5.67     <none>        12181:31181/TCP     2s
  • Patch your ConfigMap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
  • Add the ingress rule:
        ports:
        - containerPort: 31181
          hostPort: 31181
          protocol: TCP
  • Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
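
Once connected, you can verify that the brokers are registered in Zookeeper, for example (output depends on your deployment):

ls /brokers/ids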

If you have any questions, let me know in the comments!