How do I specify advertised listeners for a multi-broker Kafka setup on Kubernetes and expose the cluster externally?

Asked: 2017-08-01 22:18:12

Tags: apache-kafka kubernetes apache-zookeeper messagebroker

I am trying to set up multi-broker Kafka on a Kubernetes cluster hosted in Azure. I have a single-broker setup working. For the multi-broker setup, I currently have a ZooKeeper ensemble of three nodes managing the Kafka service, and I am deploying the Kafka cluster as a replication controller with a replication factor of 3, i.e. three brokers. How can I register the three brokers with ZooKeeper so that each one registers a different IP address?

After deploying the service I brought up the replication controller, and in the replication controller YAML I used the Service's cluster IP to specify the two advertised.listeners, one for SSL and one for PLAINTEXT. In that case, however, all brokers register with the same IP and writes to replicas fail. I don't want to deploy each broker as a separate replication controller/pod plus service, because scaling then becomes a problem. I would really appreciate any thoughts/ideas.
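Roughly, the configuration described above amounts to something like the following broker settings (the cluster IP shown is just a placeholder); because all three brokers advertise the same Service IP, ZooKeeper ends up with three registrations behind one address, and inter-broker replication traffic cannot reliably reach the intended broker:

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
# Hypothetical values: every broker advertises the single Service cluster IP.
advertised.listeners=PLAINTEXT://10.0.0.100:9092,SSL://10.0.0.100:9093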

Edit 1:

I am also trying to expose the cluster to another VPC in the cloud. I have to expose the SSL and PLAINTEXT ports to clients via advertised.listeners. If I use a StatefulSet with a replication factor of 3 and let Kubernetes expose the pods' canonical hostnames as the hostnames, external clients cannot resolve those names. The only way I have gotten this to work is by exposing an external service corresponding to each broker, but that does not scale (see the sketch below).
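For illustration, one such per-broker external Service might look roughly like this (the name and ports are assumptions; newer Kubernetes versions add a statefulset.kubernetes.io/pod-name label to each StatefulSet pod, which is what lets a Service target a single broker). One of these is needed for every broker, which is why the approach scales poorly.

apiVersion: v1
kind: Service
metadata:
  name: kafka-0-external   # hypothetical name; one Service per broker
spec:
  type: LoadBalancer
  selector:
    # Per-pod label set by the StatefulSet controller on newer Kubernetes versions.
    statefulset.kubernetes.io/pod-name: kafka-0
  ports:
  - name: plaintext
    port: 9092
    targetPort: 9092
  - name: ssl
    port: 9093
    targetPort: 9093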

1 Answer:

Answer 0 (score: 1):

Kubernetes has the concept of StatefulSets to solve exactly these problems. Each instance of a StatefulSet gets its own stable DNS name, so you can refer to each instance by that name.

The concept is described in more detail here. You can also take a look at this complete example:

apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  jvm.heap: "2G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v1
        resources:
          requests:
            memory: "4Gi"
            cpu: "1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: sync
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
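Beyond the ZooKeeper manifests above, the same StatefulSet pattern can be applied to the Kafka brokers themselves: each pod gets a stable DNS name such as kafka-0.kafka-headless, which it can advertise so that every broker registers a distinct address with ZooKeeper. The sketch below is only an outline under that assumption; the image name is a placeholder, SSL keystore settings are omitted, and it relies on the standard kafka-server-start.sh --override flags.

apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: plaintext
  - port: 9093
    name: ssl
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: your-kafka-image   # placeholder; any image shipping the standard Kafka scripts
        ports:
        - containerPort: 9092
          name: plaintext
        - containerPort: 9093
          name: ssl
        command:
        - sh
        - -c
        - |
          # Each pod resolves to a stable, unique FQDN such as
          # kafka-0.kafka-headless.default.svc.cluster.local, so it can be
          # advertised directly and each broker registers its own address.
          FQDN="$(hostname -f)"
          exec kafka-server-start.sh /etc/kafka/server.properties \
            --override "broker.id=${HOSTNAME##*-}" \
            --override "zookeeper.connect=zk-0.zk-headless:2181,zk-1.zk-headless:2181,zk-2.zk-headless:2181" \
            --override "listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093" \
            --override "advertised.listeners=PLAINTEXT://${FQDN}:9092,SSL://${FQDN}:9093"

Note that external clients still need a route to those DNS names, for example via per-broker Services or VPC peering that can resolve the cluster DNS; that part is outside what the StatefulSet itself provides.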