Error: Pod has unbound immediate PersistentVolumeClaims

Date: 2019-08-13 10:41:58

Tags: kubernetes apache-kafka ceph kubernetes-rook

I am trying to run Kafka with Kubeless, but I get the error "pod has unbound immediate PersistentVolumeClaims". I have created a persistent volume using Rook and Ceph and am trying to use that persistent volume with the Kubeless Kafka manifests. However, when I run this, I still get "pod has unbound immediate PersistentVolumeClaims".

What am I doing wrong here?
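
For reference, a quick way to check whether the rook-block StorageClass actually exists and whether the claims generated by the StatefulSets' volumeClaimTemplates are bound (a diagnostic sketch, assuming kubectl is already pointed at the cluster):

# list the StorageClasses the cluster knows about; rook-block must show up here
kubectl get storageclass

# volumeClaimTemplates generate claims named <template>-<pod>, e.g. datadir-kafka-0
kubectl get pvc -n kubeless

# describe a Pending claim to see why it is not binding
kubectl describe pvc datadir-kafka-0 -n kubeless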

PersistentVolumeClaim for Kafka

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir
  labels:
    kubeless: kafka
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

PersistentVolumeClaim for Zookeeper

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  labels:
    kubeless: zookeeper
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Kubeless Kafka manifests

apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: kubeless/kafka-trigger-controller:v1.0.2
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: datadir
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
  volumeClaimTemplates:
  - metadata:
      name: zookeeper
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper

Error

vagrant@ubuntu-xenial:~/infra/ansible/scripts/kubeless-kafka-trigger$ kubectl get pod -n kubeless
NAME                                           READY   STATUS    RESTARTS   AGE
kafka-0                                        0/1     Pending   0          8m44s
kafka-trigger-controller-7cbd54b458-pccpn      1/1     Running   0          8m47s
kubeless-controller-manager-5bcb6757d9-nlksd   3/3     Running   0          3h34m
zoo-0                                          0/1     Pending   0          8m42s


Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x10 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times

kubectl describe pod kafka-0 -n kubeless

Name:           kafka-0
Namespace:      kubeless
Priority:       0
Node:           <none>
Labels:         controller-revision-hash=kafka-c498d7f6
                kubeless=kafka
                statefulset.kubernetes.io/pod-name=kafka-0
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  StatefulSet/kafka
Init Containers:
  volume-permissions:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chmod -R g+rwX /bitnami
    Environment:  <none>
    Mounts:
      /bitnami/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Containers:
  broker:
    Image:      bitnami/kafka:1.1.0-r0
    Port:       9092/TCP
    Host Port:  0/TCP
    Liveness:   tcp-socket :9092 delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KAFKA_ADVERTISED_HOST_NAME:  broker.kubeless
      KAFKA_ADVERTISED_PORT:       9092
      KAFKA_PORT:                  9092
      KAFKA_DELETE_TOPIC_ENABLE:   true
      KAFKA_ZOOKEEPER_CONNECT:     zookeeper.kubeless:2181
      ALLOW_PLAINTEXT_LISTENER:    yes
    Mounts:
      /bitnami/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-kafka-0
    ReadOnly:   false
  default-token-wj8vx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wj8vx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x10 over 10m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)

2 answers:

Answer 0: (score: 0)

I got it working. This may be useful for anyone facing the same issue.

This uses rook-ceph storage for the Kubeless Kafka setup:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kafka
  namespace: kubeless
  labels:
    kubeless: kafka
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper
  namespace: kubeless  
  labels:
    kubeless: zookeeper
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: kubeless
spec:
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zoo
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 9092
  - name: leader-election
    port: 3888
  selector:
    kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    kubeless: kafka-trigger-controller
  name: kafka-trigger-controller
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: kafka-trigger-controller
  template:
    metadata:
      labels:
        kubeless: kafka-trigger-controller
    spec:
      containers:
      - env:
        - name: KUBELESS_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBELESS_CONFIG
          value: kubeless-config
        image: kubeless/kafka-trigger-controller:v1.0.2
        imagePullPolicy: IfNotPresent
        name: kafka-trigger-controller
      serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kafka-controller-deployer
rules:
- apiGroups:
  - ""
  resources:
  - services
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - kubeless.io
  resources:
  - functions
  - kafkatriggers
  verbs:
  - get
  - list
  - watch
  - update
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-controller-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
  name: controller-acct
  namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kafkatriggers.kubeless.io
spec:
  group: kubeless.io
  names:
    kind: KafkaTrigger
    plural: kafkatriggers
    singular: kafkatrigger
  scope: Namespaced
  version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kubeless
spec:
  serviceName: broker
  template:
    metadata:
      labels:
        kubeless: kafka
    spec:
      containers:
      - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: broker.kubeless
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_DELETE_TOPIC_ENABLE
          value: "true"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper.kubeless:2181
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        image: bitnami/kafka:1.1.0-r0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 9092
        name: broker
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: kafka
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/kafka/data
          name: kafka
      volumes:
        - name: kafka
          persistentVolumeClaim:
            claimName: kafka  
---
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kubeless
spec:
  clusterIP: None
  ports:
  - port: 9092
  selector:
    kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zoo
  namespace: kubeless
spec:
  serviceName: zoo
  template:
    metadata:
      labels:
        kubeless: zookeeper
    spec:
      containers:
      - env:
        - name: ZOO_SERVERS
          value: server.1=zoo-0.zoo:2888:3888:participant
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        image: bitnami/zookeeper:3.4.10-r12
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      initContainers:
      - command:
        - sh
        - -c
        - chmod -R g+rwX /bitnami
        image: busybox
        imagePullPolicy: IfNotPresent
        name: volume-permissions
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: zookeeper
      volumes:
        - name: zookeeper
          persistentVolumeClaim:
            claimName: zookeeper  
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: kubeless
spec:
  ports:
  - name: client
    port: 2181
  selector:
    kubeless: zookeeper
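
Assuming the combined manifest above is saved as kafka-zookeeper.yaml (the filename is only an example), it can be applied in one step, since the PVCs are part of the same file, and the pods watched until they come up:

kubectl apply -f kafka-zookeeper.yaml
kubectl get pods -n kubeless -w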

Answer 1: (score: -1)

I ran into the same error on my minikube: I had forgotten to create the volumes for my StatefulSet.

Create the PVC. Pay attention to the storageClassName and check that it is available (I did this in the dashboard; a command-line alternative is sketched below the JSON).

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
   "name": "XXXX",
    "namespace": "kube-public",
    "labels": {
      "kubeless": "XXXX"
    }
  },
  "spec": {
    "storageClassName": "hostpath",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}
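
If the dashboard is not at hand, the available storage classes can also be listed from the command line; the storageClassName used in the claim ("hostpath" here) has to appear in the output:

kubectl get storageclass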

That gave me the persistence. Then I edited the StatefulSet:

    "volumes": [
          {
            "name": "XXX",
             "persistentVolumeClaim": {
               "claimName": "XXX"
             }
          }

I added the "persistentVolumeClaim" property, deleted the pod, and waited until the new pod was created.
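
Roughly, that last step looks like this (the pod name XXXX-0 is only a placeholder, matching the XXXX names used above):

# the StatefulSet controller recreates the pod, and the new pod mounts the pre-created claim
kubectl delete pod XXXX-0 -n kube-public
kubectl get pods -n kube-public -w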