Is the storageClass kubernetes.io/no-provisioner applicable for a multi-node cluster?

Asked: 2018-06-12 08:42:26

Tags: kubernetes local-storage kubeadm kubernetes-pvc kubernetes-statefulset

Cluster: 1 master, 2 workers

I am deploying a StatefulSet with 3 replicas using local volumes (PVs with the kubernetes.io/no-provisioner storageClass). 2 PVs were created, one on each of the two worker nodes.

Expectation: the pods will be scheduled across both workers and will share the same volume.

Result: all 3 stateful pods were created on a single worker node. YAML:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-1
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node1 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-2
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node2

---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: test-headless
    port: 8000
  clusterIP: None
  selector:
    app: test
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app: test
spec:
  ports:
  - name: test
    port: 8000
    protocol: TCP
    nodePort: 30063
  type: NodePort
  selector:
    app: test

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-stateful
spec:
  selector:
    matchLabels:
      app: test
  serviceName: test
  replicas: 3
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: container-1
        image: <Image-name>
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8000
        volumeMounts:
        - name: localvolume 
          mountPath: /tmp/
      volumes:
      - name: localvolume
        persistentVolumeClaim:
          claimName: example-local-claim
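
Note: every replica above mounts the single claim example-local-claim through spec.template.spec.volumes, so all pods share whichever PV that claim binds to. The StatefulSet-native alternative is volumeClaimTemplates, which gives each replica its own claim; a minimal sketch of that fragment (same 1Gi request as the claim above, kept for illustration):

  # Sits under the StatefulSet spec (sibling of template); the
  # volumes/persistentVolumeClaim entry in the pod template is then removed,
  # while the existing volumeMounts entry keeps matching by name.
  # The controller creates one PVC per replica, named
  # localvolume-test-stateful-<ordinal>, each binding to a separate local PV.
  volumeClaimTemplates:
  - metadata:
      name: localvolume
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage
      resources:
        requests:
          storage: 1Gi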

1 Answer:

Answer 0 (score: 1):

This is because Kubernetes by itself does not care about distribution. It does, however, have a mechanism for requesting a specific distribution, called Pod Affinity. To distribute the pods across all workers, you can use Pod Anti-Affinity. Moreover, you can use soft anti-affinity (the differences I explain here), which is not strict and allows all of your pods to be spawned. For example, the StatefulSet will look like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3 
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: app-name
        image: k8s.gcr.io/super-app:0.8
        ports:
        - containerPort: 21
          name: web

This StatefulSet will try to spawn each pod on a separate worker. With the required (hard) rule shown above, pods that cannot get a worker of their own stay Pending; with the soft variant sketched below, they are instead spawned on a node where a pod already exists.
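
A minimal sketch of that soft variant, replacing only the affinity block of the StatefulSet above (the weight of 100 is an arbitrary choice within the allowed 1-100 range; the labels are assumed unchanged):

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100              # strongest preference for spreading
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - my-app
              topologyKey: kubernetes.io/hostname

With this rule the scheduler prefers nodes that do not yet run an app: my-app pod, but still places a pod on an already-used worker when no free one is available, so all replicas come up.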