How do I deploy Kubernetes Persistent Volumes in the appropriate zone?

Date: 2017-04-28 16:14:55

Tags: kubernetes

I'm running a Kubernetes 1.6.2 cluster on three nodes in different zones on GKE, and I'm trying to deploy my StatefulSet, where each pod in the StatefulSet gets a PV attached. The problem is that Kubernetes is creating the PVs in a zone where I don't have any nodes!

$ kubectl describe node gke-multi-consul-default-pool-747c9378-zls3|grep 'zone=us-central1'
            failure-domain.beta.kubernetes.io/zone=us-central1-a
$ kubectl describe node gke-multi-consul-default-pool-7e987593-qjtt|grep 'zone=us-central1'
            failure-domain.beta.kubernetes.io/zone=us-central1-f
$ kubectl describe node gke-multi-consul-default-pool-8e9199ea-91pj|grep 'zone=us-central1'
            failure-domain.beta.kubernetes.io/zone=us-central1-c

$ kubectl describe pv pvc-3f668058-2c2a-11e7-a7cd-42010a8001e2|grep 'zone=us-central1'
        failure-domain.beta.kubernetes.io/zone=us-central1-b

The standard storage class I'm using has no default zone set:

$ kubectl describe storageclass standard
Name:       standard
IsDefaultClass: Yes
Annotations:    storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:    kubernetes.io/gce-pd
Parameters: type=pd-standard
Events:     <none>

So I thought the scheduler would automatically provision the volumes in a zone where the cluster has nodes, but it doesn't appear to be doing that.

For reference, here is the YAML for my StatefulSet:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "{{ template "fullname" . }}"
  labels:
    heritage: {{.Release.Service | quote }}
    release: {{.Release.Name | quote }}
    chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    component: "{{.Release.Name}}-{{.Values.Component}}"
spec:
  serviceName: "{{ template "fullname" . }}"
  replicas: {{default 3 .Values.Replicas}}
  template:
    metadata:
      name: "{{ template "fullname" . }}"
      labels:
        heritage: {{.Release.Service | quote }}
        release: {{.Release.Name | quote }}
        chart: "{{.Chart.Name}}-{{.Chart.Version}}"
        component: "{{.Release.Name}}-{{.Values.Component}}"
        app: "consul"
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: "{{ template "fullname" . }}"
        image: "{{.Values.Image}}:{{.Values.ImageTag}}"
        imagePullPolicy: "{{.Values.ImagePullPolicy}}"
        ports:
        - name: http
          containerPort: {{.Values.HttpPort}}
        - name: rpc
          containerPort: {{.Values.RpcPort}}
        - name: serflan-tcp
          protocol: "TCP"
          containerPort: {{.Values.SerflanPort}}
        - name: serflan-udp
          protocol: "UDP"
          containerPort: {{.Values.SerflanUdpPort}}
        - name: serfwan-tcp
          protocol: "TCP"
          containerPort: {{.Values.SerfwanPort}}
        - name: serfwan-udp
          protocol: "UDP"
          containerPort: {{.Values.SerfwanUdpPort}}
        - name: server
          containerPort: {{.Values.ServerPort}}
        - name: consuldns
          containerPort: {{.Values.ConsulDnsPort}}
        resources:
          requests:
            cpu: "{{.Values.Cpu}}"
            memory: "{{.Values.Memory}}"
        env:
        - name: INITIAL_CLUSTER_SIZE
          value: {{ default 3 .Values.Replicas | quote }}
        - name: STATEFULSET_NAME
          value: "{{ template "fullname" . }}"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: STATEFULSET_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/consul
        - name: gossip-key
          mountPath: /etc/secrets
          readOnly: true
        - name: config
          mountPath: /etc/consul
        - name: tls
          mountPath: /etc/tls
        lifecycle:
          preStop:
            exec:
              command:
                - /bin/sh
                - -c
                - consul leave
        livenessProbe:
          exec:
            command:
            - consul
            - members
          initialDelaySeconds: 300
          timeoutSeconds: 5
        command:
          - "/bin/sh"
          - "-ec"
          - "/tmp/consul-start.sh"
      volumes:
      - name: config
        configMap:
          name: consul
      - name: gossip-key
        secret:
          secretName: {{ template "fullname" . }}-gossip-key
      - name: tls
        secret:
          secretName: consul
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
      {{- if .Values.StorageClass }}
        volume.beta.kubernetes.io/storage-class: {{.Values.StorageClass | quote}}
      {{- else }}
        volume.alpha.kubernetes.io/storage-class: default
      {{- end }}
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          # upstream recommended max is 700M
          storage: "{{.Values.Storage}}"

3 Answers:

Answer 0 (score: 1)

There is a bug open for this issue here.

In the meantime, the workaround is to set the zones parameter in your StorageClass to specify the exact zones where your Kubernetes cluster has nodes. Here is an example.
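A minimal sketch of such a StorageClass, assuming the node zones shown in the question (us-central1-a, us-central1-c, us-central1-f) and that your cluster version supports the plural zones parameter (older provisioners may only accept a single zone); the class name is illustrative:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-zoned
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  # Comma-separated list of zones that actually have cluster nodes;
  # the provisioner will only create disks in these zones.
  zones: us-central1-a,us-central1-c,us-central1-f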

Answer 1 (score: 0)

Answering from the Kubernetes documentation on persistent volumes (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#gce): "zone: GCE zone. If not specified, a random zone in the same region as controller-manager will be chosen." I guess your controller-manager is in the us-central-1 region, so any zone in that region can be chosen. In your case, I guess the only zone not covered by a node is us-central-1b, so you would either have to start a node there as well, or set the zone in the StorageClass resource.

Answer 2 (score: 0)

You can create storage classes for each zone, and then your PV/PVC can specify that storage class. Your stateful sets/deployments can be set up to target specific nodes via nodeSelector, so they are always scheduled onto a node in a specific zone (see the built-in node labels); a sketch of this appears at the end of this answer.

storage_class.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: us-central-1a
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

persistent_volume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: us-central-1a
  # A manually created PV also needs a volume source; the disk name here
  # is illustrative and must refer to an existing GCE persistent disk.
  gcePersistentDisk:
    pdName: some-existing-disk
    fsType: ext4

Note that with Kubernetes 1.6 you can use storageClassName; otherwise the annotation volume.beta.kubernetes.io/storage-class also works (but it will be deprecated in the future).
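Tying this back to the StatefulSet in the question, a minimal sketch (names and image are illustrative, assuming Kubernetes 1.6 and the us-central-1a class above) of pinning both the pods and their volumes to one zone:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 3
  template:
    metadata:
      labels:
        app: consul
    spec:
      # Schedule pods only onto nodes in us-central1-a, using the
      # built-in zone label shown in the question's node output
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-central1-a
      containers:
      - name: consul
        image: consul   # illustrative; use your chart's image values
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      # With Kubernetes 1.6 the claim can reference the zone-specific
      # class directly instead of using the beta annotation
      storageClassName: us-central-1a
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi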