GKE Persistent Volume does not keep data

Posted: 2020-09-13 18:55:59

Tags: kubernetes google-kubernetes-engine

I have created a persistent volume and a volume claim for an application I'm running in GKE. The claim and the storage appear to be set up correctly, but the data does not persist when the Pod is restarted. Initially I can save data and see the file inside the Pod, but after a restart the file is gone.

I asked this question before without including my .yaml files and received some generic answers, so I decided to repost with the .yaml files in the hope that someone can look at them and tell me where I'm going wrong. From everything I can see, the problem seems to be in the persistent volume, because the claim looks just like everyone else's.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-api-meta-uploads-k8s
  namespace: default
  resourceVersion: "4500192"
  selfLink: /apis/apps/v1/namespaces/default/deployments/prod-api-meta-uploads-k8s
  uid: *******
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prod-api-meta-uploads-k8s
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        gcb-build-id: *****
        gcb-trigger-id: ****
      creationTimestamp: null
      labels:
        app: prod-api-meta-uploads-k8s
        app.kubernetes.io/managed-by: gcp-cloud-build-deploy
        app.kubernetes.io/name: prod-api-meta-uploads-k8s
        app.kubernetes.io/version: becdb864864f25d2dcde2e62a2f70501cfd09f19
    spec:
      containers:
      - image: bitbucket.org/api-meta-uploads-k8s@sha256:7766413c0d
        imagePullPolicy: IfNotPresent
        name: prod-api-meta-uploads-k8s-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /uploads/profileImages
          name: uploads-volume-prod
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: uploads-volume-prod
        persistentVolumeClaim:
          claimName: my-disk-claim-1
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-09-08T21:00:40Z"
    lastUpdateTime: "2020-09-10T04:54:27Z"
    message: ReplicaSet "prod-api-meta-uploads-k8s-5c8f66f886" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-09-10T06:49:41Z"
    lastUpdateTime: "2020-09-10T06:49:41Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 36
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

** Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-09-09T16:12:51Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: uploads-volume-prod
  namespace: default
  resourceVersion: "4157429"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/uploads-volume-prod
  uid: f93e6134
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-f93e6
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  phase: Bound

*** PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: my-disk-claim-1
  namespace: default
  resourceVersion: "4452471"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/my-disk-claim-1
  uid: d533702b
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: fast
  volumeMode: Filesystem
  volumeName: pvc-d533702b
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound

1 Answer:

Answer 0 (score: 0)

When working with GKE you don't need to manually prepare a PersistentVolume and a PersistentVolumeClaim in a 1:1 relationship (static provisioning), because GKE can use Dynamic Volume Provisioning. This is well described in Persistent Volumes:

When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC.
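As a minimal illustration of dynamic provisioning (the claim name example-claim is hypothetical, not from your manifests), a PVC that omits storageClassName is served entirely by the cluster's default StorageClass on GKE:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim        # hypothetical name, for illustration only
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # no storageClassName: the default StorageClass (standard on GKE) is used and
  # a matching PV backed by a GCE persistent disk is created automatically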

In GKE you have at least one StorageClass from the start, named standard, which also has (default) next to its name:

$ kubectl get sc
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   110m

This means that if you do not specify a storageClassName in your PersistentVolumeClaim, it will use the storageclass that is set as default. In your YAMLs I can see that you used storageClassName: standard. If you check this storageclass, you will see that its ReclaimPolicy is set to Delete. The output looks like this:

$ kubectl describe sc standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

IsDefaultClass: indicates that this storageclass is set as the default.

ReclaimPolicy: defines the ReclaimPolicy, in this case Delete.

Since the ReclaimPolicy is set to Delete:

For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created.
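For a volume that has already been provisioned, the reclaim policy can be changed on the PV object itself. A sketch of such a patch, using the volumeName pvc-d533702b shown in your claim above (substitute the PV actually bound to your claim):

$ kubectl patch pv pvc-d533702b -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

This only changes what happens to the existing disk when its claim is deleted; it does not change the StorageClass used for future claims.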

Depending on your needs, you can use:

Recycle

If supported by the underlying volume plugin, the Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim. However, keep in mind: Warning: The Recycle reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning. I am adding this option because I did not see which K8s version you are using, but GKE does not support it.

Retain

The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.

In addition, GKE supports only Delete and Retain:

The StorageClass "another-storageclass" is invalid: reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"

Also, as you specified revisionHistoryLimit: 10, the pod will be recreated after 10 restarts, and in that scenario the PV bound to it will be removed as well.

Solution

As the easiest solution, you should create a new StorageClass with a ReclaimPolicy different from Delete, and use that StorageClass in your PVC.
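A hedged sketch of what that could look like (the class name fast-retain and the claim name my-disk-claim-retain are illustrative, not taken from your manifests): a GCE PD StorageClass whose reclaimPolicy is Retain, referenced from the claim.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-retain                  # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                       # or pd-standard, depending on the disk type you need
reclaimPolicy: Retain                # the PV and its GCE disk survive deletion of the PVC
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-disk-claim-retain         # illustrative name
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-retain      # use the new class instead of standard / fast
  resources:
    requests:
      storage: 50Gi

After applying these with kubectl apply -f, point the Deployment's claimName at the new PVC. Note that data already written to a volume provisioned by a Delete-policy class is not migrated automatically; it would have to be copied over.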