Kubernetes persistent volume on a GCE disk

Date: 2021-01-20 05:59:26

Tags: kubernetes google-cloud-platform google-compute-engine disk persistent

I created a GCE disk, used it to create a PersistentVolume, and successfully bound a claim to that PV. But when I deploy the pod, it gives me an error. Details below.

$ gcloud compute disks list

NAME                  LOCATION           LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
test-kubernetes-disk  asia-southeast1-a  zone            200      pd-standard  READY

pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: /test-pd
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage:  200Gi
  storageClassName: fast
  gcePersistentDisk:
    pdName: test-kubernetes-disk
    fsType: ext4

pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:  1Gi
  storageClassName: fast

Here are the events from the pod.

Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    12m   default-scheduler  Successfully assigned default/mypod to worker-0
  Warning  FailedMount  9m6s  kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-r4b3f35b2b0354f26ba64375388054054.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount  6m52s  kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-ra8fb00a02d6145fa9c54e88adf81e942.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount  5m52s (x2 over 8m9s)  kubelet, worker-0  Unable to attach or mount volumes: unmounted volumes=[mypd], unattached volumes=[default-token-s82xz mypd]: timed out waiting for the condition
  Warning  FailedMount  4m35s                 kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-rf86d063bc5e44878831dc2734575e9cf.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount  2m18s  kubelet, worker-0  MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-rb9edbe05f62449d0aa0d5ed8bedafb29.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.
  Warning  FailedMount         80s (x3 over 10m)  kubelet, worker-0        Unable to attach or mount volumes: unmounted volumes=[mypd], unattached volumes=[mypd default-token-s82xz]: timed out waiting for the condition
  Warning  FailedAttachVolume  8s (x5 over 11m)   attachdetach-controller  AttachVolume.NewAttacher failed for volume "pv-gce" : Failed to get GCE GCECloudProvider with error <nil>
  Warning  FailedMount         3s                 kubelet, worker-0        MountVolume.SetUp failed for volume "pv-gce" : mount of disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce --scope -- mount  -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce
Output: Running scope as unit: run-r5290d9f978834d4681966a40c3f535fc.scope
mount: /var/lib/kubelet/pods/5ea05129-f32c-46f3-9658-2e5e0afc29af/volumes/kubernetes.io~gce-pd/pv-gce: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/test-kubernetes-disk does not exist.

$ kubectl get pv

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-gce   200Gi      RWO            Retain           Bound    default/myclaim   fast                    23m

$ kubectl get pvc

NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound    pv-gce   200Gi      RWO            fast           22m

Please help me resolve this issue.

4 Answers:

Answer 0 (score: 0)

Your PV spec is missing a claimRef. Adding a claimRef field to the PV will bind it to the intended PVC.

Also make sure the PV and the pod are in the same zone. GCE persistent disks are zonal resources, so a pod can only request a Persistent Disk that is located in its own zone.

Try applying these:

pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gce
spec:
  claimRef:
    name: myclaim
  accessModes:
    - ReadWriteOnce
  capacity:
    storage:  200Gi
  storageClassName: fast
  gcePersistentDisk:
    pdName: msales-kubernetes-disk
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - australia-southeast1-a
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - australia-southeast1
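
Before pinning the PV with nodeAffinity like this, it is worth checking which topology labels your nodes actually carry, so the zone and region values above match reality; on older clusters the deprecated beta label names may still be in use. A quick check using kubectl's `-L` flag, which prints label values as extra columns:

```shell
# Show each node's zone/region labels so the nodeAffinity values match reality
kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region

# Older clusters may still use the deprecated beta labels instead
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```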

pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage:  200Gi
  storageClassName: fast

The StorageClass should look like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none

The pod should look like this:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - australia-southeast1-a
          - key: topology.kubernetes.io/region
            operator: In
            values:
            - australia-southeast1
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: /test-pd
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

Answer 1 (score: 0)

@Emon, here is the output of the disk description.

$ gcloud compute disks describe test-kubernetes-disk
creationTimestamp: '2021-01-19T18:03:01.982-08:00'
id: '5437882943050232250'
kind: compute#disk
labelFingerprint: 42WmSpB8rSM=
lastAttachTimestamp: '2021-01-19T21:41:26.170-08:00'
lastDetachTimestamp: '2021-01-19T21:46:38.814-08:00'
name: test-kubernetes-disk
physicalBlockSizeBytes: '4096'
selfLink: https://www.googleapis.com/compute/v1/projects/test-01/zones/asia-southeast1-a/disks/test-kubernetes-disk
sizeGb: '200'
status: READY
type: https://www.googleapis.com/compute/v1/projects/test-01/zones/asia-southeast1-a/diskTypes/pd-standard
zone: https://www.googleapis.com/compute/v1/projects/test-01/zones/asia-southeast1-a

Answer 2 (score: 0)

Could you retry? Just delete everything first.

Then follow these steps:

gcloud compute disks create --size=200GB --zone=australia-southeast1-a msales-kubernetes-disk

Then apply this:

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /test-pd
      name: mypd
  volumes:
  - name: mypd
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName:  msales-kubernetes-disk
      fsType: ext4

With this approach you don't need to worry about the PV and PVC.
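
Once the pod reaches Running, a quick sanity check (assuming the pod name `test-pd` from the manifest above) is to confirm the disk is actually mounted and writable:

```shell
# Confirm the GCE PD is mounted inside the container at /test-pd
kubectl exec test-pd -- df -h /test-pd

# Confirm the mount is writable
kubectl exec test-pd -- sh -c 'touch /test-pd/hello && ls /test-pd'
```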

Answer 3 (score: 0)

@Emon, the problem still exists. I just deleted everything: the disk, the pods, the PV, the PVC, and the StorageClass. Then I created a new disk and applied only the pod.yml you provided.

$ kubectl describe pod test-pd
Name:         test-pd
Namespace:    default
Priority:     0
Node:         worker-0/10.240.0.20
Start Time:   Thu, 21 Jan 2021 06:18:00 +0000
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  myfrontend:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test-pd from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s82xz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mypd:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     test-kubernetes-disk
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-s82xz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s82xz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age   From                     Message
  ----     ------              ----  ----                     -------
  Normal   Scheduled           59s   default-scheduler        Successfully assigned default/test-pd to worker-0
  Warning  FailedAttachVolume  8s    attachdetach-controller  AttachVolume.NewAttacher failed for volume "mypd" : Failed to get GCE GCECloudProvider with error <nil>

By the way, are you sure I don't need to specify the cloud provider flag?
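
For context, the `Failed to get GCE GCECloudProvider with error <nil>` event typically means the cluster components were started without the GCE cloud provider enabled. On a manually provisioned cluster (as this one appears to be), the relevant flags would look roughly like the sketch below; the `/etc/kubernetes/gce.conf` path is hypothetical, and the trailing `...` stands for whatever other flags the units already pass:

```shell
# Legacy in-tree GCE cloud provider: both the kubelet (on every node) and the
# kube-controller-manager need to know they are running on GCE.

kubelet \
  --cloud-provider=gce \
  --cloud-config=/etc/kubernetes/gce.conf \
  ...   # remaining kubelet flags unchanged

kube-controller-manager \
  --cloud-provider=gce \
  --cloud-config=/etc/kubernetes/gce.conf \
  ...   # remaining controller-manager flags unchanged
```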