How do I mount a persistent volume on a Deployment/Pod using a PersistentVolumeClaim?

Asked: 2020-03-04 19:26:39

Tags: kubernetes google-kubernetes-engine persistent-volume-claims

I am trying to mount a persistent volume on pods created by a Deployment.

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - image: ...
        volumeMounts:
        - mountPath: /app/folder
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: volume-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

However, the pods are stuck in the "ContainerCreating" state, and the events show the following error message.

Unable to mount volumes for pod "podname": timeout expired waiting for volumes to attach or mount for pod "namespace"/"podname". list of unmounted volumes=[volume]. list of unattached volumes=[volume]

I verified that the persistent volume claim is correct and is bound to a persistent volume.

What am I missing here?

2 Answers:

Answer 0: (score: 0)

If you are doing this on a cloud provider, the storageClass object will create the corresponding volume for your persistent volume claim.

If you are trying to do this locally on minikube or in a self-hosted Kubernetes cluster, you need a storageClass that provisions the volumes for you, or you can create the volume manually, as in the following example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

The hostPath field mounts that path from the node the pod is currently scheduled on.
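
Note that a claim only binds to this manually created PV if it asks for the same storageClassName; here is a minimal sketch, reusing the claim name from the question:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi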

Answer 1: (score: 0)

When you create a PVC in a GKE cluster without specifying a PV or a StorageClass, it falls back to the default options:

  • StorageClass: standard
  • Provisioner: kubernetes.io/gce-pd
  • Type: pd-standard

Please take a look at the official documentation: Cloud.google.com: Kubernetes engine persistent volumes
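
You can check which StorageClass is the default in your cluster with a command like the one below (the output is illustrative of a GKE cluster):

$ kubectl get storageclass
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   1d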

The error message you encountered can be produced in many different situations.

Since I don't know how many replicas your Deployment has, how many nodes you have, or how the pods are scheduled on those nodes, I tried to reproduce your issue and hit the same error by following the steps below (the GKE cluster was newly created to rule out any other dependency that might affect the behavior).

Steps

  • Create a PVC
  • Create a Deployment with replicas > 1
  • Check the pod status
  • Additional links

Create a PVC

Below is an example YAML definition of a PVC, the same as yours:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

After applying the above definition, check that it was created successfully. You can do this with the following commands:

  • $ kubectl get pvc volume-claim
  • $ kubectl get pv
  • $ kubectl describe pvc volume-claim
  • $ kubectl get pvc volume-claim -o yaml
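
On a default GKE setup, a successfully bound claim should look roughly like this (the output is illustrative; the volume name is the one that shows up again in the events further down):

$ kubectl get pvc volume-claim
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
volume-claim   Bound    pvc-7d756147-6434-11ea-a666-42010a9c0058   2Gi        RWO            standard       1m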

Create a Deployment with replicas > 1

Below is an example YAML definition of a Deployment with a volumeMount and replicas > 1:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: ubuntu-deployment
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 10 # amount of pods must be > 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        volumeMounts:
        - mountPath: /app/folder
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: volume-claim

Apply it and wait a while.
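
For example (assuming the manifest is saved as deployment.yaml):

$ kubectl apply -f deployment.yaml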

Check the status of the pods

You can check the status of the pods with the following command:

$ kubectl get pods -o wide

Output of the above command:

NAME                      READY   STATUS              RESTARTS   AGE     IP            NODE                              
ubuntu-deployment-2q64z   0/1     ContainerCreating   0          4m27s   <none>        gke-node-1  
ubuntu-deployment-4tjp2   1/1     Running             0          4m27s   10.56.1.14    gke-node-2   
ubuntu-deployment-5tn8x   0/1     ContainerCreating   0          4m27s   <none>        gke-node-1   
ubuntu-deployment-5tn9m   0/1     ContainerCreating   0          4m27s   <none>        gke-node-3  
ubuntu-deployment-6vkwf   0/1     ContainerCreating   0          4m27s   <none>        gke-node-1  
ubuntu-deployment-9p45q   1/1     Running             0          4m27s   10.56.1.12    gke-node-2  
ubuntu-deployment-lfh7g   0/1     ContainerCreating   0          4m27s   <none>        gke-node-3  
ubuntu-deployment-qxwmq   1/1     Running             0          4m27s   10.56.1.13    gke-node-2 
ubuntu-deployment-r7k2k   0/1     ContainerCreating   0          4m27s   <none>        gke-node-3   
ubuntu-deployment-rnr72   0/1     ContainerCreating   0          4m27s   <none>        gke-node-3

Take a look at the output above:

  • 3 pods are in the Running state
  • 7 pods are in the ContainerCreating state

All of the Running pods are located on the same node: gke-node-2.

You can get more details about why a pod is stuck in the ContainerCreating state by running:

$ kubectl describe pod NAME_OF_POD_WITH_CC_STATE

The Events section of the above command shows:

Events:
  Type     Reason              Age                From                                             Message
  ----     ------              ----               ----                                             -------
  Normal   Scheduled           14m                default-scheduler                                Successfully assigned default/ubuntu-deployment-2q64z to gke-node-1
  Warning  FailedAttachVolume  14m                attachdetach-controller                          Multi-Attach error for volume "pvc-7d756147-6434-11ea-a666-42010a9c0058" Volume is already used by pod(s) ubuntu-deployment-qxwmq, ubuntu-deployment-9p45q, ubuntu-deployment-4tjp2
  Warning  FailedMount         92s (x6 over 12m)  kubelet, gke-node-1  Unable to mount volumes for pod "ubuntu-deployment-2q64z_default(9dc28e95-6434-11ea-a666-42010a9c0058)": timeout expired waiting for volumes to attach or mount for pod "default"/"ubuntu-deployment-2q64z". list of unmounted volumes=[volume]. list of unattached volumes=[volume default-token-dnvnj]

The pods cannot get past the ContainerCreating state because they are unable to mount the volume: that volume is already in use by pods running on another node.

ReadWriteOnce: the volume can be mounted as read-write by a single node.
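
If all replicas really need to share a single volume across nodes, the claim itself would have to request ReadWriteMany, which the default gce-pd provisioner does not support; on GKE that typically means an NFS- or Filestore-backed StorageClass. A minimal sketch, assuming such a class exists under the name rwx-class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-volume-claim
spec:
  storageClassName: rwx-class   # assumed RWX-capable class, not present in a default GKE cluster
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi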

Additional links

Please take a look at: Cloud.google.com: Access modes of persistent volumes

There is a detailed answer on the topic of access modes: Stackoverflow.com: Why can you set multiple accessmodes on a persistent volume

Since it's not entirely clear what you are trying to achieve, please also take a look at the comparison between Deployments and StatefulSets: Cloud.google.com: Persistent Volume: Deployments vs statefulsets
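
If instead each replica should get its own independent volume, the usual pattern is a StatefulSet with volumeClaimTemplates rather than a Deployment sharing one PVC. A minimal sketch (names assumed, not taken from your setup; a matching headless Service named ubuntu is also assumed):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu-statefulset
spec:
  serviceName: ubuntu            # headless Service assumed to exist
  selector:
    matchLabels:
      app: ubuntu
  replicas: 3
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        volumeMounts:
        - mountPath: /app/folder
          name: volume
  volumeClaimTemplates:          # one ReadWriteOnce PVC is created per replica
  - metadata:
      name: volume
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi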