Can't mount a read-only Kubernetes persistent volume across n deployment replicas

Date: 2016-11-01 05:23:33

Tags: kubernetes google-compute-engine

I created a Kubernetes ReadOnlyMany PersistentVolume from a gcePersistentDisk as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ferret-pv-1
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: data-1
    partition: 1
    fsType: ext4

It creates the PersistentVolume from an existing gcePersistentDisk partition that already has an ext4 filesystem on it:

$ kubectl get pv
NAME          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    REASON    AGE
ferret-pv-1   500Gi      ROX           Retain          Bound     default/ferret-pvc             5h

I then created a ReadOnlyMany PersistentVolumeClaim as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ferret-pvc
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Gi

It binds to the read-only PV I created above:

$ kubectl get pvc
NAME         STATUS    VOLUME        CAPACITY   ACCESSMODES   AGE
ferret-pvc   Bound     ferret-pv-1   500Gi      ROX           5h

I then create a Kubernetes Deployment with 2 replicas using the PVC I just created:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ferret2-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: ferret2
    spec:
      containers:
      - image: us.gcr.io/centered-router-102618/ferret2
        name: ferret2
        ports:
        - name: fjds
          containerPort: 1004
          hostPort: 1004
        volumeMounts:
          - name: ferret-pd
            mountPath: /var/ferret
            readOnly: true
      volumes:
      - name: ferret-pd
        persistentVolumeClaim:
          claimName: ferret-pvc

The deployment is created:

$ kubectl get deployments
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ferret2-deployment   2         2         2            1           4h

However, of the two corresponding pods from the deployment, only the first one comes up:

$ kubectl get pods
NAME                                  READY     STATUS              RESTARTS   AGE
ferret2-deployment-1336109949-2rfqd   1/1       Running             0          4h
ferret2-deployment-1336109949-yimty   0/1       ContainerCreating   0          4h

Looking at the second pod, which never starts:

$ kubectl describe pod ferret2-deployment-1336109949-yimty

Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type        Reason      Message
  ---------     --------        -----   ----                            -------------   --------        ------      -------
  4h        1m          128     {kubelet gke-sim-cluster-default-pool-e38a7605-kgdu}            Warning     FailedMount     Unable to mount volumes for pod "ferret2-deployment-1336109949-yimty_default(d1393a2d-9fc9-11e6-a873-42010a8a009e)": timeout expired waiting for volumes to attach/mount for pod "ferret2-deployment-1336109949-yimty"/"default". list of unattached/unmounted volumes=[ferret-pd]
  4h        1m          128     {kubelet gke-sim-cluster-default-pool-e38a7605-kgdu}            Warning     FailedSync      Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ferret2-deployment-1336109949-yimty"/"default". list of unattached/unmounted volumes=[ferret-pd]
  4h        55s         145     {controller-manager }                           Warning     FailedMount     Failed to attach volume "ferret-pv-1" on node "gke-sim-cluster-default-pool-e38a7605-kgdu" with: googleapi: Error 400: The disk resource 'data-1' is already being used by 'gke-sim-cluster-default-pool-e38a7605-fyx4'

It refuses to start the second pod because it thinks the first pod has exclusive use of the PV. However, when I log in to the first pod, which claimed the PV, I see that it has mounted the volume read-only:

$ kubectl exec -ti ferret2-deployment-1336109949-2rfqd -- bash
root@ferret2-deployment-1336109949-2rfqd:/opt/ferret# mount | grep ferret
/dev/sdb1 on /var/ferret type ext4 (ro,relatime,data=ordered)

Am I missing something about mounting a PV read-only across multiple pods in a deployment using the same PVC? The disk is not mounted by any other container. Since it mounts read-only on the first pod, I would have expected the second and any further replicas in the deployment to have no trouble claiming/mounting it. Also: how do I get ReadWriteOnce to work properly? How do I specify which pod mounts the volume read-write?

2 Answers:

Answer 0 (score: 0)

For a volume backed by a gcePersistentDisk, the disk must first be attached to the VM instance that runs the pod using the volume.

Kubernetes does this automatically, but in my experience, even with the following manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: map-service-pv
spec:
  capacity:
    storage: 25Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ssd
  gcePersistentDisk:
    pdName: map-service-data
    readOnly: true
    fsType: ext4

it attaches the disk to the instance in RW mode. That prevents the disk from being attached to any other instance. So if your pods run on different nodes (instances), all but one of them will get googleapi: Error 400: The disk resource xxx is already being used by...

You can check this in the Google Cloud Console: Compute Engine -> Disks -> find the disk -> click the "in use by" link, which takes you to the instance. There you can see the attached disks and their modes.

The mode can be changed manually in the console. The second pod should then be able to mount the volume.
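The same check and change can be sketched with the gcloud CLI instead of the Console (the instance and disk names below come from the error message above; the zone is an assumption for illustration):

```shell
# Check which instances the disk is attached to (hypothetical zone):
gcloud compute disks describe data-1 --zone us-central1-a --format="value(users)"

# Detach the disk from the node holding it read-write,
# then re-attach it in read-only mode:
gcloud compute instances detach-disk gke-sim-cluster-default-pool-e38a7605-fyx4 \
    --disk data-1 --zone us-central1-a
gcloud compute instances attach-disk gke-sim-cluster-default-pool-e38a7605-fyx4 \
    --disk data-1 --mode ro --zone us-central1-a
```

Note that kubelet may re-attach the disk read-write again, which is consistent with the workaround not holding up.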


Edit: this workaround does not seem to hold up. I've filed an issue on the Kubernetes GitHub: https://github.com/kubernetes/kubernetes/issues/67313

Answer 1 (score: 0)

PV/PVC access modes are only used for matching and binding a PV to a PVC.

In the pod template, make sure spec.volumes.persistentVolumeClaim.readOnly is set to true. That ensures the volume is attached in read-only mode.

Also in the pod template, make sure spec.containers.volumeMounts[x].readOnly is set to true. That ensures the volume is mounted in read-only mode.
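As a sketch, adapting the volume section of the question's Deployment, the two readOnly flags sit in different places:

```yaml
# Pod template fragment (names match the question's manifest).
      containers:
      - name: ferret2
        image: us.gcr.io/centered-router-102618/ferret2
        volumeMounts:
        - name: ferret-pd
          mountPath: /var/ferret
          readOnly: true      # mount the filesystem read-only
      volumes:
      - name: ferret-pd
        persistentVolumeClaim:
          claimName: ferret-pvc
          readOnly: true      # attach the underlying GCE PD read-only
```

The question's manifest sets only the volumeMounts flag, so the disk is still attached read-write at the instance level, which matches the Error 400 above.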

Also, since you are pre-provisioning PVs, make sure to set the claimRef field on the PV so that no other PVC can accidentally bind to it. See https://stackoverflow.com/a/34323691
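A minimal sketch of pre-binding via claimRef, reusing the PV from the question (the namespace is an assumption; the PVC shown earlier lives in default):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ferret-pv-1
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:                 # reserve this PV for one specific PVC
    name: ferret-pvc
    namespace: default
  gcePersistentDisk:
    pdName: data-1
    partition: 1
    fsType: ext4
    readOnly: true          # per Answer 0/1: attach the PD read-only
```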