I'm having trouble mounting a ReadOnlyMany persistent volume into multiple Pods on GKE. Currently it mounts on only one Pod and fails to mount on any other Pod (because the first Pod is using the volume), which limits the deployment to a single Pod.
I suspect the issue is related to the volume being populated from a volume snapshot.
Going through related questions, I have already sanity-checked spec.containers.volumeMounts.readOnly = true and spec.containers.volumes.persistentVolumeClaim.readOnly = true, which seemed to be the most common fixes for related issues.
I've included the relevant YAML below. Any help would be greatly appreciated!
(Most of) the Deployment spec:
spec:
  containers:
  - env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    image: eu.gcr.io/myimage
    imagePullPolicy: IfNotPresent
    name: monsoon-server-sha256-1
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/sample-ssd
      name: sample-ssd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-cluster-1-default-pool-3d6123cf-kcjo
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 29
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: sample-ssd
    persistentVolumeClaim:
      claimName: sample-ssd-read-snapshot-pvc-snapshot-5
      readOnly: true
Storage Class (also the default storage class for this cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-ssd
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate
parameters:
  type: pd-ssd
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-ssd-read-snapshot-pvc-snapshot-5
spec:
  storageClassName: sample-ssd
  dataSource:
    name: sample-snapshot-5
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
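(For reference, one way to confirm which access modes the dynamically provisioned PV actually reports, assuming the PVC above has bound:)
kubectl get pv "$(kubectl get pvc sample-ssd-read-snapshot-pvc-snapshot-5 -o jsonpath='{.spec.volumeName}')" -o jsonpath='{.spec.accessModes}'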
Answer 0 (score: 0)
Google engineers are aware of this issue.
More details about it can be found in the issue report and pull request on GitHub.
There is a temporary workaround if you're trying to provision a PD from a snapshot and make it ROX:
1. Provision a PVC with the data source as RWO; it will create a new Compute Disk with the content of the source disk.
2. Take the PV that was provisioned and copy it to a new PV that's ROX, according to the docs.
You can execute it with the following commands:
Step 1: Provision a PVC with the data source as RWO;
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workaround-pvc
spec:
  storageClassName: ''
  dataSource:
    name: sample-ss
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
You can check the disk name with the following command:
kubectl get pvc
and look at the VOLUME column. That is the disk_name.
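Alternatively (just a convenience one-liner, assuming the claim is named workaround-pvc as above), you can read the bound volume name directly:
kubectl get pvc workaround-pvc -o jsonpath='{.spec.volumeName}'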
Step 2: Take the PV that was provisioned and copy it to a new PV that's ROX.
As mentioned in the docs, you need to create another disk using the previous disk (created in Step 1) as the source:
# Create a disk snapshot:
gcloud compute disks snapshot <disk_name>
# Create a new disk using snapshot as source
gcloud compute disks create pvc-rox --source-snapshot=<snapshot_name>
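For illustration only, here are the same two commands with assumed values filled in (the disk name is whatever appeared in the VOLUME column above, and the zone must match where the disk lives):
# Hypothetical names: adjust the disk name, snapshot name and zone to your environment
gcloud compute disks snapshot pvc-1234abcd --zone=europe-west1-b --snapshot-names=workaround-snapshot
gcloud compute disks create pvc-rox --source-snapshot=workaround-snapshot --zone=europe-west1-b --type=pd-ssd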
Step 3: Create a new PV and PVC with ReadOnlyMany:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ''
  capacity:
    storage: 20Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: pvc-rox
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  storageClassName: ''
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
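After applying both manifests, a quick sanity check (object names as above) is to confirm that the claim bound to the pre-created PV:
kubectl get pv my-readonly-pv
kubectl get pvc my-readonly-pvc
Both should show a Bound status before you wire the claim into the Deployment.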
Finally, add readOnly: true on your volumes and volumeMounts as mentioned here:
readOnly: true
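In the Deployment from the question, that corresponds to fragments like the following (a minimal sketch, assuming the pod now points at the new my-readonly-pvc claim):
# volumeMounts entry inside the container spec
volumeMounts:
- mountPath: /mnt/sample-ssd
  name: sample-ssd
  readOnly: true
# volumes entry at the pod spec level
volumes:
- name: sample-ssd
  persistentVolumeClaim:
    claimName: my-readonly-pvc
    readOnly: true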
Answer 1 (score: 0)
I ran the workaround exactly as described (on 1.19.10-gke.1000) and ran into:
MountVolume.MountDevice failed for volume "my-readonly-pv" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/pvc-rox --scope -- mount -t ext4 -o ro,defaults /dev/disk/by-id/google-pvc-rox /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/pvc-rox Output: Running scope as unit: run-r46dbb6913fda42e1a794f28e6a64ba22.scope mount: /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/pvc-rox: cannot mount /dev/sdf read-only.
Thanks!