If you delete the CephFS pools and recreate the filesystem, a problem occurs: the pools created by the CephFilesystem can no longer be used.
The manifests used are as follows.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
spec:
  metadataPool:
    replicated:
      size: 2
      requireSafeReplicaSize: true
  dataPools:
    - failureDomain: osd
      replicated:
        size: 2
        requireSafeReplicaSize: true
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  # uncomment the following line for debugging
  #- debug
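
(For reference: after applying these manifests, the filesystem and the pools the StorageClass points at can be checked as below. The exec commands assume the standard Rook toolbox deployment, rook-ceph-tools, is installed.)

# check that the CephFilesystem resource was created
kubectl -n rook-ceph get cephfilesystem myfs
# list the filesystem and its pools from the toolbox
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs ls
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls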
kubectl delete -f

The pools created by the CephFilesystem could not be deleted with the command above! So I removed them with the following commands.
ceph fs rm myfs --yes-i-really-mean-it
ceph osd pool delete myfs-data0 myfs-data0 --yes-i-really-really-mean-it
ceph osd pool delete myfs-metadata myfs-metadata --yes-i-really-really-mean-it
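
(These ceph commands need to run inside the cluster; with a standard Rook setup they would typically be executed from the toolbox pod, for example:)

# open a shell in the toolbox (assumes the rook-ceph-tools deployment)
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash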
However, even after recreating the resources, a PVC created from the following manifest still fails with an error.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-cephfs
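
(The error text itself is not included here. The PVC's provisioning events and the CSI provisioner logs are where the underlying failure shows up; the log command assumes Rook's default CSI deployment name.)

# show provisioning events for the failing PVC
kubectl describe pvc cephfs-pvc
# check the CephFS CSI provisioner logs
kubectl -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-provisioner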
Is there a way to fix this and make the CephFilesystem usable again?