Currently I am trying to implement a PersistentVolume in a YAML file. I have read a lot of documentation on the internet, but I don't understand why I get this message when I open the dashboard pane:

persistentvolumeclaim "karaf-conf" not found
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: "/apps/karaf/etc"
  volumes:
  - name: karaf-conf-storage
    persistentVolumeClaim:
      claimName: karaf-conf-claim
PersistentVolumeClaimKaraf.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
PersistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: karaf-conf
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/karaf/etc"
Below you will find the output of kubectl get pv:

NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                      STORAGECLASS   REASON   AGE
karaf-conf                    100Mi      RWO            Retain           Terminating   default/karaf-conf-claim                           17h
karaf-conf-persistentvolume   100Mi      RWO            Retain           Released      default/karaf-conf                                 1h

and kubectl get pvc:

NAME               STATUS        VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
karaf-conf-claim   Terminating   karaf-conf   10Mi       RWO            manual         17h
Answer 0 (score: 1)
When using hostPath, you do not need PersistentVolume or PersistentVolumeClaim objects at all, so depending on your needs this may be simpler:
# file: pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: "/apps/karaf/etc"  # Path mounted in the container
  # Use hostPath here
  volumes:
  - name: karaf-conf-storage
    hostPath:
      path: "/apps/karaf/etc"  # Path on the host
Then delete the other two files, PersistentVolumeClaimKaraf.yml and PersistentVolume.yml.
For the official documentation, see: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Edit: note that spec.containers.volumeMounts.mountPath and spec.volumes.hostPath.path in the original post happen to be identical, so comments were added to the YAML above to clarify the purpose of each.
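To confirm that the host directory is actually visible inside the container, a quick check (a sketch, assuming the pod and container names from the manifest above) is to exec in and list the mount path:

```shell
# List the mounted host directory from inside the karaf container.
# Requires a running cluster; pod/container names are from the example above.
kubectl exec karafpod -c karaf -- ls -la /apps/karaf/etc
```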
Answer 1 (score: 0)
My suggestion is to recreate the PV and PVC, and make sure the pod is scheduled on the node where the hostPath directory is actually configured.
Answer 2 (score: 0)
pv/karaf-conf is in the Terminating state. Try deleting it and recreating it with type: DirectoryOrCreate:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: karaf-conf
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/karaf/etc"
    type: DirectoryOrCreate
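The delete-and-recreate step could look like this (a sketch: the manifest filename PersistentVolume.yaml is taken from the question, and the commands assume access to the cluster):

```shell
# Delete the stuck PV, then re-apply the manifest that now includes
# type: DirectoryOrCreate so the host directory is created if missing.
kubectl delete pv karaf-conf
kubectl apply -f PersistentVolume.yaml
kubectl get pv karaf-conf
```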
Answer 3 (score: 0)
I think the root cause of your problem is the Terminating status.
As a quick fix, you should create a new PV and PVC with names different from the ones stuck in the Terminating state:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: karaf-conf-new
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/karaf/etc"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-newclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
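Assuming both documents above are saved in a single file (the filename pv-pvc-new.yaml is a hypothetical choice), they can be applied together and the binding verified:

```shell
# Apply the new PV and PVC, then check that the PVC reaches the Bound state.
kubectl apply -f pv-pvc-new.yaml
kubectl get pv,pvc
```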
Edit your Pod YAML to use the new claimName:

  volumes:
  - name: karaf-conf-storage
    persistentVolumeClaim:
      claimName: karaf-conf-newclaim
A PersistentVolumeClaim in the Terminating state indicates that you deleted a PVC that is still being used by some pod.
The problem is that you now find yourself in a deadlock: your "karafpod" pod will not start unless the PVC it references is in the Bound state.
From your output I can see that the PV karaf-conf-persistentvolume was bound to the PVC karaf-conf. I am guessing that you tried to delete both PVCs.
Because your PersistentVolumes have their ReclaimPolicy set to Retain, the PVC karaf-conf was deleted without a problem (no pod was using it), and due to that policy the PV karaf-conf-persistentvolume was retained in the Released state.
However, your pod karafpod claims the PVC karaf-conf-claim, which is bound to the PV karaf-conf. While that pod exists, the PVC and the PV cannot be deleted.
If you want to keep all the names the same, the fix is:
1. Force delete the pod karafpod using --grace-period:

   kubectl delete pod <PODNAME> --grace-period=0 --force

2. Delete the PVC karaf-conf-claim and the PV karaf-conf. You can verify with kubectl get pv,pvc.
   You can also check which pods are using a PVC; this can be done with the command from this thread:

   kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec | select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'

3. Recreate the PVC karaf-conf-claim and the PV karaf-conf, then recreate the pod karafpod.
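If a PVC or PV still hangs in Terminating after every pod using it is gone, a commonly used workaround (not mentioned in the answers above, so treat it as an additional sketch and use with care) is to clear the storage-protection finalizers on the object:

```shell
# Only do this after confirming no pod still uses the claim:
# removing the finalizer bypasses storage-object-in-use protection.
kubectl patch pvc karaf-conf-claim -p '{"metadata":{"finalizers":null}}'
kubectl patch pv karaf-conf -p '{"metadata":{"finalizers":null}}'
```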