I am trying to create my StatefulSet pods using local persistent volumes as described in https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/. However, when my pod tries to claim the volume, I get the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x243 over 20m) default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
Here are the storage classes and persistent volumes I created:
storageclass-kafka-broker.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-broker
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
storageclass-kafka-zookeeper.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-zookeeper
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
pv-zookeeper.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv-zookeeper
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-zookeeper
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
pv-kafka.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-broker
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
Here is the pod manifest that uses this volume, 50pzoo.yml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pzoo
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zookeeper
      storage: persistent
  serviceName: "pzoo"
  replicas: 1
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: zookeeper
        storage: persistent
      annotations:
    spec:
      terminationGracePeriodSeconds: 10
      initContainers:
      - name: init-config
        image: solsson/kafka-initutils@sha256:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2
        command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
        volumeMounts:
        - name: configmap
          mountPath: /etc/kafka-configmap
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/zookeeper/data
      containers:
      - name: zookeeper
        image: solsson/kafka:2.0.0@sha256:8bc5ccb5a63fdfb977c1e207292b72b34370d2c9fe023bdc0f8ce0d8e0da1670
        env:
        - name: KAFKA_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
        command:
        - ./bin/zookeeper-server-start.sh
        - /etc/kafka/zookeeper.properties
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
        volumeMounts:
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/zookeeper/data
      volumes:
      - name: configmap
        configMap:
          name: zookeeper-config
      - name: config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: kafka-zookeeper
      resources:
        requests:
          storage: 1Gi
Here is the output of the kubectl get events command:
[root@quagga kafka-kubernetes-testing-single-node]# kubectl get events --namespace kafka
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
1m 1m 1 pzoo.15517ca82c7a4675 StatefulSet Normal SuccessfulCreate statefulset-controller create Claim data-pzoo-0 Pod pzoo-0 in StatefulSet pzoo success
1m 1m 1 pzoo.15517ca82caed9bc StatefulSet Normal SuccessfulCreate statefulset-controller create Pod pzoo-0 in StatefulSet pzoo successful
13s 1m 9 data-pzoo-0.15517ca82c726833 PersistentVolumeClaim Normal WaitForFirstConsumer persistentvolume-controller waiting for first consumer to be created before binding
9s 1m 22 pzoo-0.15517ca82cb90238 Pod Warning FailedScheduling default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
The output of kubectl get pv is:
[root@quagga kafka-kubernetes-testing-single-node]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
example-local-pv 200Gi RWO Retain Available kafka-broker 4m
example-local-pv-zookeeper 2Gi RWO Retain Available kafka-zookeeper 4m
Answer 0 (score: 2)
It was a silly mistake. I had left my-node as the node name value in the pv files. Changing it to the correct node name resolved my issue.
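In case it helps someone else hitting the same scheduling error: the value under values: in the PV's nodeAffinity has to match the kubernetes.io/hostname label of a real node in the cluster. A quick way to see those labels with plain kubectl (nothing here is specific to this particular setup):

kubectl get nodes -L kubernetes.io/hostname

Whatever names that prints, rather than the my-node placeholder copied from the blog post, are what need to go into the PV definitions.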
Answer 1 (score: 1)
Thanks for sharing! I made the same mistake. I guess the k8s docs could state this more clearly (even though it is rather cumbersome), because it is a copy-paste trap.
To make it clearer: if your cluster has 3 nodes, you need to create three differently named PVs and supply the correct node name in place of "my-node" (check kubectl get nodes). The only reference between the volumeClaimTemplate and a PV is the name of the storage class.
I used PV names like "local-pv-node-X", so when I look at the PV section in the Kubernetes dashboard I can see directly which node a given volume lives on.
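For illustration, here is a minimal sketch of one such per-node PV. The PV name local-pv-node-1, the node name node-1, and the path are hypothetical placeholders rather than values from the cluster above; the storageClassName kafka-zookeeper is the only thing the volumeClaimTemplate actually matches on:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-1                 # one PV per node, named after the node it lives on
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-zookeeper     # the only link back to the volumeClaimTemplate
  local:
    path: /mnt/disks/zookeeper          # hypothetical directory that must already exist on node-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                      # real node name as reported by kubectl get nodes

Repeat this with local-pv-node-2 / node-2 and local-pv-node-3 / node-3. With WaitForFirstConsumer the scheduler first picks a node for the pod and then binds the claim to the PV whose nodeAffinity matches that node.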
You might want to update your listing with that hint about "my-node" ;-)