I am trying to use the cinder plugin for kubernetes to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.
Kubernetes Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
The command kubelet was started with, and its status:
systemctl status kubelet -l
● kubelet.service - Kubelet service
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
Main PID: 2408 (kubelet)
CGroup: /system.slice/kubelet.service
├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf
Here is my cloud.conf file:
# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne
It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:
kubelet: I1020 11:43:51.770948 2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642 2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679 2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688 2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332 2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]
My PV/PVC yaml files, and cinder list output:
# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4
# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available | jk-cinder | 10 | - | false |
As seen above, cinder reports that the volume with the ID referenced in the pv.yaml file is available. When I create the PV and PVC, things seem to work:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv/jk-test 10Gi RWO Retain Bound default/myclaim 5h
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/myclaim Bound jk-test 10Gi RWO 5h
Then I try to create a pod using the pvc, but it fails to mount the volume:
# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
  - name: front-end
    image: example-front-end:latest
    ports:
    - hostPort: 6000
      containerPort: 3000
  volumes:
  - name: jk-test
    persistentVolumeClaim:
      claimName: myclaim
And here is the state of the pod:
3h 46s 109 {kubelet jk-kube2-master} Warning FailedMount Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
3h 46s 109 {kubelet jk-kube2-master} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
I've verified that my openstack provider is exposing cinder v1 and v2 APIs and the previous logs from openstack_instances show the nova API is accessible. Despite that, I never see any attempts on k8s part to communicate with cinder or nova to mount the volume.
Here are what I think are the relevant log messages regarding the failure to mount:
kubelet: I1020 06:51:11.840341 24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424 24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474 24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420 24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566 24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.
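The error above says the volume was never added to the node's volume status, which is what the kubelet's VerifyControllerAttachedVolume step waits on. A quick way to see what has (or has not) been recorded for the node — a diagnostic sketch, assuming kubectl is configured against this cluster:

```shell
# Volumes the kubelet has marked as in use on this node.
kubectl get node jk-kube2-master -o jsonpath='{.status.volumesInUse}'
# Volumes the attach/detach controller has actually attached.
# An empty result here suggests no attach was ever attempted, pointing
# at the controller manager rather than the kubelet.
kubectl get node jk-kube2-master -o jsonpath='{.status.volumesAttached}'
```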
Is there a piece I am missing? I've followed the instructions here: k8s - mysql-cinder-pd example, but haven't been able to get any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:
# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
The StorageClass reports success, but when I try to create the PVC it gets stuck in the 'pending' state and reports 'no volume plugin matched':
# kubectl get storageclass
NAME TYPE
gold kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name: dynamicclaim
Namespace: default
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 15s 5867 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
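Note that the "no volume plugin matched" event comes from the persistentvolume-controller, which runs inside kube-controller-manager, not the kubelet — so the --cloud-provider flags on the kubelet unit shown above do not affect it. One thing worth checking is whether the controller manager is started with the same OpenStack settings. A hypothetical static-pod manifest fragment (the manifest layout and paths are assumptions modeled on the kubelet flags above):

```yaml
# Fragment of a kube-controller-manager pod spec; provisioning and
# attach/detach happen here, so it needs the cloud provider config too.
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --master=https://172.17.0.101:6443
    - --cloud-provider=openstack   # without this the cinder provisioner is not wired up
    - --cloud-config=/etc/cloud.conf
```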
This contradicts what's in the logs for plugins that were loaded:
grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517 22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
kubelet: I1019 11:39:41.382741 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"
And I have the nova and cinder clients installed on my machine:
# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder
Any help is appreciated, I'm sure I'm missing something simple here.
Thanks!
Answer 0 (score: 2)
Cinder volumes do work with Kubernetes 1.5.0 and 1.5.3 (and I think they also worked with 1.4.6, which was the first version I tried; I don't know about earlier versions).

Your Pod yaml file is missing the volumeMounts: section.

In fact, when you already have an existing cinder volume, you can simply use a Pod (or Deployment), with no PV or PVC needed at all. Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
      - name: nginx
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        cinder:
          volumeID: e143368a-440a-400f-b8a4-dd2f46c51888
This will create a Deployment and a Pod. The cinder volume will be mounted in the nginx container. To verify that the volume is really being used, you can edit a file inside the nginx container in the /usr/share/nginx/html/ directory and then stop the container. Kubernetes will create a new container, and inside it the files in the /usr/share/nginx/html/ directory will be the same as they were in the stopped container.

When you delete the Deployment resource, the cinder volume is not deleted, but it is detached from the VM.
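The persistence check described above can be sketched with kubectl (the label selector matches the Deployment above; the marker file name is made up):

```shell
# Write a marker file inside the running nginx container...
POD=$(kubectl get pods -l fullname=vol-test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c 'echo persisted > /usr/share/nginx/html/marker.txt'
# ...delete the pod; the Deployment recreates it on the cinder-backed volume...
kubectl delete pod "$POD"
# ...and the marker file should still be present in the replacement pod.
POD=$(kubectl get pods -l fullname=vol-test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- cat /usr/share/nginx/html/marker.txt
```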
The other possibility, when you already have an existing cinder volume, is to use PV and PVC resources. You said you want to use a storage class, although the Kubernetes documentation does not require one:

A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class
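To illustrate that rule, a minimal class-less pairing might look like this (resource names are placeholders):

```yaml
# PV with no storage-class annotation: it has "no class"...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-no-class
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
---
# ...and only a PVC that requests no particular class can bind to it.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-no-class
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```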
An example storage class is:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova
Then, in a PV, you use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950:
apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
Then create a PVC whose label selector matches the labels of the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc
And then a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
      - name: nginx-exist-vol
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: vol-test
When you delete the k8s resources, the cinder volume is not deleted, but it is detached from the VM.
With a PV, you can also set the persistentVolumeReclaimPolicy.
If you have not created a cinder volume, Kubernetes can create one for you; you then only have to provide a PVC resource. I won't describe this variant here, since it was not asked about.
I suggest that anyone interested in finding the best option experiment with and compare these approaches themselves. Also, I used label names like pv-first-label and pvc-first-label only to make the examples easier to follow. You can use e.g. first-label everywhere instead.
Answer 1 (score: 1)
I suspect the dynamic StorageClass approach does not work because the Cinder provisioner had not yet been implemented, given the following statement in the documentation (http://kubernetes.io/docs/user-guide/persistent-volumes/#provisioner):

Storage classes have a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. During beta, the available provisioner types are kubernetes.io/aws-ebs and kubernetes.io/gce-pd
As for why the static approach using the Cinder volume ID does not work, I'm not sure. I ran into exactly the same problem. Kubernetes 1.2 seems to work fine; 1.3 and 1.4 do not. This appears to coincide with a major change in PersistentVolume handling in 1.3-beta2 (https://github.com/kubernetes/kubernetes/pull/26801):

A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if the attach/detach controller is not enabled). (#26801, @saad-ali)

This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the syncPod() path, so volume cleanup never blocks the syncPod loop.
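Given that change, one thing worth checking on an affected node is whether the kubelet is configured to let the controller do the attach (the flag name below is from the 1.3/1.4-era kubelet; treat this as a diagnostic sketch):

```shell
# If controller attach/detach is enabled (the default in these versions),
# the attach must be performed by kube-controller-manager; the kubelet
# only waits for it and will log the VolumesInUse timeout seen above.
ps -o args= -C kubelet | tr ' ' '\n' | grep -i 'attach-detach' || echo "flag not set (default applies)"
```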