I am setting up a Kubernetes lab using only one node and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
Trying the first section, the NFS server part, I executed three commands:
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
I ran into a problem; I see the following event:
PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
Research done so far:
https://github.com/kubernetes/kubernetes/issues/43120
https://github.com/kubernetes/examples/pull/30
Neither of the above links helped me solve the problem I am running into. I made sure it is using image 0.8.
Image: gcr.io/google_containers/volume-nfs:0.8
Does anyone know what this message means? Any clues and guidance on how to resolve this issue would be greatly appreciated. Thank you.
$ docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:41:23 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:49 2017
OS/Arch: linux/amd64
Experimental: false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
lab-kube-06   Ready     master    2m        v1.8.3
$ kubectl describe nodes lab-kube-06
Name: lab-kube-06
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=lab-kube-06
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.0.6
Hostname: lab-kube-06
Capacity:
cpu: 2
memory: 8159076Ki
pods: 110
Allocatable:
cpu: 2
memory: 8056676Ki
pods: 110
System Info:
Machine ID: e198b57826ab4704a6526baea5fa1d06
System UUID: 05EF54CC-E8C8-874B-A708-BBC7BC140FF2
Boot ID: 3d64ad16-5603-42e9-bd34-84f6069ded5f
Kernel Version: 3.10.0-693.el7.x86_64
OS Image: Red Hat Enterprise Linux Server 7.4 (Maipo)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.8.3
Kube-Proxy Version: v1.8.3
ExternalID: lab-kube-06
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-lab-kube-06 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-lab-kube-06 250m (12%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-lab-kube-06 200m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-545bc4bfd4-gmdvn 260m (13%) 0 (0%) 110Mi (1%) 170Mi (2%)
kube-system kube-proxy-68w8k 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-lab-kube-06 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-7zlbg 20m (1%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
830m (41%) 0 (0%) 110Mi (1%) 170Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 39m kubelet, lab-kube-06 Starting kubelet.
Normal NodeAllocatableEnforced 39m kubelet, lab-kube-06 Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 39m (x8 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 39m (x8 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x7 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasNoDiskPressure
Normal Starting 38m kube-proxy, lab-kube-06 Starting kube-proxy.
$ kubectl get pvc
NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pv-provisioning-demo   Pending                                                      14s
$ kubectl get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
18m 18m 1 lab-kube-06.14f79f093119829a Node Normal Starting kubelet, lab-kube-06 Starting kubelet.
18m 18m 8 lab-kube-06.14f79f0931d0eb6e Node Normal NodeHasSufficientDisk kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientDisk
18m 18m 8 lab-kube-06.14f79f0931d1253e Node Normal NodeHasSufficientMemory kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientMemory
18m 18m 7 lab-kube-06.14f79f0931d131be Node Normal NodeHasNoDiskPressure kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m 18m 1 lab-kube-06.14f79f0932f3f1b0 Node Normal NodeAllocatableEnforced kubelet, lab-kube-06 Updated Node Allocatable limit across pods
18m 18m 1 lab-kube-06.14f79f122a32282d Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m 17m 1 lab-kube-06.14f79f1cdfc4c3b1 Node Normal Starting kube-proxy, lab-kube-06 Starting kube-proxy.
17m 17m 1 lab-kube-06.14f79f1d94ef1c17 Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m 14m 1 lab-kube-06.14f79f4b91cf73b3 Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s 11m 42 nfs-pv-provisioning-demo.14f79f766cf887f2 PersistentVolumeClaim Normal FailedBinding persistentvolume-controller no persistent volumes available for this claim and no storage class is set
14s 4m 20 nfs-server-kq44h.14f79fd21b9db5f9 Pod Warning FailedScheduling default-scheduler PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m 4m 1 nfs-server.14f79fd21b946027 ReplicationController Normal SuccessfulCreate replication-controller Created pod: nfs-server-kq44h
$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          16s
$ kubectl get pods
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-kq44h   0/1       Pending   0          26s
$ kubectl get rc
NAME         DESIRED   CURRENT   READY     AGE
nfs-server   1         1         0         40s
$ kubectl describe pods nfs-server-kq44h
Name: nfs-server-kq44h
Namespace: default
Node: <none>
Labels: role=nfs-server
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status: Pending
IP:
Created By: ReplicationController/nfs-server
Controlled By: ReplicationController/nfs-server
Containers:
nfs-server:
Image: gcr.io/google_containers/volume-nfs:0.8
Ports: 2049/TCP, 20048/TCP, 111/TCP
Environment: <none>
Mounts:
/exports from mypvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mypvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pv-provisioning-demo
ReadOnly: false
default-token-plgv5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plgv5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason            Age                From               Message
----     ------            ---                ----               -------
Warning  FailedScheduling  39s (x22 over 5m)  default-scheduler  PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
Answer 0 (score: 2):
Each Persistent Volume Claim (PVC) needs a Persistent Volume (PV) that it can bind to. In your example, you have only created a PVC, not the volume itself.
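You can see this directly on the claim itself; the persistentvolume-controller message from your own event log ("no persistent volumes available for this claim and no storage class is set") says the same thing:
$ kubectl get pvc nfs-pv-provisioning-demo
$ kubectl describe pvc nfs-pv-provisioning-demo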
A PV can either be created manually, or automatically by using a Volume class. For more information, have a look at the docs of static and dynamic provisioning:
PVs may be provisioned in two ways: statically or dynamically.
Static
A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. [...]
Dynamic
When none of the static PVs the administrator created matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a class, and the administrator must have created and configured that class in order for dynamic provisioning to occur.
In your example, you are creating a storage class provisioner (defined in examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml) that appears to be tailored for use within the Google cloud (which means it may not be able to actually create PVs in your lab setup).
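For comparison, dynamic provisioning pairs a StorageClass with a claim that requests it by name. The following is only an illustrative sketch: the class name example-class is made up, and kubernetes.io/gce-pd is the GCE disk provisioner, which will not work outside Google Cloud:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-class             # made-up name, for illustration only
provisioner: kubernetes.io/gce-pd # GCE-only provisioner; will not work in this lab
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv-provisioning-demo
spec:
  storageClassName: example-class # must match the StorageClass name above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi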
You can create a Persistent Volume manually on your own. Once the PV has been created, the PVC should automatically bind itself to the volume, and your pod should start. Below is an example of a Persistent Volume that uses the node's local file system as a volume (which is probably fine for a single-node test setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: someVolume
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/on/host
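If you save the manifest above to a file (pv.yaml below is just a placeholder name) and create it, the claim from the example should go from Pending to Bound, and the nfs-server pod should then be scheduled:
$ kubectl create -f pv.yaml
$ kubectl get pv
$ kubectl get pvc nfs-pv-provisioning-demo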
For a production setup, you will probably want to choose a different volume type than hostPath, although the volume types available to you will vary heavily depending on the environment you are in (cloud or self-hosted/bare-metal).
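Since your end goal is NFS: once the nfs-server pod and service from the example are running, the second part of that example creates a PV backed by the NFS export itself. A rough sketch of such a PV follows; the server address is a placeholder (use your nfs-server service's cluster IP or DNS name), and the path must be one your server actually exports:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany    # NFS supports mounting from multiple nodes
  nfs:
    server: 10.0.0.100 # placeholder: your nfs-server service IP
    path: "/"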