I just installed Kubernetes on an Ubuntu cluster using the DigitalOcean and Ansible instructions. Everything seemed to go fine; however, when verifying the cluster, the master node is in the NotReady state:
# kubectl get nodes
NAME                STATUS     ROLES    AGE   VERSION
jwdkube-master-01   NotReady   master   44m   v1.12.2
jwdkube-worker-01   Ready      &lt;none&gt;   44m   v1.12.2
jwdkube-worker-02   Ready      &lt;none&gt;   44m   v1.12.2
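For a script-friendly check, the NotReady rows can be filtered out of this output. A minimal sketch, run here against the pasted sample instead of a live cluster, assuming only the standard `kubectl get nodes` column layout:

```shell
# Sample output from `kubectl get nodes` (stands in for a live cluster).
nodes_output='NAME                STATUS     ROLES    AGE   VERSION
jwdkube-master-01   NotReady   master   44m   v1.12.2
jwdkube-worker-01   Ready      <none>   44m   v1.12.2
jwdkube-worker-02   Ready      <none>   44m   v1.12.2'

# NR>1 skips the header row; $2 is the STATUS column.
echo "$nodes_output" | awk 'NR>1 && $2 != "Ready" {print $1}'
# Prints: jwdkube-master-01
```

Against a real cluster, the same awk filter can be piped from `kubectl get nodes` directly.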
Here are the versions:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
When I inspect the master node, kube-proxy is hanging in the starting state:
# kubectl describe nodes jwdkube-master-01
Name: jwdkube-master-01
Roles: master
...
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
OutOfDisk False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 104.248.207.107
Hostname: jwdkube-master-01
Capacity:
cpu: 1
ephemeral-storage: 25226960Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1008972Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 23249166298
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 906572Ki
pods: 110
System Info:
Machine ID: 771c0f669c0a40a1ba7c28bf1f05a637
System UUID: 771c0f66-9c0a-40a1-ba7c-28bf1f05a637
Boot ID: 2532ae4d-c08c-45d8-b94c-6e88912ed627
Kernel Version: 4.18.0-10-generic
OS Image: Ubuntu 18.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.12.2
Kube-Proxy Version: v1.12.2
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-jwdkube-master-01 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-jwdkube-master-01 250m (25%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-jwdkube-master-01 200m (20%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-p8cbq 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-jwdkube-master-01 100m (10%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (55%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientDisk 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48m (x5 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 48m kubelet, jwdkube-master-01 Updated Node Allocatable limit across pods
Normal Starting 48m kube-proxy, jwdkube-master-01 Starting kube-proxy.
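The `cni config uninitialized` message in the Ready condition above usually means the kubelet found no network config files in its CNI config directory (conventionally `/etc/cni/net.d`). A hedged sketch of that check, demonstrated against a temporary directory rather than a real node:

```shell
# Check whether a CNI config directory contains any network config files,
# similar to what the kubelet does before it can mark the node Ready.
check_cni_config() {
  dir="$1"
  # Look for *.conf, *.conflist, or *.json files in the directory.
  for f in "$dir"/*.conf "$dir"/*.conflist "$dir"/*.json; do
    [ -e "$f" ] && { echo "cni config present"; return 0; }
  done
  echo "cni config uninitialized"
  return 1
}

# Demo against an empty temp dir (stands in for /etc/cni/net.d on the node):
tmpdir=$(mktemp -d)
check_cni_config "$tmpdir"              # prints: cni config uninitialized
touch "$tmpdir/10-flannel.conflist"     # what a working Flannel install drops here
check_cni_config "$tmpdir"              # prints: cni config present
rm -rf "$tmpdir"
```

On the actual master, `ls /etc/cni/net.d` being empty is consistent with the node staying NotReady.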
Update

Running kubectl get pods -n kube-system:
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-8p7k2                    1/1     Running   0          4h47m
coredns-576cbf47c7-s5tlv                    1/1     Running   0          4h47m
etcd-jwdkube-master-01                      1/1     Running   1          140m
kube-apiserver-jwdkube-master-01            1/1     Running   1          140m
kube-controller-manager-jwdkube-master-01   1/1     Running   1          140m
kube-flannel-ds-5bzrx                       1/1     Running   0          4h47m
kube-flannel-ds-bfs9k                       1/1     Running   0          4h47m
kube-proxy-4lrzw                            1/1     Running   1          4h47m
kube-proxy-57x28                            1/1     Running   0          4h47m
kube-proxy-j8bf5                            1/1     Running   0          4h47m
kube-scheduler-jwdkube-master-01            1/1     Running   1          140m
tiller-deploy-6f6fd74b68-5xt54              1/1     Running   0          112m
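When every pod in `kube-system` shows `Running` like this, the CNI problem is typically in the manifest version rather than a crashing pod. A small sketch for spotting unhealthy pods in such a listing, shown against a hypothetical sample (the failing `Error` row is invented for illustration; only the default `kubectl get pods` column layout is assumed):

```shell
# Hypothetical `kubectl get pods -n kube-system` output with one failing pod.
pods_output='NAME                                        READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-8p7k2                    1/1     Running   0          4h47m
kube-flannel-ds-5bzrx                       0/1     Error     3          4h47m
kube-proxy-4lrzw                            1/1     Running   1          4h47m'

# Flag a pod when STATUS is not "Running" or READY is not of the form n/n.
echo "$pods_output" | awk 'NR>1 {split($2, r, "/"); if ($3 != "Running" || r[1] != r[2]) print $1}'
# Prints: kube-flannel-ds-5bzrx
```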
Answer 0 (score: 0):
This seems to be a compatibility issue between Flannel v0.9.1 and the Kubernetes v1.12.2 cluster. Once you replace the URL in the master configuration playbook, it should help:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
To implement this solution on the current cluster:

1. On the master node, delete the objects related to Flannel v0.9.1:

kubectl delete clusterrole flannel -n kube-system
kubectl delete clusterrolebinding flannel -n kube-system
kubectl delete serviceaccount flannel -n kube-system
kubectl delete configmap kube-flannel-cfg -n kube-system
kubectl delete daemonset.extensions kube-flannel-ds -n kube-system

Delete the Flannel Pods as well:

kubectl delete pod kube-flannel-ds-5bzrx -n kube-system
kubectl delete pod kube-flannel-ds-bfs9k -n kube-system

And check that no Flannel-related objects exist anymore:

kubectl get all --all-namespaces

2. Install the latest Flannel version on your cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

It worked for me; however, if you run into any other problems, please leave a comment below this answer.
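After re-applying the manifest it can take a minute or two for the master to flip to Ready. A sketch of an "all nodes Ready" check that could be polled, shown here against sample text instead of a live `kubectl get nodes --no-headers` call:

```shell
# Succeeds (exit 0) only when every node row on stdin reports Ready.
# Expects `kubectl get nodes --no-headers`-style input: NAME STATUS ...
all_nodes_ready() {
  awk '$2 != "Ready" {bad=1} END {exit bad}'
}

# Demo with sample rows (stands in for piping from kubectl):
printf '%s\n' \
  'jwdkube-master-01   Ready   master   50m   v1.12.2' \
  'jwdkube-worker-01   Ready   <none>   50m   v1.12.2' \
  | all_nodes_ready && echo "all nodes Ready"
# Prints: all nodes Ready
```

On a real cluster this could be wrapped in a loop, e.g. `until kubectl get nodes --no-headers | all_nodes_ready; do sleep 5; done`.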