I deployed a k8s cluster on Ubuntu 16.04.3 virtual machines. The cluster consists of one master and 3 nodes, with a flannel overlay network.
# kubectl get no
NAME       STATUS                     ROLES     AGE       VERSION
buru       Ready                      <none>    70d       v1.8.4
fraser     Ready,SchedulingDisabled   <none>    2h        v1.8.4
tasmania   Ready                      <none>    1d        v1.8.4
whiddy     Ready,SchedulingDisabled   master    244d      v1.8.4
Although they are configured in exactly the same way, 2 of my nodes (buru and tasmania) work fine, while the third one (fraser) simply refuses to cooperate.
If I ssh into the fraser server, I can reach the overlay network correctly:
root@fraser:~# ifconfig flannel.1
flannel.1 Link encap:Ethernet HWaddr 52:4a:da:84:8a:7b
inet addr:10.244.3.0 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::504a:daff:fe84:8a7b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:11 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:8 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:756 (756.0 B) TX bytes:756 (756.0 B)
root@fraser:~# ping 10.244.0.1
PING 10.244.0.1 (10.244.0.1) 56(84) bytes of data.
64 bytes from 10.244.0.1: icmp_seq=1 ttl=64 time=0.764 ms
^C
--- 10.244.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.764/0.764/0.764/0.000 ms
root@fraser:~# ping 10.244.0.1
PING 10.244.0.1 (10.244.0.1) 56(84) bytes of data.
64 bytes from 10.244.0.1: icmp_seq=1 ttl=64 time=0.447 ms
64 bytes from 10.244.0.1: icmp_seq=2 ttl=64 time=1.20 ms
64 bytes from 10.244.0.1: icmp_seq=3 ttl=64 time=0.560 ms
^C
--- 10.244.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.447/0.736/1.203/0.334 ms
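For completeness, a few more host-side checks can help confirm the VXLAN side of things. This is just a sketch of the commands (output omitted), assuming the standard flannel file locations:

root@fraser:~# ip -d link show flannel.1        # vxlan id, local endpoint and underlying dev
root@fraser:~# ip route | grep 10.244.          # one route per remote node subnet via flannel.1
root@fraser:~# bridge fdb show dev flannel.1    # VTEP MACs of the other nodes
root@fraser:~# cat /run/flannel/subnet.env      # subnet handed down to the CNI plugin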
But the pods clearly cannot reach the overlay network:
# kubectl --all-namespaces=true get po -o wide | grep fraser
kube-system   test-fraser   1/1   Running   0   20m   10.244.3.7   fraser
# kubectl -n kube-system exec -ti test-fraser ash
/ # ping 10.244.0.1
PING 10.244.0.1 (10.244.0.1): 56 data bytes
^C
--- 10.244.0.1 ping statistics ---
12 packets transmitted, 0 packets received, 100% packet loss
The test-fraser pod is just an Alpine static pod I use for troubleshooting.
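For reference, here is a minimal sketch of such a static pod manifest; the path and image tag are assumptions (the kubelet appends the node name to the mirror pod, so metadata.name "test" shows up as "test-fraser"):

root@fraser:~# cat <<'EOF' > /etc/kubernetes/manifests/test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: kube-system
spec:
  containers:
  - name: test
    image: alpine:3.6
    # keep the container alive so we can kubectl exec into it
    command: ["tail", "-f", "/dev/null"]
EOF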
The very same pod, deployed the same way on another node (buru), works fine.
Since the overlay network works on the host itself, I would say flannel is doing its job here. But for some reason the network inside the pods does not work.
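To narrow down where the packets die, one can trace the pod's traffic hop by hop; a sketch, assuming the default cni0 bridge name created by the flannel CNI config:

root@fraser:~# ip addr show cni0            # the bridge should hold 10.244.3.1/24
root@fraser:~# tcpdump -ni cni0 icmp        # do the pod's pings reach the bridge?
root@fraser:~# tcpdump -ni flannel.1 icmp   # ...and are they forwarded into the overlay?

If the pings show up on cni0 but never on flannel.1, the packets are being dropped while being forwarded between the two interfaces on the host.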
Other notes
Can somebody help me troubleshoot this issue?
EDIT
kubectl describe no fraser
Name: fraser
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=fraser
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"52:4a:da:84:8a:7b"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=80.211.157.110
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Thu, 07 Dec 2017 12:51:22 +0100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 12:51:22 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 14:47:57 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 14:47:57 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Thu, 07 Dec 2017 15:27:27 +0100 Thu, 07 Dec 2017 14:48:07 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 80.211.157.110
Hostname: fraser
Capacity:
cpu: 4
memory: 8171244Ki
pods: 110
Allocatable:
cpu: 4
memory: 8068844Ki
pods: 110
System Info:
Machine ID: cb102c57fd539a2fb8ffab52578f27bd
System UUID: 423E50F4-C4EF-23F0-F300-B568F4B4B8B1
Boot ID: ca80d640-380a-4851-bab0-ee1fffd20bb2
Kernel Version: 4.4.0-92-generic
OS Image: Ubuntu 16.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.8.4
Kube-Proxy Version: v1.8.4
PodCIDR: 10.244.3.0/24
ExternalID: fraser
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system filebeat-mghqx 100m (2%) 0 (0%) 100Mi (1%) 200Mi (2%)
kube-system kube-flannel-ds-gvw4s 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-62vts 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system test-fraser 0 (0%) 0 (0%) 0 (0%) 0 (0%)
prometheus prometheus-prometheus-node-exporter-mwq67 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
100m (2%) 0 (0%) 100Mi (1%) 200Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 48m kubelet, fraser Starting kubelet.
Normal NodeAllocatableEnforced 48m kubelet, fraser Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 48m kubelet, fraser Node fraser status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m kubelet, fraser Node fraser status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48m kubelet, fraser Node fraser status is now: NodeHasNoDiskPressure
Normal NodeNotReady 48m kubelet, fraser Node fraser status is now: NodeNotReady
Normal NodeNotSchedulable 48m kubelet, fraser Node fraser status is now: NodeNotSchedulable
Normal NodeReady 48m kubelet, fraser Node fraser status is now: NodeReady
Normal NodeNotSchedulable 48m kubelet, fraser Node fraser status is now: NodeNotSchedulable
Normal NodeAllocatableEnforced 48m kubelet, fraser Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 48m kubelet, fraser Node fraser status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m kubelet, fraser Node fraser status is now: NodeHasSufficientMemory
Normal Starting 48m kubelet, fraser Starting kubelet.
Normal NodeNotReady 48m kubelet, fraser Node fraser status is now: NodeNotReady
Normal NodeHasNoDiskPressure 48m kubelet, fraser Node fraser status is now: NodeHasNoDiskPressure
Normal NodeReady 48m kubelet, fraser Node fraser status is now: NodeReady
Normal Starting 39m kubelet, fraser Starting kubelet.
Normal NodeAllocatableEnforced 39m kubelet, fraser Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 39m kubelet, fraser Node fraser status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 39m (x2 over 39m) kubelet, fraser Node fraser status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x2 over 39m) kubelet, fraser Node fraser status is now: NodeHasNoDiskPressure
Normal NodeNotReady 39m kubelet, fraser Node fraser status is now: NodeNotReady
Normal NodeNotSchedulable 39m kubelet, fraser Node fraser status is now: NodeNotSchedulable
Normal NodeReady 39m kubelet, fraser Node fraser status is now: NodeReady
Normal Starting 39m kube-proxy, fraser Starting kube-proxy.
Answer 0 (score: 1)
This question was answered in the comments section.
To debug a k8s node, we need to make sure the following components are running properly: kubelet, Docker, kube-proxy, and iptables.
We can get a comprehensive picture with the following commands:
kubectl get nodes
kubectl describe nodes NODE-NAME
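On the node itself, the same components can be checked directly; a sketch, assuming a systemd-based setup as on Ubuntu 16.04:

root@fraser:~# systemctl status kubelet docker
root@fraser:~# journalctl -u kubelet --since "1 hour ago" | tail -n 50
root@fraser:~# docker ps | grep flannel    # is the kube-flannel container actually running?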
From the combined output of these commands we can check whether kube-proxy, kubelet, Docker, and the CNI plugin (flannel) are running properly.
If it is a network issue, we check iptables:
iptables -L -v
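One thing worth checking specifically in this case: starting with Docker 1.13 (the node above runs docker://1.13.1), Docker sets the default policy of the FORWARD chain to DROP, which silently discards traffic forwarded between the pod bridge and flannel.1. A quick check, and the commonly suggested (non-persistent) workaround:

root@fraser:~# iptables -L FORWARD -v -n | head -n 3   # look for "policy DROP"
root@fraser:~# iptables -P FORWARD ACCEPT              # temporary fix; lost on reboot or Docker restart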