Kubernetes: cannot ping a Pod IP from other nodes

Date: 2020-01-19 13:17:36

Tags: kubernetes nodes project-calico bare-metal-server

A pod's IP responds to ping only from the node the pod is running on.

When I try to ping a pod IP from any other node/worker, the ping fails.

master2@master2:~$ kubectl get pods --namespace=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6ff8cbb789-lxwqq   1/1     Running   0          6d21h   192.168.180.2     master2   <none>           <none>
calico-node-4mnfk                          1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
calico-node-c4rjb                          1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
calico-node-dgqwx                          1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
calico-node-fhtvz                          1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
calico-node-mhd7w                          1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
coredns-8b5d5b85f-fjq72                    1/1     Running   0          45m     192.168.135.11    node3     <none>           <none>
coredns-8b5d5b85f-hgg94                    1/1     Running   0          45m     192.168.166.136   node1     <none>           <none>
etcd-master1                               1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
etcd-master2                               1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-apiserver-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-apiserver-master2                     1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-controller-manager-master1            1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-controller-manager-master2            1/1     Running   2          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-66nxz                           1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-fnrrz                           1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-proxy-lq5xp                           1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
kube-proxy-vxhwm                           1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
kube-proxy-zgwzq                           1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
kube-scheduler-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-scheduler-master2                     1/1     Running   1          6d21h   10.10.41.159      master2   <none>           <none>

When I ping the IP 192.168.104.8 (a pod on node2) from node3, it fails with 100% packet loss:

master1@master1:~/cluster$ sudo kubectl get pods  -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
contentms-cb475f569-t54c2    1/1     Running   0          6d21h   192.168.104.1    node2   <none>           <none>
nav-6f67d5bd79-9khmm         1/1     Running   0          6d8h    192.168.104.8    node2   <none>           <none>
react                        1/1     Running   0          7m24s   192.168.135.12   node3   <none>           <none>
statistics-5668cd7dd-thqdf   1/1     Running   0          6d15h   192.168.104.4    node2   <none>           <none>
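A quick way to narrow down this kind of failure is to ask the kernel on the pinging node which route and source address it would use for the pod IP. This is a hedged diagnostic sketch run on the cluster nodes (the pod IP and the 10.10.41.x addresses come from the output above; the expectation that Calico routes point at the other node's host IP is an assumption about this cluster's setup):

```shell
# On node3: which interface and source IP would be used to reach the
# pod on node2? If the source is an address Calico did not register
# (e.g. an eth1 IP), cross-node pod traffic will be dropped.
ip route get 192.168.104.8

# List the pod-network routes Calico programmed. Each per-node pod
# block should have a next hop on the expected host network
# (10.10.41.x in this cluster).
ip route | grep 192.168
```

If the next hop or source address differs between nodes, the problem is in host routing rather than in Kubernetes itself.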

1 Answer:

Answer 0 (score: 1)

This was a routing problem.

Each of my nodes had two IPs, one on eth0 and one on eth1.

The routes were using the eth1 IPs instead of the eth0 IPs.

I disabled the eth1 IPs, and everything started working.
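Disabling eth1 works, but Calico can also be told explicitly which interface to use, which avoids taking the second interface down. This is a sketch of that alternative, assuming the cluster runs the standard calico-node DaemonSet in kube-system and that eth0 carries the 10.10.41.x addresses shown above; IP_AUTODETECTION_METHOD is Calico's documented setting for choosing the node address, while the interface name here is an assumption:

```shell
# Pin Calico's node-IP autodetection to eth0 (assumption: eth0 holds
# the 10.10.41.x host addresses) so it stops picking the eth1 IPs.
kubectl set env daemonset/calico-node -n kube-system \
  IP_AUTODETECTION_METHOD=interface=eth0

# After the calico-node pods restart, confirm each one registered the
# expected eth0 address.
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
```

With this in place the BGP routes between nodes are built from the eth0 addresses, and cross-node pod pings should succeed without disabling eth1.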
