Kubernetes: NodePort service is only reachable on the node where the Pod is deployed

Posted: 2020-09-27 14:28:00

Tags: docker kubernetes iptables

I have set up a Kubernetes cluster on three CentOS 8 VMs and deployed a pod running nginx.

IP addresses of the VMs:

kubemaster 192.168.56.20
kubenode1 192.168.56.21
kubenode2 192.168.56.22

On each VM, the interfaces and routes are defined as follows:

ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:d2:1b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
       valid_lft 75806sec preferred_lft 75806sec
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:df:77:05 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.22/24 brd 192.168.56.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ff:47:9a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ff:47:9a brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:19:52:19:b1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 22:b8:b4:5a:5a:26 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.0/32 brd 10.244.2.0 scope global flannel.1
       valid_lft forever preferred_lft forever

ip route:
default via 10.0.2.2 dev enp0s3 proto dhcp metric 100
default via 192.168.56.1 dev enp0s8 proto static metric 101
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.22 metric 101
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

On each VM there are two network adapters: a NAT adapter (enp0s3) for Internet access, and a host-only network (enp0s8) so the three VMs can communicate with each other (this works, I tested it with ping).

On each VM I applied the following firewall rules:

firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
firewall-cmd --permanent --add-port=8285/udp # Flannel
firewall-cmd --permanent --add-port=8472/udp # Flannel
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload

Finally, I deployed the cluster and nginx with the following commands:

sudo kubeadm init --apiserver-advertise-address=192.168.56.20 --pod-network-cidr=10.244.0.0/16 (for Flannel CNI)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
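
The worker nodes were joined with the kubeadm join command printed by kubeadm init; as a sketch, it looks roughly like this (the token and hash below are placeholders, not the real values):

sudo kubeadm join 192.168.56.20:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>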

Some more general information about my cluster:

kubectl get nodes -o wide

NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                 CONTAINER-RUNTIME
kubemaster   Ready    master   3h8m   v1.19.2   192.168.56.20   <none>        CentOS Linux 8 (Core)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13
kubenode1    Ready    <none>   3h6m   v1.19.2   192.168.56.21   <none>        CentOS Linux 8 (Core)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13
kubenode2    Ready    <none>   165m   v1.19.2   192.168.56.22   <none>        CentOS Linux 8 (Core)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13

kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
default       nginx-6799fc88d8-mrvsg               1/1     Running   0          3h     10.244.1.3      kubenode1    <none>           <none>
kube-system   coredns-f9fd979d6-6qxk9              1/1     Running   0          3h9m   10.244.1.2      kubenode1    <none>           <none>
kube-system   coredns-f9fd979d6-bj2fd              1/1     Running   0          3h9m   10.244.0.2      kubemaster   <none>           <none>
kube-system   etcd-kubemaster                      1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-apiserver-kubemaster            1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-controller-manager-kubemaster   1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-flannel-ds-fdv4p                1/1     Running   0          166m   192.168.56.22   kubenode2    <none>           <none>
kube-system   kube-flannel-ds-vvhsz                1/1     Running   0          3h6m   192.168.56.21   kubenode1    <none>           <none>
kube-system   kube-flannel-ds-vznl5                1/1     Running   0          3h6m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-proxy-45tmz                     1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>
kube-system   kube-proxy-nb7jt                     1/1     Running   0          3h7m   192.168.56.21   kubenode1    <none>           <none>
kube-system   kube-proxy-tl9n5                     1/1     Running   0          166m   192.168.56.22   kubenode2    <none>           <none>
kube-system   kube-scheduler-kubemaster            1/1     Running   0          3h9m   192.168.56.20   kubemaster   <none>           <none>

kubectl get services -o wide

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h10m   <none>
nginx        NodePort    10.102.152.25   <none>        80:30086/TCP   179m    app=nginx

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

iptables version:

iptables v1.8.4 (nf_tables)

Results and problem:

  • If I curl 192.168.56.21:30086 from any VM -> OK, I get the nginx page.
  • If I try another IP (e.g. curl 192.168.56.22:30086), it fails... (curl: (7) Failed to connect to 192.168.56.22 port 30086: Connection timed out)

What I tried in order to debug:

sudo netstat -antup | grep kube-proxy
tcp        0      0 0.0.0.0:30086           0.0.0.0:*               LISTEN      4116/kube-proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      4116/kube-proxy
tcp        0      0 192.168.56.20:49812     192.168.56.20:6443      ESTABLISHED 4116/kube-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      4116/kube-proxy

So on each VM, kube-proxy appears to be listening on port 30086, which looks fine.
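
Since kube-proxy (in the default iptables mode) also programs NAT rules for the NodePort, the presence of those rules can be checked on each node as well; a sketch, assuming the standard kube-proxy chain names:

sudo iptables -t nat -nL KUBE-NODEPORTS | grep 30086         # NodePort rule for the nginx service
sudo iptables -t nat -nL KUBE-SERVICES | grep 10.102.152.25  # ClusterIP rule for the nginx service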

I tried applying this rule on each node (found in another ticket), without success:

iptables -A FORWARD -j ACCEPT

Do you have any idea why I cannot reach the service through the master node and node 2?

First update:

  • CentOS 8 does not seem to be supported by kubeadm. I switched to CentOS 7, but the problem remained;
  • The Flannel pods were using the wrong interface (enp0s3) instead of enp0s8. I modified the kube-flannel.yml file and added the argument (--iface=enp0s8); now my pods use the correct interface (see the sketch after the log output below).
kubectl logs kube-flannel-ds-nn6v4 -n kube-system:
I0929 06:19:36.842149       1 main.go:531] Using interface with name enp0s8 and address 192.168.56.22
I0929 06:19:36.842243       1 main.go:548] Defaulting external address to interface address (192.168.56.22)
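
As a sketch, the --iface change can be applied roughly like this, assuming the stock kube-flannel.yml layout (the exact fields and image tag may differ between Flannel versions):

curl -sLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# in the kube-flannel DaemonSet, extend the flanneld container args so they read:
#   args:
#   - --ip-masq
#   - --kube-subnet-mgr
#   - --iface=enp0s8
kubectl apply -f kube-flannel.yml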

Even with these two things fixed, I still had the same problem...

Second update:

The final solution was to flush iptables on each VM with the following commands:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

Now it works correctly :)

2 answers:

Answer 0 (score: 0)

This is because you are running k8s on CentOS 8.

According to the Kubernetes documentation, the list of supported host operating systems is:

  • Ubuntu 16.04+
  • Debian 9+
  • CentOS 7
  • Red Hat Enterprise Linux (RHEL) 7
  • Fedora 25+
  • HypriotOS v1.0.1+
  • Flatcar Container Linux (tested with 2512.3.0)

This article mentions that there are networking issues on RHEL 8:

(Update 2020/02/11: After the installation I kept facing pod networking issues, such as a deployed pod being unable to reach the external network, or pods deployed on different workers being unable to ping each other, even though kubectl get nodes showed all nodes (master, worker1 and worker2) as Ready. After checking the official Kubernetes.io site, I observed that the nftables backend is not compatible with the current kubeadm packages. Please refer to the link below under "Ensure iptables tooling does not use the nftables backend".)

The simplest solution here is to reinstall the nodes on a supported operating system.
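
The backend in use can be read from the iptables version string (as shown above, it reports nf_tables here). On distributions that still ship the legacy binaries, the Kubernetes install documentation suggests switching the tooling back to the legacy backend, roughly as follows (whether these alternatives exist depends on the distribution):

iptables -V                                                     # "(nf_tables)" means the nftables backend is in use
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy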

Answer 1 (score: 0)

After switching to CentOS 7 and correcting the Flannel configuration, I finally found the solution (see the other comments). In fact, I had noticed some problems in the pods running CoreDNS. Below is an example of what was happening inside one of these pods:

kubectl logs coredns-f9fd979d6-8gtlp -n kube-system:
E0929 07:09:40.200413       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
[INFO] plugin/ready: Still waiting on: "kubernetes"
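
A quick way to check whether service networking and cluster DNS work at all from inside a pod is the test from the Kubernetes DNS debugging guide, for example:

kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default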

The final solution was to flush iptables on each VM with the following commands:

systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
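
Once kubelet and docker are back up, kube-proxy and Flannel recreate their chains and the NodePort should answer on every node IP; a quick sketch of how this can be verified:

for ip in 192.168.56.20 192.168.56.21 192.168.56.22; do
    curl -s -o /dev/null -w "$ip: %{http_code}\n" http://$ip:30086   # expect 200 from each node
done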

After that I could reach the deployed service from every VM :)

I am still not sure I fully understand what the problem was. Here is some information:

I will keep investigating and will post more information here.
