Kubernetes: cannot access services on other nodes

Date: 2017-09-18 16:41:08

Tags: networking kubernetes virtualbox

I'm playing with Kubernetes on CentOS 7, with 1 master and 2 minions in 3 VirtualBox VMs. Unfortunately, while the installation manuals say that every service will be accessible from every node and every pod will see all other pods, I don't see this happening. I can access a service only from the node on which its pod is running. Please help me figure out what I'm missing; I'm quite new to Kubernetes.

Each VM has 2 adapters: NAT and Host-only. The second one has IPs in the 10.0.13.101-254 range.

  • Master-1:10.0.13.104
  • Minion-1:10.0.13.105
  • Minion-2:10.0.13.106

Getting all pods from the master:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
default       busybox                            1/1       Running   4          37m
default       nginx-demo-2867147694-f6f9m        1/1       Running   1          52m
default       nginx-demo2-2631277934-v4ggr       1/1       Running   0          5s
kube-system   etcd-master-1                      1/1       Running   1          1h
kube-system   kube-apiserver-master-1            1/1       Running   1          1h
kube-system   kube-controller-manager-master-1   1/1       Running   1          1h
kube-system   kube-dns-2425271678-kgb7k          3/3       Running   3          1h
kube-system   kube-flannel-ds-pwsq4              2/2       Running   4          56m
kube-system   kube-flannel-ds-qswt7              2/2       Running   4          1h
kube-system   kube-flannel-ds-z0g8c              2/2       Running   12         56m
kube-system   kube-proxy-0lfw0                   1/1       Running   2          56m
kube-system   kube-proxy-6263z                   1/1       Running   2          56m
kube-system   kube-proxy-b8hc3                   1/1       Running   1          1h
kube-system   kube-scheduler-master-1            1/1       Running   1          1h

Getting all services:

$ kubectl get services
NAME          CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes    10.96.0.1       <none>        443/TCP   1h
nginx-demo    10.104.34.229   <none>        80/TCP    51m
nginx-demo2   10.102.145.89   <none>        80/TCP    3s

Getting the Nginx pods' IP info:

$ kubectl get pod nginx-demo-2867147694-f6f9m -o json | grep IP
        "hostIP": "10.0.13.105",
        "podIP": "10.244.1.58",

$ kubectl get pod nginx-demo2-2631277934-v4ggr -o json | grep IP
        "hostIP": "10.0.13.106",
        "podIP": "10.244.2.14",

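As an aside, the same fields can be queried with jsonpath instead of grep:

$ kubectl get pod nginx-demo-2867147694-f6f9m -o jsonpath='{.status.hostIP} {.status.podIP}'
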
As you can see, the first Nginx instance runs on the first minion and the second instance on the second minion.

The problem is that I can access nginx-demo only from node 10.0.13.105 (by both pod IP and service IP), using curl:

curl 10.244.1.58:80
curl 10.104.34.229:80

and nginx-demo2 only from 10.0.13.106, and vice versa; neither works from the master node. Busybox runs on node 10.0.13.105, so it can reach nginx-demo but not nginx-demo2.

How do I make services accessible from any node, independently of where the pod runs? Is flannel misconfigured?

Routing table on the master:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
10.0.13.0       0.0.0.0         255.255.255.0   U     100    0        0 enp0s8
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

Routing table on minion-1:

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    100    0        0 enp0s3
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 enp0s3
10.0.13.0       0.0.0.0         255.255.255.0   U     100    0        0 enp0s8
10.244.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

Maybe the default gateway (on the NAT adapter) is the problem? Another issue: service DNS resolution from the Busybox container doesn't work either:

$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
If you don't see a command prompt, try pressing enter.
/ # 
/ # nslookup nginx-demo
Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'nginx-demo'
/ # 
/ # nslookup nginx-demo.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'nginx-demo.default.svc.cluster.local'
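
The pod's resolver configuration can be double-checked as well; assuming the busybox pod is still around, something like this should show nameserver 10.96.0.10 and the cluster.local search path:

$ kubectl exec busybox -- cat /etc/resolv.conf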

I have lowered the guest OS security settings:

setenforce 0
systemctl stop firewalld
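
As an aside, one more host-level setting that commonly affects Kubernetes on CentOS (a guess, not something confirmed in this setup) is that bridged pod traffic must be visible to iptables:

# sysctl -w net.bridge.bridge-nf-call-iptables=1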

Feel free to ask for more information if needed.

Additional info

kube-dns logs:

$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k kubedns
I0919 07:48:45.000397       1 dns.go:48] version: 1.14.3-4-gee838f6
I0919 07:48:45.114060       1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0919 07:48:45.114129       1 server.go:113] FLAG: --alsologtostderr="false"
I0919 07:48:45.114144       1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0919 07:48:45.114155       1 server.go:113] FLAG: --config-map=""
I0919 07:48:45.114162       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0919 07:48:45.114169       1 server.go:113] FLAG: --config-period="10s"
I0919 07:48:45.114179       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0919 07:48:45.114186       1 server.go:113] FLAG: --dns-port="10053"
I0919 07:48:45.114196       1 server.go:113] FLAG: --domain="cluster.local."
I0919 07:48:45.114206       1 server.go:113] FLAG: --federations=""
I0919 07:48:45.114215       1 server.go:113] FLAG: --healthz-port="8081"
I0919 07:48:45.114223       1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0919 07:48:45.114230       1 server.go:113] FLAG: --kube-master-url=""
I0919 07:48:45.114238       1 server.go:113] FLAG: --kubecfg-file=""
I0919 07:48:45.114245       1 server.go:113] FLAG: --log-backtrace-at=":0"
I0919 07:48:45.114256       1 server.go:113] FLAG: --log-dir=""
I0919 07:48:45.114264       1 server.go:113] FLAG: --log-flush-frequency="5s"
I0919 07:48:45.114271       1 server.go:113] FLAG: --logtostderr="true"
I0919 07:48:45.114278       1 server.go:113] FLAG: --nameservers=""
I0919 07:48:45.114285       1 server.go:113] FLAG: --stderrthreshold="2"
I0919 07:48:45.114292       1 server.go:113] FLAG: --v="2"
I0919 07:48:45.114299       1 server.go:113] FLAG: --version="false"
I0919 07:48:45.114310       1 server.go:113] FLAG: --vmodule=""
I0919 07:48:45.116894       1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0919 07:48:45.117296       1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0919 07:48:45.117329       1 dns.go:147] Starting endpointsController
I0919 07:48:45.117336       1 dns.go:150] Starting serviceController
I0919 07:48:45.117702       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.117716       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.620177       1 dns.go:171] Initialized services and endpoints from apiserver
I0919 07:48:45.620217       1 server.go:129] Setting up Healthz Handler (/readiness)
I0919 07:48:45.620229       1 server.go:134] Setting up cache handler (/cache)
I0919 07:48:45.620238       1 server.go:120] Status HTTP port 8081



$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k dnsmasq
I0919 07:48:48.466499       1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0919 07:48:48.478353       1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0919 07:48:48.697877       1 nanny.go:111] 
W0919 07:48:48.697903       1 nanny.go:112] Got EOF from stdout
I0919 07:48:48.697925       1 nanny.go:108] dnsmasq[10]: started, version 2.76 cachesize 1000
I0919 07:48:48.697937       1 nanny.go:108] dnsmasq[10]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0919 07:48:48.697943       1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa 
I0919 07:48:48.697947       1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa 
I0919 07:48:48.697950       1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local 
I0919 07:48:48.697955       1 nanny.go:108] dnsmasq[10]: reading /etc/resolv.conf
I0919 07:48:48.697959       1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa 
I0919 07:48:48.697962       1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa 
I0919 07:48:48.697965       1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local 
I0919 07:48:48.697968       1 nanny.go:108] dnsmasq[10]: using nameserver 85.254.193.137#53
I0919 07:48:48.697971       1 nanny.go:108] dnsmasq[10]: using nameserver 92.240.64.23#53
I0919 07:48:48.697975       1 nanny.go:108] dnsmasq[10]: read /etc/hosts - 7 addresses



$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k sidecar
ERROR: logging before flag.Parse: I0919 07:48:49.990468       1 main.go:48] Version v1.14.3-4-gee838f6
ERROR: logging before flag.Parse: I0919 07:48:49.994335       1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I0919 07:48:49.994369       1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I0919 07:48:49.994435       1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}

kube-flannel logs from one pod; the others look similar:

$ kubectl -n kube-system logs kube-flannel-ds-674mx kube-flannel
I0919 08:07:41.577954       1 main.go:446] Determining IP address of default interface
I0919 08:07:41.579363       1 main.go:459] Using interface with name enp0s3 and address 10.0.2.15
I0919 08:07:41.579408       1 main.go:476] Defaulting external address to interface address (10.0.2.15)
I0919 08:07:41.600985       1 kube.go:130] Waiting 10m0s for node controller to sync
I0919 08:07:41.601032       1 kube.go:283] Starting kube subnet manager
I0919 08:07:42.601553       1 kube.go:137] Node controller sync successful
I0919 08:07:42.601959       1 main.go:226] Created subnet manager: Kubernetes Subnet Manager - minion-1
I0919 08:07:42.601966       1 main.go:229] Installing signal handlers
I0919 08:07:42.602036       1 main.go:330] Found network config - Backend type: vxlan
I0919 08:07:42.606970       1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I0919 08:07:42.608380       1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0919 08:07:42.609579       1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
I0919 08:07:42.611257       1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I0919 08:07:42.612595       1 main.go:279] Wrote subnet file to /run/flannel/subnet.env
I0919 08:07:42.612606       1 main.go:284] Finished starting backend.
I0919 08:07:42.612638       1 vxlan_network.go:56] Watching for L3 misses
I0919 08:07:42.612651       1 vxlan_network.go:64] Watching for new subnet leases


$ kubectl -n kube-system logs kube-flannel-ds-674mx install-cni
+ cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf
+ true
+ sleep 3600
+ true
+ sleep 3600

I added a few more services and exposed them with type NodePort; this is what I get when scanning the ports from the host machine:

# nmap 10.0.13.104 -p1-50000

Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.104
Host is up (0.0014s latency).
Not shown: 49992 closed ports
PORT      STATE    SERVICE
22/tcp    open     ssh
6443/tcp  open     sun-sr-https
10250/tcp open     unknown
10255/tcp open     unknown
10256/tcp open     unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:90:26:1C (Oracle VirtualBox virtual NIC)

Nmap done: 1 IP address (1 host up) scanned in 1.96 seconds



# nmap 10.0.13.105 -p1-50000

Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.105
Host is up (0.00040s latency).
Not shown: 49993 closed ports
PORT      STATE    SERVICE
22/tcp    open     ssh
10250/tcp open     unknown
10255/tcp open     unknown
10256/tcp open     unknown
30029/tcp open     unknown
31844/tcp open     unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:F8:E3:71 (Oracle VirtualBox virtual NIC)

Nmap done: 1 IP address (1 host up) scanned in 1.87 seconds



# nmap 10.0.13.106 -p1-50000

Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:21 EEST
Nmap scan report for 10.0.13.106
Host is up (0.00059s latency).
Not shown: 49993 closed ports
PORT      STATE    SERVICE
22/tcp    open     ssh
10250/tcp open     unknown
10255/tcp open     unknown
10256/tcp open     unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp open     unknown
MAC Address: 08:00:27:D9:33:32 (Oracle VirtualBox virtual NIC)

Nmap done: 1 IP address (1 host up) scanned in 1.92 seconds

If we take the newest service, on port 32619: it exists on every node, but it is open only on the relevant node and shows as filtered on the others.
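
Assuming kube-proxy runs in its default iptables mode, every node should carry DNAT rules for each NodePort regardless of where the backing pod lives; listing the dedicated chain on a node where the port shows as filtered would confirm whether the rules exist and the problem is packet forwarding rather than kube-proxy:

# iptables -t nat -L KUBE-NODEPORTS -n | grep 32619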

tcpdump info on Minion-1

Connecting from the host machine to Minion-1 with curl 10.0.13.105:30572:

# tcpdump -ni enp0s8 tcp or icmp and not port 22 and not host 10.0.13.104
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes

13:11:39.043874 IP 10.0.13.1.41132 > 10.0.13.105.30572: Flags [S], seq 657506957, win 29200, options [mss 1460,sackOK,TS val 504213496 ecr 0,nop,wscale 7], length 0
13:11:39.045218 IP 10.0.13.105 > 10.0.13.1: ICMP time exceeded in-transit, length 68

On the flannel.1 interface:

# tcpdump -ni flannel.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes


13:11:49.499148 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499239 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499247 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0

So there are only ICMP time exceeded in-transit errors and repeated SYN packets, meaning there is no connectivity between the pod networks, whereas curl 10.0.13.106:30572 (the node where the pod actually runs) works.
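
For what it's worth, flannel's vxlan backend wraps this pod-to-pod traffic in UDP (port 8472 by default), so capturing on the physical adapters should reveal whether the encapsulated packets ever leave the node, for example:

# tcpdump -ni enp0s3 udp port 8472    # repeat on enp0s8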

Interfaces on Minion-1:

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:35:72:ab brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 77769sec preferred_lft 77769sec
    inet6 fe80::772d:2128:6aaa:2355/64 scope link 
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:f8:e3:71 brd ff:ff:ff:ff:ff:ff
    inet 10.0.13.105/24 brd 10.0.13.255 scope global dynamic enp0s8
       valid_lft 1089sec preferred_lft 1089sec
    inet6 fe80::1fe0:dba7:110d:d673/64 scope link 
       valid_lft forever preferred_lft forever
    inet6 fe80::f04f:5413:2d27:ab55/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:59:53:d7:fd brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.2/24 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether fa:d3:3e:3e:77:19 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::f8d3:3eff:fe3e:7719/64 scope link 
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
    link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::c4f9:96ff:fed8:8cb6/64 scope link 
       valid_lft forever preferred_lft forever
13: veth5e2971fe@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP 
    link/ether 1e:70:5d:6c:55:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1c70:5dff:fe6c:5533/64 scope link 
       valid_lft forever preferred_lft forever
14: veth8f004069@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP 
    link/ether ca:39:96:59:e6:63 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::c839:96ff:fe59:e663/64 scope link 
       valid_lft forever preferred_lft forever
15: veth5742dc0d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP 
    link/ether c2:48:fa:41:5d:67 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::c048:faff:fe41:5d67/64 scope link 
       valid_lft forever preferred_lft forever

2 Answers:

Answer 0 (score: 2):

It can be made to work by disabling the firewall, or by running the command below.
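
Based on the referenced flannel issue (Docker >= 1.13 changed the default policy of the iptables FORWARD chain to DROP, which silently drops traffic between flannel's pod subnets), the command is presumably the workaround given there, run on every node:

# iptables -P FORWARD ACCEPT    # not persistent; reapply after reboot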

I found this issue while searching; it looks like it's related to docker >= 1.13 and flannel.

Reference: https://github.com/coreos/flannel/issues/799

Answer 1 (score: 0):

I'm not good at networking. We were in the same situation as you: we set up four VMs, one as the master and the others as worker nodes. I tried to nslookup some service from a container in a pod, but it couldn't resolve anything and got stuck waiting for a response from the Kubernetes DNS. I realized the DNS configuration or a networking component was wrong, so I looked at the logs of Canal (the CNI we use to build the Kubernetes network) and found that it was initializing on the default interface, which turned out to be the NAT adapter rather than the host-only one. We corrected that, and it works now.

https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/canal.yaml

# The interface used by Canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
canal_iface: ""

I'm not sure which CNI you're using, but I hope this helps you check yours.
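
For flannel specifically, the analogous setting is flanneld's --iface flag. Judging by the flannel logs above (flannel defaulted to enp0s3, the NAT adapter at 10.0.2.15), a plausible fix for this VirtualBox setup, sketched here as an assumption rather than a verified change, is to pin flannel to the host-only adapter in kube-flannel.yml:

containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.8.0-amd64   # image tag here is illustrative
  command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=enp0s8" ]

After editing, the flannel DaemonSet pods have to be recreated so flanneld picks up the new arguments.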