VPN access to cluster services/Pods: cannot ping anything except the OpenVPN server

Asked: 2019-12-01 19:17:31

Tags: kubernetes vpn rancher

I am trying to set up a VPN to access the cluster's workloads without exposing a public endpoint.

The service is deployed with the OpenVPN Helm chart, on a Kubernetes cluster deployed with Rancher v2.3.2.

  • Replaced the L4 load balancer with plain service discovery
  • Edited the ConfigMap to allow TCP through the load balancer and reach the VPN
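The ConfigMap edit mentioned above usually refers to the ingress-nginx `tcp-services` ConfigMap. A minimal sketch, assuming the controller lives in the `ingress-nginx` namespace and the chart created a `default/openvpn` service listening on 443 (adjust all names to your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # assumption: default ingress-nginx install
data:
  # external port -> <namespace>/<service>:<port>
  "443": "default/openvpn:443"
```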

What is not working?

  • The OpenVPN client connects successfully
  • Cannot ping the public server
  • Cannot ping Kubernetes services or Pods
  • Can ping the OpenVPN cluster IP `10.42.2.11`

My files

vars.yml

---
replicaCount: 1
nodeSelector:
  openvpn: "true"
openvpn:
  OVPN_K8S_POD_NETWORK: "10.42.0.0"
  OVPN_K8S_POD_SUBNET: "255.255.0.0"
  OVPN_K8S_SVC_NETWORK: "10.43.0.0"
  OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
  storageClass: "local-path"
service:
  externalPort: 444
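As a quick sanity check of the addressing (a standalone sketch, not part of the chart), the network/netmask pairs above combine into two /16 CIDR ranges, and the one address the client can reach, `10.42.2.11`, falls inside the pod range, so the values themselves are consistent:

```python
import ipaddress

# CIDRs formed from the vars.yml values above
pod_net = ipaddress.ip_network("10.42.0.0/255.255.0.0")  # OVPN_K8S_POD_NETWORK + SUBNET
svc_net = ipaddress.ip_network("10.43.0.0/255.255.0.0")  # OVPN_K8S_SVC_NETWORK + SUBNET

# The OpenVPN pod's own cluster IP sits inside the pod range
print(ipaddress.ip_address("10.42.2.11") in pod_net)  # True
print(pod_net.prefixlen, svc_net.prefixlen)           # 16 16
```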

The connection works, but I cannot reach any IP in the cluster. The only IP I can reach is the OpenVPN cluster IP.

openvpn.conf

server 10.240.0.0 255.255.0.0
verb 3

key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem

key-direction 0
keepalive 10 60
persist-key
persist-tun

proto tcp
port  443
dev tun0
status /tmp/openvpn-status.log

user nobody
group nogroup

push "route 10.42.2.11 255.255.255.255"
push "route 10.42.0.0 255.255.0.0"
push "route 10.43.0.0 255.255.0.0"

push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"

client.ovpn

client
nobind
dev tun

remote xxxx xxx tcp
CERTS CERTS

dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net

I don't really know how to debug this.

I am on a Windows client.

Output of the route command on the client:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         livebox.home    255.255.255.255 U     0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     256    0        0 eth0
192.168.1.17    0.0.0.0         255.255.255.255 U     256    0        0 eth0
192.168.1.255   0.0.0.0         255.255.255.255 U     256    0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 eth0
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 eth0
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 eth1
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 eth1
0.0.0.0         10.240.0.5      255.255.255.255 U     0      0        0 eth1
10.42.2.11      10.240.0.5      255.255.255.255 U     0      0        0 eth1
10.42.0.0       10.240.0.5      255.255.0.0     U     0      0        0 eth1
10.43.0.0       10.240.0.5      255.255.0.0     U     0      0        0 eth1
10.240.0.1      10.240.0.5      255.255.255.255 U     0      0        0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U     256    0        0 lo  
127.0.0.1       0.0.0.0         255.255.255.255 U     256    0        0 lo  
127.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 lo  
224.0.0.0       0.0.0.0         240.0.0.0       U     256    0        0 lo  
255.255.255.255 0.0.0.0         255.255.255.255 U     256    0        0 lo  
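Reading that table, the routes for `10.42.2.11/32`, `10.42.0.0/16`, and `10.43.0.0/16` via the tunnel gateway `10.240.0.5` are all present, so the server's pushed routes did arrive and client-side routing looks correct; that points the suspicion at the server side (IP forwarding / NAT inside the pod). A small standalone check of that reading, using the relevant rows copied from the table:

```python
import ipaddress

# Tunnel rows from the client route table above: (destination, gateway, netmask)
routes = [
    ("10.42.2.11", "10.240.0.5", "255.255.255.255"),
    ("10.42.0.0",  "10.240.0.5", "255.255.0.0"),
    ("10.43.0.0",  "10.240.0.5", "255.255.0.0"),
]

def covered(ip, routes):
    """Return True if some route's destination network contains ip."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(f"{dest}/{mask}")
               for dest, _, mask in routes)

# Arbitrary pod and service IPs are both routed through the tunnel
print(covered("10.42.7.3", routes), covered("10.43.0.10", routes))  # True True
print(covered("8.8.8.8", routes))                                  # False
```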

And finally, ifconfig:

eth0:
        inet 192.168.1.17  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2a01:cb00:90c:5300:603c:f8:703e:a876  prefixlen 64  scopeid 0x0<global>
        inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2  prefixlen 128  scopeid 0x0<global>
        inet6 fe80::603c:f8:703e:a876  prefixlen 64  scopeid 0xfd<compat,link,site,host>
        ether 00:d8:61:31:22:32  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.240.0.6  netmask 255.255.255.252  broadcast 10.240.0.7
        inet6 fe80::b9cf:39cc:f60a:9db2  prefixlen 64  scopeid 0xfd<compat,link,site,host>
        ether 00:ff:42:04:53:4d  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 1500
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0xfe<compat,link,site,host>
        loop  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

3 Answers:

Answer 0 (score: 1)

For anyone looking for a working sample, this goes into the OpenVPN Deployment, alongside the container definitions:

initContainers:
- args:
  - -w
  - net.ipv4.ip_forward=1
  command:
  - sysctl
  image: busybox
  name: openvpn-sidecar
  securityContext:
    privileged: true
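For orientation, that snippet sits under the pod template of the Deployment. A sketch of the surrounding structure (field placement only; names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
      - name: openvpn-sidecar
        image: busybox
        command: ["sysctl"]
        args: ["-w", "net.ipv4.ip_forward=1"]
        securityContext:
          privileged: true   # required: sysctl -w needs a privileged context
      containers:
      - name: openvpn
        # ... chart-managed container spec ...
```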

Answer 1 (score: 0)

Not sure if this is the correct answer, but I got it working by adding a sidecar to the pod that sets net.ipv4.ip_forward=1.

That solved the problem.

Answer 2 (score: 0)

You can set the ipForwardInitContainer option to "true" in values.yaml.
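Assuming the chart release in use exposes that flag, the equivalent addition to the vars.yml shown earlier would be:

```yaml
# assumption: the installed openvpn chart version supports this option
ipForwardInitContainer: true
```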