Do Docker overlay networks support host-to-container communication?

Date: 2018-08-04 20:04:21

Tags: linux docker networking

I have a mixed network of bare-metal hosts running services and Docker hosts running applications that connect to those services. (In my case, a Jupyter notebook running PySpark in a container connects to a Spark master host to distribute work.) This requires that the host network be able to resolve the containers' addresses and route traffic back to them. For example, a container makes an RMI call to a service host, but the host cannot route the reply back, so the call fails.

If I set the services up container-to-container on an overlay network, everything works, but because I cannot route back to the containers I cannot reach basic admin consoles from the host network.

Below is how I set up the overlay network and tested it between two hosts.

Do Docker overlay networks support host-to-container communication?


Setup

[bgercken@docker-manager ~]$ docker network create --driver=overlay --attachable --subnet=192.168.0.208/28 --gateway=192.168.0.209 container-net

[bgercken@docker-manager ~]$ docker container run -d --rm --name container1 -h container1 --net container-net --ip 192.168.0.210 alpine ping 8.8.8.8

[bgercken@titan ~]$ docker container run -d --rm --name container2 -h container2 --net container-net --ip 192.168.0.211 alpine ping 8.8.8.8
31853b87848f7c70e806f6f9c9d7b457fce6d64c0efa832b0e6991034132f453

[bgercken@titan ~]$ docker container exec -it container2 sh
/ # ping -c 1 container1
PING container1 (192.168.0.210): 56 data bytes
64 bytes from 192.168.0.210: seq=0 ttl=64 time=0.656 ms

[bgercken@docker-manager ~]$ docker container exec -it container1 sh
/ # ping -c 1 container2
PING container2 (192.168.0.211): 56 data bytes
64 bytes from 192.168.0.211: seq=0 ttl=64 time=0.557 ms

--- container2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.557/0.557/0.557 ms

/ # ping -c 1 192.168.0.1
PING 192.168.0.1 (192.168.0.1): 56 data bytes
64 bytes from 192.168.0.1: seq=0 ttl=63 time=0.492 ms

--- 192.168.0.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.492/0.492/0.492 ms


[root@docker-manager bgercken]# route add -net 192.168.0.208 netmask 255.255.255.240 gw 192.168.0.209
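For reference, the legacy `route` invocation above corresponds to this `iproute2` form (same subnet and gateway, shown only as an equivalent sketch):

```shell
# iproute2 equivalent of the net-tools command above:
# send traffic for the overlay subnet via the overlay gateway address.
ip route add 192.168.0.208/28 via 192.168.0.209
```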


iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

iptables -A DOCKER -p icmp --icmp-type echo-request -j ACCEPT
iptables -A DOCKER -p icmp --icmp-type echo-reply -j ACCEPT


[root@docker-manager bgercken]# ping -c 1 container1
PING container1.ktdev.net (192.168.0.210) 56(84) bytes of data.
From docker-manager.ktdev.net (192.168.0.191) icmp_seq=1 Destination Host Unreachable

--- container1.ktdev.net ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

[root@docker-manager bgercken]# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG        0 0          0 ens192
10.0.156.64     192.168.0.160   255.255.255.192 UG        0 0          0 tunl0
10.0.173.0      0.0.0.0         255.255.255.192 U         0 0          0 *
10.0.173.11     0.0.0.0         255.255.255.255 UH        0 0          0 calicc9e1866c3e
10.0.173.12     0.0.0.0         255.255.255.255 UH        0 0          0 calibb3849d04c0
10.0.198.64     192.168.0.200   255.255.255.192 UG        0 0          0 tunl0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker_gwbridge
192.168.0.0     0.0.0.0         255.255.255.0   U         0 0          0 ens192
192.168.0.208   192.168.0.209   255.255.255.240 UG        0 0          0 ens192

Docker info

Containers: 52
 Running: 37
 Paused: 0
 Stopped: 15
Images: 37
Server Version: 17.06.2-ee-16
Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 6.259GB
 Data Space Total: 153GB
 Data Space Available: 146.7GB
 Metadata Space Used: 2.085MB
 Metadata Space Total: 1.606GB
 Metadata Space Available: 1.604GB
 Thin Pool Minimum Free Space: 15.3GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.146-RHEL7 (2018-01-22)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: nzx7kf0t3z42yuxmqsw3e96s8
 Is Manager: true
 ClusterID: m5851n1g1jt62my16of5bbrgw
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 10
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
  External CAs:
    cfssl: https://192.168.0.191:12381/api/v1/cfssl/sign
 Root Rotation In Progress: false
 Node Address: 192.168.0.191
 Manager Addresses:
  192.168.0.191:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 462c82662200a17ee39e74692f536067a3576a50
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.9.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.51GiB
Name: docker-manager.ktdev.net
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

1 Answer:

Answer 0 (score: 0)

Solved: Docker bridge to the rescue


Found a solution to my specific problem. With this configuration, calls originating from a container can reach the host network, and the calls complete correctly.

Agreed, this is not best practice, but it bridges the gap until all of the worker nodes are moved into containers.

Cheers.

Details

Designate a subnet for the pool of addresses that will be used to launch containers on the master node.

All container names on the subnet are in DNS:

#
# 192.168.0.208/28
#
192.168.0.208   dbr0-ep1
192.168.0.209   dbr0-gw
192.168.0.210   container1
...
192.168.0.222   container12
192.168.0.223   dbr0-ep2
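As a sanity check on the table above: a /28 spans 16 addresses, so the pool runs from 192.168.0.208 through 192.168.0.223, leaving .210-.222 for containers once the gateway (.209) is set aside. The arithmetic can be verified in plain POSIX shell (variable names here are purely illustrative):

```shell
# Verify the /28 address math for 192.168.0.208/28.
prefix=28
network=208                            # last octet of 192.168.0.208
size=$(( 1 << (32 - prefix) ))         # 16 addresses in a /28
broadcast=$(( network + size - 1 ))    # 223 -> dbr0-ep2 in the table above
echo "network=192.168.0.$network broadcast=192.168.0.$broadcast usable=$(( size - 2 ))"
```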

Create the Docker bridge on the master host (titan, 192.168.0.160):

docker network create --driver=bridge --subnet=192.168.0.208/28 --gateway=192.168.0.209 dbr0

On each worker node host, add a route back to the subnet designated for running containers (x.x.x.160 is the master host):

route add -net 192.168.0.208 netmask 255.255.255.240 gw 192.168.0.160
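A route added this way is lost on reboot. On CentOS 7 (the OS shown in the Docker info above) it can be persisted with a `route-<interface>` file; this is a sketch that assumes the worker's uplink interface is named ens192, as on the manager:

```shell
# Hypothetical example: persist the route across reboots on CentOS 7.
# Adjust ens192 to the worker host's actual interface name.
cat > /etc/sysconfig/network-scripts/route-ens192 <<'EOF'
192.168.0.208/28 via 192.168.0.160
EOF
```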

Launch a container on the master (titan):

[bgercken@titan ~]$ docker container run -d --rm --name container1 -h container1 \
  --net=dbr0 --ip=192.168.0.210 debian sleep infinity
...
[bgercken@titan ~]$ ping -c 1 192.168.0.210
PING 192.168.0.210 (192.168.0.210) 56(84) bytes of data.
64 bytes from 192.168.0.210: icmp_seq=1 ttl=64 time=0.027 ms

--- 192.168.0.210 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms

On the worker nodes:

[bgercken@node1 ~]$ ping -c 1 container1
PING container1.ktdev.net (192.168.0.210) 56(84) bytes of data.
64 bytes from container1.ktdev.net (192.168.0.210): icmp_seq=1 ttl=63 time=0.256 ms

--- container1.ktdev.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms

[bgercken@node2 ~]$ ping -c 1 container1
PING container1.ktdev.net (192.168.0.210) 56(84) bytes of data.
64 bytes from container1.ktdev.net (192.168.0.210): icmp_seq=1 ttl=63 time=0.370 ms

--- container1.ktdev.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms