Docker containers connected via OVS + DPDK: ping works, but iperf does not

Time: 2019-06-11 08:11:09

Tags: docker ping openvswitch dpdk iperf

I am trying to build a platform with Docker and OVS+DPDK.

1. Set up DPDK + OVS

I use dpdk-2.2.0 to set up openvswitch-2.5.1. First, I compile the DPDK code and set up hugepages. I do not bind a NIC, because I have no traffic coming from outside.
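For reference, hugepage reservation for DPDK usually looks roughly like the sketch below; the page count and mount point are assumptions, since the original post does not show this step:

# Reserve 2 MB hugepages (1024 pages is an assumed value; size it to your system)
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# Mount hugetlbfs so OVS-DPDK can map the pages
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs nodev /dev/hugepages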

Then I compile the openvswitch code with DPDK support (with-dpdk) and start OVS with the following script:

#!/bin/sh
sudo rm /var/log/openvswitch/my-ovs-vswitchd.log*

export PATH=$PATH:/usr/local/share/openvswitch/scripts

export DB_SOCK=/usr/local/var/run/openvswitch/db.sock

sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
                     --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
                     --private-key=db:Open_vSwitch,SSL,private_key \
                     --certificate=db:Open_vSwitch,SSL,certificate \
                     --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
                     --pidfile --detach

sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

sudo ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach \
                        --log-file=/var/log/openvswitch/my-ovs-vswitchd.log

Everything works, and my OVS runs correctly with DPDK support.
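To double-check that vswitchd really initialized DPDK, one can inspect the log file from the script above and the configured other_config keys; this is a suggested sketch, not part of the original post:

# Look for DPDK initialization messages in the vswitchd log
grep -i dpdk /var/log/openvswitch/my-ovs-vswitchd.log
# Show the other_config keys that were set (dpdk-init, pmd-cpu-mask, ...)
sudo ovs-vsctl get Open_vSwitch . other_config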

2. Create the Docker containers and set up the bridge and ports

I use the ubuntu:14.04 image from Docker, with the following Dockerfile:

#
# Ubuntu Dockerfile
#
# https://github.com/dockerfile/ubuntu
#

# Pull base image.
FROM ubuntu:14.04

# Install.
RUN \
  sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
  apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y build-essential && \
  apt-get install -y software-properties-common && \
  apt-get install -y byobu curl git htop man unzip vim wget && \
  apt-get install -y iperf net-tools && \
  rm -rf /var/lib/apt/lists/*

# Add files.
ADD root/.bashrc /root/.bashrc
ADD root/.gitconfig /root/.gitconfig
ADD root/.scripts /root/.scripts

# Set environment variables.
ENV HOME /root

# Define working directory.
WORKDIR /root

# Install tcpreplay
RUN apt-get update
RUN apt-get install -y libpcap-dev
ADD tcpreplay-4.3.2 /root/tcpreplay-4.3.2
WORKDIR /root/tcpreplay-4.3.2
RUN ./configure
RUN make
RUN make install

# Copy pcap file
ADD test_15M /root/test_15M

# Define default command.
CMD ["bash"]

Then I use a script to create a bridge, ovs-br1, and use ovs-docker to add two ports:

#!/bin/sh
sudo ovs-vsctl add-br ovs-br1 -- set bridge ovs-br1 datapath_type=netdev
sudo ifconfig ovs-br1 173.16.1.1 netmask 255.255.255.0 up
sudo docker run -itd --name="box1" "ubuntu14-tcpreplay:v1"
sudo docker run -itd --name="box2" "ubuntu14-tcpreplay:v1"
sudo ovs-docker add-port ovs-br1 eth1 box1 --ipaddress=173.16.1.2/24
sudo ovs-docker add-port ovs-br1 eth1 box2 --ipaddress=173.16.1.3/24

Now I have one bridge, ovs-br1, with two (unnamed) ports: one connects to box1 (container 1) and the other to box2 (container 2).
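Before debugging connectivity, it can help to confirm what OVS actually created; a sketch of the usual inspection commands (their output is not in the original post):

# List bridges, ports, and interfaces as OVS sees them
sudo ovs-vsctl show
# Show the OpenFlow port numbers assigned to the two ovs-docker ports
sudo ovs-ofctl show ovs-br1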

3. Check the connectivity between box1 and box2

First, I dump the flows of ovs-br1:

wcf@wcf-OptiPlex-7060:~/ovs$ sudo ovs-ofctl dump-flows ovs-br1
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=130.711s, table=0, n_packets=10, n_bytes=768, idle_age=121, priority=0 actions=NORMAL

Then I enter box1 and ping box2:

wcf@wcf-OptiPlex-7060:~/ovs$ sudo docker exec -it box1 "/bin/bash"
[ root@45514f0108a9:~/tcpreplay-4.3.2 ]$ ping 173.16.1.3
PING 173.16.1.3 (173.16.1.3) 56(84) bytes of data.
64 bytes from 173.16.1.3: icmp_seq=1 ttl=64 time=0.269 ms
64 bytes from 173.16.1.3: icmp_seq=2 ttl=64 time=0.149 ms
64 bytes from 173.16.1.3: icmp_seq=3 ttl=64 time=0.153 ms
64 bytes from 173.16.1.3: icmp_seq=4 ttl=64 time=0.155 ms
64 bytes from 173.16.1.3: icmp_seq=5 ttl=64 time=0.167 ms
64 bytes from 173.16.1.3: icmp_seq=6 ttl=64 time=0.155 ms
^C
--- 173.16.1.3 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 4997ms
rtt min/avg/max/mdev = 0.149/0.174/0.269/0.045 ms

Everything works fine: box1 can ping box2.

Finally, I test iperf between box1 and box2. I installed iperf2 in both containers.

box1 (client):

[ root@45514f0108a9:~/tcpreplay-4.3.2 ]$ iperf -c 173.16.1.3 -u -t 5
------------------------------------------------------------
Client connecting to 173.16.1.3, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 173.16.1.2 port 49558 connected with 173.16.1.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec   642 KBytes  1.05 Mbits/sec
[  3] Sent 447 datagrams
[  3] WARNING: did not receive ack of last datagram after 10 tries.

box2 (server):

[ root@2e19a616d2af:~/tcpreplay-4.3.2 ]$ iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------

The iperf packets from box1 never get a response from box2.
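One way to narrow this down (a suggested cross-check, not part of the original post) is to repeat the test with iperf in TCP mode, using the same iperf2 binaries installed by the Dockerfile; if TCP also stalls, the problem is not UDP-specific:

# On box2: run the iperf2 server in TCP mode (default port 5001)
iperf -s
# On box1: run a 5-second TCP test against box2
iperf -c 173.16.1.3 -t 5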

I use Wireshark to monitor ovs-br1 and the two ovs-docker ports of box1 and box2.

The ovs-br1 interface does not see any traffic, but both ovs-docker ports do. Wireshark screenshot:

wireshark: two OVS ports

Thank you for sharing your thoughts.

Best wishes

1 answer:

Answer 0 (score: 0)

If the intention is to send packets directly from container 1 to container 2, there should be flow rules that handle those packets, for example ./ovs-ofctl add-flow br0 in_port=1,action=output:2 and ./ovs-ofctl add-flow br0 in_port=2,action=output:1.
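Adapted to the setup in the question, the rules would target ovs-br1 and the OpenFlow port numbers reported by ovs-ofctl show; the port numbers 1 and 2 below are assumptions carried over from the example above:

# Look up the real OpenFlow port numbers of the two ovs-docker ports
sudo ovs-ofctl show ovs-br1
# Forward traffic directly between the two ports (assuming they are ports 1 and 2)
sudo ovs-ofctl add-flow ovs-br1 in_port=1,actions=output:2
sudo ovs-ofctl add-flow ovs-br1 in_port=2,actions=output:1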

Once the flow rules are applied, make sure that at least a default route is configured in the Linux kernel stack so that packets are sent out through the desired interface, for example a route entry for 173.16.1.0/24.
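Inside each container the route can be checked (and added if missing) with iproute2; this sketch assumes the ovs-docker interface is eth1, as in the setup script from the question:

# Inside box1: verify that 173.16.1.0/24 is reachable via eth1
ip route show
# Add the subnet route only if it is missing
ip route add 173.16.1.0/24 dev eth1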