I am using iperf to test the throughput between two Docker containers. The interface set up with ovs-docker has already seen the packets from the iperf client, but the iperf server does not seem to be aware of the packets and does not react.
The setup of my platform is described in my previous question. I have debugged the problem further, but it still fails. Please give me any clues.
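For context, the test itself is nothing special; it looks roughly like the following (the flags are illustrative; only the UDP port 9990 is taken from the tcpdump trace further down):
# on box1 (173.16.1.3): start the iperf UDP server
iperf -s -u -p 9990
# on box2 (173.16.1.2): send UDP traffic to box1 over the OVS network
iperf -c 173.16.1.3 -u -p 9990 -b 100M -t 10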
The network configuration of the two containers is as follows.
box1's network status:
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:03
inet addr:172.17.0.3 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:470 errors:0 dropped:0 overruns:0 frame:0
TX packets:315 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648279 (648.2 KB) TX bytes:26687 (26.6 KB)
eth1 Link encap:Ethernet HWaddr 92:9c:88:f6:cb:a0
inet addr:173.16.1.3 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3492 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5219001 (5.2 MB) TX bytes:728 (728.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Here, eth0 is created by Docker and connected to the bridge docker0, not to OVS. eth1 is created with ovs-docker and connected to the bridge named ovs-br1.
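The eth1 interfaces were attached with the ovs-docker utility, roughly like this (the container names box1/box2 and the addresses mirror the ifconfig output above; the exact commands are reconstructed, not copied from my shell history):
# attach an eth1 to ovs-br1 inside each container
sudo ovs-docker add-port ovs-br1 eth1 box1 --ipaddress=173.16.1.3/24
sudo ovs-docker add-port ovs-br1 eth1 box2 --ipaddress=173.16.1.2/24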
ovs-br1's configuration in the host OS:
wcf@wcf-OptiPlex-7060:~/ovs$ sudo ovs-ofctl show ovs-br1
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000de3219b1f549
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(001d342f61744_l): addr:32:e7:19:7e:dc:8f
config: 0
state: 0
current: 10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max
2(cb95b3dba5644_l): addr:b2:f3:4b:8e:d6:d7
config: 0
state: 0
current: 10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max
LOCAL(ovs-br1): addr:de:32:19:b1:f5:49
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
Is something wrong with ovs-br1? ovs-br1's network status:
wcf@wcf-OptiPlex-7060:~/ovs$ ifconfig ovs-br1
ovs-br1 Link encap:Ethernet HWaddr de:32:19:b1:f5:49
inet addr:173.16.1.1 Bcast:173.16.1.255 Mask:255.255.255.0
inet6 addr: fe80::dc32:19ff:feb1:f549/64 Scope:Link
UP BROADCAST RUNNING PROMISC MTU:1500 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:342 (342.0 B) TX bytes:732 (732.0 B)
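A quick way to see what the bridge is actually doing is to dump its flow table and MAC learning table; these are standard OVS commands, shown here only as a sketch of the check:
# flows installed on the bridge (a default bridge has a single NORMAL flow)
sudo ovs-ofctl dump-flows ovs-br1
# MAC addresses learned per port
sudo ovs-appctl fdb/show ovs-br1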
I was wondering whether ovs-br1 blocks the traffic coming from box2 (173.16.1.2). box2's network status:
[ root@ddf06e436b89:~/tcpreplay-4.3.2 ]$ ifconfig -a
eth0 Link encap:Ethernet HWaddr 02:42:ac:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:35 errors:0 dropped:0 overruns:0 frame:0
TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6651 (6.6 KB) TX bytes:190596 (190.5 KB)
eth1 Link encap:Ethernet HWaddr e2:2d:e7:e5:ee:5c
inet addr:173.16.1.2 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:40 errors:0 dropped:0 overruns:0 frame:0
TX packets:3465 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5458 (5.4 KB) TX bytes:5214474 (5.2 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
But it seems not: I removed all the iptables rules, and the iperf server still does not work.
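Clearing the rules was done with commands along these lines (the exact invocation is approximate; the iptables -L output below shows the resulting state):
# inside box1: accept everything, drop nothing
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -A INPUT -j ACCEPT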
The status of iptables in box1:
[ root@c5cb95fa2ca7:~/tcpreplay-4.3.2 ]$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Does eth1 of box1 get traffic from box2? Yes. I started tcpdump at eth1 of box1 and got:
[ root@360f58e1ebec:~/tcpreplay-4.3.2 ]$ tcpdump -i eth1 -n -vvv
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
05:21:00.108864 IP (tos 0x0, ttl 64, id 21670, offset 0, flags [DF], proto UDP (17), length 1498)
173.16.1.2.37156 > 173.16.1.3.9990: [bad udp cksum 0x61fd -> 0xced0!] UDP, length 1470
05:21:00.111266 IP (tos 0x0, ttl 64, id 21673, offset 0, flags [DF], proto UDP (17), length 1498)
173.16.1.2.37156 > 173.16.1.3.9990: [bad udp cksum 0x61fd -> 0xa065!] UDP, length 1470
05:21:00.123365 IP (tos 0x0, ttl 64, id 21676, offset 0, flags [DF], proto UDP (17), length 1498)
173.16.1.2.37156 > 173.16.1.3.9990: [bad udp cksum 0x61fd -> 0x728f!] UDP, length 1470
05:21:00.134962 IP (tos 0x0, ttl 64, id 21678, offset 0, flags [DF], proto UDP (17), length 1498)
173.16.1.2.37156 > 173.16.1.3.9990: [bad udp cksum 0x61fd -> 0x4437!] UDP, length 1470
05:21:00.146668 IP (tos 0x0, ttl 64, id 21680, offset 0, flags [DF], proto UDP (17), length 1498)
173.16.1.2.37156 > 173.16.1.3.9990: [bad udp cksum 0x61fd -> 0x16a1!] UDP, length 1470
The traffic from box2 is observed at eth1 of box1!
So I tried to use iperf to measure the throughput between box1 and box2, because I want to add some new features to OVS and then re-test the throughput afterwards. However, in my case, the iperf server does not work.
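One more check that can be run inside box1 is whether the iperf UDP socket is actually bound on port 9990 (generic commands, shown only as a sketch):
# either of these should list a UDP socket on 9990 if the server is up
netstat -anu | grep 9990
ss -unl | grep 9990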
Please share any ideas about my problem. Thank you for your time.
Best wishes.
EDIT
I got the iperf server working, but I am still confused. This time, I removed the datapath_type=netdev setting from the OVS bridge.
New setup:
sudo ovs-vsctl add-br ovs-br1
Old setup:
sudo ovs-vsctl add-br ovs-br1 -- set bridge ovs-br1 datapath_type=netdev
After removing datapath_type=netdev, everything works.
I would like to know why datapath_type=netdev does not work for iperf.
Here is the log of openvswitch after creating ovs-br1 and the two ovs-docker ports with datapath_type=netdev:
Here is the log of openvswitch without datapath_type=netdev:
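The openvswitch log referred to above is the ovs-vswitchd log; on a default install it can be read with something like this (the path is assumed):
sudo tail -n 100 /var/log/openvswitch/ovs-vswitchd.log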