How do you measure the actual bandwidth between server and client, to decide how much real-time data to send?
My server sends real-time data to the client 30 times per second. If the server has too much data, it prioritizes the chunks and drops whatever does not fit into the available bandwidth, since that data would be stale by the next tick anyway. Data is sent over a reliable channel (20%) and an unreliable channel (80%), both on top of UDP, but let me know if TCP as the reliable channel would bring any benefit. The data is very latency-sensitive. The server usually (but not always!) has more data than the available bandwidth allows. It is essential to send as much data as possible, but without exceeding the available bandwidth, so as to avoid packet loss or longer delays.
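For illustration, here is a minimal sketch of the per-tick packing step described above, in Python. The Chunk type and the budget_bytes parameter are assumptions for the sketch, not something from the question:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    priority: int   # higher = more important
    payload: bytes

def pack_tick(chunks: List[Chunk], budget_bytes: int) -> List[Chunk]:
    """Fill one tick (1/30 s): take chunks in priority order until the
    byte budget is exhausted; everything else is dropped, since it
    would be stale by the next tick anyway."""
    sent, used = [], 0
    for chunk in sorted(chunks, key=lambda c: c.priority, reverse=True):
        if used + len(chunk.payload) <= budget_bytes:
            sent.append(chunk)
            used += len(chunk.payload)
    return sent
```

The hard part, of course, is choosing budget_bytes each tick, which is exactly the bandwidth-tracking question below.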
Server and client are custom applications, so any algorithm/protocol can be implemented.
My main question is how to track the available bandwidth. Also, any statistics on typical bandwidth jitter would be helpful (the server sits in the cloud, the clients are home users all over the world).
At the moment I am considering making use of:
- Latency information on the reliable channel. It should correlate with bandwidth, because if latency grows, that may (!) mean retransmissions due to packet loss are involved, so the server has to lower the data rate.
- The amount of data the client receives on the unreliable channel within a time window, in particular whether it is lower than the amount the server sent.
- If the current latency is close to or below the lowest recorded latency, the bandwidth can be increased.
The problem is that this approach is overly complicated and involves many "heuristics", such as the step sizes by which the bandwidth should be increased/decreased, and so on.
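For what it's worth, the three signals above can be folded into a single additive-increase/multiplicative-decrease loop. A minimal sketch, assuming the client periodically reports how many bytes it received and the server measures RTT on the reliable channel; every threshold and step size below is exactly the kind of heuristic guess the question complains about:

```python
class BandwidthEstimator:
    def __init__(self, initial_rate=100_000):   # bytes/sec, an assumed starting point
        self.rate = initial_rate
        self.min_rtt = float("inf")

    def on_feedback(self, rtt, bytes_sent, bytes_received):
        """Called once per feedback interval with the latest RTT sample
        and the sent/received byte counts for that interval."""
        self.min_rtt = min(self.min_rtt, rtt)
        loss = 1.0 - bytes_received / max(bytes_sent, 1)
        if loss > 0.02 or rtt > 1.5 * self.min_rtt:
            # Loss or queuing delay detected: back off multiplicatively.
            self.rate *= 0.8
        elif rtt < 1.1 * self.min_rtt:
            # Latency near the recorded minimum: probe for more bandwidth.
            self.rate += 5_000
        return self.rate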
Looking for advice from anyone who has dealt with a similar problem in the past, or for any ideas that make sense.
2 answers:
Answer 0 (score: 1)
The first symptom of trying to use more bandwidth than you actually have will be increased latency, as you fill up the buffers between the sender and whatever the bottleneck is. See https://en.wikipedia.org/wiki/Bufferbloat. My guess is that if you can successfully detect the increased latency as you start to fill up the bandwidth, and back off, then you can avoid packet loss.
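One way to turn that observation into code is a delay-based controller in the spirit of LEDBAT (RFC 6817): treat the current RTT minus the lowest RTT ever seen as an estimate of queuing delay, and steer the send rate toward a fixed delay target. A rough sketch; the target, gain, and floor values are assumptions:

```python
TARGET_DELAY = 0.025   # seconds of acceptable queuing delay (assumed)
GAIN = 40_000          # bytes/sec of rate change per second of delay error (assumed)

class DelayBasedRate:
    def __init__(self, rate=100_000, floor=10_000):
        self.rate, self.floor = rate, floor
        self.base_rtt = float("inf")

    def on_rtt_sample(self, rtt):
        self.base_rtt = min(self.base_rtt, rtt)
        queuing_delay = rtt - self.base_rtt   # bufferbloat shows up here
        # Delay below target -> positive error -> raise the rate;
        # buffers filling -> negative error -> lower it.
        self.rate = max(self.floor,
                        self.rate + GAIN * (TARGET_DELAY - queuing_delay))
        return self.rate
```

The appeal of this style is that it reacts before packets are dropped, which matches the latency-sensitive goal stated in the question.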
I wouldn't underestimate TCP - people have spent a lot of time tuning its congestion avoidance to get a reasonable amount of the available bandwidth while still being a good network citizen. It may not be easy to do better.
On the other hand, a lot will depend on the attitude of the intermediate nodes, which may treat UDP differently from TCP. You may find that under load they either prioritize or discard UDP. Also, some networks, especially with satellite links, may use https://en.wikipedia.org/wiki/TCP_acceleration without you even knowing about it. (This was a painful surprise for us - we relied on TCP connection failure and keep-alives to detect loss of connectivity. Unfortunately, the TCP accelerator in use maintained a connection to us, pretending to be the far end, even after connectivity to the far end had in fact been lost.)
Answer 1 (score: 0)
After some research, it turns out the problem has a name: congestion control, or congestion avoidance algorithms. It is a fairly complex topic, and a lot of material has been written about it. TCP Congestion Control has evolved over time and is very good. There are also other protocols that implement it, such as UDT or SCTP.
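As a toy illustration of the classic scheme those protocols build on, here is a textbook TCP-Reno-style window update (congestion window counted in segments); this is a simplification for reference, not the code of any real stack:

```python
class RenoWindow:
    def __init__(self):
        self.cwnd = 1.0          # congestion window, in segments
        self.ssthresh = 64.0     # slow-start threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: doubles every RTT
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: +1 per RTT

    def on_loss(self):
        self.ssthresh = max(self.cwnd / 2, 2.0)  # multiplicative decrease
        self.cwnd = self.ssthresh                # simplified fast recovery
```

The additive-increase/multiplicative-decrease pattern here is the same idea sketched in the question's heuristics, which is why studying the TCP variants (Reno, CUBIC, BBR) is a good starting point even for a custom UDP protocol.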