Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes

10/31/2021
by Bjørn Ivar Teigen, et al.

End-to-end congestion control is the primary method of congestion control in the Internet, and achieving consistently low queuing latency with end-to-end methods is a very active area of research. Even so, consistently low queuing latency in the Internet remains an unsolved problem. We therefore ask: what are the fundamental limits of end-to-end congestion control? We find that the unavoidable queuing latency for best-case end-to-end congestion control is on the order of hundreds of milliseconds under conditions that are common in the Internet. Our argument rests on two observations: the latency of congestion signaling, which is bounded below by the propagation delay of the path (ultimately the speed of light), and the fact that the capacity of an end-to-end path in the Internet may change rapidly.
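To make the shape of the argument concrete, the following is a back-of-envelope sketch, not taken from the paper: if the bottleneck capacity drops while the sender keeps transmitting at its old rate for one feedback delay, the excess data queues up and must then drain at the new, lower rate. The rates and delay below are illustrative assumptions only.

# Back-of-envelope sketch (illustrative numbers, not the paper's results):
# queue buildup when link capacity drops faster than end-to-end feedback
# can reach the sender.

def queuing_delay_after_capacity_drop(old_rate_mbps, new_rate_mbps, feedback_delay_s):
    """Queuing delay accumulated while the sender still transmits at the
    old rate for one feedback delay after the capacity drops."""
    excess_mbit = (old_rate_mbps - new_rate_mbps) * feedback_delay_s
    return excess_mbit / new_rate_mbps  # seconds to drain at the new rate

# Example: a wireless link drops from 50 Mbit/s to 10 Mbit/s, and the
# congestion signal takes one 50 ms round trip to reach the sender.
print(queuing_delay_after_capacity_drop(50, 10, 0.050))  # ~0.2 s of queuing latency

Even with these modest assumed numbers, a single capacity drop yields on the order of 200 ms of queuing latency before any end-to-end controller can react, which illustrates why the signaling delay and the rate of capacity change together set a floor on achievable latency.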
