This paper discusses ways to avoid network congestion and mainly focuses on five algorithms put into 4BSD TCP:
1: round-trip-time variance estimation
2: exponential retransmit timer backoff
3: slow-start
4: more aggressive receiver ack policy
5: dynamic window sizing on congestion
The paper starts with a brief introduction to the 1980s network environment: the explosive growth of computer networks led to the common situation of gateways dropping about 10% of incoming packets, and in October '86 the "first congestion collapse" made things worse. By investigating some of these problems, the author finds that much of the cause lies in transport protocol implementations rather than in the protocols themselves.
According to the author, the main reason network congestion occurs is a violation of the so-called 'conservation of packets' principle, and the author points out that there are only three ways for packet conservation to fail:
1: The connection doesn't get to equilibrium, or
2: A sender injects a new packet before an old packet has exited, or
3: The equilibrium can't be reached because of resource limits along the path.
The following paragraphs are accordingly divided into three sections, each presenting solutions to one of the failure modes above.
The first section focuses on getting the network to equilibrium; in short, it is about how to start transmitting data between sender and receiver, and how to bring that transmission into equilibrium. Here the author introduces the slow-start algorithm, which can be summarized in four sentences: add a congestion window, cwnd, to the per-connection state; when starting or restarting after a loss, set cwnd to one packet; on each ack for new data, increase cwnd by one packet; when sending, send the minimum of the receiver's advertised window and cwnd. Since the window increases exponentially, slow-start has a negligible effect on performance. Compared with a connection without slow-start, the slow-start model shows much better behavior with respect to equilibrium.
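The four slow-start rules summarized above can be sketched as follows; the class and method names here are illustrative, not from the paper:

```python
class SlowStartSender:
    """A minimal sketch of the slow-start rules (units are packets)."""

    def __init__(self, advertised_window):
        self.advertised_window = advertised_window  # receiver's advertised window
        self.cwnd = 1                               # start with one packet

    def on_start_or_loss(self):
        # When starting or restarting after a loss, set cwnd to one packet.
        self.cwnd = 1

    def on_ack(self):
        # Each ack for new data opens the window by one packet, which
        # roughly doubles cwnd every round trip: exponential growth.
        self.cwnd += 1

    def send_window(self):
        # Send the minimum of the receiver's advertised window and cwnd.
        return min(self.advertised_window, self.cwnd)
```

For example, a sender with an advertised window of 8 packets starts by sending 1 packet, reaches the full window after a few round trips, and falls back to 1 packet after a loss.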
The second section addresses "a sender injects a new packet before an old packet has exited." Here the author points out that if the protocol implementation is correct, then it is a wrong round-trip-time estimator that causes the sender's retransmit timer to fail. Unfortunately, although it is important to have a good round-trip-time estimator, it is frequently botched. The author lists two common mistakes. The first is using a fixed value for the variation factor β rather than estimating it (the paper notes a 'pleasant side effect of estimating β'). The other mistake is in the backoff after a retransmit, for which the author proposes exponential backoff as the fix.
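A rough sketch of these two fixes is shown below: a mean-plus-deviation round-trip-time estimator combined with exponential backoff of the retransmit timer. The filter gains (1/8, 1/4) and the factor of 4 follow common TCP practice; the exact constants here are illustrative, not taken from the paper:

```python
class RetransmitTimer:
    """Sketch of an RTT variance estimator with exponential backoff."""

    def __init__(self, first_measurement):
        self.srtt = first_measurement        # smoothed RTT estimate
        self.rttvar = first_measurement / 2  # smoothed mean deviation
        self.backoff = 1                     # exponential backoff multiplier

    def on_rtt_measurement(self, m):
        # Update the mean and the deviation with simple EWMA filters.
        err = m - self.srtt
        self.srtt += err / 8
        self.rttvar += (abs(err) - self.rttvar) / 4
        self.backoff = 1  # a fresh measurement resets the backoff

    def on_timeout(self):
        # Each retransmission of the same packet doubles the timeout.
        self.backoff *= 2

    def rto(self):
        # Timeout = estimated mean + 4 * estimated deviation, then backed off.
        return (self.srtt + 4 * self.rttvar) * self.backoff
```

Estimating the deviation lets the timeout track the actual spread of round-trip times, instead of multiplying the mean by a fixed β; the exponential backoff ensures repeated retransmissions of the same packet space themselves out geometrically.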
The third section introduces the congestion-avoidance strategy, which, in my view, amounts to detecting congestion in the network and recovering from it. Since a timeout indicates a lost packet, if the timers are in good shape a timeout is also a signal that congestion is occurring. In this case the sender should decrease the window size multiplicatively. On the other hand, since the available bandwidth changes dynamically, the window must also grow again to keep network utilization high; rather than a multiplicative increase, the author argues the best increase policy is to make small, constant changes to the window size, called "additive increase".
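The resulting additive-increase/multiplicative-decrease policy can be sketched in two rules. The constants here (halve on loss, grow by one packet per round trip) match the usual description of AIMD but should be treated as illustrative:

```python
def on_congestion(cwnd):
    # Timeout signals congestion: cut the window multiplicatively,
    # but never below one packet.
    return max(1, cwnd // 2)

def on_round_trip_without_loss(cwnd):
    # No loss this round trip: probe for spare bandwidth with a
    # small, constant additive step.
    return cwnd + 1
```

Multiplicative decrease reacts quickly when the network is overloaded, while the gentle additive increase probes for newly available bandwidth without immediately re-creating the congestion.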
The last paragraph gives a brief introduction to gateway-side congestion control and the author's future work. Since algorithms at the transport endpoints cannot ensure fair sharing of network capacity, while the gateway has enough information to control sharing and fair allocation, the author views a gateway 'congestion detection' algorithm as the next big step.
Critique:
First, I was confused when I read the slow-start algorithm: the author does not clearly state when the sender should leave the "slow-start" mode. There must be a signal telling the sender to stop slow-start. Should it be when cwnd equals the receiver's window size in packets, or should it be a timeout? And what should the sender do after slow-start ends?

Second, in the last paragraph of the paper the author says that algorithms at the transport endpoints can ensure the network capacity isn't exceeded, and that for fair allocation a gateway 'congestion detection' algorithm is the next step. In fact, these endpoint algorithms alone cannot make the whole network robust, since the author's solution to congestion rests on the precondition that all endpoints obey the algorithms. What if a misbehaving endpoint floods the network with large amounts of useless packets? The entire network could still break down. So congestion avoidance on the gateway side is also a necessary 'patch' for the previous algorithms.