Saturday, September 6, 2008

Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks

The authors say that congestion control and avoidance algorithms in computer networks fall into the general category of dynamic resource management problems. They analyze the algorithms using metrics like efficiency, fairness, distributed nature, convergence time and size of oscillations. The analysis focuses on the increase/decrease algorithms used in the binary feedback scheme, in which the resource sets a bit in each packet to signal whether it is overloaded or underloaded.
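To make the control structure concrete, here is a minimal sketch of the linear increase/decrease rule the paper analyzes (the Python code and the default constants are my own illustration, not taken from the paper). Each user applies one of two linear updates to its load, chosen by the binary feedback bit from the previous cycle:

# Linear increase/decrease control (sketch only; constants are assumed).
# overloaded is the binary feedback bit: True means the resource reported
# overload, False means underload. a_i, b_i, a_d, b_d are the control
# parameters; the defaults below correspond to additive increase
# (a_i > 0, b_i = 1) and multiplicative decrease (a_d = 0, 0 < b_d < 1).
def update(x, overloaded, a_i=1.0, b_i=1.0, a_d=0.0, b_d=0.5):
    if overloaded:
        return a_d + b_d * x   # decrease step
    return a_i + b_i * x       # increase step

The paper's question is essentially which choices of these four constants make every user converge to an efficient and fair operating point.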

The network performance graph in the paper shows how throughput behaves as load increases. The point at which packets start getting lost is called the cliff, while the knee marks the point beyond which throughput increases only slightly while response time grows sharply. Congestion avoidance schemes try to keep the network operating near the knee, to the left of the cliff.
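To get a feel for where the knee and cliff sit, here is a toy calculation I added (the simple M/M/1-style queueing model and the numbers are my own assumption, not something from the paper). The "power" metric, throughput divided by response time, peaks at the knee, while the cliff sits at the capacity itself, where the queue blows up and packets would start to be dropped:

# Toy knee-vs-cliff illustration (my own model, not from the paper).
# Throughput equals the offered load up to the service rate MU, and
# response time is 1/(MU - load). Power = throughput / response time.
MU = 1.0                            # service rate (capacity)

for load in [0.1 * k for k in range(1, 10)]:
    throughput = load               # everything offered gets through
    response_time = 1.0 / (MU - load)
    power = throughput / response_time
    print(f"load={load:.1f}  throughput={throughput:.2f}  "
          f"response_time={response_time:5.2f}  power={power:.3f}")

In this toy model the power peaks at half the capacity; the exact location of the knee depends on the real network, but the shape of the trade-off is the same.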

To analyze the algorithms, the authors present graphs that show the system state transitions in a two-dimensional space representing two users. The graphs aid greatly in understanding the argument. The efficiency line contains the states where the total load equals the target load, while the fairness line runs at 45 degrees and represents equal usage by both users. The optimal point lies at the intersection of these two lines. The trajectory of additive increase/multiplicative decrease converges to this optimal state. The authors then use rigorous mathematical proofs to show that the decrease policy must be multiplicative in order to converge to fairness, and that additive increase combined with multiplicative decrease moves the system toward both efficiency and fairness at each feedback cycle.
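To convince myself of the convergence argument, I wrote a small two-user simulation (the capacity, step sizes, and starting loads below are arbitrary assumptions, not values from the paper). With additive increase, multiplicative decrease, and a shared feedback bit, the allocation drifts toward the fairness line while oscillating around the efficiency line:

# Two-user AIMD sketch (illustrative; all constants are assumed).
X_GOAL = 10.0               # efficiency line: x1 + x2 = X_GOAL

def aimd_step(x, overloaded, incr=1.0, decr=0.5):
    # multiplicative decrease on overload, additive increase otherwise
    return x * decr if overloaded else x + incr

x1, x2 = 8.0, 1.0           # deliberately unfair starting point
for t in range(30):
    overloaded = (x1 + x2) > X_GOAL   # shared binary feedback
    x1 = aimd_step(x1, overloaded)
    x2 = aimd_step(x2, overloaded)
    fairness = (x1 + x2) ** 2 / (2 * (x1 ** 2 + x2 ** 2))  # Jain's index
    print(f"t={t:2d}  x1={x1:5.2f}  x2={x2:5.2f}  fairness={fairness:.3f}")

The fairness index climbs toward 1 because the additive increase gives both users the same absolute gain while the multiplicative decrease takes more away from the larger allocation, which is exactly the intuition behind the proofs.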

I liked this paper because it explains the mathematical reasoning behind the algorithms used in computer networks. Reading such papers helps in understanding the proofs behind the algorithms used in the field of networking. It is nice to have this paper in the syllabus.

The only thing that I did not like about the paper is that it does not account for things like delayed feedback and asynchronous operation. The authors discuss these issues very briefly, but the setting has changed significantly since the paper was written. We now have different types of traffic flowing on the same link. Does this complicate the calculation of congestion per link? Maybe the feedback signal should be interpreted differently for different types of flows. Many questions like this come up because of the current design and usage of networks.

