
Changing the subject again, since I'm diving into one little corner of the thread...

On 10/28/2011 03:00 AM, Varun Singh wrote:
> 7. The congestion control algorithm SHOULD attempt to keep the total
>    bandwidth controlled so as to minimize the media-stream end-to-end
>    delays between the participants.
>
> Not sure I understand this. If I understand it, I suggest rewriting it as:
>
> 7. The congestion control algorithm SHOULD attempt to minimize the
>    media-stream end-to-end delays between the participants, by
>    controlling bandwidth appropriately.
>
> The receiver doesn't know the end-to-end delay; the RTT is calculated at
> the sender. So is the sender making this decision or the receiver? We
> shouldn't make this decision as part of the requirements list, but:
It's not necessary to know the absolute value of something in order to attempt to minimize it; the delay-based algorithm is jiggling things around and watching whether the sender->receiver end-to-end delay increases or decreases. There's a floor somewhere, set by the speed of light in fiber, the distance, and clocking intervals, but you don't need to know what that floor is in order to try to approach it.

I would argue that RTT is in fact a distraction for this optimization: the time it takes packets to go from receiver to sender gives no information about the one-way end-to-end delay from sender to receiver. (Imagine using this on a typical cable TV network with a fat, uncongested downlink and an anemic, highly congested uplink. If we optimized a downlink stream based on RTT measurements, we would see wild swings in RTT because the upstream congestion delays the feedback packets, and tweaking the sending rate based on that information would degrade, not improve, the experience.)
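To make that concrete, here's a rough sketch (entirely mine, not text or code from any draft) of a delay-gradient controller that works purely from the relative one-way delay. The class name, thresholds, and rate constants are all made up for illustration; the only point is that the decision uses the change in (receive time - send time), so neither synchronized clocks nor the RTT are needed:

# A minimal, illustrative delay-gradient controller. Everything here
# (class name, thresholds, rate constants) is invented for the example;
# it is not the algorithm from any draft.

class DelayGradientController:
    def __init__(self, rate_bps=500_000,
                 min_rate_bps=50_000, max_rate_bps=2_000_000):
        self.rate_bps = rate_bps
        self.min_rate_bps = min_rate_bps
        self.max_rate_bps = max_rate_bps
        self.prev_owd = None  # previous *relative* one-way delay sample

    def on_feedback(self, send_ts, recv_ts):
        """send_ts: sender clock when the packet left; recv_ts: receiver
        clock when it arrived (reported back in feedback). The clocks
        need not be synchronized: only the change in (recv_ts - send_ts)
        matters, so the unknown clock offset (and the unknown
        speed-of-light floor) cancels out of the gradient."""
        owd = recv_ts - send_ts  # relative one-way delay, offset unknown
        if self.prev_owd is not None:
            gradient = owd - self.prev_owd
            if gradient > 0.002:
                # Queue building on the sender->receiver path: back off.
                self.rate_bps = max(self.min_rate_bps,
                                    int(self.rate_bps * 0.85))
            elif gradient < 0.0:
                # Queue draining: probe gently for more rate.
                self.rate_bps = min(self.max_rate_bps,
                                    int(self.rate_bps * 1.05))
        self.prev_owd = owd
        return self.rate_bps


if __name__ == "__main__":
    ctrl = DelayGradientController()
    # Simulated feedback: the receiver's clock is ~1000 s ahead of the
    # sender's, the queue grows by 5 ms per packet, then drains.
    samples = [(0.00, 1000.050), (0.02, 1000.075), (0.04, 1000.100),
               (0.06, 1000.115), (0.08, 1000.120), (0.10, 1000.118)]
    for send_ts, recv_ts in samples:
        print(ctrl.on_feedback(send_ts, recv_ts))

Note that a congested uplink only delays when the feedback arrives; it doesn't change the (recv_ts - send_ts) samples themselves, which is exactly why the one-way delay gradient is usable where the RTT is not.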