
On Aug 14, 2012, at 5:36 PM, Bill Ver Steeg (versteb) wrote:
Sorry for the spasm of email--- I am getting caught up on correspondence.
Note that the periodic feedback can convey more than a single temporal datapoint. As I discussed in a previous email, the receiver can inform the sender about trends or discontinuities in the data. This is particularly useful if the sender and receiver can both recognize/signal discontinuities in the data.
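To make that concrete, a periodic report could carry a delay trend and a discontinuity flag next to the usual loss and delay numbers. A rough sketch in Python (field names and units are purely illustrative, not a proposed wire format):

    from dataclasses import dataclass

    @dataclass
    class PeriodicFeedback:
        # Illustrative fields only; nothing here is meant as a wire format.
        interval_ms: int         # length of the reporting interval
        loss_fraction: float     # lost / expected packets in the interval
        min_owd_ms: float        # minimum one-way delay sample in the interval
        avg_owd_ms: float        # mean one-way delay in the interval
        delay_trend: float       # delay growth in ms per second (> 0 means queues building)
        discontinuity: bool      # receiver saw a step change in delay or loss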
So, I am arguing for periodic feedback with a structure that allows a rich information flow back to the sender. Based on such data flows, the sender would like to glean "I am OK when I send at 2 Mbps or 3 Mbps, but the buffers slowly build when I burst at 4 Mbps and the path is catastrophically congested when I burst at 5 Mbps". It could also glean "I am sending at a constant rate, but I seem to be getting sporadic cross traffic (or fade) that occasionally drives up delay". It obviously can also glean "Based on increasing delay and the onset of loss, the channel can no longer support this rate, I need to downshift" and "no loss and constant delay --- I am fine" with some very short, infrequent messages. Knowing the statistical/temporal detail about delay is important, and worth some resources.
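Continuing the sketch above, the sender-side reading of one such report might look roughly like this (thresholds are invented placeholders, not tuned values):

    def classify_interval(fb, loss_threshold=0.02, trend_threshold=0.5):
        # Hypothetical sender-side reading of one PeriodicFeedback report.
        # Thresholds are placeholders a real controller would have to tune.
        if fb.loss_fraction > loss_threshold and fb.delay_trend > trend_threshold:
            return "congested"             # rising delay plus loss: the rate is too high
        if fb.delay_trend > trend_threshold:
            return "buffers_building"      # queues growing slowly at the current rate
        if fb.discontinuity:
            return "cross_traffic_or_fade" # sporadic external event rather than our rate
        return "ok"                        # constant delay, no loss: current rate is fine

Keeping a short history of these classifications per send rate is what lets the sender build up the "2-3 Mbps is fine, 4 Mbps builds queues, 5 Mbps collapses" picture.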
Note that in addition to periodic feedback, timely feedback when anomalous conditions occur is probably desirable.
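In the same spirit, the receiver could keep its periodic timer but add an immediate report when something jumps between intervals. Again just a sketch, with made-up thresholds:

    def should_send_immediate_report(fb, prev_fb, delay_jump_ms=50.0, loss_jump=0.05):
        # Anomaly trigger on the receiver side; the periodic reports still go
        # out on their timer, this only adds a timely one on abrupt changes.
        if prev_fb is None:
            return False
        if fb.avg_owd_ms - prev_fb.avg_owd_ms > delay_jump_ms:
            return True   # sudden step in one-way delay
        if fb.loss_fraction - prev_fb.loss_fraction > loss_jump:
            return True   # sudden onset of loss
        return False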
So --- IMHO, this is not a "do we do congestion control in the sender or the receiver?" question. It is a question of cooperation between the two to detect and set the correct rate. Knowing when to upshift is as important as knowing when to downshift.
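Tying the pieces together, the sender's rate decision could be as simple as the toy rule below (every number is invented; the point is only that upshift and downshift both fall out of the same feedback):

    def update_rate(current_bps, state, step=250_000, min_bps=250_000, max_bps=5_000_000):
        # Toy up/downshift rule driven by the classified feedback.
        if state == "congested":
            return max(min_bps, current_bps // 2)    # back off hard on real congestion
        if state == "buffers_building":
            return max(min_bps, current_bps - step)  # gentle downshift while queues grow
        if state == "ok":
            return min(max_bps, current_bps + step)  # probe upward when the channel is clean
        return current_bps                           # hold the rate on transient cross traffic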
The tricky bit will be to avoid making it all too complex...

Cheers,
Michael