
On 4/5/2012 2:12 PM, Matt Mathis wrote:
> I assembled the following text for possible inclusion in an introduction or problem statement:
>
> ----
>
> The RTC web congestion control is distinctly different from other congestion control problems. The goal is to choose a codec and/or codec parameters that simultaneously minimize self-inflicted queueing delay while maximizing [multi-media image/signal] quality. In almost all cases the selected network operating point will be well below the operating point that would be chosen by any traditional throughput-maximizing congestion control algorithm.
I'm not sure this statement is true; in a fairly unloaded network where the bottleneck is a transport link with no ongoing TCP flows, the algorithms *should* largely saturate the link at the same throughput that TCP/etc. would achieve (but with much lower delay/queue lengths at the bottleneck). This is NOT an uncommon situation. Typically you'll have occasional bursts of TCP traffic (short-lived flows, or short-lived transfers on a long-lived flow such as Gmail uses, etc.) competing for bandwidth.
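To make that concrete, here's a rough sketch of the behavior I mean - plain Python, with every name and threshold invented for illustration, not any particular proposed algorithm: probe the rate upward while the measured queueing delay stays near its baseline, and back off as soon as a standing queue starts to build, i.e. well before the bottleneck buffer fills and drops anything.

BASELINE_WINDOW = 60.0      # seconds of delay history used for the baseline
QUEUE_DELAY_THRESH = 0.030  # 30 ms of self-inflicted queueing delay (made up)
RATE_MIN = 50_000           # 50 kbps codec floor (made up)
RATE_MAX = 5_000_000        # 5 Mbps cap (made up)

def update_target_rate(rate, owd_sample, owd_baseline):
    """One control-loop step: owd_sample is the latest one-way delay,
    owd_baseline is the minimum observed over BASELINE_WINDOW."""
    queue_delay = owd_sample - owd_baseline
    if queue_delay < QUEUE_DELAY_THRESH:
        # No standing queue: keep probing upward toward link capacity.
        return min(rate * 1.05, RATE_MAX)
    # Self-inflicted queue detected: back off multiplicatively.
    return max(rate * 0.85, RATE_MIN)

On an otherwise idle bottleneck this should oscillate around the link rate with only a small standing queue, which is the "same throughput as TCP, much lower delay" outcome I'm describing.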
> The RTC web congestion control is explicitly designed to detect if a given encoding causes persistent queues, and to select a lower rate if so. This is nearly guaranteed to leave some unused network capacity and would be considered suboptimal for a throughput-maximizing protocol. Thus it is unlikely that any properly functioning RTC web application will endanger any throughput-maximizing protocol.
Agreed, when the bottleneck is caused by competing flows (cross-traffic or otherwise). See my comments in an earlier message about how, in most cases, a delay-sensing CC algorithm should back off before TCP even sees losses - though not in all cases. In any steady-state case, however, which is what you're referring to here, the delay-sensing CC is likely either to end up pinned to the floor (adapt down to its minimum or quit) or to be forced to convert into a loss-based, high-delay mode.
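The "pinned to the floor" outcome is at least detectable. A sketch of the decision I'm describing (again, all names and numbers made up):

PINNED_TIME_THRESH = 5.0  # seconds stuck at the floor before giving up (made up)

class CompetitionDetector:
    """Decide whether a delay-sensing sender has been out-competed."""

    def __init__(self, rate_min, queue_delay_thresh):
        self.rate_min = rate_min
        self.queue_delay_thresh = queue_delay_thresh
        self.pinned_since = None

    def mode(self, now, rate, queue_delay):
        """Return 'delay-based', 'loss-based', or 'quit'."""
        pinned = rate <= self.rate_min and queue_delay > self.queue_delay_thresh
        if not pinned:
            self.pinned_since = None
            return "delay-based"
        if self.pinned_since is None:
            self.pinned_since = now
        if now - self.pinned_since < PINNED_TIME_THRESH:
            return "delay-based"
        # Stuck at the minimum rate with a persistent standing queue:
        # competing loss-based flows own the bottleneck, and the remaining
        # choices are to quit or to behave like a loss-based sender.
        return "loss-based"  # or "quit"; that's an application policy question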
> The inverse is not at all true. It can be anticipated that TCP and other throughput-maximizing protocols have the potential to cause unacceptable queueing delays for RTC web based applications, unless one of two strategies is implemented: either the traffic is segregated into separate queues such that delay-sensitive traffic does not get queued behind throughput-maximizing traffic, or the AQM is tuned to maintain very short queues under all conditions.
I'm not certain this is the case; I can imagine algorithms that attempt to avoid being pinned to the floor by at least occasionally delaying the reduction in bandwidth and accepting a short burst of delay in order to force TCP flows to back off (for a while). This may result in an unacceptable user experience, however, especially if the queue is deep (some bufferbloat examples can result in 1/2 to 1 second or longer of delay before loss will be seen). I'm also not certain that such algorithms could actually succeed, even with the periodic negative impacts. It *might* be effective in managing problems caused by bursts of competing short-lived TCP flows (typical browser HTTP page-load behavior, for example).
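Something like this is what I have in mind - purely hypothetical, every constant made up - where the sender periodically holds its rate despite rising delay so that the competing TCP flows eventually see loss and back off:

HOLD_INTERVAL = 10.0  # try to reclaim bandwidth every 10 s (made up)
HOLD_DURATION = 1.0   # tolerate up to 1 s of elevated delay (made up)

class HoldingSender:
    """Occasionally refuse to yield to delay, to push competing TCP into loss."""

    def __init__(self):
        self.last_hold = float("-inf")
        self.holding_until = 0.0

    def should_back_off(self, now, queue_delay, delay_thresh):
        if now < self.holding_until:
            return False  # mid-hold: keep the rate and eat the delay
        if queue_delay <= delay_thresh:
            return False  # no standing queue, nothing to react to
        if now - self.last_hold > HOLD_INTERVAL:
            # Start a hold phase instead of backing off immediately.
            self.last_hold = now
            self.holding_until = now + HOLD_DURATION
            return False
        return True  # normal delay-sensing back-off

Whether the 1-second delay burst (or much worse on a badly bufferbloated link) is tolerable for the call is exactly the open question above.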
> We assume that the RTC web traffic is somehow protected from other throughput-maximizing traffic. Enabling RTC web and throughput-maximizing protocols to simultaneously meet their respective objectives in a single queue is beyond the scope of this work.
I'd state it more as: it's understood that long-lived competing TCP flows (of sufficient potential bandwidth in total) will either out-compete the rtcweb traffic or force it into a loss-based, high-delay behavior, which may or may not be acceptable to the user, depending on the induced queueing delay at the bottleneck node/connection.

--
Randell Jesup
randell-ietf@jesup.org