
On 4/10/2012 1:54 AM, Michael Welzl wrote:
On Apr 10, 2012, at 7:13 AM, Harald Alvestrand wrote:
I would like to have at least some idea of the benefit we're gaining from a short reaction time in order to evaluate what the cost/benefit tradeoff is.
Well okay, my statement here was handwavery. Let's put it this way, I imagine you causing problems with competing TCPs if you react slower, and I'd at least like to see this potential problem investigated (and that's how you can figure out the trade-off).
Please see my discussion elsewhere which shows that except for the huge-sudden-congestion case a delay-sensing algorithm such as this should sense and react to imminent congestion earlier than a loss-based algorithm such as TCP would. (Otherwise it wouldn't be doing a very good job at avoiding delay!)
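To make that intuition concrete, here's a toy sketch (not any proposed algorithm — the queue size, growth rate, and thresholds are all made-up illustrative numbers): a delay sensor fires as soon as the measured one-way delay rises a threshold above its baseline, while a loss-based sender gets no signal until the queue actually overflows.

```python
# Toy model: queue builds 5 ms per tick at a congested bottleneck.
# A delay-sensing detector triggers when one-way delay exceeds the
# baseline by a threshold; a loss-based detector only "sees" anything
# once the queue is full and a packet is dropped.  All numbers are
# illustrative assumptions, not from any spec or implementation.

def compare_detectors(queue_capacity_ms=200, delay_threshold_ms=30,
                      baseline_ms=20, growth_ms_per_tick=5):
    """Return (tick of delay-based trigger, tick of first loss)."""
    delay_trigger = loss_trigger = None
    queue_ms = 0
    for tick in range(1, 1000):
        queue_ms += growth_ms_per_tick        # standing queue grows
        owd = baseline_ms + queue_ms          # one-way delay at receiver
        if delay_trigger is None and owd - baseline_ms > delay_threshold_ms:
            delay_trigger = tick              # delay sensor reacts here
        if loss_trigger is None and queue_ms >= queue_capacity_ms:
            loss_trigger = tick               # queue full -> first drop
        if delay_trigger is not None and loss_trigger is not None:
            break
    return delay_trigger, loss_trigger

delay_tick, loss_tick = compare_detectors()
# The delay sensor fires many ticks before the first loss would occur.
```

Under these assumptions the delay sensor triggers at tick 7 (35 ms of queue) while the first drop doesn't happen until tick 40 — which is the whole point of avoiding delay rather than reacting to loss.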
On the other hand, as I keep saying: if you constantly receive SCTP ACKs from a parallel RTCweb data transfer, you should make use of that other feedback.
While there are use cases for rtcweb that involve "infinite" data sources (or at least close enough), by far most uses of the rtcweb data channel are short, bursty, or paced at relatively low bandwidth. There are a few use cases where large background transfers are done (largish file transfers), but most of those are typically a short burst in a longer call/connection. So it's a bad idea to plan on a "constant stream of SCTP ACKs"... If you have some SCTP traffic, then yes, it would be nice if that helped your algorithm, but I think you'll find that the existing media streams will typically be far more frequent (and reliable).

In a "normal" video chat, you'll have 10-30 frames/second of video (at 1 to (say) 6 packets per frame), plus typically 50 audio packets per second - in each direction, each carrying timing information. So in normal situations, there's plenty of timing traffic to use from the media streams.

There are cases where there may be one-way traffic, and knowledge of the idle direction's bandwidth will degrade - but that's OK: if the traffic starts up again, it should see no delay (barring TCP causing standing queues). At most the algorithm should revert back to the starting state (faster adaptation, perhaps from the starting point, though I'd want the restart point to be based on our last good estimate, etc.).

So, does that answer your concerns?

--
Randell Jesup
randell-ietf@jesup.org
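For what it's worth, the back-of-the-envelope arithmetic from those figures (the 10-30 fps, 1-6 packets/frame, and 50 audio pps numbers above are the mail's illustrative values, not measurements):

```python
# Timing-carrying packets per second available from the media streams
# in one direction of a "normal" video chat, using the illustrative
# figures from the mail (not measured values).

def timing_packets_per_second(video_fps, packets_per_frame, audio_pps=50):
    """Packets/sec carrying usable timing information, one direction."""
    return video_fps * packets_per_frame + audio_pps

# Low end: 10 fps video at 1 packet/frame, plus audio.
low = timing_packets_per_second(video_fps=10, packets_per_frame=1)

# High end: 30 fps video at 6 packets/frame, plus audio.
high = timing_packets_per_second(video_fps=30, packets_per_frame=6)
```

Even the low end gives 60 timing samples per second — several per typical RTT — which is why the media streams, not SCTP ACKs, are the feedback source to count on.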