
And I think you are missing my big point: delay sensing does not work for general-purpose congestion control, due to known stability and fairness problems. Unless the scope of the RTCweb problem is somewhat narrowed, it is guaranteed to fail, because the larger problem is already known to be intractable.

The 50k-meter view: applications that expect to use RTCweb need a bound on the queuing delay. We (the WG) have to assume some sort of bound on the other traffic (delay- or loss-sensing) that might be sharing the same bottleneck queue. If we don't assume some bound on the other traffic, it is self-evident that RTCweb cannot possibly guarantee the bound on the delay. Period. There is nothing complicated or subtle here, and the user will know very clearly when the network fails to meet it.

It would help move the Internet forward for the RTCweb WG to point out to the rest of the community that QoS-like technologies can solve the queue-sharing problem. However, specifying any of those details is clearly out of scope for RTCweb. My earlier long message was an attempt at explicitly narrowing the scope to make the problem tractable.

Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay

On Fri, Apr 6, 2012 at 1:25 AM, Michael Welzl <michawe@ifi.uio.no> wrote:
<snip>
To take a short point out of my previous, maybe too long email: depending on the queue length, which is not under your control, saying "I want 0 delay on top of the baseline" may mean that you'd only get a very small amount of bandwidth.
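To make that concrete with a toy model (all names and numbers here are my own illustration, not from any draft): if a competing loss-based flow keeps a standing queue at the bottleneck, a delay-sensing flow can only defend the bandwidth share for which the queuing delay stays within its target. Queuing delay at the bottleneck is roughly queue length divided by capacity.

```python
# Hypothetical steady-state sketch: a delay-based flow sharing a
# bottleneck with one loss-based flow that keeps a standing queue.
# The delay-based flow backs off whenever measured queuing delay
# exceeds its target, so a tight target squeezes its share.

def delay_based_share(link_capacity_pps, standing_queue_pkts, target_delay_s):
    """Return the (crudely modeled) bandwidth share of the delay-based flow.

    Queuing delay imposed by the standing queue is queue/capacity.
    If the target tolerates that delay, the flow can hold a fair (1/2)
    share; otherwise its share shrinks in proportion to how much of
    the standing delay it is willing to tolerate.
    """
    standing_delay = standing_queue_pkts / link_capacity_pps
    if target_delay_s >= standing_delay:
        return 0.5  # target loose enough to compete for a fair share
    return 0.5 * (target_delay_s / standing_delay)

# A 1000 pkt/s link where the loss-based flow queues 100 packets
# imposes a 100 ms baseline delay:
print(delay_based_share(1000, 100, 0.0))   # "0 delay on top" -> 0.0 share
print(delay_based_share(1000, 100, 0.05))  # 50 ms target -> 0.25 share
print(delay_based_share(1000, 100, 0.1))   # 100 ms target -> 0.5 share
```

The exact shape of the curve is invented; the point is only the direction of the trade-off: the tighter the delay target relative to the queue you don't control, the smaller the bandwidth you get.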
One possibility would be to make that trade-off a knob for the user. Another is to let the "at least as much as the TCP equation dictates" rule in Stefan's proposal take care of it, but then you don't really know how much delay you'll get... E.g., maybe users could even live with less bandwidth than what the TCP equation dictates, as long as the delay is smaller? I think that's not an option with the currently proposed scheme.
Cheers, Michael