
On 04/10/2012 04:50 PM, Matt Mathis wrote:
Jim, For the record, my earlier "General thoughts" message reflects my bet that better AQM will not be sufficient by itself, because the best that you can do with AQM will not in general be good enough for teleconferencing. Two reasons:
- The optimal AQM setpoint for throughput maximization will consume too much of the RTCWEB end-to-end delay budget.
- AQM is designed to allow transient long queues for TCP slow start, etc.
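To put rough numbers on the budget argument (all figures below are illustrative assumptions, not measurements): an often-cited mouth-to-ear target for interactive audio is on the order of 150 ms one way, and once propagation, codec, and jitter-buffer delays are subtracted, a standing queue held at a throughput-friendly AQM setpoint of ~100 ms blows the budget by itself.

    # Illustrative delay-budget arithmetic; every number here is an assumption.
    budget_ms       = 150  # rough one-way mouth-to-ear target for interactive audio
    propagation_ms  = 40   # continental path propagation (assumed)
    codec_ms        = 30   # capture + encode + decode + playout (assumed)
    jitter_ms       = 20   # jitter buffer (assumed)
    aqm_setpoint_ms = 100  # standing queue at a throughput-oriented AQM setpoint

    remaining = budget_ms - (propagation_ms + codec_ms + jitter_ms + aqm_setpoint_ms)
    print(remaining)       # -40: the queue alone has already blown the budget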
I do not disagree that AQM helps a lot. Without AQM the situation is abysmal. But this is really a different problem than bufferbloat, and good solutions to bufferbloat are not likely to automatically solve the RTCWEB problem because RTCWEB's target delay is well below the target delays for nearly all other applications.
The exception might be the gaming community. Just last week somebody was telling me that all serious gaming hackers roll QoS on their home LAN, and that many of the large ISPs honor enough of the bits to make a difference.
I think we're in violent agreement here: while AQM is necessary, it is unlikely to be sufficient, particularly with entertainment like IW10, sharded web sites, and TCP offload engines doing really evil things when the packet trains go "splat" at the edge of the network. I'm planning on both AQM and classification/"fair" queueing. The problems that scare me most are these large packet trains leaving data centers... - Jim
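For a sense of scale (hypothetical numbers): IW10 lets each new connection emit roughly ten full-size segments back to back, and a sharded page opens many such connections at once, so the train arriving at the edge can be well over a hundred kilobytes sent at data-center line rate.

    # Rough size of the packet train from a sharded page load; numbers are assumptions.
    mss_bytes   = 1448        # typical TCP payload per segment
    iw_segments = 10          # IW10 initial window
    shards      = 12          # assumed number of parallel sharded connections

    burst_bytes = mss_bytes * iw_segments * shards
    print(burst_bytes)                    # roughly 170 KB arriving nearly at once
    # Drained over an assumed 10 Mbit/s broadband link, that burst alone is ~139 ms of queue:
    print(burst_bytes * 8 / 10e6 * 1000)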
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
On Tue, Apr 10, 2012 at 12:14 PM, Jim Gettys <jg@freedesktop.org> wrote:
On 04/10/2012 02:58 PM, Randell Jesup wrote:
> 100ms is just bad, bad, bad for VoIP on the same links. The only case
> where I'd say it's ok is where it knows it's competing with
> significant TCP flows. If it reverted to 0 queuing delay or close
> when the channel is not saturated by TCP, then we might be ok (not
> sure). But I don't think it does that.

You aren't going to see delay under 100ms under saturating load unless the bottleneck link is running a working AQM; that's the property of tail drop, and the "rule of thumb" for sizing buffers has been of order 100ms. This is to ensure a single TCP flow can achieve maximum bandwidth over continental paths.
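That rule of thumb is just the bandwidth-delay product: give the bottleneck one RTT's worth of buffering so a single TCP flow can keep the link busy through a loss/recovery cycle. A minimal sketch with assumed numbers:

    # Classic buffer-sizing rule of thumb: buffer = bandwidth * RTT (assumed values).
    link_rate_bps = 100e6     # bottleneck link of 100 Mbit/s
    rtt_s         = 0.100     # ~100 ms continental round trip

    buffer_bytes = link_rate_bps * rtt_s / 8
    print(buffer_bytes)       # 1,250,000 bytes: one RTT of data
    # Under a saturating tail-drop TCP load that buffer tends to sit full,
    # so everything sharing the link sees on the order of 100 ms of queueing delay.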
Unfortunately, the bloat in the broadband edge is often/usually much, much higher than this, being best measured in seconds :-(.
http://gettys.files.wordpress.com/2010/12/uplink_buffer_all.png
http://gettys.files.wordpress.com/2010/12/downlink_buffer_all.png
(thanks to the Netalyzr folks).
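The arithmetic behind "measured in seconds" is equally simple (illustrative numbers only): a fixed buffer that would be reasonable for a fast link drains very slowly over a typical broadband uplink.

    # Why edge bloat shows up in seconds; both numbers are assumptions.
    buffer_bytes = 256 * 1024   # a not-unusual fixed device buffer
    uplink_bps   = 1e6          # 1 Mbit/s uplink

    print(buffer_bytes * 8 / uplink_bps)   # ~2.1 seconds of queueing delay once it fills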
Worse yet, the broadband edge is typically a single queue today (even in technologies that may support multiple classifications). So your VoIP and other latency-sensitive traffic is likely stuck behind bulk traffic. ISPs' own telephony services typically bypass these queues.
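As a toy illustration of what classification buys (this is not any particular modem's or ISP's implementation): giving voice its own strictly-prioritized queue keeps it from waiting behind the bulk backlog that a single shared queue would impose.

    # Toy two-class priority scheduler; purely illustrative.
    from collections import deque

    voip_q, bulk_q = deque(), deque()

    def enqueue(pkt):
        (voip_q if pkt["dscp"] == "EF" else bulk_q).append(pkt)

    def dequeue():
        # Voice never waits behind the bulk backlog; in a single shared queue
        # it would sit behind every buffered bulk packet.
        if voip_q:
            return voip_q.popleft()
        return bulk_q.popleft() if bulk_q else None

    for i in range(1000):
        enqueue({"dscp": "BE", "seq": i})   # a large bulk backlog
    enqueue({"dscp": "EF", "seq": 0})       # one voice packet arrives last
    print(dequeue()["dscp"])                # 'EF': served first despite the backlog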
If there is AQM, then you'll get packet marking going on (drop or ECN), and decent latencies.
There is hope here for AQM algorithms that are self-tuning: I now know of two such beasts, though they are a long way from "running code" state at the moment.
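To give a flavor of "self-tuning" (this is only a sketch in the spirit of sojourn-time-based AQM, not necessarily either of the two algorithms alluded to above): the controller watches how long packets actually sit in the queue and only starts dropping or ECN-marking when that delay stays above a small target, so there is no per-link bandwidth or queue-length knob to configure.

    # Sketch of a delay-based, self-tuning AQM dequeue path; illustrative only.
    import time
    from collections import deque

    TARGET_S   = 0.005   # tolerate ~5 ms of standing queue
    INTERVAL_S = 0.100   # the excess must persist ~one worst-case RTT before acting

    queue = deque()      # holds (enqueue_time, packet)
    first_above = None   # when the sojourn time first exceeded the target

    def enqueue(pkt):
        queue.append((time.monotonic(), pkt))

    def dequeue():
        """Return the next packet, or None when it should be dropped/ECN-marked."""
        global first_above
        if not queue:
            first_above = None
            return None
        t_in, pkt = queue.popleft()
        sojourn = time.monotonic() - t_in
        if sojourn < TARGET_S:
            first_above = None            # queue is draining on its own
            return pkt
        if first_above is None:
            first_above = time.monotonic()
            return pkt
        if time.monotonic() - first_above > INTERVAL_S:
            return None                   # persistent standing queue: drop or mark
        return pkt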
So the direction I'm going is to get AQM that works..... (along with classification...). But the high-order bit is AQM, to keep the endpoints' TCPs behaving, which you can't do solely by classification. - Jim