
On 4/11/2012 2:38 PM, Wesley Eddy wrote:
On 4/11/2012 2:56 AM, Randell Jesup wrote:
Agreed. I assume the LEDBAT WG did VoIP tests? Did they measure MOS? (not that it's a great measure, but it's well-known and understood). Any links to results?
To my knowledge, no specific test results like this have been presented to the working group. The working group is actually planning to close "real soon now", as the specification is on its way to the IESG and the rest of the discussion has really died off.
The link posted by Piers shows that a 25ms-target LEDBAT flow increased VoIP delay by 35ms. Barring additional tests, one should assume that a 100ms-target LEDBAT flow would increase VoIP delay by more than 100ms. Given a 150ms mouth-to-ear window for best quality, you've already blown the window; at best you can avoid going *too* far down the slope. With 50-150ms of capture, encoding, transmission, jitter-buffer, decode, and playback delay for a video call, adding 100ms+ of queuing delay will put you *well* down the curve, even with a good, local connection. From that data, I would not say 100ms-target LEDBAT is fair (as a scavenger protocol) to classic inflexible VoIP traffic, which includes our (rtcweb) traffic. Unfortunately, they also mention that BitTorrent's equivalent of LEDBAT is deployed and has 100ms as its target.
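To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 20ms network delay is an illustrative assumption; the 35ms and 100ms queue figures come from the measurement and the assumption above, and 150ms is the usual mouth-to-ear guidance (ITU-T G.114):

    # Rough mouth-to-ear budget for a call sharing a bottleneck with a LEDBAT flow.
    G114_BUDGET_MS = 150  # one-way delay target for best quality

    def mouth_to_ear_ms(pipeline_ms, network_ms, ledbat_queue_ms):
        """Capture/encode/jitter-buffer/decode/playout (pipeline_ms), plus
        propagation/transmission (network_ms), plus the standing queue a
        LEDBAT flow builds at the bottleneck (ledbat_queue_ms)."""
        return pipeline_ms + network_ms + ledbat_queue_ms

    for queue_ms in (0, 35, 100):      # none / measured 25ms-target / assumed 100ms-target
        for pipeline_ms in (50, 150):  # best and worst cases from above
            total = mouth_to_ear_ms(pipeline_ms, network_ms=20, ledbat_queue_ms=queue_ms)
            print(f"queue={queue_ms:3d}ms pipeline={pipeline_ms:3d}ms -> "
                  f"total={total:3d}ms ({total - G114_BUDGET_MS:+d}ms vs 150ms)")

Even the best case (50ms pipeline) plus the assumed 100ms of queuing lands around 170ms, already past the budget; the worst case is far beyond it.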
Such tests would be interesting and useful (maybe even necessary) in thinking about progressing from Experimental to Standards Track, in my opinion.
Since this will be part of the deployed base that RTCWEB is sharing links with, it will be good to think about in terms of how we evaluate candidate RTCWEB mechanisms/algorithms.
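For context on why a LEDBAT flow tends to impose roughly its delay target on everything sharing the bottleneck, here is a minimal sketch of the RFC 6817 window update with a toy queue model; the constants and the queue model are illustrative assumptions, not any deployed implementation:

    # LEDBAT cwnd update, simplified from RFC 6817 (one delay sample per ACK).
    TARGET_MS = 100   # maximum TARGET RFC 6817 allows; also the deployed BitTorrent value noted above
    GAIN = 1.0        # RFC 6817 requires GAIN <= 1
    MSS = 1500        # bytes

    def ledbat_cwnd_update(cwnd, queuing_delay_ms, bytes_acked):
        """Grow cwnd while measured queuing delay is below TARGET, shrink it above:
        the flow pushes the bottleneck's standing queue toward ~TARGET_MS."""
        off_target = (TARGET_MS - queuing_delay_ms) / TARGET_MS
        return max(MSS, cwnd + GAIN * off_target * bytes_acked * MSS / cwnd)

    # Toy illustration: assume 1 ms of extra queue per MSS beyond the path's capacity.
    cwnd, queue_ms = 10 * MSS, 0.0
    for _ in range(1000):
        cwnd = ledbat_cwnd_update(cwnd, queue_ms, bytes_acked=cwnd)
        queue_ms = max(0.0, (cwnd - 10 * MSS) / MSS)
    print(round(queue_ms))   # ~100: the standing queue converges to the target

Every other flow on that bottleneck (VoIP included) then rides on top of that ~100ms standing queue, which is the problem for any algorithm trying to hold delay lower.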
We can and should test against LEDBAT, but I think it will just tell us that LEDBAT doesn't play nicely with other delay-based algorithms, especially if they have low-delay targets (as the sketch above suggests).

--
Randell Jesup
randell-ietf@jesup.org