
On 10/11/2011 03:11 AM, Henrik Lundin wrote:
I do not agree with you here. When an over-use is detected, we propose to measure the /actual/ throughput (over the last 1 second) and set the target bitrate to beta times this throughput. Since the measured throughput is a rate that was evidently feasible (at least during that 1 second), any beta < 1 should ensure that the buffers get drained, though of course at different rates depending on the magnitude of beta.
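A minimal sketch of that back-off rule, in Python: the 1-second averaging window and the requirement that beta < 1 come from the proposal above, while the function names, the packet-log representation, and the beta = 0.9 default are illustrative assumptions rather than the actual implementation.

def incoming_throughput_bps(packet_log, now, window_s=1.0):
    """Bits per second actually received over the last window_s seconds.

    packet_log is assumed to be a list of (arrival_time_s, size_bits)
    tuples for packets received so far.
    """
    bits = sum(size for t, size in packet_log if now - t <= window_s)
    return bits / window_s

def on_overuse_detected(packet_log, now, beta=0.9):
    """Return the new target bitrate after an over-use signal.

    Any beta < 1 pushes the target below a rate that was just shown to
    be feasible, so the bottleneck queue can drain; 0.9 is only a
    placeholder value.
    """
    measured = incoming_throughput_bps(packet_log, now)
    return beta * measured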
Take a look at the data from the ICSI Netalyzr: you'll find scatter plots at http://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-thr... Note the different coloured lines. They represent the amount of buffering measured in the broadband edge in *seconds*. Also note that for various reasons, the Netalyzr data is actually likely underestimating the problem.

Then realise that when congested, nothing you do can react faster than the RTT including the buffering. So if your congestion is in the broadband edge (where it often/usually is), you are in a world of hurt, and you can't use any algorithm that has fixed time constants, even one as long as 1 second.

Wish this weren't so, but it is. Bufferbloat is a disaster...

- Jim
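To put rough numbers on that feedback-loop argument: once the bottleneck queue at the broadband edge fills, the sender cannot observe the effect of any rate change until the inflated RTT has elapsed. The sketch below is back-of-the-envelope arithmetic only; the link rate and buffer size are made-up illustrative figures, not Netalyzr measurements.

def feedback_delay_s(base_rtt_s, queue_bytes, link_rate_bps):
    """Earliest time a sender can see the effect of its own traffic:
    the path RTT plus the time needed to drain the standing queue."""
    queuing_delay_s = (queue_bytes * 8) / link_rate_bps
    return base_rtt_s + queuing_delay_s

# Example: a 1 Mbit/s uplink with 256 kB of buffer adds about 2.1 s of
# queuing delay on top of a 50 ms path RTT, so a controller with a
# fixed 1-second time constant is already reacting to stale signals.
print(feedback_delay_s(0.05, 256 * 1024, 1_000_000))  # ~2.15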