
On 05/04/2012 03:02 AM, Harald Alvestrand wrote:
Now that the LEDBAT discussion has died down....
it's clear to me that we've got two scenarios where we HAVE to consider packet loss as an indicator that a congestion control algorithm based on delay will "have to do something":
- Packet loss because of queues filled by TCP (high delay, but no way to reduce it)
- Packet loss because of AQM-handled congestion (low delay, but packets go AWOL anyway)
We also have a third category of loss that we should NOT consider, if we can avoid it:
- Packet loss due to stochastic events like wireless drops.
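A minimal sketch of how a delay-based sender might handle those three cases: back off on sustained loss whether the measured queuing delay is high (a FIFO shared with TCP) or low (an AQM drop), but tolerate isolated losses. The class name, thresholds, and constants below are illustrative assumptions only, not anything specified in this thread.

# Illustrative only: a delay-based rate controller that also treats
# sustained loss as a congestion signal, so both the "queue full of TCP"
# and the "AQM drop" cases trigger a reduction, while isolated
# (stochastic) losses at low delay do not.
class DelayLossController:
    def __init__(self, rate_bps=1_000_000, delay_target_ms=100.0,
                 loss_backoff=0.85, delay_backoff=0.9, increase_step=50_000):
        self.rate_bps = rate_bps
        self.delay_target_ms = delay_target_ms
        self.loss_backoff = loss_backoff
        self.delay_backoff = delay_backoff
        self.increase_step = increase_step

    def on_report(self, queuing_delay_ms, loss_ratio):
        # Called once per feedback interval.
        if loss_ratio > 0.02:
            # More than an occasional loss: treat as congestion whether
            # delay is high (deep FIFO) or low (AQM drop).
            self.rate_bps *= self.loss_backoff
        elif queuing_delay_ms > self.delay_target_ms:
            # Normal delay-based reaction.
            self.rate_bps *= self.delay_backoff
        else:
            # Isolated losses with low delay (e.g. wireless noise)
            # are tolerated; keep probing upward slowly.
            self.rate_bps += self.increase_step
        return self.rate_bps

For example, on_report(queuing_delay_ms=5, loss_ratio=0.05) reduces the rate even though delay is low (the AQM case), while on_report(queuing_delay_ms=5, loss_ratio=0.005) does not (the stochastic case).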
These may be less frequent than you might expect: in fact, we have recently had experience with the opposite problem, finding device drivers that would retransmit indefinitely (inserting unbounded delay) in the face of problems on the wireless channel. At least we now have an upper bound on retransmission attempts in the Linux Atheros ath9k driver... There are bugs everywhere...

What is "normal" in WiFi is needing multiple attempts to transmit a packet (and therefore additional jitter as those attempts take place). It turns out that it is usually a better strategy *not* to drop the transmission rate in the face of transmission errors, but to make multiple transmission attempts (remember, dropping the rate increases the airtime during which a packet can be damaged by noise). The Minstrel rate-control algorithm in Linux is quite sophisticated in this respect. But such operation will generate jitter rather than loss.

- Jim

For amusement, see: http://www.rossolson.com/dwelling/2003/11/every-packet-is-sacred/
Unfortunately, some of the world takes this humour seriously, and the result has been bufferbloat...
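To make the rate-versus-retry argument above concrete, here is a back-of-the-envelope calculation. All numbers (frame size, PHY rates, per-microsecond error probability, retry count) are assumptions for illustration; this is not how Minstrel actually decides.

# Toy comparison: retry at a high PHY rate vs. fall back to a low rate.
FRAME_BITS = 1500 * 8

def airtime_us(rate_mbps):
    # Payload airtime only; ignores preamble/ACK overhead for simplicity.
    return FRAME_BITS / rate_mbps

def success_prob(rate_mbps, error_per_us=2e-3):
    # Toy noise model: each microsecond on the air independently risks
    # corrupting the frame, so longer airtime means more exposure.
    return (1 - error_per_us) ** airtime_us(rate_mbps)

p_fast = success_prob(54)             # one attempt at 54 Mbit/s
p_slow = success_prob(6)              # one attempt at 6 Mbit/s
p_fast_retry = 1 - (1 - p_fast) ** 3  # up to three attempts at 54 Mbit/s

print(f"54 Mbit/s, 1 try  : {p_fast:.3f}")
print(f" 6 Mbit/s, 1 try  : {p_slow:.3f}")
print(f"54 Mbit/s, 3 tries: {p_fast_retry:.3f}")

With these assumed numbers, three tries at 54 Mbit/s succeed about 95% of the time while occupying roughly a third of the airtime of a single 2 ms attempt at 6 Mbit/s, which succeeds under 2% of the time. That is the intuition behind preferring retries over rate drops, at the cost of jitter rather than loss.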