
On 19. sep. 2012, at 22:23, Mirja Kühlewind wrote:
Hi,
let me try to provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
The original motivation for this was pretty simple: if you knew that two flows with different elasticities were sharing a bottleneck, and seeing loss, you could make the congestion reaction on the more elastic flow and leave the less elastic one untouched. Okay, I can understand this point. But in fact, if you see congestion and you are elastic (up to a certain degree), you should lower your sending rate (or do whatever you can do), because there might also be other flows on the link that need the bandwidth even more urgently.
With combined congestion control, this (lowering the send rate) will happen. Consider having only one controller: then "making the congestion reaction on the more elastic flow" translates into a change in the scheduler that takes data from application 1 and application 2, in addition to the standard congestion control reaction for what, to the network, looks like one flow. (In fact, the way I envision it, this behavior would be the result of information exchange between multiple controllers.)
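To make this more concrete, here is a minimal sketch of the "one controller" view in Python; all names (Flow, AggregateController, elasticity, beta) are made up for illustration, and the information exchange between multiple controllers is abstracted into a single object:

from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    elasticity: float  # higher = more willing to absorb rate cuts
    rate: float        # current share of the aggregate, in bits/s

class AggregateController:
    """One congestion controller for all flows sharing a bottleneck.

    To the network, the aggregate reacts like a single standard flow;
    the scheduler decides internally who absorbs the reduction.
    """

    def __init__(self, flows):
        self.flows = flows

    @property
    def total_rate(self):
        return sum(f.rate for f in self.flows)

    def on_congestion(self, beta=0.5):
        # Standard multiplicative decrease on the aggregate...
        cut = self.total_rate * (1.0 - beta)
        # ...but taken from the most elastic flow(s) first, so the
        # less elastic flow stays untouched whenever possible.
        for f in sorted(self.flows, key=lambda f: f.elasticity, reverse=True):
            taken = min(cut, f.rate)
            f.rate -= taken
            cut -= taken
            if cut <= 0:
                break

For example, with an inelastic audio flow at 100 kbit/s and an elastic flow at 900 kbit/s, a single on_congestion() call halves the aggregate to 500 kbit/s while the audio flow keeps its full rate.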
I guess the only case I really see is when there is not enough bandwidth for both transmissions (such that neither of the respective services works properly) and you have to decide which one to kill. But that's the circuit-breaker case and thus out of scope.
This sounds like a misunderstanding. I hope what I wrote above and my previous email help. Otherwise, maybe you want to take a look at the list archives for more information; this has been discussed at some depth in the past. BTW, Randell Jesup has, at some point, stated that Mozilla is already using such scheduling / combined congestion control.
- Develop techniques to detect, instrument or diagnose failing to meet RT schedules due to failures of components outside of the charter scope, possibly in collaboration with IPPM.
This was related to the "Internet suckage" metric. I don't know of a concrete proposal, but obviously things like measuring the queuing delay and the delay variation could be first cuts (see the sketch below).
Basically, if you're failing to deliver interactive traffic because of bufferbloat and competing flows, this would help to point the finger at the guilty party, so maybe it can be fixed.
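Just to make "first cuts" concrete, here is a minimal sketch of those two measurements, assuming one-way delay samples are available (e.g. from RTP timestamps); all names are hypothetical and clock-offset handling is deliberately left out:

class DelayMonitor:
    def __init__(self):
        self.base_owd = float("inf")  # minimum delay seen so far, taken
                                      # as the propagation-delay baseline
        self.last_owd = None
        self.jitter = 0.0             # smoothed delay variation, in the
                                      # style of RFC 3550's jitter estimate

    def add_sample(self, owd):
        """owd: one-way delay of a received packet, in seconds."""
        self.base_owd = min(self.base_owd, owd)
        if self.last_owd is not None:
            d = abs(owd - self.last_owd)
            self.jitter += (d - self.jitter) / 16.0  # EWMA with gain 1/16
        self.last_owd = owd

    @property
    def queuing_delay(self):
        # Delay above the observed minimum is attributed to queuing; a
        # persistently large value points at bufferbloat on the path.
        if self.last_owd is None:
            return 0.0
        return self.last_owd - self.base_owd

A monitor along these lines could flag the path when queuing_delay stays above some threshold for long enough, which is exactly the finger-pointing case above.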
Hm, isn't this quite a topic of its own that might actually simply belong in the IPPM working group, at least as long as there is no very specific reason why something is different for RT traffic? Maybe it's not so much about the measurement techniques but about how to interpret the measured metrics (when sending RT traffic)?
I think that collaboration with IPPM is indeed the right choice of words here. The techniques would have to be specific to the requirements of RTP-based interactive real-time media.

Cheers,
Michael