Fwd: WG Review: RTP Media Congestion Avoidance Techniques (rmcat)

As you can see, the RMCAT charter is now out for review, and scheduled for approval on the next IESG telechat.

The charter below is essentially the previous one that Michael sent to this list, with tweaks I made to satisfy some of the blocking and non-blocking comments it attracted during the IESG internal review.

It does seem to be missing the milestones at the moment, which should be no real issue, as they largely mimic the final set of deliverables bullets. However, I think this is an issue with our new chartering tool, of which RMCAT is making the first real use, and I'll work to resolve it with the tool.

-------- Original Message --------
Subject: WG Review: RTP Media Congestion Avoidance Techniques (rmcat)
Date: Wed, 05 Sep 2012 14:46:52 -0700
From: The IESG <iesg-secretary@ietf.org>
To: IETF-Announce <ietf-announce@ietf.org>

A new IETF working group has been proposed in the Transport Area. The IESG has not made any determination yet. The following draft charter was submitted, and is provided for informational purposes only. Please send your comments to the IESG mailing list (iesg at ietf.org) by 2012-09-12.

RTP Media Congestion Avoidance Techniques (rmcat)
------------------------------------------------
Current Status: Proposed Working Group
Assigned Area Director: Wesley Eddy <wes@mti-systems.com>

Charter of Working Group:

Description of Working Group

Today's Internet traffic includes interactive real-time media, which is often carried via sets of flows using RTP over UDP. There is no generally accepted congestion control mechanism for this kind of data flow. With the deployment of applications using the RTCWEB protocol suite, the number of such flows is likely to increase, especially non-fixed-rate flows such as video or adaptive audio. There is therefore some urgency in specifying one or more congestion control mechanisms that can find general acceptance.

Congestion control algorithms for interactive real-time media may need to be quite different from the congestion control of TCP: for example, some applications can be more tolerant of loss than of delay and jitter. The set of requirements for such an algorithm includes, but is not limited to:

- Low delay and low jitter for the case where there is no competing traffic using other algorithms
- A reasonable share of bandwidth when competing with RMCAT traffic, other real-time media protocols, and ideally also TCP and other protocols. A "reasonable share" means that no flow has a significantly negative impact [RFC5033] on other flows, and at minimum that no flow starves.
- Effective use of signals like packet loss and ECN markings to adapt to congestion

The working group will:

- Develop a clear understanding of the congestion control requirements for RTP flows, and document deficiencies of existing mechanisms such as TFRC with regard to these requirements. This must be completed prior to finishing any Experimental algorithm specifications.
- Identify interactions between applications and RTP flows to enable conveying helpful cross-layer information such as per-packet priorities, flow elasticity, etc. This information might be used to populate an API, but the WG will not define a specific API itself.
- Determine whether extensions to RTP/RTCP are needed for carrying congestion control feedback, using DCCP as a model. If so, provide the requirements for such extensions to the AVTCORE working group for standardization there.
- Develop techniques to detect, instrument, or diagnose failure to meet RT schedules due to failures of components outside the charter scope, possibly in collaboration with IPPM.
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
- Define evaluation criteria for proposed congestion control mechanisms, and publish these as an Informational RFC. This must be completed prior to finishing any Proposed Standard algorithm specifications.
- Find or develop candidate congestion control algorithms, verify that these can be tested on the Internet without significant risk, and publish one or more of them as Experimental RFCs.
- Publish evaluation criteria and the results of experimentation with these Experimental algorithms on the Internet. This must be completed prior to finishing any Proposed Standard algorithm specifications.
- Once an algorithm has been found or developed that meets the evaluation criteria and has a satisfactory amount of documented experience on the Internet, publish this algorithm as a Standards Track RFC. There may be more than one such algorithm.
- For each Experimental algorithm not selected for the Standards Track, review the algorithm and determine whether its RFC should be moved to Historic status via a document that briefly describes the issues encountered. This step is particularly important for algorithms with significant flaws, such as ones that turn out to be harmful to flows using or competing with them.

The work will be guided by the advice laid out in RFC 5405 (UDP Usage Guidelines), RFC 2914 (Congestion Control Principles), and RFC 5033 (Specifying New Congestion Control Algorithms).

The following topics are out of scope for this working group, on the assumption that work on them will proceed elsewhere:

- Circuit-breaker algorithms for stopping media flows when network conditions render them useless; this work is done in AVTCORE
- Media flows for non-interactive purposes like stored video playback; those are not as delay-sensitive as interactive traffic
- Defining active queue management algorithms or modifications to TCP of any kind
- Multicast congestion control; common control of multiple unicast flows is in scope
- Topologies other than point-to-point connections; implications for multi-hop connections will be considered at a later stage

The working group is expected to work closely with the RAI area, including the underlying technologies being worked on in the AVTCORE and AVTEXT WGs, and the applications/protocol suites being developed in the CLUE and RTCWEB working groups. It will also coordinate closely with other Transport area groups working on congestion control, and with the Internet Congestion Control Research Group of the IRTF.

Deliverables:

- Requirements for congestion control algorithms for interactive real-time media, as an Informational RFC
- Evaluation criteria for congestion control algorithms for interactive real-time media, as an Informational RFC
- RTCP extensions for use with congestion control algorithms, as a Proposed Standard RFC
- Interactions between applications and RTP flows, as an Informational RFC
- Identifying and controlling groups of flows, as a Proposed Standard RFC
- Techniques to detect, instrument, or diagnose failure to meet RT schedules, as either an Informational RFC or on the Standards Track if needed for interoperability or other aspects that would justify it
- Candidate congestion control algorithms for interactive real-time media, as Experimental RFCs (likely more than one)
- Experimentation and evaluation results for candidate congestion control algorithms, as an Informational RFC
- One or more recommended congestion control algorithms for interactive real-time media, as Proposed Standard RFCs

Milestones:

Hi, I know I'm late to comment, but after reading the charter again, I have to admit that two points are not absolutely clear to me. Maybe someone can provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
Why do I need to identify a shared bottleneck? The task of congestion control is to share the bottleneck capacity in some way. Usually I will end up with a different share depending on whether the same or a different congestion control algorithm is used by the competing flow(s). Is the idea here to detect a shared bottleneck and then change the congestion control behavior to achieve a different share than I would otherwise? Is there a concrete scenario or approach for how to do this, and why is this needed when a shared bottleneck is detected?
- Develop techniques to detect, instrument or diagnose failing to meet RT schedules due to failures of components outside of the charter scope, possibly in collaboration with IPPM.
I believe I got the idea behind this, but I'm not really sure how to realize it. Are you expecting some explicit feedback from the receiver (or even other components)? Is there a concrete proposal already for how to do this? What are you planning to do once you have detected an RT-schedule failure? Change the congestion control in some way (to be more aggressive)?

Thanks for clarification,
Mirja

On Thursday 06 September 2012 04:42:02 Wesley Eddy wrote:
As you can see, the RMCAT charter is now out for review, and scheduled for approval on the next IESG telechat.
The charter below is pretty much just the previous one that Michael sent to this list, with tweaks that I made in order to satisfy some of the blocking and non-blocking comments that it attracted during the IESG internal review.
It does seem to be missing the milestones at the moment; which should be no real issue as they mimic the final set of deliverables bullets largely. However, I think this is an issue with our new chartering tool that RMCAT is making the first real use of, and I'll work to resolve it with the tool.
_______________________________________________
Rtp-congestion mailing list
Rtp-congestion@alvestrand.no
http://www.alvestrand.no/mailman/listinfo/rtp-congestion
--
-------------------------------------------------------------------
Dipl.-Ing. Mirja Kühlewind
Institute of Communication Networks and Computer Engineering (IKR)
University of Stuttgart, Germany
Pfaffenwaldring 47, D-70569 Stuttgart
tel: +49(0)711/685-67973
email: mirja.kuehlewind@ikr.uni-stuttgart.de
web: www.ikr.uni-stuttgart.de
-------------------------------------------------------------------

On 19. sep. 2012, at 14:30, Mirja Kuehlewind wrote:
Hi,
I know I'm late to comment, but after reading the charter again, I have to admit that two points are not absolutely clear to me. Maybe someone can provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.

Why do I need to identify a shared bottleneck? The task of congestion control is to share the bottleneck capacity in some way. Usually I will end up with a different share depending on whether the same or a different congestion control algorithm is used by the competing flow(s). Is the idea here to detect a shared bottleneck and then change the congestion control behavior to achieve a different share than I would otherwise? Is there a concrete scenario or approach for how to do this, and why is this needed when a shared bottleneck is detected?
This is about combined congestion control for a group of flows in case they share the same bottleneck. Here's an example: consider two hosts and two flows between them. If they share a bottleneck, you could use only a single congestion controller for both. The share between them is then totally under the sender's control - the result of scheduling, and not the result of "fighting it out" on the bottleneck. From the queue's point of view, you're dealing with one, not two flows, leading to a reduction in queue fluctuations at least.

Examples of mechanisms designed to yield a benefit in such cases:
- Congestion Manager: http://nms.csail.mit.edu/cm/
- TCP Control Block Interdependence: RFC 2140

Example of a very prototypical implementation of congestion-manager-like functionality via SCTP, yielding benefits, as a proof of concept: Michael Welzl, Florian Niederbacher, Stein Gjessing: "Beneficial Transparent Deployment of SCTP: the Missing Pieces", IEEE GlobeCom 2011, 5-9 December 2011, Houston, Texas.

Example of a concrete scenario: in rtcweb, all traffic is (AFAIK) going to be multiplexed over the same UDP port-number pair. To be able to do any form of congestion control, it must therefore be assumed that all of these packets traverse the same path - which means that multiple flows multiplexed over this 5-tuple will share the same bottleneck.

(And then, there are several measurement-based mechanisms for shared-bottleneck detection out there; essentially, they look for correlations in some measured values such as one-way delay.)
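The correlation idea in that last parenthesis can be sketched very roughly as follows. This is only an illustrative toy, not part of any proposal: the function names, the use of a plain Pearson correlation, and the 0.8 threshold are all assumptions.

```python
# Toy sketch of measurement-based shared-bottleneck detection via
# one-way-delay correlation. Flows queued behind the same bottleneck
# tend to see correlated delay spikes; independent flows do not.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sample series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def share_bottleneck(owd_a, owd_b, threshold=0.8):
    """Guess that two flows share a bottleneck if their one-way-delay
    samples (taken over the same time window) are strongly correlated.
    The threshold is an arbitrary illustrative choice."""
    return pearson(owd_a, owd_b) >= threshold
```

A deployable mechanism would additionally need windowing, filtering of measurement noise, and handling of clock skew in the one-way-delay samples, which is exactly the kind of robustness question raised later in this thread.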
- Develop techniques to detect, instrument or diagnose failing to meet RT schedules due to failures of components outside of the charter scope, possibly in collaboration with IPPM.

I believe I got the idea behind this, but I'm not really sure how to realize it. Are you expecting some explicit feedback from the receiver (or even other components)? Is there a concrete proposal already for how to do this? What are you planning to do once you have detected an RT-schedule failure? Change the congestion control in some way (to be more aggressive)?
Many approaches are possible. I don't think that there is a concrete proposal out there already, but I suspect that there is some more concrete stuff on Matt Mathis' mind. He would be the right person to answer this. Cheers, Michael

Hi,

On Sep 19, 2012, at 14:58, Michael Welzl <michawe@ifi.uio.no> wrote:
Consider two hosts and two flows between them. If they share a bottleneck, you could use only a single congestion controller for both. The share between them is then totally under the sender's control - the result of scheduling, and not the result of "fighting it out" on the bottleneck. From the queue's point of view, you're dealing with one, not two flows, leading to a reduction in queue fluctuations at least.
how do you *know* two flows share a bottleneck? You could *assume* they do if, e.g., they share a path, i.e., run between the same two IP addresses concurrently in time. But even then - thanks to ECMP routing, datacenter load balancers, etc. - you don't really know. Or you could try and correlate loss events, but I don't know how robust that is.
Examples of mechanisms designed to yield a benefit in such cases: Congestion Manager: http://nms.csail.mit.edu/cm/ TCP Control Block Interdependence: RFC2140.
Yep, building a mechanism for this case is easy - *detecting* that a set of flows is limited by a shared bottleneck is harder.
Example of a concrete scenario: in rtcweb, all traffic is (AFAIK) going to be multiplexed over the same UDP port-number pair. To be able to do any form of congestion control, it must therefore be assumed that all of these packets traverse the same path - which means that multiple flows multiplexed over this 5-tuple will share the same bottleneck.
Ah, I see. We're not looking at the general problem here. Under the definition above, you don't actually have multiple flows. You have one flow (five-tuple) into which multiple senders are transmitting. That's a *much* simpler problem, because we actually don't need a combined congestion controller here, all we need is a priority scheme for which sender gets what fraction of the flow capacity.
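The priority scheme described here - deciding which sender gets what fraction of the one congestion-controlled flow's capacity - could be as simple as a weighted proportional split. A minimal sketch, where the function name and the proportional rule are illustrative assumptions rather than anything proposed in this thread:

```python
# Toy sketch: several media senders are multiplexed into one
# congestion-controlled five-tuple. The rate the congestion controller
# currently allows is split among them in proportion to priority weights.

def allocate_rates(total_rate, priorities):
    """Split total_rate (e.g. in kbps) among senders proportionally to
    their priority weights; returns a dict mapping sender -> rate."""
    weight_sum = sum(priorities.values())
    return {s: total_rate * w / weight_sum for s, w in priorities.items()}
```

Whatever the split rule, the key simplification stands: within one five-tuple this is pure sender-side scheduling, with no need for combined congestion control across flows.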
(and then, there are several measurement-based mechanisms for shared bottleneck detection out there; essentially, they look for correlations in some measured values such as one-way delay)
Somebody needs to seriously look into how practically applicable they are, before we can use them as a basis for an IETF mechanism. I have seen a bunch of measurement papers, but I don't recall one that struck me as particularly real-world applicable. (This is probably my fault for not being up-to-date with what's out there now.) Lars

On 19. sep. 2012, at 15:14, Eggert, Lars wrote:
Hi,
On Sep 19, 2012, at 14:58, Michael Welzl <michawe@ifi.uio.no> wrote:
Consider two hosts and two flows between them. If they share a bottleneck, you could use only a single congestion controller for both. The share between them is then totally under the sender's control - the result of scheduling, and not the result of "fighting it out" on the bottleneck. From the queue's point of view, you're dealing with one, not two flows, leading to a reduction in queue fluctuations at least.
how do you *know* two flows share a bottleneck?
You could *assume* they do if, e.g., they share a path, i.e., run between the same two IP addresses concurrently in time. But even then - thanks to ECMP routing, datacenter load balancers, etc. - you don't really know.
Or you could try and correlate loss events, but I don't know how robust that is.
You don't know. You can measure, and estimate, leading to a probability; you can correlate delay measurements too. But I think the core of this discussion is at the end of the email.
Examples of mechanisms designed to yield a benefit in such cases: Congestion Manager: http://nms.csail.mit.edu/cm/ TCP Control Block Interdependence: RFC2140.
Yep, building a mechanism for this case is easy - *detecting* that a set of flows is limited by a shared bottleneck is harder.
Example of a concrete scenario: in rtcweb, all traffic is (AFAIK) going to be multiplexed over the same UDP port-number pair. To be able to do any form of congestion control, it must therefore be assumed that all of these packets traverse the same path - which means that multiple flows multiplexed over this 5-tuple will share the same bottleneck.
Ah, I see. We're not looking at the general problem here. Under the definition above, you don't actually have multiple flows. You have one flow (five-tuple) into which multiple senders are transmitting. That's a *much* simpler problem, because we actually don't need a combined congestion controller here, all we need is a priority scheme for which sender gets what fraction of the flow capacity.
Yes, I agree 100% (see my earlier postings to the list, one of them called "one congestion control to bind them all" :-) ).

What makes it harder is that RMCAT isn't supposed to do congestion control for rtcweb *only*, and so it would be better to think of a more generic model, in which one could essentially implement what you describe here for rtcweb, but also install a shared-bottleneck detection mechanism and use that for one's own benefit for multiple non-rtcweb flows. This can be addressed by designing an as-simple-as-possible element for exchanging the state of flows (an FSE), which I described earlier in messages to the list.
(and then, there are several measurement-based mechanisms for shared bottleneck detection out there; essentially, they look for correlations in some measured values such as one-way delay)
Somebody needs to seriously look into how practically applicable they are, before we can use them as a basis for an IETF mechanism. I have seen a bunch of measurement papers, but I don't recall one that struck me as particularly real-world applicable. (This is probably my fault for not being up-to-date with what's out there now.)
I think an up-to-date literature survey would be interesting. A relatively recent example, with application, is: Sofiane Hassayoun, Janardhan Iyengar and David Ros, "Dynamic Window Coupling for Multipath Congestion Control". In Proceedings of IEEE ICNP 2011, Vancouver, October 2011.

However, I don't think the IETF would need to standardize a shared-bottleneck detection mechanism per se - just the type of information exchanged, for flows that are identified as sharing the same bottleneck by whatever means (multiplexing over the same 5-tuple, or some measurement-based technique).

Cheers,
Michael

Hi,

On Sep 19, 2012, at 15:27, Michael Welzl <michawe@ifi.uio.no> wrote:
What makes it harder is that RMCAT isn't supposed to do congestion control for rtcweb *only*,
actually, I would be *thrilled* if we came up with a good scheme that would work for rtcweb only. I think multi-flow congestion control (where flow means "different five-tuples") will be quite difficult, irrespective of whether some or all flows are rtcweb.
and so it would be better to think of a more generic model, in which one could essentially implement what you describe here for rtcweb, but also install a shared bottleneck detection mechanism and use that for one's own benefit for multiple non-rtcweb flows.
I'm kinda worried that the "congestion control for rtcweb" is ballooning into "internet congestion control 2.0". If there was something simple that could be done for integrated CC of multiple parallel flows, that'd be nice, but I don't really see it.
This can be addressed by designing an as-simple-as-possible element for exchanging the state of flows (a FSE), which I described earlier in messages to the list.
I think it is still too complex, because you envision this being a separate entity somewhere in the OS of the host. That'll take a long time to deploy and get used. I wonder if a similar exchange inside the *browser* would be a good enough first step. So the browser can be smart about prioritizing the aggregate traffic it sends and receives, but it will still compete with other apps on the same host. Tough luck.

Even with an OS-based scheme, the host will still compete against other hosts in the same LAN, so it's not like the host-based solution is the end-all anyway. (And yeah, you could share this information around the LAN, but then we enter security hell.)
I think an up-to-date literature survey would be interesting. A relatively recent example, with application, is: Sofiane Hassayoun, Janardhan Iyengar and David Ros, “Dynamic Window Coupling for Multipath Congestion Control”. In Proceedings of IEEE ICNP 2011, Vancouver, October 2011.
However, I don't think the IETF would need to standardize a shared bottleneck detection mechanism per se. Just the type of information exchanged, for flows that are identified to share the same bottleneck by whatever means (multiplexing over the same 5-tuple or some measurement based technique).
I think we would need to at least informally describe some mechanism that gives a benefit, in order to demonstrate that the scheme is actually worth the cost. We don't need to mandate its use, but it would sure be nice if we had some sort of argument that this information exchange is actually beneficial. Lars

On 19. sep. 2012, at 15:38, Eggert, Lars wrote:
Hi,
On Sep 19, 2012, at 15:27, Michael Welzl <michawe@ifi.uio.no> wrote:
What makes it harder is that RMCAT isn't supposed to do congestion control for rtcweb *only*,
actually, I would be *thrilled* if we came up with a good scheme that would work for rtcweb only. I think multi-flow congestion control (where flow means "different five-tuples") will be quite difficult, irrespective of whether some or all flows are rtcweb.
FWIW, Google's current proposal would probably already work for rtcweb, if having one such mechanism for rtcweb is all we want and need.

I don't understand what makes multiple-flow (i.e. different five-tuples) congestion control so difficult. You have, from the FSE, the information about which flows share a bottleneck - say, not based on 5-tuple multiplexing, but based on heuristics (in my opinion, this shouldn't make a difference, but okay). So then? For those flows, you can do things in the style of RFC 2140 (which is for TCP of course, but it gives the idea).
and so it would be better to think of a more generic model, in which one could essentially implement what you describe here for rtcweb, but also install a shared bottleneck detection mechanism and use that for one's own benefit for multiple non-rtcweb flows.
I'm kinda worried that the "congestion control for rtcweb" is ballooning into "internet congestion control 2.0". If there was something simple that could be done for integrated CC of multiple parallel flows, that'd be nice, but I don't really see it.
I don't see that balloon. What's your worry?
This can be addressed by designing an as-simple-as-possible element for exchanging the state of flows (a FSE), which I described earlier in messages to the list.
I think it is still too complex, because you envision this being a separate entity somewhere in the OS of the host. That'll take a long time to deploy and get used. I wonder if a similar exchange inside the *browser* would be a good enough first step. So the browser can be smart about prioritizing the aggregate traffic it sends and receives, but it will still compete with other apps on the same host. Tough luck. Even with an OS-based scheme, the host will still compete against other hosts in the same LAN, so it's not like the host based solution is the end-all anyway. (And yeah you could share this information around the LAN, but then we enter security hell.)
Indeed, in the browser if you want, as I wrote in the message in which I introduced the idea of an FSE: http://www.alvestrand.no/pipermail/rtp-congestion/2012-July/000436.html

"An FSE is a *passive* entity on a host that receives congestion-control-relevant information from RTP flows and provides that information upon request. It resides on a host, and if we define it in the right way it probably doesn't matter at which layer: one could have an FSE inside the browser, and optionally have the browser talk to an additional system-wide FSE in the kernel. Note that this is not the Congestion Manager (RFC 3124): the CM was an *active* entity, much more complicated to realize, and much closer to the "single congestion control instance" idea described above."

I guess you never read that statement, though, because even then you responded to this message by saying that this sounds a lot like RFC 3124 :-)

My point is: we've been through this discussion already. I *think* (I would have to check) we have also already agreed to keep this on one host, as you suggest above. Anyway, I definitely agree: this is one-host-only.
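The passive FSE in the quoted definition could look roughly like the following sketch. Class and method names are invented for illustration, and the coupling logic (how each flow reacts to the aggregate) is deliberately left out, since the FSE itself takes no actions:

```python
# Toy sketch of a passive Flow State Exchange (FSE): flows report their
# congestion-control state, grouped by (however-detected) shared
# bottleneck; any flow or scheduler can query the aggregate. The FSE
# stores and answers - it never acts.

class FlowStateExchange:
    def __init__(self):
        self._flows = {}  # flow_id -> (group_id, rate)

    def update(self, flow_id, group_id, rate):
        """A flow reports its bottleneck group and current sending rate."""
        self._flows[flow_id] = (group_id, rate)

    def group_rate(self, group_id):
        """Aggregate rate of all flows sharing the given bottleneck group."""
        return sum(r for g, r in self._flows.values() if g == group_id)

    def flows_in_group(self, group_id):
        """All flows currently registered against this bottleneck group."""
        return [f for f, (g, _) in self._flows.items() if g == group_id]
```

Whether the group_id comes from same-5-tuple multiplexing or from a measurement-based detector is invisible to the FSE, which matches the point made later in this message.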
I think an up-to-date literature survey would be interesting. A relatively recent example, with application, is: Sofiane Hassayoun, Janardhan Iyengar and David Ros, “Dynamic Window Coupling for Multipath Congestion Control”. In Proceedings of IEEE ICNP 2011, Vancouver, October 2011.
However, I don't think the IETF would need to standardize a shared bottleneck detection mechanism per se. Just the type of information exchanged, for flows that are identified to share the same bottleneck by whatever means (multiplexing over the same 5-tuple or some measurement based technique).
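The point that "multiplexed over the same 5-tuple" is itself a shared-bottleneck identification mechanism can be made concrete with a trivial grouping sketch. The function name and tuple layout are illustrative assumptions; a measurement-based detector would simply produce the same kind of grouping by other means.

```python
from collections import defaultdict

def group_by_5tuple(flows):
    """Group flows that share a 5-tuple (sketch of the simplest
    shared-bottleneck 'detector': same 5-tuple => same path => same
    bottleneck by construction).

    flows: iterable of (flow_id, (src_ip, src_port, dst_ip, dst_port, proto))
    returns: dict mapping each 5-tuple to the list of flow ids behind it
    """
    groups = defaultdict(list)
    for flow_id, five_tuple in flows:
        groups[five_tuple].append(flow_id)
    return dict(groups)
```

From the FSE's perspective there is indeed no difference between a group produced this way and one produced by a measurement-based technique; only the grouping input changes.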
I think we would need to at least informally describe some mechanism that gives a benefit, in order to demonstrate that the scheme is actually worth the cost. We don't need to mandate its use, but it would sure be nice if we had some sort of argument that this information exchange is actually beneficial.
... and since I don't see a difference between feeding the FSE with a set of flows because of measurements, or because these flows happen to be multiplexed over the same 5-tuple, I felt that this is not necessary. "They share a bottleneck because they're multiplexed over the same 5-tuple" is just another shared bottleneck detection mechanism. Anyway, I have no problem with informally describing some mechanism that gives a benefit. Whether we'll be able to agree on a mechanism being good enough is a different story. Perhaps. We'd need to look at the literature. I'm not opposed to that. Cheers, Michael

On 9/19/2012 8:30 AM, Mirja Kuehlewind wrote:
Hi,
I know I'm late to comment, but after reading the charter again, I have to admit that two points are not absolutely clear to me. Maybe someone can provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
Why do I need to identify a shared bottleneck? The task of congestion control is to share the bottleneck capacity in some way. Usually I will end up with a different share if the same or a different congestion control algorithm is used by the competing flow(s). Is the idea here to detect a shared bottleneck and then change the congestion control behavior to achieve a different share than I would otherwise? Is there a concrete scenario or approach for how to do this, and why is it needed when a shared bottleneck is detected?
The original motivation for this was pretty simple. If you knew two flows with different elasticities were sharing a bottleneck, and seeing loss, you could make the congestion reaction on the more elastic flow, and leave the less elastic one untouched.
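The motivation above can be sketched in a few lines: on a loss event shared by grouped flows, apply the rate reduction to the most elastic flow and leave the others untouched. The elasticity scale and the halving rule are illustrative assumptions, not anything specified by the charter.

```python
def react_to_loss(flows):
    """Assign the congestion reaction to the most elastic flow (sketch).

    flows: list of dicts with 'id', 'rate_bps', and 'elasticity' (0..1,
    higher = more tolerant of rate reduction; scale is an assumption).
    Returns a dict of new rates per flow id.
    """
    most_elastic = max(flows, key=lambda f: f["elasticity"])
    new_rates = {}
    for f in flows:
        if f is most_elastic:
            # TCP-like multiplicative decrease, assumed for illustration.
            new_rates[f["id"]] = f["rate_bps"] // 2
        else:
            # Less elastic flows keep their rate untouched.
            new_rates[f["id"]] = f["rate_bps"]
    return new_rates
```

The aggregate still backs off on loss; the sketch only changes *which* flow absorbs the reduction.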
- Develop techniques to detect, instrument or diagnose failing to meet RT schedules due to failures of components outside of the charter scope, possibly in collaboration with IPPM.
I believe I got the idea behind this, but I'm not really sure how to realize it. Are you expecting some explicit feedback from the receiver (or even other components)? Is there a concrete proposal already for how to do this? What are you planning to do once you have detected an RT schedule failure? Change the congestion control in some way (to be more aggressive)?
This was related to the "Internet suckage" metric. I don't know of a concrete proposal, but obviously things like measuring the queue and the delay variation could be first cuts. Basically, if you're failing to deliver interactive traffic because of bufferbloat and competing flows, this would help to point the finger at the guilty party, so maybe it can be fixed. -- Wes Eddy MTI Systems
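One possible "first cut" at the delay-variation measurement mentioned above is the interarrival-jitter estimator that RTP receivers already compute (RFC 3550, section 6.4.1); a sketch of its update rule:

```python
def update_jitter(jitter, prev_transit, transit):
    """Smoothed interarrival-jitter update per RFC 3550, section 6.4.1:
    J += (|D| - J) / 16, where D is the difference between the transit
    times of consecutive packets.

    transit = arrival_time - rtp_timestamp, both in the same clock units.
    """
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0
```

A persistently growing transit time with rising jitter would be one signature of a bloated queue on the path, i.e., the kind of finger-pointing evidence described above.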

Hi,
provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
The original motivation for this was pretty simple. If you knew two flows with different elasticities were sharing a bottleneck, and seeing loss, you could make the congestion reaction on the more elastic flow, and leave the less elastic one untouched. Okay, I can understand this point. But in fact, if you see congestion and you are elastic (up to a certain degree), you should lower your sending rate (or do whatever you can do), because there also might be other flows on the link that need the bandwidth even more urgently. I guess the only case I really see is if there is not enough bandwidth for both transmissions (such that both of the respective services do not work properly) and you have to decide which one to kill. But that's the circuit-breaker case and thus out of scope.
- Develop techniques to detect, instrument or diagnose failing to meet RT schedules due to failures of components outside of the charter scope, possibly in collaboration with IPPM.
This was related to the "Internet suckage" metric. I don't know of a concrete proposal, but obviously things like measuring the queue and the delay variation could be first cuts.
Basically, if you're failing to deliver interactive traffic because of bufferbloat and competing flows, this would help to point the finger at the guilty party, so maybe it can be fixed.
Hm, isn't this quite a topic of its own, and might it actually simply belong in the IPPM working group, at least as long as there is no very specific case where something is different for RT traffic? Maybe it's not so much about the measurement techniques but about the way the measured metrics are interpreted (when sending RT traffic)? Mirja

On 19. sep. 2012, at 22:23, Mirja Kühlewind wrote:
Hi,
provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
The original motivation for this was pretty simple. If you knew two flows with different elasticities were sharing a bottleneck, and seeing loss, you could make the congestion reaction on the more elastic flow, and leave the less elastic one untouched. Okay, I can understand this point. But in fact, if you see congestion and you are elastic (up to a certain degree), you should lower your sending rate (or do whatever you can do), because there also might be other flows on the link that need the bandwidth even more urgently.
With combined congestion control, this (lowering the send rate) will happen. Consider having only one controller; then, "making the congestion reaction on the more elastic flow" translates into a change to the scheduler that takes data from application 1 and application 2 in addition to the standard congestion control reaction for what looks like one flow to the network. (in fact, the way I envision it, this behavior would be the result of information exchange between multiple controllers).
I guess the only case I really see is if there is not enough bandwidth for both transmissions (such that both of the respective services do not work proper) and you have to decide which one to kill. But that's the circuit-breaker case and thus out of scope.
This sounds like a misunderstanding. I hope what I wrote above and my previous email help. Else, maybe you'd want to take a look into the list archives for more information; this has been discussed at some depth in the past. BTW, Randell Jesup has, at some point, stated that Mozilla is already using such scheduling / combined congestion control.
- Develop techniques to detect, instrument or diagnose failing to meet RT schedules due to failures of components outside of the charter scope, possibly in collaboration with IPPM.
This was related to the "Internet suckage" metric. I don't know of a concrete proposal, but obviously things like measuring the queue and the delay variation could be first cuts.
Basically, if you're failing to deliver interactive traffic because of bufferbloat and competing flows, this would help to point the finger at the guilty party, so maybe it can be fixed.
Hm, isn't this quite a topic of its own, and might it actually simply belong in the IPPM working group, at least as long as there is no very specific case where something is different for RT traffic? Maybe it's not so much about the measurement techniques but about the way the measured metrics are interpreted (when sending RT traffic)?
I think that collaboration with IPPM is indeed the right choice of words here. The techniques would have to be specific to requirements of RTP based interactive real-time media. Cheers, Michael

Moving discussion to RMCAT... On 9/20/2012 3:02 AM, Michael Welzl wrote:
On 19. sep. 2012, at 22:23, Mirja Kühlewind wrote:
Hi,
provide further explanations here:
- Develop a mechanism for identifying shared bottlenecks between groups of flows, and means to flexibly allocate their rates within the aggregate hitting the shared bottleneck.
The original motivation for this was pretty simple. If you knew two flows with different elasticities were sharing a bottleneck, and seeing loss, you could make the congestion reaction on the more elastic flow, and leave the less elastic one untouched. Okay, I can understand this point. But in fact, if you see congestion and you are elastic (up to a certain degree), you should lower your sending rate (or do whatever you can do), because there also might be other flows on the link that need the bandwidth even more urgently. With combined congestion control, this (lowering the send rate) will happen. Consider having only one controller; then, "making the congestion reaction on the more elastic flow" translates into a change to the scheduler that takes data from application 1 and application 2, in addition to the standard congestion control reaction for what looks like one flow to the network.
You can also proportionally allocate the reduction, etc. This just allows for more direct interaction, potentially less 'hunting', less chance of corner-case behaviors, and faster reaction: most algorithms, especially delay-sensing ones, need some number of inputs to ensure a data point isn't jitter or an outlier (i.e., filtering the data). More data points from parallel flows should lead to better/faster reaction time.
(in fact, the way I envision it, this behavior would be the result of information exchange between multiple controllers).
I guess the only case I really see is if there is not enough bandwidth for both transmissions (such that both of the respective services do not work proper) and you have to decide which one to kill. But that's the circuit-breaker case and thus out of scope. This sounds like a misunderstanding. I hope what I wrote above and my previous email help. Else, maybe you'd want to take a look into the list archives for more information; this has been discussed at some depth in the past. BTW, Randell Jesup has, at some point, stated that Mozilla is already using such scheduling / combined congestion control.
Already planning to use; not yet implemented. -- Randell Jesup randell-ietf@jesup.org
participants (6)
-
Eggert, Lars
-
Michael Welzl
-
Mirja Kuehlewind
-
Mirja Kühlewind
-
Randell Jesup
-
Wesley Eddy