

On Wed, Sep 19, 2012 at 2:26 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
Hi,
Sorry for bringing this up yet again - but my thoughts just keep returning to the question of sender-vs.-receiver based congestion control, because it's a seemingly simple problem for which we should be able to agree on a clear answer?!
As most of you probably know by now, I'm critical of a receiver-based approach as in RRTCC. (In fact, as I will explain further below, I argue against a *combined* sender+receiver approach). Here is a plus/minus list as I see it:
PLUS no. 1: it helps to reduce the amount of feedback. That's really the ONLY benefit.
It also has a small benefit in that it can more easily make use of new signals as they are introduced, without having to modify the feedback message, but I guess that argument is a bit hypothetical.
MINUS no. 1: it requires the same mechanism on the sender and receiver side, which means that, if RTP becomes a "framework" for congestion control mechanisms, we must signal which mechanism is used to the other side. In contrast, we don't have to do this with TCP's (implemented but not standardized) pluggable congestion control. This is, to me, the biggest problem, as it requires code updates to happen on both sides for a new mechanism to be deployed.
MINUS no. 2: it could make combined congestion control between multiple flows via the FSE more complicated. Anyway it seems we'll need an FSE on the sender AND receiver side, but with receiver-based congestion control, we'll face the question: for two flows between the same two hosts, sharing the same bottleneck, should the receivers exchange information and coordinate their feedback, or should the senders exchange information and coordinate their behavior? => when congestion control mechanisms are split across sender and receiver, the right answer to this question becomes a per-mechanism decision.
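For illustration, a minimal, hypothetical sketch of what sender-side coordination along these lines might look like: each flow's congestion controller reports its current rate estimate to a shared coordinator and gets back a priority-weighted share of the aggregate. The class name, priorities and sharing rule are assumptions for the sketch, not taken from any FSE specification.

# Hypothetical sketch only: a minimal sender-side coordinator for flows that
# share a bottleneck. Names, priorities and the sharing rule are illustrative.

class FlowStateExchange:
    def __init__(self):
        self.flows = {}  # flow_id -> (priority, last_reported_rate_bps)

    def register(self, flow_id, priority=1.0):
        self.flows[flow_id] = (priority, 0.0)

    def update(self, flow_id, cc_rate_bps):
        """Called by a flow's own congestion controller with its estimate."""
        prio, _ = self.flows[flow_id]
        self.flows[flow_id] = (prio, cc_rate_bps)
        return self._share_for(flow_id)

    def _share_for(self, flow_id):
        # Aggregate what all controllers currently believe, then hand each
        # flow a priority-weighted share of that aggregate.
        total_rate = sum(rate for _, rate in self.flows.values())
        total_prio = sum(prio for prio, _ in self.flows.values())
        prio, _ = self.flows[flow_id]
        return total_rate * prio / total_prio

fse = FlowStateExchange()
fse.register("audio", priority=1.0)
fse.register("video", priority=3.0)
fse.update("audio", 200_000)
print(fse.update("video", 1_000_000))  # video gets 3/4 of the 1.2 Mbps aggregate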
Please amend this list if I'm missing something. Regarding the one PLUS above, here's a critical look at the benefits of reducing feedback **via receiver-side congestion control**:
- less feedback means a greater chance for the feedback to be dropped, which translates into a greater chance for undesirable sender behavior
- we deal with interactive traffic, in which case a lot of the feedback could hopefully be piggybacked onto payload packets
- in some cases there will be other parallel flows (e.g. the data channel in rtcweb) that already give us all the feedback that we need, this is another chance for feedback reduction
- receiver-side-congestion-control-based feedback reduction as in RRTCC means to reduce the feedback whenever feedback isn't urgent for the sender. I'd argue that, in fact, we should reduce feedback only when the channel carrying it (the backwards channel) is congested, and that is a generic function, not bound to the sender-side congestion control behavior.
So how urgent is it to reduce feedback, anyway? Some time ago, Bill Ver Steeg wrote:
*** 1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asynchronous networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble. ***
As he states, multicast is out of scope. Plus, I think we should care more about technical reasons than what the RTP community considers to be a precious resource. As for the CableTV example, one can always find examples where people started doing useless things :-) but consider the following: many parallel TCP connections, each sending lots of feedback, work okay for maaaany end users that 1) surf the Internet or (bringing it closer to rtcweb thinking) 2) use P2P applications. And TCP doesn't even do any form of congestion control on ACKs (an RFC exists but no one uses it (so far)).
I'm not saying ACKs never cause problems. I'm saying it's probably rare that they do, and reducing their number WHEN they cause trouble is good enough (and, I would argue, a better decision).
Let's take a step back and summarize the situation:
1) congestion-control-based-feedback-reduction has at least one significant MINUS (no. 1), probably two
2) the benefits of doing it seem minimal
3) not being based on congestion along the backwards path, it misses the point somewhat...
As a result of the above, I would suggest moving all congestion control functionality to the sender side and keeping the receiver side dumb, as in TCP, but amended with some generic (i.e. not mechanism-specific) feedback-reduction functions (ACK congestion control; piggybacking; omitting feedback when it is known (e.g. via a signal from the sender) that we get feedback from another flow).
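To make the "generic, not mechanism-specific" idea concrete, here is a hypothetical sketch of a receiver-side feedback scheduler that knows nothing about the sender's congestion control: it piggybacks when payload is about to go out, spaces reports further apart when the backwards channel is congested, and suppresses them when the sender has signalled that another flow already carries feedback. All names and intervals are made up for the sketch.

# Hypothetical sketch: generic feedback reduction at the receiver, independent
# of whatever congestion control the sender runs. Thresholds are illustrative.
import time

class FeedbackScheduler:
    def __init__(self, base_interval=0.1, congested_interval=0.5):
        self.base_interval = base_interval            # normal report spacing (s)
        self.congested_interval = congested_interval  # spacing when the return path hurts
        self.other_flow_reports = False               # sender signalled parallel feedback
        self.last_sent = 0.0

    def on_sender_signal(self, other_flow_reports):
        # e.g. the sender tells us the rtcweb data channel already carries feedback
        self.other_flow_reports = other_flow_reports

    def should_send(self, now, backwards_path_congested, outgoing_payload_due):
        if self.other_flow_reports:
            return False  # another flow already delivers the measurements
        interval = self.congested_interval if backwards_path_congested else self.base_interval
        if now - self.last_sent < interval:
            return False
        self.last_sent = now
        if outgoing_payload_due:
            return "piggyback"  # attach the report to the next payload packet
        return "standalone"     # send a dedicated feedback packet

sched = FeedbackScheduler()
print(sched.should_send(time.time(), backwards_path_congested=False, outgoing_payload_due=True))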
Alternatively, we could still remove the MINUS no. 1 above by agreeing on a generic sender behavior. IIRC, in RRTCC, the sender is rather simple - it follows the rate dictated by the receiver and has a timeout behavior. If all congestion control mechanisms are installed on the receiver side only, with the same agreed generic sender behavior for everyone, we can also have pluggable congestion control without a need for signalling.
I agree that if we go for a receiver-based approach, the sender must be dumb. And the other way around if we go for a sender-based approach.
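As a rough illustration of how dumb such a sender could be, here is a hypothetical sketch: it applies whatever rate the receiver dictates and backs off on a feedback timeout. The 2-second timeout, the halving, and the rate floor are assumptions for the sketch, not values from RRTCC.

# Hypothetical sketch of a "dumb sender": obey the receiver-dictated rate
# (e.g. from REMB/TMMBR) and fall back if that feedback stops arriving.

class DumbSender:
    def __init__(self, initial_rate_bps, feedback_timeout=2.0):
        self.rate_bps = initial_rate_bps
        self.feedback_timeout = feedback_timeout
        self.last_feedback_time = None

    def on_receiver_rate(self, now, dictated_rate_bps):
        # The receiver's congestion controller did all the thinking;
        # the sender just obeys.
        self.rate_bps = dictated_rate_bps
        self.last_feedback_time = now

    def on_tick(self, now):
        # Timeout behavior: if feedback has dried up, stop trusting the
        # last dictated rate and back off.
        if (self.last_feedback_time is not None
                and now - self.last_feedback_time > self.feedback_timeout):
            self.rate_bps = max(self.rate_bps / 2, 30_000)  # keep an audio-ish floor
            self.last_feedback_time = now  # back off once per timeout period
        return self.rate_bps

sender = DumbSender(initial_rate_bps=500_000)
sender.on_receiver_rate(now=0.0, dictated_rate_bps=800_000)
print(sender.on_tick(now=3.0))  # no feedback for 3 s -> rate halves to 400000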
It's just the idea of multiple proposals for congestion control involving newly described sender AND receiver behavior that strikes me as bad design.
Thoughts?
One benefit I see with having the intelligence at the sender is if you want to do something more clever when probing the channel, which would require coordination between the sender and the one who's supposed to come to some conclusion from the incoming packets. I think Bill Ver Steeg touched upon something along those lines: "Basic senders will simply tell the receiver to gather the data every N seconds and send it every N seconds (plus when there are problems, as defined by the specification). More sophisticated senders could drive more complex relationships, asking the receiver to tabulate/send the data on demand in semi-real time. This would allow a sophisticated sender to send a varying data patterns and get the feedback they require, yet not require too much shared state between the sender and receiver. As long as the receiver is a simple state machine, I think we would be OK."
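A hypothetical sketch of the "receiver as a simple state machine" idea quoted above: the sender configures the report interval N (and can request a report on demand), while the receiver only gathers raw statistics and reports on schedule or when something looks like a problem. Message names and the "problem" trigger are made up for the sketch.

# Hypothetical sketch: receiver as a simple, sender-driven reporting machine.
# No congestion control logic lives here; thresholds are illustrative.

class ReportingReceiver:
    def __init__(self):
        self.interval = 1.0       # N, set by the sender
        self.next_report_at = 0.0
        self.stats = {"received": 0, "lost": 0, "last_jitter_ms": 0.0}

    def on_sender_request(self, now, interval=None, send_now=False):
        if interval is not None:
            self.interval = interval            # "gather and send every N seconds"
            self.next_report_at = now + interval
        if send_now:
            return self._report(now)            # "send the data on demand"
        return None

    def on_packet(self, now, lost=False, jitter_ms=0.0):
        self.stats["lost" if lost else "received"] += 1
        self.stats["last_jitter_ms"] = jitter_ms
        problem = lost or jitter_ms > 30.0      # crude "problem" trigger
        if problem or now >= self.next_report_at:
            return self._report(now)
        return None

    def _report(self, now):
        self.next_report_at = now + self.interval
        return dict(self.stats)                 # raw data; no congestion logic here

rx = ReportingReceiver()
rx.on_sender_request(now=0.0, interval=0.5)
print(rx.on_packet(now=0.1, lost=True))  # loss counts as a problem -> immediate report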
Cheers, Michael

On 09/19/2012 02:26 PM, Michael Welzl wrote:
Hi,
Sorry for bringing this up yet again - but my thoughts just keep returning to the question of sender-vs.-receiver based congestion control, because it's a seemingly simple problem for which we should be able to agree on a clear answer?!
As most of you probably know by now, I'm critical of a receiver-based approach as in RRTCC. (In fact, as I will explain further below, I argue against a *combined* sender+receiver approach). Here is a plus/minus list as I see it:
PLUS no. 1: it helps to reduce the amount of feedback. That's really the ONLY benefit.
MINUS no. 1: it requires the same mechanism on the sender and receiver side, which means that, if RTP becomes a "framework" for congestion control mechanisms, we must signal which mechanism is used to the other side. In contrast, we don't have to do this with TCP's (implemented but not standardized) pluggable congestion control. This is, to me, the biggest problem, as it requires code updates to happen on both sides for a new mechanism to be deployed.
I don't see the logic here. The requirement is that both sides send signals that are responded to appropriately by the other side. So the two sides' mechanisms have to be compatible (in the RRTCC case, the demand is that the sender responds to REMB or TMMBR messages - this is hardly the "same mechanism" as the Kalman filter implemented on the receiver side).
In the TCP case, the feedback mechanism is the ack, which is needed for reasons beyond congestion control. In the RTP case, there isn't such a single, simple mechanism needed for other reasons.
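For concreteness: the measurement that receiver-side filter works on is, roughly, how much more the arrival times of successive frames (or packet groups) spread out than their send times did; that delay variation is what the Kalman filter smooths before an over-use decision is made. The sketch below computes the same delay variation but, as a deliberate simplification, replaces the Kalman filter with exponential smoothing and uses a fixed threshold.

# Simplified sketch of the receiver-side delay-variation measurement.
# The real RRTCC receiver feeds this into a Kalman filter; the smoothing
# and the fixed threshold below are simplifications for illustration.

class DelayGradientEstimator:
    def __init__(self, alpha=0.9, overuse_threshold_ms=12.5):
        self.alpha = alpha
        self.threshold_ms = overuse_threshold_ms
        self.prev_send_ms = None
        self.prev_arrival_ms = None
        self.smoothed_ms = 0.0

    def on_group(self, send_ms, arrival_ms):
        """Returns 'overuse', 'underuse', or 'normal' for this packet group."""
        if self.prev_send_ms is None:
            self.prev_send_ms, self.prev_arrival_ms = send_ms, arrival_ms
            return "normal"
        # d(i) = (t_i - t_{i-1}) - (T_i - T_{i-1}): positive when the queue grows
        d = (arrival_ms - self.prev_arrival_ms) - (send_ms - self.prev_send_ms)
        self.prev_send_ms, self.prev_arrival_ms = send_ms, arrival_ms
        self.smoothed_ms = self.alpha * self.smoothed_ms + (1 - self.alpha) * d
        if self.smoothed_ms > self.threshold_ms:
            return "overuse"
        if self.smoothed_ms < -self.threshold_ms:
            return "underuse"
        return "normal"

est = DelayGradientEstimator(alpha=0.0)  # no smoothing, to make the example obvious
est.on_group(send_ms=0, arrival_ms=50)
print(est.on_group(send_ms=20, arrival_ms=90))  # spread grew by 20 ms -> 'overuse'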
MINUS no. 2: it could make combined congestion control between multiple flows via the FSE more complicated. Anyway it seems we'll need an FSE on the sender AND receiver side, but with receiver-based congestion control, we'll face the question: for two flows between the same two hosts, sharing the same bottleneck, should the receivers exchange information and coordinate their feedback, or should the senders exchange information and coordinate their behavior? => when congestion control mechanisms are split across sender and receiver, the right answer to this question becomes a per-mechanism decision.
FSE? I don't see the logic here either. Either the two flows are under a common controller on one side, they're under a common controller on the other side, they're not under any common controller, or they are under common controllers on both sides. All four cases can happen with both sender side and receiver side implementations of the BWE computation.
Please amend this list if I'm missing something. Regarding the one PLUS above, here's a critical look at the benefits of reducing feedback **via receiver-side congestion control**:
- less feedback means a greater chance for the feedback to be dropped, which translates into a greater chance for undesirable sender behavior
Stickler: Amount of feedback has little correlation with packet drop. If feedback is lost due to congestion, more feedback will lead to more packet drops. What is true is that when feedback occurs more rarely, losing one feedback packet will lead to a greater gap between feedback packets.
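A tiny illustration of that last point - the sender's "blind window" after a single lost feedback packet is roughly two feedback intervals, so its size scales directly with how rarely feedback is sent (intervals below are illustrative):

# Losing one report leaves the sender without news from just before the loss
# until the next report after it, i.e. about two feedback intervals.
for interval_s in (0.02, 0.1, 1.0):
    blind_window_s = 2 * interval_s
    print(f"feedback every {interval_s:5.2f} s -> ~{blind_window_s:.2f} s without news after one loss")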
- we deal with interactive traffic, in which case a lot of the feedback could hopefully be piggybacked onto payload packets
- in some cases there will be other parallel flows (e.g. the data channel in rtcweb) that already give us all the feedback that we need, this is another chance for feedback reduction
- receiver-side-congestion-control-based feedback reduction as in RRTCC means to reduce the feedback whenever feedback isn't urgent for the sender. I'd argue that, in fact, we should reduce feedback only when the channel carrying it (the backwards channel) is congested, and that is a generic function, not bound to the sender-side congestion control behavior.
So how urgent is it to reduce feedback, anyway? Some time ago, Bill Ver Steeg wrote:
*** 1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asynchronous networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble. ***
I assume "asynchronous" is "asymmetric" above?
As he states, multicast is out of scope. Plus, I think we should care more about technical reasons than what the RTP community considers to be a precious resource. As for the CableTV example, one can always find examples where people started doing useless things :-) but consider the following: many parallel TCP connections, each sending lots of feedback, work okay for maaaany end users that 1) surf the Internet or (bringing it closer to rtcweb thinking) 2) use P2P applications. And TCP doesn't even do any form of congestion control on ACKs (an RFC exists but no one uses it (so far)).
I'm not saying ACKs never cause problems. I'm saying it's probably rare that they do, and reducing their number WHEN they cause trouble is good enough (and, I would argue, a better decision).
Let's take a step back and summarize the situation: 1) congestion-control-based-feedback-reduction has at least one significant MINUS (no. 1), probably two
Both are debatable.
2) the benefits of doing it seem minimal
3) not being based on congestion along the backwards path, it misses the point somewhat...
What that bullet was meant to say totally escaped me. RRTCC is all about congestion on the forward path, not the backwards path.
As a result of the above, I would suggest moving all congestion control functionality to the sender side and keeping the receiver side dumb, as in TCP, but amended with some generic (i.e. not mechanism-specific) feedback-reduction functions (ACK congestion control; piggybacking; omitting feedback when it is known (e.g. via a signal from the sender) that we get feedback from another flow).
Alternatively, we could still remove the MINUS no. 1 above by agreeing on a generic sender behavior. IIRC, in RRTCC, the sender is rather simple - it follows the rate dictated by the receiver and has a timeout behavior. If all congestion control mechanisms are installed on the receiver side only, with the same agreed generic sender behavior for everyone, we can also have pluggable congestion control without a need for signalling.
It's just the idea of multiple proposals for congestion control involving newly described sender AND receiver behavior that strikes me as bad design.
Thoughts?
As you can see, I tend to disagree with the validity of your arguments. You may still be right, but you're not convincing me with this set of arguments. (Another conversation I had last week was with one guy who argued that we should use a traceroute-like functionality to directly probe queues near the sender, rather than depending on whole-RTT signalling - that kind of mechanism would rely on packets that didn't even reach the receiver. Now, if we explore *that*, it's a real argument for putting control at the sender....)
Cheers, Michael

On 09/24/2012 10:46 AM, Harald Alvestrand wrote:
On 09/19/2012 02:26 PM, Michael Welzl wrote:
Hi,
Sorry for bringing this up yet again - but my thoughts just keep returning to the question of sender-vs.-receiver based congestion control, because it's a seemingly simple problem for which we should be able to agree on a clear answer?!
As most of you probably know by now, I'm critical of a receiver-based approach as in RRTCC. (In fact, as I will explain further below, I argue against a *combined* sender+receiver approach). Here is a plus/minus list as I see it:
PLUS no. 1: it helps to reduce the amount of feedback. That's really the ONLY benefit.
MINUS no. 1: it requires the same mechanism on the sender and receiver side, which means that, if RTP becomes a "framework" for congestion control mechanisms, we must signal which mechanism is used to the other side. In contrast, we don't have to do this with TCP's (implemented but not standardized) pluggable congestion control. This is, to me, the biggest problem, as it requires code updates to happen on both sides for a new mechanism to be deployed.
I don't see the logic here. The requirement is that both sides send signals that are responded to appropriately by the other side. So the two sides' mechanisms have to be compatible (in the RRTCC case, the demand is that the sender responds to REMB or TMMBR messages - this is hardly the "same mechanism" as the Kalman filter implemented on the receiver side).
In the TCP case, the feedback mechanism is the ack, which is needed for reasons beyond congestion control. In the RTP case, there isn't such a single, simple mechanism needed for other reasons.
MINUS no. 2: it could make combined congestion control between multiple flows via the FSE more complicated. Anyway it seems we'll need an FSE on the sender AND receiver side, but with receiver-based congestion control, we'll face the question: for two flows between the same two hosts, sharing the same bottleneck, should the receivers exchange information and coordinate their feedback, or should the senders exchange information and coordinate their behavior? => when congestion control mechanisms are split across sender and receiver, the right answer to this question becomes a per-mechanism decision.
FSE? I don't see the logic here either. Either the two flows are under a common controller on one side, they're under a common controller on the other side, they're not under any common controller, or they are under common controllers on both sides. All four cases can happen with both sender side and receiver side implementations of the BWE computation.
Please amend this list if I'm missing something. Regarding the one PLUS above, here's a critical look at the benefits of reducing feedback **via receiver-side congestion control**:
- less feedback means a greater chance for the feedback to be dropped, which translates into a greater chance for undesirable sender behavior
Stickler: Amount of feedback has little correlation with packet drop. If feedback is lost due to congestion, more feedback will lead to more packet drops. What is true is that when feedback occurs more rarely, losing one feedback packet will lead to a greater gap between feedback packets.
Even more fun; due to bufferbloat, combined with very asymmetric bandwidth in many paths, the feedback packets can be greatly delayed for reasons that have nothing to do with your transmission, which may have plenty of bandwidth available. So your servo loop responsiveness for adjustment of sending bandwidth will be poor :-(. In TCP's case, I gather that this adjustment of transmission rate is a quadratic, so the elephant flows that may be clogging the return path aren't going to be responding in reasonable time. Sigh, - Jim
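A rough back-of-the-envelope illustration of Jim's point - on a bufferbloated, asymmetric uplink the feedback sits behind whatever else is queued, regardless of how little feedback we send (buffer sizes and uplink rates below are illustrative):

# Queueing delay added to every feedback packet by a standing upstream queue.
def queueing_delay_s(queued_bytes, uplink_bps):
    return queued_bytes * 8 / uplink_bps

for queued_kb, uplink_mbps in [(64, 1.0), (256, 1.0), (256, 5.0)]:
    delay = queueing_delay_s(queued_kb * 1000, uplink_mbps * 1_000_000)
    print(f"{queued_kb:3d} kB queued at {uplink_mbps} Mbps up -> feedback delayed ~{delay:.2f} s")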

Moving this sub-thread of discussion to the new list...
On 9/24/2012 10:46 AM, Harald Alvestrand wrote:
On 09/19/2012 02:26 PM, Michael Welzl wrote:
Hi,
Sorry for bringing this up yet again - but my thoughts just keep returning to the question of sender-vs.-receiver based congestion control, because it's a seemingly simple problem for which we should be able to agree on a clear answer?!
As most of you probably know by now, I'm critical of a receiver-based approach as in RRTCC. (In fact, as I will explain further below, I argue against a *combined* sender+receiver approach). Here is a plus/minus list as I see it:
PLUS no. 1: it helps to reduce the amount of feedback. That's really the ONLY benefit.
MINUS no. 1: it requires the same mechanism on the sender and receiver side, which means that, if RTP becomes a "framework" for congestion control mechanisms, we must signal which mechanism is used to the other side. In contrast, we don't have to do this with TCP's (implemented but not standardized) pluggable congestion control. This is, to me, the biggest problem, as it requires code updates to happen on both sides for a new mechanism to be deployed.
I don't see the logic here. The requirement is that both sides send signals that are responded to appropriately by the other side. So the two sides' mechanisms have to be compatible (in the RRTCC case, the demand is that the sender responds to REMB or TMMBR messages - this is hardly the "same mechanism" as the Kalman filter implemented on the receiver side).
In the TCP case, the feedback mechanism is the ack, which is needed for reasons beyond congestion control. In the RTP case, there isn't such a single, simple mechanism needed for other reasons.
Agreed, though Michael indicated that we can piggyback congestion info on reverse traffic. This may be possible and I've thought about it. Part of the problem is that if you want to signal data back quickly, you have to wait up to 20ms and in some cases longer to piggyback (and on low-upstream-bandwidth links, the uplink transit time if you're piggybacking on a 1500-byte video packet might be 20+ms). If you do piggyback, it has to be optional on an instance-by-instance basis (perhaps even estimating when the next chance will be, and the priority of the data - but that speaks against a dumb receiver). TCP can "get away" with high packet count feedback because flows are typically largely unidirectional or alternating flows, or on well-connected devices, not edge nodes. The PacketCable/etc issue with shared upstreams and slot allocation is an example of reacting to this - there, even with the typically largely downstream flows, the upstream acks become a limiting factor.
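Checking the 20+ms figure: serialization time alone for a 1500-byte packet is already in that range on a roughly 500-600 kbps uplink (link rates below are illustrative):

# Serialization delay for a full-size packet on a slow uplink.
def serialization_ms(packet_bytes, uplink_bps):
    return packet_bytes * 8 / uplink_bps * 1000

for uplink_kbps in (500, 600, 1000):
    print(f"{uplink_kbps} kbps uplink -> {serialization_ms(1500, uplink_kbps * 1000):.1f} ms per 1500-byte packet")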
MINUS no. 2: it could make combined congestion control between multiple flows via the FSE more complicated. Anyway it seems we'll need an FSE on the sender AND receiver side, but with receiver-based congestion control, we'll face the question: for two flows between the same two hosts, sharing the same bottleneck, should the receivers exchange information and coordinate their feedback, or should the senders exchange information and coordinate their behavior? => when congestion control mechanisms are split across sender and receiver, the right answer to this question becomes a per-mechanism decision.
FSE? I don't see the logic here either. Either the two flows are under a common controller on one side, they're under a common controller on the other side, they're not under any common controller, or they are under common controllers on both sides. All four cases can happen with both sender side and receiver side implementations of the BWE computation.
Please amend this list if I'm missing something. Regarding the one PLUS above, here's a critical look at the benefits of reducing feedback **via receiver-side congestion control**:
- less feedback means a greater chance for the feedback to be dropped, which translates into a greater chance for undesirable sender behavior
Stickler: Amount of feedback has little correlation with packet drop. If feedback is lost due to congestion, more feedback will lead to more packet drops. What is true is that when feedback occurs more rarely, losing one feedback packet will lead to a greater gap between feedback packets.
I'd say less feedback means the odds that a drop will hit a feedback packet are *lower* (as routers typically either tail-drop or random-drop, but in either case in a static evaluation a lower-bandwidth/packet flow (i.e. the feedback packets) will get hit less often), but as Harald says the impact is higher. And in a dynamic evaluation this may shift some.
- we deal with interactive traffic, in which case a lot of the feedback could hopefully be piggybacked onto payload packets
See above. This may not be that useful in practice, or may degrade congestion response time.
- in some cases there will be other parallel flows (e.g. the data channel in rtcweb) that already give us all the feedback that we need, this is another chance for feedback reduction
Do not count on the data channel being active or substituting for media-channel feedback even if active. It may provide another source of feedback data.
- receiver-side-congestion-control-based feedback reduction as in RRTCC means to reduce the feedback whenever feedback isn't urgent for the sender. I'd argue that, in fact, we should reduce feedback only when the channel carrying it (the backwards channel) is congested, and that is a generic function, not bound to the sender-side congestion control behavior.
Personally, I assume both channels are always congested, or will be soon (if we're doing our job right and aren't on a huge link), so bandwidth reduction for feedback always allows more bandwidth for useful data. In fact, this is one of the better arguments for piggybacking - less bandwidth required to provide the same feedback, but at the cost of the feedback possibly being delayed (and in some cases perhaps more likely to be impacted if the drops are packet-size aware, not just packet-aware, or if they get piggybacked on a packet with a lower DSCP marking).
So how urgent is it to reduce feedback, anyway? Some time ago, Bill Ver Steeg wrote:
*** 1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asynchronous networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble. ***
I assume "asynchronous" is "asymmetric" above?
As he states, multicast is out of scope. Plus, I think we should care more about technical reasons than what the RTP community considers to be a precious resource. As for the CableTV example, one can always find examples where people started doing useless things :-) but consider the following: many parallel TCP connections, each sending lots of feedback, work okay for maaaany end users that 1) surf the Internet or (bringing it closer to rtcweb thinking) 2) use P2P applications. And TCP doesn't even do any form of congestion control on ACKs (an RFC exists but no one uses it (so far)).
I'm not saying ACKs never cause problems. I'm saying it's probably rare that they do, and reducing their number WHEN they cause trouble is good enough (and, I would argue, a better decision).
And my position is that any *good* two-way communication will run near-but-not-quite-over the bandwidth limit as much of the time as possible, so ACKs always have the potential to cause problems (and always will force you a little further down the quality curve).
Let's take a step back and summarize the situation: 1) congestion-control-based-feedback-reduction has at least one significant MINUS (no. 1), probably two
Both are debatable.
2) the benefits of doing it seem minimal
3) not being based on congestion along the backwards path, it misses the point somewhat...
What that bullet was meant to say totally escaped me. RRTCC is all about congestion on the forward path, not the backwards path.
and in addition, per above, one should assume back-path congestion anyways.
As a result of the above, I would suggest moving all congestion control functionality to the sender side and keeping the receiver side dumb, as in TCP, but amended with some generic (i.e. not mechanism-specific) feedback-reduction functions (ACK congestion control; piggybacking; omitting feedback when it is known (e.g. via a signal from the sender) that we get feedback from another flow).
Alternatively, we could still remove the MINUS no. 1 above by agreeing on a generic sender behavior. IIRC, in RRTCC, the sender is rather simple - it follows the rate dictated by the receiver and has a timeout behavior. If all congestion control mechanisms are installed on the receiver side only, with the same agreed generic sender behavior for everyone, we can also have pluggable congestion control without a need for signalling.
Correct.
It's just the idea of multiple proposals for congestion control involving newly described sender AND receiver behavior that strikes me as bad design.
I think that if the sender (or receiver) has a simple design that can be interacted with by the (more complex) main algorithm, that would allow for improvement without requiring negotiation (and allow for asymmetric behaviors by different implementations/revisions, or even asymmetry due to local network knowledge (wifi, etc.)).
Thoughts?
As you can see, I tend to disagree with the validity of your arguments. You may still be right, but you're not convincing me with this set of arguments. (Another conversation I had last week was with one guy who argued that we should use a traceroute-like functionality to directly probe queues near the sender, rather than depending on whole-RTT signalling - that kind of mechanism would rely on packets that didn't even reach the receiver. Now, if we explore *that*, it's a real argument for putting control at the sender....)
Such probing may be problematic... if you can determine local and nearby network configuration, that might be useful. In my old algorithm, I used a heuristic of establishing max-known-good-bandwidth early in the call, and then being very cautious at "dipping my toes over" the (likely upstream) limit. (If it did remain delay-stable for long enough after going over, I'd update the limit.) Obviously this heuristic is more useful in leaf-node cases, but this provided some of what I think that person may have been thinking about.
--
Randell Jesup
randell-ietf@jesup.org
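A hypothetical reconstruction of the heuristic Randell describes above, purely for illustration: remember the highest rate known to have worked, cap probing to slightly above it, and raise the limit only after delay has stayed stable for a while when sending above it. All constants are made up for the sketch.

# Hypothetical sketch of a "max known good bandwidth" limiter. The 5% probe
# step and 5 s stability requirement are illustrative assumptions.

class KnownGoodLimiter:
    def __init__(self, probe_step=0.05, stable_needed_s=5.0):
        self.max_known_good_bps = 0.0     # highest rate known to have worked
        self.probe_step = probe_step      # allow "dipping toes" 5% over the limit
        self.stable_needed_s = stable_needed_s
        self.stable_since = None          # start of the current delay-stable period

    def allowed_rate(self, requested_bps):
        if self.max_known_good_bps == 0.0:
            return requested_bps          # early in the call: limit not established yet
        return min(requested_bps, self.max_known_good_bps * (1 + self.probe_step))

    def on_delay_sample(self, now, sending_bps, delay_stable):
        if not delay_stable:
            self.stable_since = None      # any delay trouble resets the stability clock
            return
        if self.stable_since is None:
            self.stable_since = now
        if sending_bps <= self.max_known_good_bps:
            return
        # Sending above the known-good limit: raise the limit only once delay
        # has stayed stable for long enough (or when first establishing it).
        if self.max_known_good_bps == 0.0 or now - self.stable_since >= self.stable_needed_s:
            self.max_known_good_bps = sending_bps

lim = KnownGoodLimiter()
lim.on_delay_sample(now=0.0, sending_bps=800_000, delay_stable=True)
print(lim.allowed_rate(2_000_000))  # capped to ~5% above the 800 kbps known-good rate
lim.on_delay_sample(now=1.0, sending_bps=840_000, delay_stable=True)
lim.on_delay_sample(now=7.0, sending_bps=840_000, delay_stable=True)
print(lim.allowed_rate(2_000_000))  # delay stayed stable for >5 s above it -> limit raised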
participants (5):
- Harald Alvestrand
- Jim Gettys
- Michael Welzl
- Randell Jesup
- Stefan Holmer