One congestion control to bind them all!

Dear all,

First, let me apologize for giving you "Michael's view of how our transport should be built" in this piecemeal fashion... you're getting a glimpse at how gradual my brain operates :-( I just can't help it.

I have now figured out that the best way to do transport for RTCWEB would involve using only ONE type of congestion control for everything (note that I'm not making any specific argument here about whether this is based on draft-alvestrand-rtcweb-congestion or LEDBAT or whatever).

Why?

Simply because your goal is to keep the delay low, by avoiding queue growth. TCP - and also SCTP's standard congestion control - will always try to push up the bottleneck queue until it sees a packet loss (or ECN), and therefore always be detrimental. Now, that's just a fact of life that we have to deal with - there may be other TCP traffic sharing the same bottleneck, and we want to do reasonably well in competition while trying to minimize queuing delay. This is not an ideal situation but it's what we have to live with. However, what we really don't want to do is to embed a standard-TCP-like congestion control mechanism WITHIN RTCWEB?!

If an RTCWEB user wants to have a video phone conversation and at the same time sends a file, if I understand it correctly, that file would be transferred using SCTP and its standard TCP-like congestion control. So the file transfer would push up the queue at the bottleneck and thereby increase the delay experienced by the user - the user would shoot her/himself in her/his foot by sending the file. You really don't want that to happen, I guess?

I think that the correct solution to that problem is to have only one type of congestion control, make it (mostly) delay-based to minimize queuing delay, and let that congestion control dictate the total rate available to the RTCWEB sender. Then, let the sender schedule packets across the video-, audio- and file-transfer streams based on some fairness policy. This gives you the best possible control over fairness as an extra - it's much better than trying to influence how the flows compete with each other on the bottleneck. Well, and again it seems to me that SCTP looks ideal for that type of usage.

Theoretically, it could perhaps be okay to use multiple different types of delay-based (delay-minimizing) controls in parallel, e.g. using draft-alvestrand-rtcweb-congestion-01 for video / audio and LEDBAT for data transfer, but that just seems to make the situation more complicated and it would give you less control over the fairness among the streams.

Curious to hear what you think :-) because I *know* I'm right :-D

Cheers, Michael
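[Editor's note: a minimal sketch of the arrangement described above - one (mostly delay-based) congestion controller producing a single aggregate rate, and a weighted scheduler dividing that rate across the audio, video and file-transfer streams. The class, field and stream names are purely illustrative and are not taken from draft-alvestrand-rtcweb-congestion, LEDBAT, or any SCTP stack.]

```typescript
// Hypothetical sketch: one congestion controller for the whole RTCWEB
// session, plus a weighted scheduler on top of it. Names are invented.

interface Stream {
  id: string;           // e.g. "audio", "video", "file-transfer"
  weight: number;       // relative share under the fairness policy
  minRateBps?: number;  // optional floor, e.g. to keep audio usable
}

class SessionScheduler {
  constructor(private streams: Stream[]) {}

  // totalRateBps is the single number produced by the one
  // (mostly delay-based) congestion control instance.
  allocate(totalRateBps: number): Map<string, number> {
    const shares = new Map<string, number>();
    const weightSum = this.streams.reduce((s, st) => s + st.weight, 0);
    let remaining = totalRateBps;

    // First honor any minimum rates (e.g. audio), then split the
    // remainder proportionally to the configured weights.
    for (const st of this.streams) {
      const floor = Math.min(st.minRateBps ?? 0, remaining);
      shares.set(st.id, floor);
      remaining -= floor;
    }
    for (const st of this.streams) {
      const extra = weightSum > 0 ? (remaining * st.weight) / weightSum : 0;
      shares.set(st.id, (shares.get(st.id) ?? 0) + extra);
    }
    return shares;
  }
}

// Example: the controller currently reports 1.5 Mbit/s in total.
const scheduler = new SessionScheduler([
  { id: "audio", weight: 1, minRateBps: 40_000 },
  { id: "video", weight: 4 },
  { id: "file-transfer", weight: 1 },
]);
console.log(scheduler.allocate(1_500_000));
```

The point of the sketch is that fairness between the streams becomes a local scheduling decision fed by one rate estimate, rather than the outcome of separate congestion controllers competing at the bottleneck.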

On Thu, Apr 5, 2012 at 8:14 AM, Michael Welzl <michawe@ifi.uio.no> wrote:
Dear all,
First, let me apologize for giving you "Michael's view of how our transport should be built" in this piecemeal fashion... you're getting a glimpse at how gradual my brain operates :-( I just can't help it.
I have now figured out that the best way to do transport for RTCWEB would involve using only ONE type of congestion control for everything (note that I'm not making any specific argument here about whether this is based on draft-alvestrand-rtcweb-congestion or LEDBAT or whatever).
Why?
Simply because your goal is to keep the delay low, by avoiding queue growth. TCP - and also SCTP's standard congestion control - will always try to push up the bottleneck queue until it sees a packet loss (or ECN), and therefore always be detrimental. Now, that's just a fact of life that we have to deal with - there may be other TCP traffic sharing the same bottleneck, and we want to do reasonably well in competition while trying to minimize queuing delay. This is not an ideal situation but it's what we have to live with. However, what we really don't want to do is to embed a standard-TCP-like congestion control mechanism WITHIN RTCWEB?!
If an RTCWEB user wants to have a video phone conversation and at the same time sends a file, if I understand it correctly, that file would be transferred using SCTP and its standard TCP-like congestion control. So the file transfer would push up the queue at the bottleneck and thereby increase the delay experienced by the user - the user would shoot her/himself in her/his foot by sending the file. You really don't want that to happen, I guess?
I think that the correct solution to that problem is to have only one type of congestion control, make it (mostly) delay-based to minimize queuing delay, and let that congestion control dictate the total rate available to the RTCWEB sender. Then, let the sender schedule packets across the video-, audio- and file-transfer streams based on some fairness policy. This gives you the best possible control over fairness as an extra - it's much better than trying to influence how the flows compete with each other on the bottleneck. Well, and again it seems to me that SCTP looks ideal for that type of usage.
Theoretically, it could perhaps be okay to use multiple different types of delay-based (delay-minimizing) controls in parallel, e.g. using draft-alvestrand-rtcweb-congestion-01 for video / audio and LEDBAT for data transfer, but that just seems to make the situation more complicated and it would give you less control over the fairness among the streams.
Curious to hear what you think :-) because I *know* I'm right :-D
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.

On Apr 5, 2012, at 4:03 PM, Justin Uberti wrote:
On Thu, Apr 5, 2012 at 8:14 AM, Michael Welzl <michawe@ifi.uio.no> wrote: Dear all,
First, let me apologize for giving you "Michael's view of how our transport should be built" in this piecemeal fashion... you're getting a glimpse at how gradual my brain operates :-( I just can't help it.
I have now figured out that the best way to do transport for RTCWEB would involve using only ONE type of congestion control for everything (note that I'm not making any specific argument here about whether this is based on draft-alvestrand-rtcweb-congestion or LEDBAT or whatever).
Why?
Simply because your goal is to keep the delay low, by avoiding queue growth. TCP - and also SCTP's standard congestion control - will always try to push up the bottleneck queue until it sees a packet loss (or ECN), and therefore always be detrimental. Now, that's just a fact of life that we have to deal with - there may be other TCP traffic sharing the same bottleneck, and we want to do reasonably well in competition while trying to minimize queuing delay. This is not an ideal situation but it's what we have to live with. However, what we really don't want to do is to embed a standard-TCP-like congestion control mechanism WITHIN RTCWEB?!
If an RTCWEB user wants to have a video phone conversation and at the same time sends a file, if I understand it correctly, that file would be transferred using SCTP and its standard TCP-like congestion control. So the file transfer would push up the queue at the bottleneck and thereby increase the delay experienced by the user - the user would shoot her/himself in her/his foot by sending the file. You really don't want that to happen, I guess?
I think that the correct solution to that problem is to have only one type of congestion control, make it (mostly) delay-based to minimize queuing delay, and let that congestion control dictate the total rate available to the RTCWEB sender. Then, let the sender schedule packets across the video-, audio- and file-transfer streams based on some fairness policy. This gives you the best possible control over fairness as an extra - it's much better than trying to influence how the flows compete with each other on the bottleneck. Well, and again it seems to me that SCTP looks ideal for that type of usage.
Theoretically, it could perhaps be okay to use multiple different types of delay-based (delay-minimizing) controls in parallel, e.g. using draft-alvestrand-rtcweb-congestion-01 for video / audio and LEDBAT for data transfer, but that just seems to make the situation more complicated and it would give you less control over the fairness among the streams.
Curious to hear what you think :-) because I *know* I'm right :-D
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents... Cheers, Michael

On Apr 5, 2012, at 5:21 PM, Michael Welzl wrote:
On Apr 5, 2012, at 4:03 PM, Justin Uberti wrote:
On Thu, Apr 5, 2012 at 8:14 AM, Michael Welzl <michawe@ifi.uio.no> wrote: Dear all,
First, let me apologize for giving you "Michael's view of how our transport should be built" in this piecemeal fashion... you're getting a glimpse at how gradual my brain operates :-( I just can't help it.
I have now figured out that the best way to do transport for RTCWEB would involve using only ONE type of congestion control for everything (note that I'm not making any specific argument here about whether this is based on draft-alvestrand-rtcweb-congestion or LEDBAT or whatever).
Why?
Simply because your goal is to keep the delay low, by avoiding queue growth. TCP - and also SCTP's standard congestion control - will always try to push up the bottleneck queue until it sees a packet loss (or ECN), and therefore always be detrimental. Now, that's just a fact of life that we have to deal with - there may be other TCP traffic sharing the same bottleneck, and we want to do reasonably well in competition while trying to minimize queuing delay. This is not an ideal situation but it's what we have to live with. However, what we really don't want to do is to embed a standard-TCP-like congestion control mechanism WITHIN RTCWEB?!
If an RTCWEB user wants to have a video phone conversation and at the same time sends a file, if I understand it correctly, that file would be transferred using SCTP and its standard TCP-like congestion control. So the file transfer would push up the queue at the bottleneck and thereby increase the delay experienced by the user - the user would shoot her/himself in her/his foot by sending the file. You really don't want that to happen, I guess?
I think that the correct solution to that problem is to have only one type of congestion control, make it (mostly) delay-based to minimize queuing delay, and let that congestion control dictate the total rate available to the RTCWEB sender. Then, let the sender schedule packets across the video-, audio- and file-transfer streams based on some fairness policy. This gives you the best possible control over fairness as an extra - it's much better than trying to influence how the flows compete with each other on the bottleneck. Well, and again it seems to me that SCTP looks ideal for that type of usage.
Theoretically, it could perhaps be okay to use multiple different types of delay-based (delay-minimizing) controls in parallel, e.g. using draft-alvestrand-rtcweb-congestion-01 for video / audio and LEDBAT for data transfer, but that just seems to make the situation more complicated and it would give you less control over the fairness among the streams.
Curious to hear what you think :-) because I *know* I'm right :-D
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents...
I agree with Michael here... Any documents available? Best regards, Michael
Cheers, Michael

On 4/5/2012 11:21 AM, Michael Welzl wrote:
On Apr 5, 2012, at 4:03 PM, Justin Uberti wrote:
On Thu, Apr 5, 2012 at 8:14 AM, Michael Welzl <michawe@ifi.uio.no> wrote:
I have now figured out that the best way to do transport for RTCWEB would involve using only ONE type of congestion control for everything (note that I'm not making any specific argument here about whether this is based on draft-alvestrand-rtcweb-congestion or LEDBAT or whatever).
[SNIP]
I think that the correct solution to that problem is to have only one type of congestion control, make it (mostly) delay-based to minimize queuing delay, and let that congestion control dictate the total rate available to the RTCWEB sender. Then, let the sender schedule packets across the video-, audio- and file-transfer streams based on some fairness policy. This gives you the best possible control over fairness as an extra - it's much better than trying to influence how the flows compete with each other on the bottleneck. Well, and again it seems to me that SCTP looks ideal for that type of usage.
Theoretically, it could perhaps be okay to use multiple different types of delay-based (delay-minimizing) controls in parallel, e.g. using draft-alvestrand-rtcweb-congestion-01 for video / audio and LEDBAT for data transfer, but that just seems to make the situation more complicated and it would give you less control over the fairness among the streams.
Curious to hear what you think :-) because I *know* I'm right :-D
Justin replied:
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents...
If you read the early archives of this list (and discussions predating it on rtcweb), you'll find that goal articulated a number of times. As one of those driving the congestion-control issue and also the data channel design, I've always spoken to wanting to in some manner merge the congestion control regimes of media and data flows, and barring that to at least provide feedback between them.

The last fallback to avoiding problems would be to partially punt the problem up to the application (which also knows the relative priorities of the media and data, and of each data flow), by reporting back to the JS application the current automatic distribution of available bits to the different streams, and allowing the application to change the distribution of those bits to the various media and data sub-flows (but not letting it increase the total number of bits sent directly; it could decrease it directly or indirectly, e.g. by removing flows, such as dropping a video stream). We will very likely be doing that application-level reporting, and probably allowing the application to adjust the distribution. My proposal on the API for that is outstanding in the W3 side of this standardization effort.

I spoke more directly to this at the rtcweb Interim meeting in February; my slides are at: http://www.ietf.org/proceedings/interim/2012/01/31/rtcweb/slides/rtcweb-1.pdf The relevant slides are 10-13. While not stated on the slides, this discussion emanates from the goal to in some manner merge them or make them work together (and not fight), and this was, I think, re-iterated from the podium there.

-- Randell Jesup randell-ietf@jesup.org
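[Editor's note: to make the shape of that fallback concrete, here is a rough sketch of what such an application-facing reporting/redistribution interface could look like. This is not the proposal referenced above on the W3 side; every interface, field and function name here is invented for illustration.]

```typescript
// Illustrative only: report the browser's automatic split to the JS
// application and let it redistribute, but never exceed the total
// rate that congestion control made available. Names are invented.

interface FlowAllocation {
  flowId: string;                      // a media track or data channel
  kind: "audio" | "video" | "data";
  currentBps: number;                  // what the browser allocated automatically
}

interface BandwidthReport {
  totalBps: number;                    // set by congestion control, not by the app
  allocations: FlowAllocation[];
}

type AllocationRequest = { flowId: string; requestedBps: number };

function applyAppRequest(
  report: BandwidthReport,
  requests: AllocationRequest[]
): FlowAllocation[] {
  const requestedTotal = requests.reduce((s, r) => s + r.requestedBps, 0);
  // The app may move bits around or reduce the total, but not grow it.
  if (requestedTotal > report.totalBps) {
    throw new Error("requested distribution exceeds available rate");
  }
  return report.allocations.map((a) => {
    const req = requests.find((r) => r.flowId === a.flowId);
    return req ? { ...a, currentBps: req.requestedBps } : a;
  });
}
```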

Justin replied:
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents...
If you read the early archives of this list (and discussions predating it on rtcweb), you'll find that goal articulated a number of times.
I think I have seen and heard about the wish to let the user prioritize flows, and somehow avoid that the flows would be totally ignorant about each other - but this is not the same as what I'm suggesting: a *single* congestion control instance for everything. Now I'm not sure if I get this right:

1) Justin's statement above means that there is a plan to have one single SCTP CC module for everything in the "data channel", but there will still be different congestion control for RTP/UDP transfer => then this is not what I mean, you still won't get the same efficiency as if you use one CC instance for *everything*, by which I really mean *everything*.

2) Justin's statement above means what I say here (one CC instance for *everything*) => then I have missed that and misunderstood the discussion, and just think it's good.
As one of those driving the congestion-control issue and also the data channel design, I've always spoken to wanting to in some manner merge the congestion control regimes of media and data flows, and barring that to at least provide feedback between them. The last fallback to avoiding problems would be to partially punt the problem up to the application (which also knows the relative priorities of the media and data, and of each data flow), by reporting back to the JS application the current automatic distribution of available bits to the different streams, and allow the application to change the distribution of those bits to the various media and data sub-flows (but not let it increase the total number of bits sent directly; it could decrease it directly or indirectly (by removing flows, such as dropping a video stream, etc)).
We will very likely be doing that application-level reporting, and probably allowing the application to adjust the distribution. My proposal on the API for that is outstanding in the W3 side of this standardization effort.
I spoke more directly to this at the rtcweb Interim meeting in February; my slides are at: http://www.ietf.org/proceedings/interim/2012/01/31/rtcweb/slides/rtcweb-1.pdf The relevant slides are 10-13. While not stated on the slides, this discussion emanates from the goal to in some manner merge them or make them work together (and not fight), and this was I think re-iterated from the podium there.
I saw that presentation, but I had interpreted it as being only about the SCTP data channel, which would be case 1) above. Even if you extend that to incorporate other streams with cross-reporting as you describe above, it sounds like a complicated vehicle that will, I think, never be as efficient as simply having a single congestion control instance for everything. Cheers, Michael

On Sun, Apr 8, 2012 at 1:59 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
Justin replied:
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents...
If you read the early archives of this list (and discussions predating it on rtcweb), you'll find that goal articulated a number of times.
I think I have seen and heard about the wish to let the user prioritize flows, and somehow avoid that the flows would be totally ignorant about each other - but this is not the same as what I'm suggesting: a *single* congestion control instance for everything.
Now I'm not sure if I get this right: 1) Justin's statement above means that there is a plan to have one single SCTP CC module for everything in the "data channel", but there will still be different congestion control for RTP/UDP transfer => then this is not what I mean, you still won't get the same efficiency as if you use one CC instance for *everything*, by which I really mean *everything*.
2) Justin's statement above means what I say here (one CC instance for *everything*) => then I have missed that and misunderstood the discussion, and just think it's good.
#2 is what I think we've had in mind - given that this is fairly greenfield territory, we should be able to have a mechanism that can handle both media and data. Said mechanism will still have to guess at non-RTP/SCTP flows, of course.
As one of those driving the congestion-control issue and also the data channel design, I've always spoken to wanting to in some manner merge the congestion control regimes of media and data flows, and barring that to at least provide feedback between them. The last fallback to avoiding problems would be to partially punt the problem up to the application (which also knows the relative priorities of the media and data, and of each data flow), by reporting back to the JS application the current automatic distribution of available bits to the different streams, and allow the application to change the distribution of those bits to the various media and data sub-flows (but not let it increase the total number of bits sent directly; it could decrease it directly or indirectly (by removing flows, such as dropping a video stream, etc)).
We will very likely be doing that application-level reporting, and probably allowing the application to adjust the distribution. My proposal on the API for that is outstanding in the W3 side of this standardization effort.
I spoke more directly to this at the rtcweb Interim meeting in February; my slides are at: http://www.ietf.org/proceedings/interim/2012/01/31/rtcweb/slides/rtcweb-1.pdf The relevant slides are 10-13. While not stated on the slides, this discussion emanates from the goal to in some manner merge them or make them work together (and not fight), and this was I think re-iterated from the podium there.
I saw that presentation, but I had interpreted it as being only about the SCTP data channel, which would be case 1) above. Even if you extend that to incorporate other streams with cross-reporting as you describe above, it sounds like a complicated vehicle that will, I think, never be as efficient as simply having a single congestion control instance for everything.
Cheers, Michael

On 4/8/2012 10:41 PM, Justin Uberti wrote:
On Sun, Apr 8, 2012 at 1:59 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
> Justin replied:
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents...
If you read the early archives of this list (and discussions predating it on rtcweb), you'll find that goal articulated a number of times.
I think I have seen and heard about the wish to let the user prioritize flows, and somehow avoid that the flows would be totally ignorant about each other - but this is not the same as what I'm suggesting: a *single* congestion control instance for everything.
Now I'm not sure if I get this right: 1) Justin's statement above means that there is a plan to have one single SCTP CC module for everything in the "data channel", but there will still be different congestion control for RTP/UDP transfer => then this is not what I mean, you still won't get the same efficiency as if you use one CC instance for *everything*, by which I really mean *everything*.
2) Justin's statement above means what I say here (one CC instance for *everything*) => then I have missed that and misunderstood the discussion, and just think it's good.
#2 is what I think we've had in mind - given that this is fairly greenfield territory, we should be able to have a mechanism that can handle both media and data. Said mechanism will still have to guess at non-RTP/SCTP flows, of course.
Correct - that was the stated goal, but thus far we don't have a definitive answer about how to do so. Note that Harald's draft generates available bandwidth estimates for all channels together if you let it include bytes received by the SCTP connection in the period in question. This would at minimum allow the sender side to decide (at the application layer) on a media/data split, and then use that to enforce an upper limit for the SCTP connection.

Even if we run everything in one congestion algorithm, we still need to have a mechanism for splitting the bandwidth between media and data channels; "normal" SCTP mechanisms would not suffice for media streams (see some of Justin's recent comments on send timing for examples of why).
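[Editor's note: a rough sketch of the interim arrangement just described, under the assumption that the receiver-side estimator produces one aggregate estimate covering RTP and SCTP bytes together; the function and field names, and the 20% data fraction, are made up for illustration, and real SCTP stacks expose different knobs for rate limiting.]

```typescript
// Illustrative sketch: one aggregate bandwidth estimate, an
// application-level media/data split, and the data share used as an
// upper limit for the SCTP association. Names are invented.

interface AggregateEstimate {
  availableBps: number; // estimate over RTP + SCTP bytes in the period
}

interface Split {
  mediaBps: number;
  dataBps: number;
}

// dataFraction reflects application policy, e.g. "give the file
// transfer at most 20% while a call is active".
function splitEstimate(est: AggregateEstimate, dataFraction: number): Split {
  const dataBps = Math.floor(est.availableBps * dataFraction);
  return { mediaBps: est.availableBps - dataBps, dataBps };
}

// Stand-in for whatever limit the SCTP stack actually exposes; real
// stacks might clamp cwnd, use a token bucket, or a socket option.
function capSctpSendRate(maxBps: number): void {
  console.log(`capping SCTP association at ${maxBps} bit/s`);
}

const split = splitEstimate({ availableBps: 2_000_000 }, 0.2);
capSctpSendRate(split.dataBps); // upper limit for the data channel
// split.mediaBps is then handed to the media encoders / pacer.
```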
We will very likely be doing that application-level reporting, and probably allowing the application to adjust the distribution. My proposal on the API for that is outstanding in the W3 side of this standardization effort.
I spoke more directly to this at the rtcweb Interim meeting in February; my slides are at: http://www.ietf.org/proceedings/interim/2012/01/31/rtcweb/slides/rtcweb-1.pdf The relevant slides are 10-13. While not stated on the slides, this discussion emanates from the goal to in some manner merge them or make them work together (and not fight), and this was I think re-iterated from the podium there.
I saw that presentation, but I had interpreted it as being only about the SCTP data channel, which would be case 1) above. Even if you extend that to incorporate other streams with cross-reporting as you describe above, it sounds like a complicated vehicle that will, I think, never be as efficient as simply having a single congestion control instance for everything.
Perhaps true - but efficiency isn't necessarily the primary goal here; a quality experience for the user given the available bandwidth is, including "appropriate" splits of data and media. And the reporting is because the application has knowledge of state we can't have - for example if the 2nd channel of video is important, or to decide if any video at all is important, or that when extra bits are available they should go to higher resolution instead of faster data transfer, or higher-quality audio, etc.

As discussed, the WebRTC system will apply an automatic distribution of available bandwidth according to static priorities given to it, but the app is given the choice to intervene. This activity by the app should not cause significant issues for the congestion control algorithm as mostly it's just redistributing the pie.

-- Randell Jesup randell-ietf@jesup.org

On Apr 9, 2012, at 5:10 AM, Randell Jesup wrote:
On 4/8/2012 10:41 PM, Justin Uberti wrote:
On Sun, Apr 8, 2012 at 1:59 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
> Justin replied:
I think we agree - to avoid the exact problems you mention, the plan has been to replace the SCTP CC module with a CC that is common across media and data.
Oh, cool! I wasn't aware of that - is that documented somewhere? Sorry if I missed it! I'm a bit new to the whole forest of RTCWEB documents...
If you read the early archives of this list (and discussions predating it on rtcweb), you'll find that goal articulated a number of times.
I think I have seen and heard about the wish to let the user prioritize flows, and somehow avoid that the flows would be totally ignorant about each other - but this is not the same as what I'm suggesting: a *single* congestion control instance for everything.
Now I'm not sure if I get this right: 1) Justin's statement above means that there is a plan to have one single SCTP CC module for everything in the "data channel", but there will still be different congestion control for RTP/UDP transfer => then this is not what I mean, you still won't get the same efficiency as if you use one CC instance for *everything*, by which I really mean *everything*.
2) Justin's statement above means what I say here (one CC instance for *everything*) => then I have missed that and misunderstood the discussion, and just think it's good.
#2 is what I think we've had in mind - given that this is fairly greenfield territory, we should be able to have a mechanism that can handle both media and data. Said mechanism will still have to guess at non-RTP/SCTP flows, of course.
Correct - that was the stated goal, but thus far we don't have a definitive answer about how to do so. Note that Harald's draft generates available bandwidth estimates for all channels together if you let it include bytes received by the SCTP connection in the period in question. This would at minimum allow the sender side to decide (at the application layer) on a media/data split, and then use that to enforce an upper limit for the SCTP connection.
Justin said that too (that #2 was the goal); good! Sorry for having misunderstood this.
Even if we run everything in one congestion algorithm, we still need to have a mechanism for splitting the bandwidth between media and data channels; "normal" SCTP mechanisms would not suffice for media streams (see some of Justin's recent comments on send timing for examples of why).
... but it's really only a scheduling function... that can always do a better job than giving priorities to a CC algorithm.
We will very likely be doing that application-level reporting, and probably allowing the application to adjust the distribution. My proposal on the API for that is outstanding in the W3 side of this standardization effort.
I spoke more directly to this at the rtcweb Interim meeting in February; my slides are at: http://www.ietf.org/proceedings/interim/2012/01/31/rtcweb/slides/rtcweb-1.pdf
The relevant slides are 10-13. While not stated on the slides, this discussion emanates from the goal to in some manner merge them or make them work together (and not fight), and this was I think re-iterated from the podium there.
I saw that presentation, but I had interpreted it as being only about the SCTP data channel, which would be case 1) above. Even if you extend that to incorporate other streams with cross-reporting as you describe above, it sounds like a complicated vehicle that will, I think, never be as efficient as simply having a single congestion control instance for everything.
Perhaps true - but efficiency isn't necessarily the primary goal here; a quality experience for the user given the available bandwidth is, including "appropriate" splits of data and media. And the reporting is because the application has knowledge of state we can't have - for example if the 2nd channel of video is important, or to decide if any video at all is important, or that when extra bits are available they should go to higher resolution instead of faster data transfer, or higher-quality audio, etc. As discussed, the WebRTC system will apply an automatic distribution of available bandwidth according to static priorities given to it, but the app is given the choice to intervene.
This activity by the app should not cause significant issues for the congestion control algorithm as mostly it's just redistributing the pie.
Sure. I understand that this won't be easy, but again I'm sure that it's much easier to achieve when the thing you have to manipulate is just a scheduler that decides how the currently available rate has to be divided among application streams.

Cheers, Michael
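[Editor's note: as a sketch of how small that scheduling function could be, here is a deficit-style picker that, each time the pacer is allowed to send, chooses the queued stream furthest below its allocated share. It is purely illustrative, not tied to any particular RTP or SCTP stack, and all names are invented.]

```typescript
// Illustrative deficit-style scheduler: given per-stream rate shares
// (from the single congestion controller plus the fairness policy),
// pick which queued stream may send the next packet.

interface StreamState {
  id: string;
  shareBps: number;   // allocation produced by the rate-splitting policy
  sentBytes: number;  // bytes already sent in the current interval
  hasData: boolean;   // something is queued for this stream
}

function pickNext(streams: StreamState[], intervalSec: number): StreamState | null {
  let best: StreamState | null = null;
  let bestDeficit = -Infinity;
  for (const s of streams) {
    if (!s.hasData) continue;
    const budgetBytes = (s.shareBps * intervalSec) / 8;
    const deficit = budgetBytes - s.sentBytes; // how far below its share
    if (deficit > bestDeficit) {
      bestDeficit = deficit;
      best = s;
    }
  }
  // Returning null means "send nothing right now": the pacer waits
  // rather than exceeding the total rate and growing the queue.
  return bestDeficit > 0 ? best : null;
}
```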
Participants (4):
- Justin Uberti
- Michael Tuexen
- Michael Welzl
- Randell Jesup