
Hi,

I thought it would be useful to understand what people consider to be the problems with TFRC, and why they mean a new congestion protocol would be better for RTCweb. Clearly TFRC does have some issues, but it would be useful for the community to understand why TFRC needs to be superseded, given that it is probably the most widely cited standard for real-time media flows and appears to have a growing number of 'TFRC based' deployments: e.g. GoogleTalk says it is 'based upon TFRC', Gnome's Empathy IM/videoconf client has recently put TFRC support into their Farsight library, and others (Magor etc.) mentioned their use of it on RTCweb.

I guess I can start with a few well-known issues with TFRC:
- Problems with clocking the packets out for TFRC
- Issues with using variable packet sizes
- "Very low video bitrates at moderate loss rates" (3GPP TR 26.114)
- 'Oscillatory behaviour'

Once a few of these are clear, one would expect the new offering(s) to need to demonstrate improvement in these areas.

Piers O'Hanlon

On 11/8/2011 12:53 PM, Piers O'Hanlon wrote:
I thought it would be useful to understand what people consider to be the problems with TFRC, and why they mean a new congestion protocol would be better for RTCweb.
In addition to the below, I'll note that the requirements when you have multiple streams between endpoints and the ability to adjust the split of bandwidth between them, and to adjust the parameters of the streams directly (resolution, framerate, etc) give more degrees of freedom and a more complex set of incoming data than TFRC was designed for. You could run each stream as an independent TFRC stream, but that would result in a bunch of sub-optimal use-cases (and they'd fight, which my memory indicates tends towards oscillation).
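To make the "more degrees of freedom" point concrete: given a per-stream bandwidth allocation, an endpoint can trade resolution against framerate rather than only scaling a single send rate. A minimal sketch in Python, assuming a made-up mode ladder (the bitrate thresholds and modes are purely illustrative, not from any codec spec):

    # Illustrative only: map a per-stream bandwidth allocation to encoder
    # parameters. The ladder values are invented for the example.
    LADDER = [  # (min_bps, width, height, fps)
        (1_500_000, 1280, 720, 30),
        (600_000,    640, 480, 30),
        (250_000,    640, 480, 15),
        (0,          320, 240, 10),
    ]

    def pick_mode(alloc_bps):
        for floor, width, height, fps in LADDER:
            if alloc_bps >= floor:
                return (width, height, fps)

    print(pick_mode(700_000))  # -> (640, 480, 30)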
Clearly TFRC does have some issues, but it would be useful for the community to understand why TFRC needs to be superseded, given that it is probably the most widely cited standard for real-time media flows and appears to have a growing number of 'TFRC based' deployments: e.g. GoogleTalk says it is 'based upon TFRC'
My understanding is that Google Talk (and Hangouts) are based on the algorithm laid out in Harald's draft, which uses TFRC-equivalent as an upper bound more or less, but does not actually implement TFRC. I personally have seen no uses of TFRC, though on the other hand I haven't really looked outside of hardware/software videophones and VoIP devices.
Gnome's Empathy IM/videoconf client has recently put TFRC support into their Farsight library, and others (Magor etc.) mentioned their use of it on RTCweb.
I've implemented videophones and associated congestion-control regimes in the past, and followed the definition of TFRC in the IETF (and decided early on that it was not of any utility for my old company). TFRC, as with all loss-based solutions, fails miserably in bufferbloat situations, as *seconds* of delay can build up before any loss occurs. The problem is probably worse now than when I first ran into it in early 2004. I do not consider TFRC a viable candidate for robust realtime communication. Many proprietary solutions have been created in the meantime: the one I came up with in 2004, what GIPS (now Google) came up with a short while later I believe, what Radvision has recently laid out, etc. Until recently everyone considered these delay-based solutions a proprietary value-add; they've only recently "come out of the closet".

Also, the 1/RTT reporting requirement was a stumbling block, though perhaps a solvable one. In reality, algorithms need *up to* 1/RTT reporting depending on the situation (and sometimes faster than 1/RTT is helpful), but at other times (stability) need very little feedback. This requires active support at both ends, not just the sender.
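For readers who haven't implemented one, the delay-based idea can be sketched in a few lines (the thresholds and gains below are illustrative, not any particular proprietary algorithm): the controller reacts to growth in queuing delay, which is exactly the signal a loss-based scheme like TFRC never sees until a bloated buffer finally overflows.

    # Minimal delay-based controller sketch; all constants are invented.
    class DelayBasedController:
        def __init__(self, rate_bps):
            self.rate = rate_bps
            self.base_owd = None  # lowest one-way delay seen ~= propagation delay

        def on_feedback(self, owd_ms):
            if self.base_owd is None or owd_ms < self.base_owd:
                self.base_owd = owd_ms
            queuing_ms = owd_ms - self.base_owd  # estimated queue depth
            if queuing_ms > 50:      # queue building: back off before any loss
                self.rate *= 0.85
            elif queuing_ms < 10:    # queue near-empty: probe gently upward
                self.rate *= 1.05
            return self.rate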
I guess I can start with a few well-known issues with TFRC:
- Problems with clocking the packets out for TFRC
- Issues with using variable packet sizes
Which is the norm for video.
- "Very low video bitrates at moderate loss rates" 3GPP tr26.114
A real issue for TFRC I imagine - though delay-based algorithms should avoid almost any congestion-based loss except on extreme swings with low-buffer devices.
- 'Oscillatory behaviour'
Also a major issue as my memory indicates, though some oscillation (below significant effect on user-noticeable quality) is ok. But honestly, none of those matter if TFRC doesn't deal with bufferbloat and minimize delay, and it doesn't. It's TCP-friendly UDP, it's not delay-sensitive.

-- 
Randell Jesup
randell-ietf@jesup.org

On Tue, Nov 8, 2011 at 1:52 PM, Randell Jesup <randell-ietf@jesup.org> wrote:
On 11/8/2011 12:53 PM, Piers O'Hanlon wrote:
I thought it would be useful to understand what people consider to be the problems with TFRC, and why they mean a new congestion protocol would be better for RTCweb.
In addition to the below, I'll note that the requirements when you have multiple streams between endpoints and the ability to adjust the split of bandwidth between them, and to adjust the parameters of the streams directly (resolution, framerate, etc) give more degrees of freedom and a more complex set of incoming data than TFRC was designed for. You could run each stream as an independent TFRC stream, but that would result in a bunch of sub-optimal use-cases (and they'd fight, which my memory indicates tends towards oscillation).
Clearly TFRC does have some issues, but it would be useful for the community to understand why TFRC needs to be superseded, given that it is probably the most widely cited standard for real-time media flows and appears to have a growing number of 'TFRC based' deployments: e.g. GoogleTalk says it is 'based upon TFRC'
My understanding is that Google Talk (and Hangouts) are based on the algorithm laid out in Harald's draft, which uses TFRC-equivalent as an upper bound more or less, but does not actually implement TFRC.
I personally have seen no uses of TFRC, though on the other hand I haven't really looked outside of hardware/software videophones and VoIP devices.
There is no standard spec for doing RTP over TFRC currently, although there is a draft floating around. And while Talk and Hangouts currently use a TFRC-based algorithm, this algorithm has been tweaked substantially to use delay-based congestion control, instead of the typical loss-based congestion control from TFRC (since losses can cause bad things in real-time applications). As Randell says, since what we want is TCP-friendly UDP, it probably makes more sense to come up with a new mechanism that can provide this friendliness, instead of reworking TFRC to fit our needs. Harald's draft proposes such a mechanism.
Gnome's Empathy IM/videoconf client has recently put TFRC support into their Farsight library, and others (Magor etc.) mentioned their use of it on RTCweb.
I've implemented videophones and associated congestion-control regimes in the past, and followed the definition of TFRC in the IETF (and decided early on that it was not of any utility for my old company). TFRC, as with all loss-based solutions, fails miserably in bufferbloat situations, as *seconds* of delay can build up before any loss occurs. The problem is probably worse now than when I first ran into it in early 2004. I do not consider TFRC a viable candidate for robust realtime communication. Many proprietary solutions have been created in the meantime: the one I came up with in 2004, what GIPS (now Google) came up with a short while later I believe, what Radvision has recently laid out, etc. Until recently everyone considered these delay-based solutions a proprietary value-add; they've only recently "come out of the closet".
Also, the 1/RTT reporting requirement was a stumbling block, though perhaps a solvable one. In reality, algorithms need *up to* 1/RTT reporting depending on the situation (and sometimes faster than 1/RTT is helpful), but at other times (stability) need very little feedback. This requires active support at both ends, not just the sender.
I guess I can start with a few well-known issues with TFRC:
- Problems with clocking the packets out for TFRC
- Issues with using variable packet sizes
Which is the norm for video.
- "Very low video bitrates at moderate loss rates" 3GPP tr26.114
A real issue for TFRC I imagine - though delay-based algorithms should avoid almost any congestion-based loss except on extreme swings with low-buffer devices.
- 'Oscillatory behaviour'
Also a major issue as my memory indicates, though some oscillation (below significant effect on user-noticeable quality) is ok.
But honestly, none of those matter if TFRC doesn't deal with bufferbloat and minimize delay, and it doesn't. It's TCP-friendly UDP, it's not delay-sensitive.
-- 
Randell Jesup
randell-ietf@jesup.org

On 8 November 2011 18:52, Randell Jesup <randell-ietf@jesup.org> wrote:
On 11/8/2011 12:53 PM, Piers O'Hanlon wrote:
I thought it would be useful to understand what people consider to be the problems with TFRC, and why they mean a new congestion protocol would be better for RTCweb.
In addition to the below, I'll note that the requirements when you have multiple streams between endpoints and the ability to adjust the split of bandwidth between them, and to adjust the parameters of the streams directly (resolution, framerate, etc) give more degrees of freedom and a more complex set of incoming data than TFRC was designed for. You could run each stream as an independent TFRC stream, but that would result in a bunch of sub-optimal use-cases (and they'd fight, which my memory indicates tends towards oscillation).
As regards cross-flow congestion behaviours there is the ongoing work on mulTFRC within ICCRG, which may provide some solutions, though the issue here is somewhat different, as the multi-flow behaviour is working with different trade-offs and the transport path probably isn't different, though of course it could be. I guess the other multi-path adaptation work going on elsewhere, like MPTCP and SCTP, could also help with some aspects - one may be able to adapt the resource sharing algorithms...
Clearly TFRC does have some issues, but it would be useful for the community to understand why TFRC needs to be superseded, given that it is probably the most widely cited standard for real-time media flows and appears to have a growing number of 'TFRC based' deployments: e.g. GoogleTalk says it is 'based upon TFRC'
My understanding is that Google Talk (and Hangouts) are based on the algorithm laid out in Harald's draft, which uses TFRC-equivalent as an upper bound more or less, but does not actually implement TFRC.
Yes, I was going on Google's own description: http://code.google.com/apis/talk/call_signaling.html#Video_Rate_Control "We use a mechanism based on TCP Friendly Rate Control (TFRC) to determine the available bandwidth and adjust the rate accordingly." But as I mentioned it was "based on TFRC", which could mean one of a few things... Though they also mention that they used some of the old TFRC-on-RTP work. As you're probably aware the draft has recently been reissued (though I imagine it won't change vastly): http://tools.ietf.org/html/draft-gharai-avtcore-rtp-tfrc-01
I personally have seen no uses of TFRC, though on the other hand I haven't really looked outside of hardware/software videophones and VoIP devices.
Gnome's Empathy IM/videoconf client has recently put TFRC support into their Farsight library, and others (Magor etc) who mentioned their use of it on RTCweb.
I've implemented videophones and associated congestion-control regimes in the past, and followed the definition of TFRC in the IETF (and decided early on that it was not of any utility for my old company). TFRC, as with all loss-based solutions, fails miserably in bufferbloat situations, as *seconds* of delay can build up before any loss occurs. The problem is probably worse now than when I first ran into it in early 2004. I do not consider TFRC a viable candidate for robust realtime communication. Many proprietary solutions have been created in the meantime: the one I came up with in 2004, what GIPS (now Google) came up with a short while later I believe, what Radvision has recently laid out, etc. Until recently everyone considered these delay-based solutions a proprietary value-add; they've only recently "come out of the closet".
True, bufferbloat can be a major issue with realtime apps. Though TFRC is not entirely loss based - it does have some mechanisms in it to back off as the RTT increases - presumably you found these inadequate? http://tools.ietf.org/html/rfc5348#section-4.5
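For reference, my reading of what section 4.5 actually does, sketched in Python (q2 = 0.9 is the value the RFC recommends): it scales the instantaneous rate by the ratio of the long-run average sqrt(RTT) to the latest sample, which damps rate oscillation but does not try to drive queuing delay back down.

    from math import sqrt

    Q2 = 0.9  # decay factor recommended in RFC 5348 section 4.5

    def x_inst(x_bps, r_sample_s, r_sqmean):
        """Return (instantaneous allowed rate, updated sqrt-RTT average)."""
        r_sqmean = Q2 * r_sqmean + (1 - Q2) * sqrt(r_sample_s)
        return x_bps * r_sqmean / sqrt(r_sample_s), r_sqmean

    # If the RTT sample doubles from 100 ms to 200 ms, the allowed rate
    # only drops by roughly a factor of sqrt(2) -- gentle damping, not a
    # retreat to a low-delay operating point.
    rate, avg = x_inst(500_000, 0.200, sqrt(0.100))
    print(round(rate))  # ~368000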
Also, the 1/RTT reporting requirement was a stumbling block, though perhaps a solvable one. In reality, algorithms need *up to* 1/RTT reporting depending on the situation (and sometimes faster than 1/RTT is helpful), but at other times (stability) need very little feedback. This requires active support at both ends, not just the sender.
I guess I can start with a few well-known issues with TFRC:
- Problems with clocking the packets out for TFRC
- Issues with using variable packet sizes
Which is the norm for video.
I'm not sure that packet sizes actually vary that much. Video frames typically consist of a number of MSS-sized packets followed by a 'random'-sized packet (to make up the remainder of the frame size beyond N x MSS packets). I guess if the video is low rate then the packets can be more variably sized overall. And audio is usually a bunch of small packets of the same [small] size. Though I guess Skype and others have developed audio codecs with variable packet sizes...
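Sketching that packetization pattern (purely illustrative, no real RTP payload rules): a frame of F bytes becomes floor(F/MSS) full-size packets plus one leftover packet of effectively random size.

    MSS = 1400  # assumed payload bytes per packet

    def packetize(frame_bytes):
        full, rest = divmod(frame_bytes, MSS)
        return [MSS] * full + ([rest] if rest else [])

    print(packetize(9000))  # [1400, 1400, 1400, 1400, 1400, 1400, 600]
    print(packetize(900))   # low-rate video: a single, variable-size packet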
- "Very low video bitrates at moderate loss rates" 3GPP tr26.114
A real issue for TFRC I imagine - though delay-based algorithms should avoid almost any congestion-based loss except on extreme swings with low-buffer devices.
It's not clear why the rates should be considered 'very low' - given that you're usually competing with some if not a lot of TCP traffic, you should be using approximately the same rate (if you're going to be TCP friendly) determined by the loss rate (i.e. according to the TCP equation).
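To put rough numbers on that, here is the TCP throughput equation TFRC uses (RFC 5348, section 3.1) evaluated at a few loss rates; the choices of s, RTT, and p are illustrative, with b = 1 and t_RTO = 4*RTT as the RFC suggests:

    from math import sqrt

    def tfrc_rate_bps(s=1200, rtt=0.1, p=0.05, b=1):
        t_rto = 4 * rtt
        denom = (rtt * sqrt(2 * b * p / 3)
                 + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p * p))
        return 8 * s / denom  # bits per second

    for p in (0.01, 0.05, 0.10):
        print(f"p={p:.2f}: {tfrc_rate_bps(p=p) / 1000:.0f} kbit/s")
    # roughly 1078, 354, and 170 kbit/s at RTT = 100 ms -- low for video at
    # "moderate" loss, and halved again at RTT = 200 ms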
- 'Oscillatory behaviour'
Also a major issue as my memory indicates, though some oscillation (below significant effect on user-noticeable quality) is ok.
But honestly, none of those matter if TFRC doesn't deal with bufferbloat and minimize delay, and it doesn't. It's TCP-friendly UDP, it's not delay-sensitive.
The oscillatory behaviour seems to be a general problem in this area...
-- 
Randell Jesup
randell-ietf@jesup.org

On 11/8/2011 4:44 PM, Piers O'Hanlon wrote:
On 8 November 2011 18:52, Randell Jesup <randell-ietf@jesup.org> wrote:
I've implemented videophones and associated congestion-control regimes in the past, and followed the definition of TFRC in the IETF (and decided early on that it was not of any utility for my old company). TFRC, as with all loss-based solutions, fails miserably in bufferbloat situations, as *seconds* of delay can build up before any loss occurs. The problem is probably worse now than when I first ran into it in early 2004. I do not consider TFRC a viable candidate for robust realtime communication. Many proprietary solutions have been created in the meantime: the one I came up with in 2004, what GIPS (now Google) came up with a short while later I believe, what Radvision has recently laid out, etc. Until recently everyone considered these delay-based solutions a proprietary value-add; they've only recently "come out of the closet".
True, bufferbloat can be a major issue with realtime apps. Though TFRC is not entirely loss based - it does have some mechanisms in it to back off as the RTT increases - presumably you found these inadequate? http://tools.ietf.org/html/rfc5348#section-4.5
I haven't tested them myself; but their purpose is to minimize oscillations in a particular case, not minimize RTT, and the overall smoothing is simplistic. It's still a loss-based solution, so while it might back off some, delay can still wander up until you hit loss - which is when you may have seconds of delay. The algorithm simply didn't have delay as a design goal, so no surprise, it doesn't handle it well.
- Issues with using variable packet sizes
Which is the norm for video.
I'm not sure that packet sizes actually vary that much. Video frames typically consist of a number of MSS sizes frames followed by a 'random' sized packet (to make up the the frame size from NxMSS packets). I guess if the video is low rate then the packets can be more variable sized overall. And audio is usually bunch of small packets of the same [small] size. Though I guess Skype and other have developed variable sized packet based audio codecs...
We do have variable audio (within a much smaller range), and some video streams average << MTU. In my case the nominal "normal" video bitrate was 110-200K, so packet size varied a LOT. And IDR/I-frames lead to a burst of MTU-size packets.
- "Very low video bitrates at moderate loss rates" 3GPP tr26.114
A real issue for TFRC I imagine - though delay-based algorithms should avoid almost any congestion-based loss except on extreme swings with low-buffer devices.
It's not clear why the rates should be considered 'very low' - given that you're usually competing with some if not a lot of TCP traffic, you should be using approximately the same rate (if you're going to be TCP friendly) determined by the loss rate (i.e. according to the TCP equation).
Except we need to control delay, so we *can't* let it get to loss if there's any deep buffering. There's also the question of "what is fair" if we replace 4 streams with one "combined" stream - it shouldn't have its share drop to 1/4 of what it was.
- 'Oscillatory behaviour'
Also a major issue as my memory indicates, though some oscillation (below significant effect on user-noticeable quality) is ok.
But honestly, none of those matter if TFRC doesn't deal with bufferbloat and minimize delay, and it doesn't. It's TCP-friendly UDP, it's not delay-sensitive.
The oscillatory behaviour seems to be a general problem in this area...
It's hard to avoid all oscillations, but we need to make sure they're damped.

-- 
Randell Jesup
randell-ietf@jesup.org

On 8 November 2011 22:46, Randell Jesup <randell-ietf@jesup.org> wrote:
On 11/8/2011 4:44 PM, Piers O'Hanlon wrote:
On 8 November 2011 18:52, Randell Jesup <randell-ietf@jesup.org> wrote:
I've implemented videophones and associated congestion-control regimes in the past, and followed the definition of TFRC in the IETF (and decided early on that it was not of any utility for my old company). TFRC, as with all loss-based solutions, fails miserably in bufferbloat situations, as *seconds* of delay can build up before any loss occurs. The problem is probably worse now than when I first ran into it in early 2004. I do not consider TFRC a viable candidate for robust realtime communication. Many proprietary solutions have been created in the meantime: the one I came up with in 2004, what GIPS (now Google) came up with a short while later I believe, what Radvision has recently laid out, etc. Until recently everyone considered these delay-based solutions a proprietary value-add; they've only recently "come out of the closet".
True, bufferbloat can be a major issue with realtime apps. Though TFRC is not entirely loss based - it does have some mechanisms in it to back off as the RTT increases - presumably you found these inadequate? http://tools.ietf.org/html/rfc5348#section-4.5
I haven't tested them myself; but their purpose is to minimize oscillations in a particular case, not minimize RTT, and the overall smoothing is simplistic. It's still a loss-based solution, so while it might back off some, delay can still wander up until you hit loss - which is when you may have seconds of delay.
The algorithm simply didn't have delay as a design goal, so no surprise, it doesn't handle it well.
I think you're right there - I was curious whether anyone had tried to play with it at all - as it does seem that some people are attempting to use TFRC.
- Issues with using variable packet sizes
Which is the norm for video.
I'm not sure that packet sizes actually vary that much. Video frames typically consist of a number of MSS-sized packets followed by a 'random'-sized packet (to make up the remainder of the frame size beyond N x MSS packets). I guess if the video is low rate then the packets can be more variably sized overall. And audio is usually a bunch of small packets of the same [small] size. Though I guess Skype and others have developed audio codecs with variable packet sizes...
We do have variable audio (within a much smaller range), and some video streams average << MTU. In my case the nominal "normal" video bitrate was 110-200K, so packet size varied a LOT. And IDR/I-frames lead to a burst of MTU-size packets.
Ok.
- "Very low video bitrates at moderate loss rates" 3GPP tr26.114
A real issue for TFRC I imagine - though delay-based algorithms should avoid almost any congestion-based loss except on extreme swings with low-buffer devices.
It's not clear why the rates should be considered 'very low' - given that you're usually competing with some if not a lot of TCP traffic, you should be using approximately the same rate (if you're going to be TCP friendly) determined by the loss rate (i.e. according to the TCP equation).
Except we need to control delay, so we *can't* let it get to loss if there's any deep buffering. There's also the question of "what is fair" if we replace 4 streams with one "combined" stream - it shouldn't have its share drop to 1/4 of what it was.
Sure, though I guess we're only in the lucky position of having control over whether there is actually delay if we're the only flow on the link? I'd have thought that a lot of links are going to be shared with at least the odd TCP flow that has already brought the delay up. The fairness issues can be tricky - I guess there are lots of tricks that have been played with using multiple streams to gain an advantage over single streams. It depends how fair you want to be and on what terms. Should we go ConEx or just flow-fair...?
- 'Oscillatory behaviour'
Also a major issue as my memory indicates, though some oscillation (below significant effect on user-noticeable quality) is ok.
But honestly, none of those matter if TFRC doesn't deal with bufferbloat and minimize delay, and it doesn't. It's TCP-friendly UDP, it's not delay-sensitive.
The oscillatory behaviour seems to be a general problem in this area...
It's hard to avoid all oscillations, but we need to make sure they're damped.
-- 
Randell Jesup
randell-ietf@jesup.org

On 11/8/2011 6:41 PM, Piers O'Hanlon wrote:
On 8 November 2011 22:46, Randell Jesup <randell-ietf@jesup.org> wrote:
Except we need to control delay, so we *can't* let it get to loss if there's any deep buffering. There's also the question of "what is fair" if we replace 4 streams with one "combined" stream - it shouldn't have its share drop to 1/4 of what it was.
Sure, though I guess we're only in the lucky position of having control over whether there is actually delay if we're the only flow on the link? I'd have thought that a lot of links are going to be shared with at least the odd TCP flow that has already brought the delay up.
The problems will come from either pageloads or other TCP traffic (email fetch/IMAP check, etc) from the same machine, or either established flows or pageload bursts on other devices in the same home or same WiFi link.
The fairness issues can be tricky - I guess there are lots of tricks that have been played with using multiple streams to gain an advantage over single streams. It depends how fair you want to be and on what terms. Should we go ConEx or just flow-fair...?
Witness the "2 connections" limit for HTTP.... and how both browsers and websites bend things (browsers by using more than 2 or pre-warming extra connections; websites by sharding their servers to encourage browsers to generate more connections at once). -- Randell Jesup randell-ietf@jesup.org

On 9 November 2011 05:48, Randell Jesup <randell-ietf@jesup.org> wrote:
On 11/8/2011 6:41 PM, Piers O'Hanlon wrote:
On 8 November 2011 22:46, Randell Jesup <randell-ietf@jesup.org> wrote:
Except we need to control delay, so we *can't* let it get to loss if there's any deep buffering. There's also the question of "what is fair" if we replace 4 streams with one "combined" stream - it shouldn't have its share drop to 1/4 of what it was.
Sure, though I guess we're only in the lucky position of having control over whether there is actually delay if we're the only flow on the link? I'd have thought that a lot of links are going to be shared with at least the odd TCP flow that has already brought the delay up.
The problems will come from either pageloads or other TCP traffic (email fetch/IMAP check, etc) from the same machine, or either established flows or pageload bursts on other devices in the same home or same WiFi link.
Yes, that's what I was thinking, so I wonder how often a delay-based algorithm actually has the chance to control the delay much, as the parallel TCP sessions will take up the slack. Even on today's home networks there are at least a couple of devices which may be sharing the same link with longer-lasting flows (e.g. iPlayer/Hulu over TCP, downloads, etc.).
The fairness issues can be tricky - I guess there are lots of tricks that have been played with using multiple streams to gain an advantage over single streams. It depends how fair you want to be and on what terms. Should we go ConEx or just flow-fair...?
Witness the "2 connections" limit for HTTP.... and how both browsers and websites bend things (browsers by using more than 2 or pre-warming extra connections; websites by sharding their servers to encourage browsers to generate more connections at once).
Yeah, and unfortunately the work to reduce HTTP latency can have the opposite effect on realtime flows - when things like Initial Window=10 are used, especially in conjunction with multiple flows. Though potentially with delay-based bulk transfer it may be ok - I'm curious whether anyone has seen any benefits in this area when competing with Microsoft's Compound TCP (though as you pointed out with TFRC, the delay aspect of CTCP may also not be about reducing latency).

Piers.
-- 
Randell Jesup
randell-ietf@jesup.org

On 11/8/2011 12:53 PM, Piers O'Hanlon wrote:
HI,
I thought it would be useful to understand what people consider to be the problems with TFRC, and why they mean a new congestion protocol would be better for RTCweb.
Clearly TFRC does have some issues, but it would be useful for the community to understand why TFRC needs to be superseded, given that it is probably the most widely cited standard for real-time media flows and appears to have a growing number of 'TFRC based' deployments: e.g. GoogleTalk says it is 'based upon TFRC', Gnome's Empathy IM/videoconf client has recently put TFRC support into their Farsight library, and others (Magor etc.) mentioned their use of it on RTCweb.
I guess I can start with a few well-known issues with TFRC:
- Problems with clocking the packets out for TFRC
- Issues with using variable packet sizes
- "Very low video bitrates at moderate loss rates" (3GPP TR 26.114)
- 'Oscillatory behaviour'
Once a few of these are clear, one would expect the new offering(s) to need to demonstrate improvement in these areas.
It's possible to use TFRC in order to compute a bound, and then use a 2nd algorithm with a focus on delay, operating below the bound set by TFRC. The TFRC feedback and equation can be computed based on the aggregate of media flows rather than per-flow, while adjustments can be made per flow in order to make the new aggregate conform to the TFRC limit. There would be some tailoring involved, but overall, I think there could be a strong basis on TFRC.

Other experimental algorithms could (and should!) be developed that might be more aggressive, but at the moment, if there is some great urgency, TFRC seems to be the closest Standards Track tool for the job.

I think the protocols could be tooled in order to support swap-out of CC mechanisms over time, with some reasonable expectations of what the CC mechanism needs as inputs (e.g. delays, loss events, etc.) and what types of levers it needs to use as outputs (e.g. adjusting rates, packet sizes, etc.).

-- 
Wes Eddy
MTI Systems
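A rough sketch of the structure described above, with all names hypothetical: TFRC computed over the aggregate provides the TCP-friendly ceiling, a delay-focused controller operates underneath it, and the resulting aggregate is apportioned per flow.

    def aggregate_target(tfrc_bound_bps, delay_rate_bps):
        # the delay-based controller runs freely but may never exceed the
        # TCP-friendly bound computed by TFRC over the whole aggregate
        return min(tfrc_bound_bps, delay_rate_bps)

    def apportion(target_bps, weights):
        # per-flow adjustments that keep the aggregate within the limit
        total = sum(weights.values())
        return {flow: target_bps * w / total for flow, w in weights.items()}

    target = aggregate_target(tfrc_bound_bps=900_000, delay_rate_bps=600_000)
    print(apportion(target, {"audio": 1, "video": 8}))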

On 11/8/2011 8:37 PM, Wesley Eddy wrote:
It's possible to use TFRC in order to compute a bound, and then use a 2nd algorithm with a focus on delay, operating below the bound set by TFRC.
How would TFRC produce a good bound if the delay-based algorithm keeps delay low and avoids loss? And if so, what is using TFRC as a bound doing for you? As far as I can see, just acting as a stopgap against the primary algorithm going haywire, and maybe reacting when there's a sudden bandwidth restriction - but the delay algorithm is likely to be more aggressive in reacting to it. TFRC would see no loss, so it will sit thinking the bandwidth available is 2x the current send rate, roughly.

Also, if for some reason you're not trying to pump the channel full of data, TFRC will reduce its bandwidth estimate, since it's based on X_recv, which is the rate data was received during the last RTT. With codec data, you don't have a queue of data waiting to be sent, so with a short RTT you may get no packets or just an audio packet in the previous RTT. In TFRC, the sender's X_recv_set is "typically" only two entries, or 2 RTT. If the devices are on the same LAN, RTT may be 10ms. Even if you extend X_recv_set, you won't generally have a good bound on bandwidth other than "2x the instantaneous bandwidth most recently seen". But instantaneous bandwidth can be misleading, especially with rate-limited codecs and if you're far from the bottleneck - which is the norm, as the bottleneck is usually the first upstream link, and you're measuring at the other side of the far-end downstream link - dispersed packets can re-aggregate on the faster downstream link.

It also makes it impossible to cleanly support bursty transmissions, since bandwidth ramps up (and down) slowly. Bursts after idleness are now allowed in RFC 5348, but only one RTT's worth at the current bandwidth estimate. Since the bandwidth estimate degrades over time if the client is idle for even moderate periods (think chat, or in a game when it's not exchanging data with a particular other player, or only at low BW), the value of this burst declines as well - and the value is especially low on LANs and in low-RTT settings. (Chat with your neighbor, or with another coworker over WiFi.)

Realize that delay-based CC is almost by definition more conservative in general than TFRC, so using TFRC as a bound doesn't buy you much if anything, but may cause major problems with bursty use, for example. Think push-to-talk type applications in a game with video - if the bandwidth is there, you want to start using it from the start of the communication, and very quickly adapt if it's not there (adapt both up and down faster than TFRC, and maybe start higher than TFRC, especially if we have history about the connection). The start of a video communication is especially sensitive to low bandwidth and slow ramping/convergence, since a "good" baseline image needs to be received for high-quality, efficient P-frames to produce a good experience.

<IETF contributor hat on> If the spec requires TFRC, implementers will likely use another algorithm underneath it or instead of it (delay-based) and largely or completely ignore formal TFRC even if they officially support it, since actually using TFRC would not produce a sufficiently reliable low-delay, high-quality connection. And that will invite incompatibilities, depending on the choices they make to "improve on" or override TFRC. Nota bene: I'm also an implementer.

-- 
Randell Jesup
randell-ietf@jesup.org
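The X_recv_set collapse described above can be shown in a few lines (simplified from my reading of RFC 5348: the sender keeps a short history of reported receive rates and caps its sending rate at twice the maximum of that set):

    from collections import deque

    class RecvLimit:
        def __init__(self, entries=2):          # "typically" two entries
            self.x_recv_set = deque(maxlen=entries)

        def on_report(self, x_recv_bps):
            self.x_recv_set.append(x_recv_bps)
            return 2 * max(self.x_recv_set)     # the sender's rate cap

    lim = RecvLimit()
    lim.on_report(500_000)         # talking: 500 kbit/s being received
    print(lim.on_report(500_000))  # cap = 1,000,000
    lim.on_report(20_000)          # go nearly idle for two short LAN RTTs...
    print(lim.on_report(20_000))   # cap collapses to 40,000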

Just one detail...

On 11/09/2011 03:59 PM, Randell Jesup wrote:
In TFRC, the sender's X_recv_set is "typically" only two entries, or 2 RTT. If the devices are on the same LAN, RTT may be 10ms.
In my LAN just now:

PING RTT for 64 bytes: 0.1 ms on wired, 0.7 ms on wireless-to-wired
PING RTT for 1400 bytes: 0.3 ms on wired, 1.3 ms on wireless-to-wired

Our algorithms have to function well when RTT is 0.1 ms, as well as when it's 1000 ms.
participants (5):
- Harald Alvestrand
- Justin Uberti
- Piers O'Hanlon
- Randell Jesup
- Wesley Eddy