
So, there are a few possible feedback mechanisms (regardless of the exact algorithm):

1) TMMBR
   - Has issues in that it's SSRC-specific
   - Requires most of the algorithm to run in the receiver
   - Travels over RTCP and is subject to AVPF rules; otherwise can send whenever needed
   Pro:
   - Already exists; has some chance of being useful in interop situations (iffy)
   Cons:
   - In extreme congestion, can fail to be delivered.
   - If congestion information is merged between multiple streams between endpoints, it's confusing what should be sent for a specific SSRC - it requires the receiver to make decisions about inter-stream tradeoffs without knowing the parameters, or requires revising the TMMBR spec.

2) REMB
   - Handles multiple SSRCs (good)
   - Requires most of the algorithm to run in the receiver
   - Travels over RTCP and is subject to AVPF rules; otherwise can send whenever needed
   Pros:
   - Eases merging congestion information for streams
   - Lower "extra" packet rate than explicit ack
   - Can be suppressed when nothing interesting is happening - lower bandwidth usage
   Cons:
   - If the RTCP fraction is too low, can be blocked (so don't make it too low)
   - In extreme congestion, can fail to be delivered.

3) Explicit RTCP C-ACK packets (Congestion-ACK)
   - 1 packet per incoming packet (or a short queuing time before sending, to cut the packet rate)
   - Algorithm runs mostly in the sender
   - Travels over RTCP and is subject to AVPF rules
   Pro:
   - Sender can trigger on failure to receive C-ACKs - though we don't know whether there's congestion in the sending direction, just in the feedback direction
   Cons:
   - Extra bandwidth and packet traffic - increases congestion and uses queue slots
   - If the RTCP fraction is too low, can be blocked (and this uses a lot more than 1 or 2)

I'm going to propose a fourth mechanism:

4) CHEX (Congestion Header EXtension) - ok, come up with a better name!
   - Multiple SSRCs - multiple 'acks' in the extension
   - Allows most of the algorithm to run in the sender
   - Piggybacks in header extensions; can only be sent with an RTP packet (i.e. may be delayed)
   Pros:
   - Extends existing support for actual-send-time header extensions (XXXX)
   - Sender can trigger on failure to receive - though we don't know whether there's congestion in the sending direction, just in the feedback direction
   - Minimal additional bandwidth compared to C-ACK (3)
   Cons:
   - Anything that strips header extensions causes failover to basic operation
   - Requires waiting for an outgoing packet - fall back to sending via RTCP if we need to send "now" or if we've waited too long for a media packet (or anticipate waiting too long).
   - Only one header extension per packet may be a small issue. Cuts max payload some (not a lot), but that may limit the number of updates in a single packet.
   - Noticeably more bandwidth than REMB (2)

Basically, it would be a combination of the actual send time (like XXXX) and reception data (equivalent to TCP ACKs, but including the time received, or the inter-packet delta difference from the send time stamp - i.e. the input to the Kalman filter).

I'm not saying this is my preference (I currently believe it makes more sense to run at least the Kalman filter in the receiver, and probably omit some), but it makes sense to explore the tradeoffs between sender and receiver running the algorithm.

If we feel we want to use explicit acknowledgment, or seriously consider it, we'll need to talk more about this and the tradeoffs and make this more fleshed out.

-- 
Randell Jesup
randell-ietf@jesup.org
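The "inter-packet delta difference from the send time stamp" that feeds the Kalman filter can be sketched as follows. This is a minimal illustration under my own reading of the thread; the function name and units are mine, not from any spec:

```python
def delay_gradient(send_ts_ms, recv_ts_ms):
    """One-way delay variation between consecutive packets:
        d(i) = (recv_i - recv_{i-1}) - (send_i - send_{i-1})
    A run of positive values suggests a growing bottleneck queue.
    This per-packet delta is the kind of input a delay-based
    estimator (e.g. a Kalman filter) would consume."""
    return [(recv_ts_ms[i] - recv_ts_ms[i - 1]) -
            (send_ts_ms[i] - send_ts_ms[i - 1])
            for i in range(1, len(send_ts_ms))]
```

For example, packets sent every 20 ms but arriving 25 ms apart yield a constant +5 ms gradient - the signature of a filling queue - while equal spacing yields zeros.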

Nice summary, some comments inline. On Wed, Aug 8, 2012 at 6:35 PM, Randell Jesup <randell-ietf@jesup.org> wrote:
2) REMB Multiple SSRCs (good) Requires most of the algorithm to run in the receiver Travels over RTCP and is subject to AVPF rules, otherwise can send whenever needed Pro: Eases merging congestion information for streams Lower "extra" packet rate than explicit ack Can be suppressed when nothing interesting is happening - Lower bandwidth usage Con: If RTCP fraction is too low, can be blocked (so don't make it too low) In extreme congestion, can fail to be delivered.
You could send one REMB for each "congested" packet received, thereby further reducing the chance of losing important REMBs due to congestion on the feedback channel.
I'm going to propose a fourth mechanism: 4) CHEX (Congestion Header EXtension) - ok, come up with a better name! Multiple SSRCs - multiple 'acks' in the extension Allows most of the algorithm to run in the sender Piggybacks in header extensions, can only send with an RTP packet (i.e. may be delayed) Pros: Extends existing support for actual send time header extensions (XXXX) Sender can trigger on failure to receive - though we don't know if there's congestion in the sending direction, just in the feedback direction Minimal additional bandwidth compared to C-ACK (3) Cons: Anything that strips header extensions causes failover to basic Requires waiting for an outgoing packet - fallback to sending via RTCP if we need to send "now" or if we've waited too long for a media packet (or anticipate waiting too long). Only one header extension per packet may be a small issue. Cuts max payload some (not a lot), but that may limit the number of updates in a single packet. Noticeably more bandwidth than REMB (2)
The fallback to RTCP is particularly important for receive-only clients.
_______________________________________________
Rtp-congestion mailing list
Rtp-congestion@alvestrand.no
http://www.alvestrand.no/mailman/listinfo/rtp-congestion

Hi,

Nice summary indeed! Very interesting for an RTP newbie like me... A few observations:

- I think (not sure though, maybe it was *only* Randell :-) - well, I like it too!) that I heard several people talk about the idea of piggybacking ACKs on the RTP payload whenever that's possible. That just seems to be a good idea.

- I think that the RTCP feedback limitation rules grew out of thinking about multicast, which we decided to avoid here. Given that we're dealing with congestion information and want to react to growing queues as fast as we can, I really don't think that we should be limited in that way for simple, short packets (which *might* translate into: RTCP is not the right tool for this. Don't know...). Just as a thought model, I think it's perfectly "legal" to run RTP over TCP, and then you get all these TCP ACKs... why would that be appropriate, but sending that amount of feedback for something that's running over UDP wouldn't?

- Nothing's bad about sending a lot of feedback traffic as long as there is no backwards congestion, and backwards congestion can be detected and reacted upon in rather simple ways - see RFC 5690, or DCCP, for examples. It's early enough to reduce feedback when backwards congestion is detected, in my opinion.

- It seems that a sender should be able to parse a multitude of feedback packets... not sure if we can cleanly standardize such a thing, but in the case of RTCWEB, for instance, you may face incoming RTCP TMMBR messages, some new RTCP message, and SCTP ACKs too, all from the same path... I think that the FSE could help here. But first things first, I suppose.

Cheers,
Michael

On 8. aug. 2012, at 18:35, Randell Jesup wrote:

Hi, On 9 Aug 2012, at 08:43, Michael Welzl wrote:
Hi,
Nice summary indeed! Very interesting for a RTP-newbie like me...
A few observations:
- I think (not sure though, maybe it was *only* Randell :-) well, I like it too! ) that I heard several people talk about the idea of piggybacking ACKs on RTP payload whenever that's possible. That just seems to be a good idea.
Yes - from my brief analysis of Facetime, it would appear that is how they are signalling congestion, as there's not much sign of RTCP packets that I can see... When using RTP-header-based feedback, one could envisage a case where video is unidirectional but audio is maintained, thus keeping a two-way feedback channel open.
- I think that the RTCP feedback limitation rules grew out of thinking of multicast, which we decided to avoid here. Given the fact that we're dealing with congestion information and want to react to growing queues as fast as we can, I really don't think that we should be limited in that way for simple, short packets (which *might* translate into: RTCP is not the right tool for this. Don't know...). Just as a thought model, I think it's perfectly "legal" to run RTP over TCP, and then you get all these TCP ACKs... why would that be appropriate, but sending that amount of feedback for something that's running over UDP wouldn't?
It does also cover cases where you have multicast-like distribution scenarios - such as mixers and relays. But as the group size diminishes, the imposed delay is reduced, so it can allow for what AVPF terms 'immediate' feedback (with some caveats). But since the goal of limiting feedback is largely to avoid congestion, it might seem that one could make an exception if we're actually using the feedback for congestion avoidance...

Piers.

On 8/9/2012 8:38 AM, Piers O'Hanlon wrote:
Hi,
On 9 Aug 2012, at 08:43, Michael Welzl wrote:
Hi,
Nice summary indeed! Very interesting for a RTP-newbie like me...
A few observations:
- I think (not sure though, maybe it was *only* Randell :-) well, I like it too! ) that I heard several people talk about the idea of piggybacking ACKs on RTP payload whenever that's possible. That just seems to be a good idea.
Yes - from my brief analysis of Facetime it would appear that is how they are signalling congestion as there's not much sign of RTCP packets that I can see....
I had a hack in our videophone to use header extensions as an alternate channel for RTCP, due to a Canadian ISP/telecom not forwarding RTCP packets. (Grrrr.) Facetime may be doing it as a combination of cheap rtcp-mux and reducing bandwidth/packet rate, and maybe to avoid differential handling by mobile carriers.
When using RTP header based feedback one could envisage a case where video may be unidirectional but audio is maintained thus keeping a two way feedback channel open.
Yes, and audio typically runs at a shorter frame interval.
- I think that the RTCP feedback limitation rules grew out of thinking of multicast, which we decided to avoid here. Given the fact that we're dealing with congestion information and want to react to growing queues as fast as we can, I really don't think that we should be limited in that way for simple, short packets (which *might* translate into: RTCP is not the right tool for this. Don't know...). Just as a thought model, I think it's perfectly "legal" to run RTP over TCP, and then you get all these TCP ACKs... why would that be appropriate, but sending that amount of feedback for something that's running over UDP wouldn't?
It does also cover cases where you have multicast-like distribution scenarios - such as mixers and relays. But as the group size diminishes, the imposed delay is reduced, so it can allow for what AVPF terms 'immediate' feedback (with some caveats).
But since the goal of limiting feedback is largely to avoid congestion, it might seem that one could make an exception if we're actually using them for congestion avoidance...
True. Note that sending rates should take into account RTCP usage (actual, not maximum), I believe.

-- 
Randell Jesup
randell-ietf@jesup.org

Michael Welzl <michawe@ifi.uio.no> wrote:
...piggybacking ACKs on RTP payload...
Randell Jesup <randell-ietf@jesup.org> wrote:
...use header extensions as an alternate channel for RTCP...
Several implementations do either or both of these things, in proprietary ways obviously. I think both should be standardized, although this may be an unpopular view in AVT.

Note that there is no standardized generic ACK. I just submitted an erratum to the RFC 4585 ABNF to clarify that "ack" (without parameters) is invalid, since there is no generic ACK. There is only "ack rpsi", which is specific to H.263 Annex N, and "ack app", which is proprietary.

Beyond ACK, many other types of RTCP feedback would benefit from the efficiency of piggybacking in RTP header extensions (NACK, ECN, FIR, TMMBR, PLI, SLI, etc.). So a general mechanism for all RTCP feedback may be more useful than a single mechanism for ACKs.

Mo

On 8/11/2012 6:50 PM, Mo Zanaty (mzanaty) wrote:
Michael Welzl <michawe@ifi.uio.no> wrote:
...piggybacking ACKs on RTP payload...

Randell Jesup <randell-ietf@jesup.org> wrote:

...use header extensions as an alternate channel for RTCP...

Several implementations do either or both of these things, in proprietary ways obviously. I think both should be standardized, although this may be an unpopular view in AVT.
I *may* agree, though I also agree it may be unpopular. I've had to do it because of using SBCs that were configured not to pass RTCP at all and didn't like RTCP on the RTP port, and I needed it for PLI/etc. But that was a rare homegrown (though ISP-wide) SBC; even misconfigured SBCs typically support RTCP-mux, I imagine. And I did it before RTCP-mux was defined, and it was part of why I supported RTCP-mux.

Once you start pushing enough data to double the normal packet rate, the per-packet overhead can get to be a problem, especially if you're on a low-bandwidth link. Instead of (say) 30 video + 50 audio packets per second, you have an additional 80. Sending them in header extensions would save around 20 Kbps, perhaps a little more - marginal if you have 250 Kbps or up, but significant if you have (say) 128 Kbps and are already losing around 25 Kbps to overhead. My old phones would run 30 fps down to total bandwidths as low as 60-80 Kbps, though it was running QCIF down that low (plus iLBC 20 ms - 15 Kbps plus overhead - so the video payload got as low as 40, even 30 Kbps). At those bitrates, 20 Kbps is huge.

I'll admit that given RTCP-mux (and assuming nothing interferes with negotiation of it!), the argument is largely one of efficiency in bandwidth and packet rate. And this is much less of an issue if you're not sending circa one feedback message per packet.

I assume this would fall in CORE.
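The overhead arithmetic above can be sketched roughly as follows. The 40-byte default is my assumption (IPv4 + UDP + a minimal RTCP feedback header), not a measured figure:

```python
def feedback_overhead_kbps(pkts_per_sec, bytes_per_pkt=40):
    """Header cost of standalone feedback packets.  The 40-byte
    default assumes IPv4 (20) + UDP (8) + a minimal RTCP feedback
    header (12); link-layer framing and SRTCP authentication
    would add more."""
    return pkts_per_sec * bytes_per_pkt * 8 / 1000.0
```

Under these assumptions, 80 extra feedback packets per second cost about 25.6 Kbps in headers alone - in the same ballpark as the ~20 Kbps savings cited above, and a large fraction of a 128 Kbps link.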
Note that there is no standardized generic ACK. I just submitted an errata to the RFC 4585 ABNF to clarify that "ack" (without parameters) is invalid since there is no generic ACK. There is only "ack rpsi" which is specific to H.263 Annex N, and "ack app" which is proprietary.
I think there was a generic ACK in an earlier draft, but it was removed (I think a hole in the code list was left to avoid breaking people working with the draft).
Beyond ACK, many other types of RTCP feedback would benefit from the efficiency of piggybacking in RTP header extensions. (NACK, ECN, FIR, TMMBR, PLI, SLI, etc.) So a general mechanism for all RTCP feedback may be more useful than a single mechanism for ACKs.
Quite true. The toughest part is to define when such a thing should be used instead of RTCP-mux. It would make using other header extensions tricky (though possible), and it would imply a delay of up to circa 20-30 ms (perhaps, in a few cases, 100 ms) in sending feedback - on average circa half that, and less if the feedback can piggyback on other RTP streams. If there's "important" data you could send a direct RTCP packet and not wait.

-- 
Randell Jesup
randell-ietf@jesup.org
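The wait-or-fallback policy described above - hold feedback hoping to piggyback it on the next outgoing RTP packet, but send directly via RTCP when it is urgent or has waited too long - might look roughly like this. All names and the 30 ms threshold are hypothetical, chosen only to match the delays discussed in the thread:

```python
import time

MAX_WAIT_MS = 30  # hypothetical bound on how long queued feedback may wait


class FeedbackScheduler:
    """Sketch of a wait-or-fallback feedback scheduler; a thought
    experiment, not an implementation of any spec."""

    def __init__(self, send_rtcp, now_ms=lambda: time.monotonic() * 1000):
        self.send_rtcp = send_rtcp  # callback: transmit items via RTCP now
        self.now_ms = now_ms        # injectable clock (ms), eases testing
        self.pending = []           # list of (queued_at_ms, feedback_item)

    def queue(self, item, urgent=False):
        if urgent:
            self.send_rtcp([item])  # "need to send now": skip piggybacking
        else:
            self.pending.append((self.now_ms(), item))

    def take_piggyback(self):
        """Called when an RTP packet is about to go out; returns all
        pending feedback to be carried in its header extension."""
        items = [item for _, item in self.pending]
        self.pending.clear()
        return items

    def poll(self):
        """Called periodically: anything that has waited longer than
        MAX_WAIT_MS falls back to a direct RTCP send."""
        now = self.now_ms()
        stale = [item for t, item in self.pending if now - t > MAX_WAIT_MS]
        if stale:
            self.pending = [(t, item) for t, item in self.pending
                            if now - t <= MAX_WAIT_MS]
            self.send_rtcp(stale)
```

The injectable clock and `send_rtcp` callback keep the sketch testable without a real network; a media pipeline would call `take_piggyback()` just before serializing each outgoing RTP packet.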

Is there value in going completely extreme and defining a generic header extension for piggybacking RTCP in RTP packets? That way, whatever we define in RTCP can also be carried as a header extension, if it makes sense to do so. Of course, it may rarely make sense.

On 08/12/2012 06:26 AM, Randell Jesup wrote:

It might be more productive to consider this sort of optimisation later, once we've defined the congestion control algorithm and seen whether it's needed. Optimising the header overheads of a not-yet-defined protocol seems premature, and it's not clear that we need enough feedback to require this sort of optimisation.

Colin

On 13 Aug 2012, at 02:03, Harald Alvestrand wrote:
Is there value in going completely extreme and define a generic header extension for piggybacking RTCP in RTP packets?
That way, whatever we define in RTCP can also be carried as a header extension, if it makes sense to do so.
Of course, it may rarely make sense.

Hm, I think I don't agree with considering such optimizations later: the way I read the charter, we're providing a framework for congestion control algorithms, i.e. *the* algorithm doesn't exist.

It's not hard to construct cases where limiting feedback may be a bad idea. For example: the queue grows => before the sender can notice it or react, you lose just the few feedback packets that are sent, and the sender has to resort to a timeout => even more queue growth due to slow reaction. This may seem a bit artificial, but do we already want to make a decision to live with this situation?

If decisions follow the sequence:
1) first algorithm: has limited feedback
2) define feedback just for that algorithm, assume it works for all
3) more algorithms...
... then we may be putting ourselves in a corner that we can't get out of, making it impossible to define a mechanism that would solve problems like the one above.

Cheers,
Michael

On 13. aug. 2012, at 07:00, Colin Perkins wrote:
It might be more productive to consider this sort of optimisation later, once we've defined the congestion control algorithm, and seen if it's needed. Optimising the header overheads on a not-yet-defined protocol seems premature, and it's not clear that we need enough feedback to require this sort of optimisation.
Colin
On 13 Aug 2012, at 02:03, Harald Alvestrand wrote:
Is there value in going completely extreme and defining a generic header extension for piggybacking RTCP in RTP packets?
That way, whatever we define in RTCP can also be carried as a header extension, if it makes sense to do so.
Of course, it may rarely make sense.
On 08/12/2012 06:26 AM, Randell Jesup wrote:
On 8/11/2012 6:50 PM, Mo Zanaty (mzanaty) wrote:
Michael Welzl <michawe@ifi.uio.no> wrote:
Michael Welzl <michawe@ifi.uio.no> wrote:
...piggybacking ACKs on RTP payload...
Randell Jesup <randell-ietf@jesup.org> wrote:
...use header extensions as an alternate channel for RTCP...
Several implementations do either or both of these things, in proprietary ways obviously. I think both should be standardized, although this may be an unpopular view in AVT.
I *may* agree, though I also agree it may be unpopular.
I've had to do it because of SBCs that were configured not to pass RTCP at all and that didn't like RTCP on the RTP port, and I needed it for PLI etc. But that was a rare homegrown (though ISP-wide) SBC; even misconfigured SBCs typically support RTCP-mux, I imagine. And I did it before RTCP-mux was defined, which was part of why I supported RTCP-mux.
Once you start pushing enough data to double the normal packet rate, the per-packet overhead can get to be a problem, especially if you're on a low-bandwidth link. Instead of (say) 30 video + 50 audio packets per second, you have an additional 80. Sending them in header extensions would save around 20Kbps, perhaps a little more - marginal if you have 250Kbps or up, but significant if you have (say) 128Kbps, and already are losing around 25Kbps to overhead. My old phones would run 30fps down to total bandwidth of as low as 60-80kbps, though it was running QCIF down that low (plus iLBC 20ms - 15Kbps+overhead, so video payload got as low as 40, even 30Kbps). At those bitrates, 20Kbps is huge.
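The per-packet overhead arithmetic above can be sanity-checked with a short sketch. The header sizes below are assumptions for illustration (IPv4 + UDP plus a minimal RTCP feedback packet), not figures taken from the thread:

```python
# Rough check of the per-packet overhead argument above.
# Assumed sizes: IPv4 (20 B) + UDP (8 B) headers, plus a minimal RTCP
# feedback packet body of ~12 bytes.

IP_UDP = 20 + 8          # bytes of IP + UDP headers per packet
RTCP_MIN = 12            # assumed minimal RTCP feedback packet body

feedback_pps = 30 + 50   # one feedback packet per media packet (video + audio)
bytes_per_fb = IP_UDP + RTCP_MIN
extra_bps = feedback_pps * bytes_per_fb * 8

print(f"{feedback_pps} feedback pkts/s -> {extra_bps / 1000:.1f} kbps extra")
# -> 80 feedback pkts/s -> 25.6 kbps extra
```

Under these assumptions the standalone feedback packets cost roughly 25 kbps, almost all of it IP/UDP/RTCP framing; piggybacking the same data in RTP header extensions avoids that framing entirely, which is where a saving of "around 20Kbps" comes from.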
I'll admit given RTCP-mux (and assuming nothing interferes with negotiation of it!), the argument is largely one of efficiency in bandwidth and packet-rate. And this is much less of an issue if you're not sending circa 1 feedback message per packet.
I assume this would fall in CORE.

On 8/13/2012 3:28 AM, Michael Welzl wrote:
Hm, I think I don't agree with considering such optimizations later: the way I read the charter, we're providing a framework for cc. algorithms, i.e. *the* algorithm doesn't exist.
I believe that we don't need to decide *now* on whether to ask for such a mechanism or to assume it's available. I do think that decision weighs into how we evaluate certain fundamental aspects of answering the questions posed to the assumed Working Group, such as whether the algorithm runs (primarily) on the sender or receiver side.

We can base an algorithm option on this proposal, with the assumption that we can either rev RTCP/AVPF to allow us to send RTCP feedback packets every RTT (or so) and live with this limiting the algorithm to somewhat higher-bandwidth connections (if you prefer: to have lower quality at the same bandwidth/link), or we can define a header extension to limit the cost of such an algorithm. I think we have a pretty clear idea what the general high-feedback options are.

Note my previous bandwidth comparison assumed RTP/RTCP, not SRTP/SRTCP: without spending a bunch of time, it's roughly 32Kbps base overhead for 30fps low-bandwidth SRTP, around 68Kbps for low-RTT low-bandwidth SRTP+feedback, and somewhere around 45Kbps for SRTP+header-extension-feedback. (This makes many assumptions about frequency and size of feedback.)
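The 32/68/45 Kbps figures can be roughly reproduced under one plausible set of assumptions. The per-packet sizes below are guesses for illustration, not the author's actual arithmetic:

```python
# Plausible reconstruction of the SRTP overhead comparison above; every
# per-packet size here is an assumption, not a figure given in the thread.
MEDIA_PPS = 30 + 50                  # 30 fps video + 20 ms audio packets/s
SRTP_HDR = 20 + 8 + 12 + 10          # IP + UDP + RTP + SRTP auth tag
SRTCP_FB = 20 + 8 + 12 + 4 + 10      # IP + UDP + RTCP FB + SRTCP index + tag
EXT_BYTES = 20                       # assumed per-packet feedback extension

base = MEDIA_PPS * SRTP_HDR * 8                 # media framing only
rtcp_fb = base + MEDIA_PPS * SRTCP_FB * 8       # one SRTCP FB per media pkt
ext_fb = base + MEDIA_PPS * EXT_BYTES * 8       # piggybacked feedback

print(f"base {base / 1000:.0f}k, SRTCP feedback {rtcp_fb / 1000:.0f}k, "
      f"header-ext feedback {ext_fb / 1000:.0f}k")
# -> base 32k, SRTCP feedback 67k, header-ext feedback 45k
```

With these guesses the three totals land close to the 32/68/45 Kbps in the text, which suggests the comparison is at least internally consistent.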
It's not hard to construct cases where limiting feedback may be a bad idea. For example: the queue grows => before the sender can notice it or react, you lose just the few feedback packets that are sent, the sender has to resort to a timeout => even more queue growth due to slow reaction; this may seem a bit artificial, but do we already want to make a decision to live with this situation?
You may, but I wouldn't want to pre-answer that question. Overall quality (especially in normal situations) and overall stability (including edge/corner cases) are likely to be significant measures for this protocol. Note that we may well be able to accept poor performance in corner cases if normal-case quality is higher. -- Randell Jesup randell-ietf@jesup.org

Hi,

On 13. aug. 2012, at 09:50, Randell Jesup wrote:
On 8/13/2012 3:28 AM, Michael Welzl wrote:
Hm, I think I don't agree with considering such optimizations later: the way I read the charter, we're providing a framework for cc. algorithms, i.e. *the* algorithm doesn't exist.
I believe that we don't need to decide *now* on whether to ask for such a mechanism or to assume it's available. I do think that decision weighs into how we evaluate certain fundamental aspects of answering the questions posed to the assumed Working Group, such as on whether the algorithm runs (primarily) on the sender or receiver sides.
Just to be clear about the sender / receiver side thing, the amount of feedback is intrinsically bound to that decision - using the generic information that cc. mechanisms build upon (timers, sequence numbers, i.e. whether packets made it or not), anything that can be done on the receiver side can also be moved to the sender side, with the possible disadvantage of having more feedback.

If we define a generic framework for feedback and put the cc. mechanism on the sender side, we have exactly what pluggable congestion control in the Linux or FreeBSD TCP implementation already gives us: we can replace the sender side as we wish, without requiring any change on the receiver side. That is, if we define a generic feedback method, whatever is done at the sender side will work with any receiver that implements our method.

This game could also be played the other way 'round. Any congestion control mechanism eventually decides about a send rate. We could generically define to send the rate to the sender, and define for the sender to use that rate upon getting it, or else have a timeout after some predefined interval. Then, any mechanism could be placed on the receiver side, possibly limiting the feedback to the sender, and it's all going to work fine as long as the sender follows our generic rules. I still think that we should allow for that feedback to be very frequent (whenever that's acceptable), and for that case it would be good to have very small ACKs.

Yup, I think these are the two options we have, and I think we can and should decide which route we're going to take irrespective of the mechanism proposals on the table. All mechanisms can be implemented both ways. The receiver-based method seems more flexible to me, as we CAN better limit the feedback with it if we want to. I'm however not sure if a CM-like functionality (e.g. my FSE proposal) for controlling multiple streams would work well with a receiver-based design.

Cheers, Michael
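The second option above ("receiver runs the algorithm, sender follows the signalled rate, with a timeout fallback") can be sketched in a few lines. The class name, rates, and the 1-second timeout below are illustrative assumptions, not part of any proposal in the thread:

```python
class RateFollowingSender:
    """Sketch of the receiver-side design: the receiver runs the cc
    algorithm and feeds back only a target rate; the sender uses that rate
    while feedback is fresh, and falls back after a predefined interval."""

    def __init__(self, initial_rate: int, timeout: float = 1.0):
        self.rate = initial_rate
        self.timeout = timeout          # "predefined interval" from the text
        self.last_feedback = 0.0

    def on_rate_feedback(self, rate: int, now: float) -> None:
        # Receiver told us a rate; remember it and when we heard it.
        self.rate = rate
        self.last_feedback = now

    def current_rate(self, now: float, fallback: int) -> int:
        # If feedback stops arriving (e.g. lost under congestion), don't
        # keep using a stale rate - drop to the fallback instead.
        if now - self.last_feedback > self.timeout:
            return fallback
        return self.rate

s = RateFollowingSender(initial_rate=500_000)
s.on_rate_feedback(300_000, now=10.0)
print(s.current_rate(now=10.5, fallback=64_000))   # fresh feedback: 300000
print(s.current_rate(now=12.0, fallback=64_000))   # timed out: 64000
```

The point of the sketch is that the sender-side rule is generic: any receiver-side algorithm works unchanged as long as it emits a rate often enough to beat the timeout.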

Maybe I wasn't clear. I wasn't saying limit the feedback; rather, let's not worry about optimising transport of the feedback until we know what feedback we want.

Colin

On 13 Aug 2012, at 10:28, Michael Welzl wrote:
Hm, I think I don't agree with considering such optimizations later: the way I read the charter, we're providing a framework for cc. algorithms, i.e. *the* algorithm doesn't exist.
It's not hard to construct cases where limiting feedback may be a bad idea. For example: the queue grows => before the sender can notice it or react, you lose just the few feedback packets that are sent, the sender has to resort to a timeout => even more queue growth due to slow reaction; this may seem a bit artificial, but do we already want to make a decision to live with this situation?
If decisions follow the sequence: 1) first algorithm: has limited feedback 2) define feedback just for that algorithm, assume it works for all 3) more algorithms...
... then we may be putting ourselves in a corner that we can't get out of, making it impossible to define a mechanism that would solve problems like the one above.
Cheers, Michael
-- Colin Perkins http://csperkins.org/

Hm, hm, hm. I'm not sure about that, either. Let me try to sketch what I think would be good design and see what you people say - if there is a chance to get something like that to actually work:

First of all, I think that, to keep queues low, the sender needs as much feedback as it can possibly get. Say, one ACK for every packet. The amount of that feedback can be too much, but then the right thing to do would be to use some form of ACK congestion control, to reduce the amount of this feedback *only* when needed but generally send as much as possible. There are working methods for doing that, and they're not too complicated.

Clearly, since we're talking about a lot of feedback here, these packets should be as small as possible. Maybe there are forms of RTP/RTCP header compression that we could use for that? Or, do we really need to even require an RTP/RTCP header for them? The content of these packets is general congestion-control-relevant feedback about the path - we know quite well from experience what sort of information should go in them: not much more than a sequence number, and possibly a timer. Anyway, I think it's clear that this can be generically designed.

Secondly, I think that mechanism-specific RTCP feedback is welcome and necessary, for higher-level optimizations (giving information to codecs that they should react upon). Maybe also for more - but these could be larger packets, and sent much less frequently.

Now I wonder: are these very small high-frequency ACK packets just a designer's dream, or are they something that could possibly be made to work in some way?

Cheers, Michael

On 13. aug. 2012, at 10:12, Colin Perkins wrote:
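As a rough illustration of how small such an ACK could be, here is one hypothetical wire format - the field widths are assumptions for illustration, not a proposal from the thread: a 16-bit sequence number plus a 24-bit millisecond receive timestamp, five bytes per ACK before any IP/UDP framing:

```python
# Hypothetical minimal ACK: sequence number + receive timestamp, 5 bytes.
import struct

def pack_ack(seq: int, recv_ts_ms: int) -> bytes:
    # 2 bytes of sequence number + 3 bytes of timestamp (milliseconds,
    # wrapping every ~4.6 hours); both fields wrap like RTP counters do.
    ts = recv_ts_ms & 0xFFFFFF
    return struct.pack("!HBH", seq & 0xFFFF, ts >> 16, ts & 0xFFFF)

def unpack_ack(data: bytes):
    seq, ts_hi, ts_lo = struct.unpack("!HBH", data)
    return seq, (ts_hi << 16) | ts_lo

ack = pack_ack(seq=4242, recv_ts_ms=1_234_567)
print(len(ack), unpack_ack(ack))   # -> 5 (4242, 1234567)
```

Even at one ACK per packet, five bytes of payload is dwarfed by the 28+ bytes of IP/UDP framing, which is why the thread keeps coming back to header compression or piggybacking rather than shrinking the ACK itself.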
Maybe I wasn't clear. I wasn't saying limit the feedback; rather let's not worry about optimising transport of the feedback until we know what feedback we want.
Colin

On 08/13/2012 10:34 AM, Michael Welzl wrote:
Hm, hm, hm. I'm not sure about that, either.
Let me try to sketch what I think would be good design and see what you people say - if there is a chance to get something like that to actually work:
First of all, I think that, to keep queues low, the sender needs as much feedback as it can possibly get. Say, one ACK for every packet. The amount of that feedback can be too much, but then the right thing to do would be to use some form of ACK congestion control, to reduce the amount of this feedback *only* when needed but generally send as much as possible. There are working methods for doing that, and they're not too complicated.

I've already opted out at this point. The algorithm needs all the information it can get, but the idea that the algorithm is only executed at the sender is something I don't see as a given.
Data reduction at the receiver and considering the feedback volume / result tradeoff is not unusual.
Clearly, since we're talking about a lot of feedback here, these packets should be as small as possible. Maybe there are forms of RTP/RTCP header compression that we could use for that? Or, do we really need to even require a RTP/RTCP header for them?
The content of these packets is general congestion control relevant feedback about the path - we know quite well from experience what sort of information should go in them: not much more than a sequence number, and possibly a timer. Anyway, I think it's clear that this can be generically designed.
Secondly, I think that mechanism-specific RTCP feedback is welcome and necessary, for higher-level optimizations (giving information to codecs that they should react upon). Maybe also for more - but these could be larger packets, and sent much less frequently.
Now I wonder: are these very small high frequency ACK packets just a designer's dream, or are they something that could possibly be made to work in some way?
Cheers, Michael

On 13. aug. 2012, at 11:49, Harald Alvestrand wrote:
On 08/13/2012 10:34 AM, Michael Welzl wrote:
Hm, hm, hm. I'm not sure about that, either.
Let me try to sketch what I think would be good design and see what you people say - if there is a chance to get something like that to actually work:
First of all, I think that, to keep queues low, the sender needs as much feedback as it can possibly get. Say, one ACK for every packet. The amount of that feedback can be too much, but then the right thing to do would be to use some form of ACK congestion control, to reduce the amount of this feedback *only* when needed but generally send as much as possible. There are working methods for doing that, and they're not too complicated.

I've already opted out at this point. The algorithm needs all the information it can get, but the idea that the algorithm is only executed at the sender is something I don't see as a given.
Data reduction at the receiver and considering the feedback volume / result tradeoff is not unusual.
Sorry if I created a confusion here: my second email, which answers Colin's, has a much clearer overview of what I think the design space is regarding sender-side or receiver-side control. What I describe in that email as "receiver-side" has pretty much the same sender behavior as the RRTCC proposal, BTW, IIRC (haven't looked at the proposal for a while now). Yes, this looks more flexible regarding the amount of feedback, but it may be problematic regarding the control of multiple streams.

However, whether you put the control in the sender or receiver doesn't change what this first email suggests: that you may need a lot of feedback, whenever that's possible, to avoid queues, and that we may need very small ACKs for that if we can define them.

Cheers, Michael

ARGH!
Sorry if I created a confusion here: my second email, which answers Colin's, has a much clearer overview of what I think the design space is regarding sender-side or receiver-side control.
I meant: Randell's !!

Michael

PS: I meant THAT text:

Just to be clear about the sender / receiver side thing, the amount of feedback is intrinsically bound to that decision - using the generic information that cc. mechanisms build upon (timers, sequence numbers, i.e. whether packets made it or not), anything that can be done on the receiver side can also be moved to the sender side, with the possible disadvantage of having more feedback.

If we define a generic framework for feedback and put the cc. mechanism on the sender side, we have exactly what pluggable congestion control in the Linux or FreeBSD TCP implementation already gives us: we can replace the sender side as we wish, without requiring any change on the receiver side. That is, if we define a generic feedback method, whatever is done at the sender side will work with any receiver that implements our method.

This game could also be played the other way 'round. Any congestion control mechanism eventually decides about a send rate. We could generically define to send the rate to the sender, and define for the sender to use that rate upon getting it, or else have a timeout after some predefined interval. Then, any mechanism could be placed on the receiver side, possibly limiting the feedback to the sender, and it's all going to work fine as long as the sender follows our generic rules. I still think that we should allow for that feedback to be very frequent (whenever that's acceptable), and for that case it would be good to have very small ACKs.

Yup, I think these are the two options we have, and I think we can and should decide which route we're going to take irrespective of the mechanism proposals on the table. All mechanisms can be implemented both ways. The receiver-based method seems more flexible to me, as we CAN better limit the feedback with it if we want to. I'm however not sure if a CM-like functionality (e.g. my FSE proposal) for controlling multiple streams would work well with a receiver-based design.

On Mon, Aug 13, 2012 at 12:20 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
ARGH!
Sorry if I created a confusion here: my second email, which answers Colin's, has a much clearer overview of what I think the design space is regarding sender-side or receiver-side control.
I meant: Randell's !!
Michael
PS: I meant THAT text:
Just to be clear about the sender / receiver side thing, the amount of feedback is intrinsically bound to that decision - using the generic information that cc. mechanisms build upon (timers, sequence numbers, i.e. whether packets made it or not), anything that can be done on the receiver side can also be moved to the sender side, with the possible disadvantage of having more feedback.
If we define a generic framework for feedback and put the cc. mechanism on the sender side, we have exactly what pluggable congestion control in the Linux or FreeBSD TCP implementation already gives us: we can replace the sender side as we wish, without requiring any change on the receiver side. That is, if we define a generic feedback method, whatever is done at the sender side will work with any receiver that implements our method.
This game could also be played the other way 'round. Any congestion control mechanism eventually decides about a send rate. We could generically define to send the rate to the sender, and define for the sender to use that rate upon getting it, or else have a timeout after some predefined interval. Then, any mechanism could be placed on the receiver side, possibly limiting the feedback to the sender, and it's all going to work fine as long as the sender follows our generic rules. I still think that we should allow for that feedback to be very frequent (whenever that's acceptable), and for that case it would be good to have very small ACKs.
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
Yup, I think these are the two options we have, and I think we can and should decide which route we're going to take irrespective of the mechanism proposals on the table. All mechanisms can be implemented both ways.
The receiver-based method seems more flexible to me, as we CAN better limit the feedback with it if we want to. I'm however not sure if a CM-like functionality (e.g. my FSE proposal) for controlling multiple streams would work well with a receiver-based design.
Might be simpler to have the FSE at the receiver as well in that case?
_______________________________________________ Rtp-congestion mailing list Rtp-congestion@alvestrand.no http://www.alvestrand.no/mailman/listinfo/rtp-congestion

On 13. aug. 2012, at 13:43, Stefan Holmer wrote:
On Mon, Aug 13, 2012 at 12:20 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
ARGH!
Sorry if I created a confusion here: my second email, which answers Colin's, has a much clearer overview of what I think the design space is regarding sender-side or receiver-side control.
I meant: Randell's !!
Michael
PS: I meant THAT text:
Just to be clear about the sender / receiver side thing, the amount of feedback is intrinsically bound to that decision - using the generic information that cc. mechanisms build upon (timers, sequence numbers, i.e. whether packets made it or not), anything that can be done on the receiver side can also be moved to the sender side, with the possible disadvantage of having more feedback.
If we define a generic framework for feedback and put the cc. mechanism on the sender side, we have exactly what pluggable congestion control in the Linux or FreeBSD TCP implementation already gives us: we can replace the sender side as we wish, without requiring any change on the receiver side. That is, if we define a generic feedback method, whatever is done at the sender side will work with any receiver that implements our method.
This game could also be played the other way 'round. Any congestion control mechanism eventually decides about a send rate. We could generically define to send the rate to the sender, and define for the sender to use that rate upon getting it, or else have a timeout after some predefined interval. Then, any mechanism could be placed on the receiver side, possibly limiting the feedback to the sender, and it's all going to work fine as long as the sender follows our generic rules. I still think that we should allow for that feedback to be very frequent (whenever that's acceptable), and for that case it would be good to have very small ACKs.
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
I agree, that sounds like a good method to me.
Yup, I think these are the two options we have, and I think we can and should decide which route we're going to take irrespective of the mechanism proposals on the table. All mechanisms can be implemented both ways.
The receiver-based method seems more flexible to me, as we CAN better limit the feedback with it if we want to. I'm however not sure if a CM-like functionality (e.g. my FSE proposal) for controlling multiple streams would work well with a receiver-based design.
Might be simpler to have the FSE at the receiver as well in that case?
Yes, I guess that's a good idea anyway. The assumption for the FSE is that it gives congestion controllers information about all flows that share the same bottleneck. In RTCWEB, such flows can be trivially identified by knowing that they use the same UDP port number pair, which would lead to identifying the same set of flows on the sender- and receiver side. However, we should not exclude the possibility of implementing a measurement-based shared bottleneck detection scheme which would feed information into the FSE...
In that case, a sender may be in control of multiple streams that go to various destinations yet share the same bottleneck. Similarly, a receiver may have incoming streams that come from various sources yet share the same bottleneck.
So yes, this calls for having an FSE in the sender AND receiver I think.
Cheers, Michael
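The FSE idea might look roughly like this in code. Flows register under a bottleneck key (e.g. the UDP port pair, as suggested above, or a key produced by a measurement-based detector); the even split of the aggregate rate across flows sharing a key is purely an illustrative assumption, since the actual FSE allocation policy is not defined here:

```python
from collections import defaultdict

class FlowStateExchange:
    """Minimal FSE sketch: flows sharing a bottleneck key split the
    aggregate rate evenly. All names and the even-split policy are
    illustrative assumptions, not the FSE proposal's actual rules."""

    def __init__(self):
        self.groups = defaultdict(dict)  # bottleneck key -> {flow_id: desired_bps}

    def register(self, key, flow_id, desired_bps):
        self.groups[key][flow_id] = desired_bps

    def allowed_rate(self, key, flow_id, aggregate_bps):
        flows = self.groups[key]
        # Share the aggregate equally among flows at the same bottleneck,
        # but never give a flow more than it asked for.
        return min(flows[flow_id], aggregate_bps // len(flows))
```

The same registry could live at the sender, the receiver, or both, which is the point being made above.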

On Mon, Aug 13, 2012 at 1:54 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
On 13. aug. 2012, at 13:43, Stefan Holmer wrote:
On Mon, Aug 13, 2012 at 12:20 PM, Michael Welzl <michawe@ifi.uio.no> wrote:
ARGH!
Sorry if I created a confusion here: my second email, which answers Colin's, has a much clearer overview of what I think the design space is regarding sender-side or receiver-side control.
I meant: Randell's !!
Michael
PS: I meant THAT text:
Just to be clear about the sender / receiver side thing, the amount of feedback is intrinsically bound to that decision - using the generic information that cc. mechanisms build upon (timers, sequence numbers, i.e. whether packets made it or not), anything that can be done on the receiver side can also be moved to the sender side, with the possible disadvantage of having more feedback.
If we define a generic framework for feedback and put the cc. mechanism on the sender side, we have exactly what pluggable congestion control in the Linux or FreeBSD TCP implementation already gives us: we can replace the sender side as we wish, without requiring any change on the receiver side. That is, if we define a generic feedback method, whatever is done at the sender side will work with any receiver that implements our method.
This game could also be played the other way 'round. Any congestion control mechanism eventually decides about a send rate. We could generically define to send the rate to the sender, and define for the sender to use that rate upon getting it, or else have a timeout after some predefined interval. Then, any mechanism could be placed on the receiver side, possibly limiting the feedback to the sender, and it's all going to work fine as long as the sender follows our generic rules. I still think that we should allow for that feedback to be very frequent (whenever that's acceptable), and for that case it would be good to have very small ACKs.
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
I agree, that sounds like a good method to me.
Yup, I think these are the two options we have, and I think we can and should decide which route we're going to take irrespective of the mechanism proposals on the table. All mechanisms can be implemented both ways.
The receiver-based method seems more flexible to me, as we CAN better limit the feedback with it if we want to. I'm however not sure if a CM-like functionality (e.g. my FSE proposal) for controlling multiple streams would work well with a receiver-based design.
Might be simpler to have the FSE at the receiver as well in that case?
Yes, I guess that's a good idea anyway. The assumption for the FSE is that it gives congestion controllers information about all flows that share the same bottleneck. In RTCWEB, such flows can be trivially identified by knowing that they use the same UDP port number pair, which would lead to identifying the same set of flows on the sender- and receiver side. However, we should not exclude the possibility of implementing a measurement-based shared bottleneck detection scheme which would feed information into the FSE...
In that case, a sender may be in control of multiple streams that go to various destinations yet share the same bottleneck. Similarly, a receiver may have incoming streams that come from various sources yet share the same bottleneck.
So yes, this calls for having an FSE in the sender AND receiver I think.
Agreed.
Cheers, Michael

Stefan Holmer <stefan@webrtc.org> wrote:
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
This, alas, is optimizing the wrong thing.
We can, indeed, send very-short feedback when no congestion is evident to the receiver. But in the absence of a heartbeat, we're at the mercy of Murphy's law whether the signal of congestion detected reaches the sender. (Indeed, one of the likely sources of congestion is a wireless link which is likely to show congestion in both directions.)
Thus, I recommend continuous feedback on a heartbeat schedule, so that the absence of feedback could start a backing-off of send rate. We're working in unreliable IP packets, not reliable "circuits".
The feedback in the absence of congestion could be essentially zero length -- and I hope we will consider folding it into a reverse path media packet (where the "receiver" detecting congestion is "sender" of a corresponding media stream in the other direction). This _will_ be a common case, and quite possibly the most-common case.
-- John Leslie <john@jlc.net>
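The heartbeat rule - back off whenever scheduled feedback stops arriving, since silence may itself mean congestion on the feedback path - could be sketched like this at the sender. The 200 ms interval, halving per missed beat, and the rate floor are illustrative assumptions:

```python
class HeartbeatWatchdog:
    """Sender-side sketch: feedback is expected on a fixed schedule even
    when it carries nothing, and each fully missed interval cuts the
    send rate. Interval and back-off factor are assumed values."""

    def __init__(self, rate_bps, interval=0.2, floor_bps=10_000):
        self.rate_bps = rate_bps
        self.interval = interval
        self.floor_bps = floor_bps
        self.last_beat = 0.0

    def on_heartbeat(self, now):
        self.last_beat = now

    def poll(self, now):
        # One halving step per full interval with no feedback at all.
        missed = int((now - self.last_beat) / self.interval)
        for _ in range(missed):
            self.rate_bps = max(self.rate_bps // 2, self.floor_bps)
        if missed:
            self.last_beat = now  # don't re-penalize the same gap
        return self.rate_bps
```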

On 13. aug. 2012, at 16:07, John Leslie wrote:
Stefan Holmer <stefan@webrtc.org> wrote:
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
This, alas, is optimizing the wrong thing.
We can, indeed, send very-short feedback when no congestion is evident to the receiver. But in the absence of a heartbeat, we're at the mercy of Murphy's law whether the signal of congestion detected reaches the sender. (Indeed, one of the likely sources of congestion is a wireless link which is likely to show congestion in both directions.)
Thus, I recommend continuous feedback on a heartbeat schedule, so that the absence of feedback could start a backing-off of send rate. We're working in unreliable IP packets, not reliable "circuits".
I agree, but Stefan already wrote "The rest of the time we can simply trigger the feedback periodically", which covers what you call a "heartbeat schedule", I guess?
The feedback in the absence of congestion could be essentially zero length -- and I hope we will consider folding it into a reverse path media packet (where the "receiver" detecting congestion is "sender" of a corresponding media stream in the other direction). This _will_ be a common case, and quite possibly the most-common case.
I do think that people are considering that - I have the impression that there is mostly agreement with the idea of piggybacking feedback onto RTP packets in some (but which, is not yet clear) way whenever that's possible. Cheers, Michael

Sorry for the spasm of email--- I am getting caught up on correspondence.
Note that the periodic feedback can convey more than a single temporal datapoint. As I discussed in a previous email, the receiver can inform the sender about trends or discontinuities in the data. This is particularly useful if the sender and receiver can both recognize/signal discontinuities in the data.
So, I am arguing for periodic feedback that has a structure that allows a rich information flow back to the sender. Based on such data flows, the sender would like to glean "I am OK when I send at 2 Mbps or 3 Mbps, but the buffers are slowly building when I burst at 4 Mbps and catastrophically congested when I burst at 5 Mbps". It could also glean "I am sending at a constant rate, but I seem to be getting sporadic cross traffic (or fade) that occasionally drives delay". It obviously can also glean "Based on increasing delay and the onset of loss, the channel can no longer support this rate, I need to downshift" and "no loss and constant delay --- I am fine" with some very short, infrequent messages. Knowing the statistical/temporal detail about delay is important, and worth some resources.
Note that in addition to periodic feedback, timely feedback when anomalous conditions occur is probably desirable.
So --- IMHO, this is not a "do we do congestion control in the sender or the receiver?" question. It is a question of cooperation between the two to detect and set the correct rate. Knowing when to upshift is as important as knowing when to downshift.
bvs
-----Original Message-----
From: rtp-congestion-bounces@alvestrand.no [mailto:rtp-congestion-bounces@alvestrand.no] On Behalf Of John Leslie
Sent: Monday, August 13, 2012 10:08 AM
To: Stefan Holmer
Cc: rtp-congestion@alvestrand.no
Subject: [MARKETING] Re: [R-C] Feedback mechanisms
Stefan Holmer <stefan@webrtc.org> wrote:
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
This, alas, is optimizing the wrong thing.
We can, indeed, send very-short feedback when no congestion is evident to the receiver. But in the absence of a heartbeat, we're at the mercy of Murphy's law whether the signal of congestion detected reaches the sender. (Indeed, one of the likely sources of congestion is a wireless link which is likely to show congestion in both directions.)
Thus, I recommend continuous feedback on a heartbeat schedule, so that the absence of feedback could start a backing-off of send rate. We're working in unreliable IP packets, not reliable "circuits".
The feedback in the absence of congestion could be essentially zero length -- and I hope we will consider folding it into a reverse path media packet (where the "receiver" detecting congestion is "sender" of a corresponding media stream in the other direction). This _will_ be a common case, and quite possibly the most-common case.
-- John Leslie <john@jlc.net>

On Aug 14, 2012, at 5:36 PM, Bill Ver Steeg (versteb) wrote:
Sorry for the spasm of email--- I am getting caught up on correspondence.
Note that the periodic feedback can convey more than a single temporal datapoint. As I discussed in a previous email, the receiver can inform the sender about trends or discontinuities in the data. This is particularly useful if the sender and receiver can both recognize/signal discontinuities in the data.
So, I am arguing for periodic feedback that has a structure that allows a rich information flow back to the sender. Based on such data flows, the sender would like to glean "I am OK when I send at 2 Mbps or 3 Mbps, but the buffers are slowly building when I burst at 4 Mbps and catastrophically congested when I burst at 5 Mbps". It could also glean "I am sending at a constant rate, but I seem to be getting sporadic cross traffic (or fade) that occasionally drives delay". It obviously can also glean "Based on increasing delay and the onset of loss, the channel can no longer support this rate, I need to downshift" and "no loss and constant delay --- I am fine" with some very short, infrequent messages. Knowing the statistical/temporal detail about delay is important, and worth some resources.
Note that in addition to periodic feedback, timely feedback when anomalous conditions occur is probably desirable.
So --- IMHO, this is not a "do we do congestion control in the sender or the receiver?" question. It is a question of cooperation between the two to detect and set the correct rate. Knowing when to upshift is as important as knowing when to downshift.
The tricky bit will be to avoid making it all too complex... Cheers, Michael

Michael said ---- The tricky bit will be to avoid making it all too complex...
Bill responds --- Indeed. Perhaps we can come up with a simple (extensible) data structure that contains the relevant information, and a way to have the sender specify when it is gathered/stored and when it is sent.
Basic senders will simply tell the receiver to gather the data every N seconds and send it every N seconds (plus when there are problems, as defined by the specification). More sophisticated senders could drive more complex relationships, asking the receiver to tabulate/send the data on demand in semi-real time.
This would allow a sophisticated sender to send varying data patterns and get the feedback they require, yet not require too much shared state between the sender and receiver. As long as the receiver is a simple state machine, I think we would be OK.
bvs
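A minimal sketch of such a sender-configured receiver state machine follows; the field names and the anomaly flag are hypothetical, chosen only to illustrate the gather/send split described above:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackConfig:
    """Sender-specified feedback schedule (illustrative field names)."""
    gather_interval_s: float = 1.0   # how often the receiver snapshots stats
    send_interval_s: float = 1.0     # how often it reports them
    send_on_anomaly: bool = True     # immediate report on loss/delay spikes

@dataclass
class ReceiverStateMachine:
    config: FeedbackConfig
    pending: list = field(default_factory=list)
    last_send: float = 0.0

    def on_gather(self, snapshot):
        # Called every gather_interval_s with the current loss/delay stats.
        self.pending.append(snapshot)

    def maybe_send(self, now, anomaly=False):
        due = (now - self.last_send) >= self.config.send_interval_s
        if due or (anomaly and self.config.send_on_anomaly):
            report, self.pending = self.pending, []
            self.last_send = now
            return report  # would be serialized into RTCP or an extension
        return None
```

The receiver stays a simple state machine; all the sophistication lives in whatever config the sender pushes down.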

I also like the direction this is heading. A few detailed comments -
There are two competing requirements -
1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asymmetric networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble.
2- The receiver/sender would like to be able to get enough information about loss and delay to sense congestion. In the RTP case, the receiver knows the one-way delay and loss rate every time it gets a packet. In an overly-simple model, the receiver would just ack every single packet with a short packet (containing a summary of the last few packets to protect against loss), and the sender would then know all of the gory details of the downstream delay/loss and the upstream delay/loss. This is clearly overkill.
So, our challenge is to design a feedback mechanism that is timely and information rich, yet does not break the bandwidth budget.
I suggest that we think along the lines of a message that conveys something like "I got N1 packets over T1 time with L1 loss and D1 delay and J1 jitter, then N2 packets over T2 time with L2 loss and D2 delay and J2 jitter, yada, yada, yada". In the simplest case, there is constant delay and no drop, and the packet is infrequently sent. In the more complex cases, a more verbose packet is sent - probably at a more frequent interval.
If we have a way for the sender to define the grouping of these semantics, it COULD send a series of shaped bursts that probe the network. For instance, it could send large packets at a certain rate for the steady state, then occasionally send back-to-back packets to probe for additional BW (this is the hard part of any rate adaptive algorithm....). It does not need the timing of every packet, but if it can define the feedback summary intervals, a sophisticated sender could do some pretty advanced things. A simple sender could choose to do simple things.
Note that the time intervals can overlap between successive feedback reports, thus providing some protection against loss of feedback packets.
Also note that we need to be crisp in our definition of these "delay", "jitter", etc. in such summarized reports. Are they max, min, average, mean? For the purposes of this email, I gloss over this detail, but it is important and will need to be addressed.
Bill VerSteeg
-----Original Message-----
From: rtp-congestion-bounces@alvestrand.no [mailto:rtp-congestion-bounces@alvestrand.no] On Behalf Of Mo Zanaty (mzanaty)
Sent: Saturday, August 11, 2012 6:51 PM
To: Michael Welzl (michawe@ifi.uio.no); Randell Jesup; rtp-congestion@alvestrand.no
Subject: Re: [R-C] Feedback mechanisms
Michael Welzl <michawe@ifi.uio.no> wrote:
...piggybacking ACKs on RTP payload...
Randell Jesup <randell-ietf@jesup.org> wrote:
...use header extensions as an alternate channel for RTCP...
Several implementations do either or both of these things, in proprietary ways obviously. I think both should be standardized, although this may be an unpopular view in AVT.
Note that there is no standardized generic ACK. I just submitted an errata to the RFC 4585 ABNF to clarify that "ack" (without parameters) is invalid since there is no generic ACK. There is only "ack rpsi" which is specific to H.263 Annex N, and "ack app" which is proprietary.
Beyond ACK, many other types of RTCP feedback would benefit from the efficiency of piggybacking in RTP header extensions. (NACK, ECN, FIR, TMMBR, PLI, SLI, etc.) So a general mechanism for all RTCP feedback may be more useful than a single mechanism for ACKs.
Mo
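One way such piggybacking could be framed is as a one-byte-header RTP extension element in the style of RFC 5285 (4-bit ID, 4-bit length-minus-one). This sketch only shows the framing; the feedback payload semantics are deliberately left open, and nothing here is a proposed wire format:

```python
import struct

def pack_fb_element(ext_id, fb_payload):
    """Pack one feedback blob as a one-byte-header extension element
    (RFC 5285 style). ext_id must be 1..14; payload 1..16 bytes."""
    assert 1 <= ext_id <= 14 and 1 <= len(fb_payload) <= 16
    header = (ext_id << 4) | (len(fb_payload) - 1)
    return struct.pack("B", header) + fb_payload

def unpack_fb_element(data):
    """Inverse of pack_fb_element: returns (ext_id, payload)."""
    header = data[0]
    ext_id, length = header >> 4, (header & 0x0F) + 1
    return ext_id, data[1:1 + length]
```

The 16-byte element limit is one concrete reason a general RTCP-in-extension mechanism would need to define its own segmentation or use the two-byte-header form for larger feedback messages.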

On Aug 14, 2012, at 4:33 PM, Bill Ver Steeg (versteb) wrote:
I also like the direction this is heading. A few detailed comments-
There are two competing requirements -
1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asymmetric networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble.
But there's a solution for that: ACK congestion control (RFC5690 for TCP, not really used up to now, and a part of DCCP)
2- The receiver/sender would like to be able to get enough information about loss and delay to sense congestion. In the RTP case, the receiver knows the one-way delay and loss rate every time it gets a packet. In an overly-simple model, the receiver would just ack every single packet with a short packet (containing a summary of the last few packets to protect against loss), and the sender would then know all of the gory details of the downstream delay/loss and the upstream delay/loss. This is clearly overkill.
So, our challenge is to design a feedback mechanism that is timely and information rich, yet does not break the bandwidth budget.
I like Stefan's statement, which - I think - fits in that context:
Worth noting that we have the option of only sending frequent feedback (e.g. once per packet) when we experience congestion. The rest of the time we can simply trigger the feedback periodically.
Cheers, Michael

Hi Bill,
In reference to your point (1) - I'd have thought the RMCAT flows would differ from the streaming/TCP community as they are generally going to be sending flows in both directions simultaneously, so we're often (with ADSL, Cable) going to be limited by that upstream capacity for actually sending media in most cases - so the sender's [b]ack-channel is going to be using the 'downstream' channel, which won't be limiting feedback much.
Piers
On 14 Aug 2012, at 15:33, Bill Ver Steeg (versteb) wrote:
I also like the direction this is heading. A few detailed comments-
There are two competing requirements -
1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asymmetric networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble.
2- The receiver/sender would like to be able to get enough information about loss and delay to sense congestion. In the RTP case, the receiver knows the one-way delay and loss rate every time it gets a packet. In an overly-simple model, the receiver would just ack every single packet with a short packet (containing a summary of the last few packets to protect against loss), and the sender would then know all of the gory details of the downstream delay/loss and the upstream delay/loss. This is clearly overkill.
So, our challenge is to design a feedback mechanism that is timely and information rich, yet does not break the bandwidth budget.
I suggest that we think along the lines of a message that conveys something like " I got N1 packets over T1 time with L1 loss and D1 delay and J1 jitter, then N2 packets over T2 time with L2 loss and D2 delay and J2 jitter, yada, yada, yada". In the simplest case, there is constant delay and no drop, and the packet is infrequently sent. In the more complex cases, a more verbose packet is sent - probably at a more frequent interval.
If we have a way for the sender to define the grouping of these semantics, it COULD send a series of shaped bursts that probe the network. For instance, it could send large packets at a certain rate for the steady state, then occasionally send back-to-back packets to probe for additional BW (this is the hard part of any rate adaptive algorithm....). It does not need the timing of every packet, but if it can define the feedback summary intervals, a sophisticated sender could do some pretty advanced things. A simple sender could choose to do simple things.
Note that the time intervals can overlap between successive feedback reports, thus providing some protection against loss of feedback packets.
Also note that we need to be crisp in our definition of these "delay", "jitter", etc. in such summarized reports. Are they max, min, average, mean? For the purposes of this email, I gloss over this detail, but it is important and will need to be addressed.
Bill VerSteeg
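Bill's interval-summary message could be represented roughly as below. Whether delay and jitter are max, min, or mean is exactly the open question he raises; the means used here are purely an assumption, and the loss field is left at zero because sequence-number gap tracking is omitted:

```python
from dataclasses import dataclass

@dataclass
class IntervalReport:
    """One "N packets over T time with L loss, D delay, J jitter" entry.
    The choice of mean for delay/jitter is an illustrative assumption."""
    start_s: float
    duration_s: float
    packets: int
    lost: int
    mean_delay_ms: float
    mean_jitter_ms: float

def summarize(samples, start_s, duration_s):
    """Build a report from (arrival_time_s, delay_ms) samples. Loss would
    come from sequence-number gaps (hypothetical helper, omitted here).
    Intervals may overlap between successive reports, as suggested above,
    to protect against loss of feedback packets."""
    window = [d for t, d in samples if start_s <= t < start_s + duration_s]
    if not window:
        return IntervalReport(start_s, duration_s, 0, 0, 0.0, 0.0)
    mean = sum(window) / len(window)
    jitter = sum(abs(d - mean) for d in window) / len(window)
    return IntervalReport(start_s, duration_s, len(window), 0, mean, jitter)
```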
-----Original Message----- From: rtp-congestion-bounces@alvestrand.no [mailto:rtp-congestion-bounces@alvestrand.no] On Behalf Of Mo Zanaty (mzanaty) Sent: Saturday, August 11, 2012 6:51 PM To: Michael Welzl (michawe@ifi.uio.no); Randell Jesup; rtp-congestion@alvestrand.no Subject: Re: [R-C] Feedback mechanisms
Michael Welzl <michawe@ifi.uio.no> wrote:
...piggybacking ACKs on RTP payload...
Randell Jesup <randell-ietf@jesup.org> wrote:
...use header extensions as an alternate channel for RTCP...
Several implementations do either or both of these things, in proprietary ways obviously. I think both should be standardized, although this may be an unpopular view in AVT.
Note that there is no standardized generic ACK. I just submitted an errata to the RFC 4585 ABNF to clarify that "ack" (without parameters) is invalid since there is no generic ACK. There is only "ack rpsi" which is specific to H.263 Annex N, and "ack app" which is proprietary.
Beyond ACK, many other types of RTCP feedback would benefit from the efficiency of piggybacking in RTP header extensions. (NACK, ECN, FIR, TMMBR, PLI, SLI, etc.) So a general mechanism for all RTCP feedback may be more useful than a single mechanism for ACKs.
Mo

Piers-
This is somewhat true. For the general cases, there will be AV data flowing in both directions. However, for a significant portion of the time, the use cases will be driven by the available BW and quite asymmetric. For instance, if a remote worker is having a conference with somebody in a traditional office, the downstream flows may be HD and the upstream flow may be a thumbnail, or even just voice. Another common case is a multi-party conference in which there are N downstream AV flows and just 1 upstream AV flow. If we were to ack every packet, the upstream acks of the downstream flows could actually consume more BW than the upstream audio/video flows.
Some simple math follows for a worst case scenario (note that I am not recommending we do this, as it is simply the wrong way to proceed) - Downstream flow D = 6 Mbps ~= 500 packets/sec. If we were to do something onerous and ack every packet with 50 bytes of data, that would be ~200 Kbps upstream.
We can play games with ack sizes and piggybacking acks on data, but IMHO a per-packet ack is simply not the right way to start the design.
bvs
-----Original Message-----
From: Piers O'Hanlon [mailto:p.ohanlon@gmail.com]
Sent: Wednesday, August 15, 2012 7:08 AM
To: Bill Ver Steeg (versteb)
Cc: Mo Zanaty (mzanaty); Michael Welzl (michawe@ifi.uio.no); Randell Jesup; rtp-congestion@alvestrand.no
Subject: Re: [R-C] Feedback mechanisms
Hi Bill,
In reference to your point (1) - I'd have thought the RMCAT flows would differ from the streaming/TCP community as they are generally going to be sending flows in both directions simultaneously, so we're often (with ADSL, Cable) going to be limited by that upstream capacity for actually sending media in most cases - so the sender's [b]ack-channel is going to be using the 'downstream' channel, which won't be limiting feedback much.
Piers
On 14 Aug 2012, at 15:33, Bill Ver Steeg (versteb) wrote:
I also like the direction this is heading. A few detailed comments-
There are two competing requirements -
1- The upstream/return traffic is (generally) assumed to be a precious resource by the RTP community. This is partially due to multicast considerations (which we can mostly ignore here) and partially due to asymmetric networks. Note that the CableTV community has found it necessary to play games with TCP acks in Cable Modems because of upstream bandwidth limitations, even for unicast TCP traffic. This is an ugly hack that I will not defend here, but it is indicative of the problem space. The net-net is that we have to be somewhat conservative in our sending of acks, or we will end up in similar trouble.
2- The receiver/sender would like to be able to get enough information about loss and delay to sense congestion. In the RTP case, the receiver knows the one-way delay and loss rate every time it gets a packet. In an overly-simple model, the receiver would just ack every single packet with a short packet (containing a summary of the last few packets to protect against loss), and the sender would then know all of the gory details of the downstream delay/loss and the upstream delay/loss. This is clearly overkill.
So, our challenge is to design a feedback mechanism that is timely and information rich, yet does not break the bandwidth budget.
I suggest that we think along the lines of a message that conveys something like " I got N1 packets over T1 time with L1 loss and D1 delay and J1 jitter, then N2 packets over T2 time with L2 loss and D2 delay and J2 jitter, yada, yada, yada". In the simplest case, there is constant delay and no drop, and the packet is infrequently sent. In the more complex cases, a more verbose packet is sent - probably at a more frequent interval.
If we have a way for the sender to define the grouping of these semantics, it COULD send a series of shaped bursts that probe the network. For instance, it could send large packets at a certain rate for the steady state, then occasionally send back-to-back packets to probe for additional BW (this is the hard part of any rate adaptive algorithm....). It does not need the timing of every packet, but if it can define the feedback summary intervals, a sophisticated sender could do some pretty advanced things. A simple sender could choose to do simple things.
Note that the time intervals can overlap between successive feedback reports, thus providing some protection against loss of feedback packets.
Also note that we need to be crisp in our definitions of "delay", "jitter", etc. in such summarized reports. Are they max, min, average, mean? For the purposes of this email, I gloss over this detail, but it is important and will need to be addressed.
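A minimal sketch of what such an interval-summary report could look like on the receiver side; the field names, the fixed-window grouping, and the crude max-minus-min "jitter" are illustrative assumptions rather than a proposed wire format (and the jitter shortcut is exactly the kind of definitional looseness that would need to be pinned down):

```python
from dataclasses import dataclass

@dataclass
class IntervalSummary:
    # One "I got N packets over T time with L loss, D delay, J jitter" entry.
    n_packets: int
    duration_ms: int
    losses: int
    mean_delay_ms: float
    jitter_ms: float

def summarize(observations, interval_ms):
    """Group per-packet (arrival_ms, delay_ms, lost) observations into
    fixed-length interval summaries. A real report would likely let the
    sender choose the interval boundaries; fixed windows keep the sketch
    short. "Jitter" here is simply max-minus-min delay, a placeholder
    definition."""
    if not observations:
        return []
    start = observations[0][0]
    buckets = {}
    for arrival_ms, delay_ms, lost in observations:
        key = (arrival_ms - start) // interval_ms
        buckets.setdefault(key, []).append((delay_ms, lost))
    out = []
    for _, obs in sorted(buckets.items()):
        delays = [d for d, lost in obs if not lost]
        mean = sum(delays) / len(delays) if delays else 0.0
        jitter = (max(delays) - min(delays)) if delays else 0.0
        out.append(IntervalSummary(
            n_packets=len(obs),
            duration_ms=interval_ms,
            losses=sum(1 for _, lost in obs if lost),
            mean_delay_ms=mean,
            jitter_ms=jitter,
        ))
    return out
```

In the constant-delay, no-loss case each entry compresses many packets into a few fields, which is what keeps this kind of report cheap to send infrequently.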
Bill VerSteeg
-----Original Message----- From: rtp-congestion-bounces@alvestrand.no [mailto:rtp-congestion-bounces@alvestrand.no] On Behalf Of Mo Zanaty (mzanaty) Sent: Saturday, August 11, 2012 6:51 PM To: Michael Welzl (michawe@ifi.uio.no); Randell Jesup; rtp-congestion@alvestrand.no Subject: Re: [R-C] Feedback mechanisms
Michael Welzl <michawe@ifi.uio.no> wrote:
...piggybacking ACKs on RTP payload...
Randell Jesup <randell-ietf@jesup.org> wrote:
...use header extensions as an alternate channel for RTCP...
Several implementations do either or both of these things, in proprietary ways obviously. I think both should be standardized, although this may be an unpopular view in AVT.
Note that there is no standardized generic ACK. I just submitted an errata to the RFC 4585 ABNF to clarify that "ack" (without parameters) is invalid since there is no generic ACK. There is only "ack rpsi" which is specific to H.263 Annex N, and "ack app" which is proprietary.
Beyond ACK, many other types of RTCP feedback would benefit from the efficiency of piggybacking in RTP header extensions. (NACK, ECN, FIR, TMMBR, PLI, SLI, etc.) So a general mechanism for all RTCP feedback may be more useful than a single mechanism for ACKs.
Mo
_______________________________________________ Rtp-congestion mailing list Rtp-congestion@alvestrand.no http://www.alvestrand.no/mailman/listinfo/rtp-congestion
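Bill's worst-case arithmetic above is easy to check; a quick sketch (the ~1500-byte packet size is an assumption implied by his ~500 packets/sec figure for a 6 Mbps flow):

```python
def ack_overhead_bps(flow_bps, packet_bytes, ack_bytes):
    """Upstream bandwidth consumed if every downstream packet gets
    its own standalone ack of ack_bytes."""
    packets_per_sec = flow_bps / (packet_bytes * 8)
    return packets_per_sec * ack_bytes * 8

# Worst case from the email: 6 Mbps downstream in ~1500-byte packets
# (~500 packets/sec), 50-byte acks -> ~200 Kbps of upstream just for acks.
```

That is on the order of an entire upstream audio flow spent purely on acknowledgments, which is the point of the example.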

Bill,

Agreed - yes, I guess I was concerned that a 'traditional' apps model might have been implied in general. But you're right, there are plenty of situations which can lead to capacity/rate issues for congestion-control signalling. I also agree that 1-to-1 acking is probably not the way to go. I think that one needs a minimum periodic signal about the network conditions, but that can probably be satisfied once per RTT (or more) - though on low-RTT links one might consider other options.

A related issue is that whilst we may get periodic congestion signals, these are dependent upon the amount of media data that is flowing, as it is this that effectively signals to other sources that the path is in use (by occupying the queue or, god forbid, causing loss) - potentially requiring appropriate data-limited and idle behaviour sets.

Piers

On 15 Aug 2012, at 13:43, Bill Ver Steeg (versteb) wrote:

What you're bringing up is interesting, and I agree that it would be beneficial if we had some way for the sender to ask for more detailed feedback from the receiver. Even a fairly short-term average of inter-arrival times isn't enough in all situations. First, the averaging interval is likely important for the sender, and if the sender is very clever, the actual timing of each of its probing packets may be important. It's important to note, though, that such feedback is not as time-critical as congestion signals, and therefore several probe ACKs can be accumulated in one RTCP packet. It is of course possible to do the same computation at the receiver, but that requires the probing method to be predefined, which is a disadvantage.

/Stefan

On Wed, Aug 15, 2012 at 2:43 PM, Bill Ver Steeg (versteb) <versteb@cisco.com> wrote:
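Stefan's point about accumulating several probe ACKs into a single RTCP packet can be sketched as a small receiver-side accumulator; the class and field names are hypothetical, not from any RTCP spec:

```python
class ProbeFeedbackAccumulator:
    """Collect per-probe arrival timestamps and flush them in one
    feedback message. This is acceptable because probe-timing feedback
    is less time-critical than congestion signals, so batching trades a
    little latency for far fewer feedback packets."""
    def __init__(self, max_entries=10):
        self.max_entries = max_entries
        self.entries = []  # list of (seq, arrival_ms) pairs

    def on_probe(self, seq, arrival_ms):
        """Record one probe arrival; returns True when the batch is
        full and the caller should flush a feedback packet."""
        self.entries.append((seq, arrival_ms))
        return len(self.entries) >= self.max_entries

    def flush(self):
        """Emit the accumulated batch and start a new one."""
        report, self.entries = self.entries, []
        return report
```

A real implementation would also flush on a timer so a partially filled batch still reaches the sender within a bounded delay.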

On 8/15/2012 7:08 AM, Piers O'Hanlon wrote:
Hi Bill,
In reference to your point (1) - I'd have thought the RMCAT flows would differ from the streaming/TCP community, as they are generally going to be sending flows in both directions simultaneously, so we're often (with ADSL, Cable) going to be limited by that upstream capacity for actually sending media in most cases - so the sender's [b]ack-channel is going to be using the 'downstream' channel, which won't be limiting feedback much.
Two-way flows will be the norm, though not the only use-case for RMCAT (for example, watching/controlling remote security cameras). But since they're both the norm and the harder case, it makes sense to optimize for them.

In steady-state, both flows should be living near the congestion point. Typically (though not always), the bottlenecks will be either access links (typically upstreams) or WiFi links. Both will have large amounts of traffic (on the order of, say, 40-300 packets per second). Adding more packets for per-packet acks would certainly add to congestion and reduce available bandwidth. It would add much more to the packet rate than to the bandwidth used, which on an access link may not make a large difference, but on a WiFi link (in line with the comments by Jim Gettys and Van Jacobson at IETF) may cause a significant reduction in throughput.

This WiFi issue (if confirmed) might be a reason to consider piggybacking feedback and/or reducing feedback frequency, at least in stable situations.

-- Randell Jesup randell-ietf@jesup.org

Randell Jesup <randell-ietf@jesup.org> wrote:
Two-way flows will be the norm, though not the only use-case for RMCAT.
Yes.
(For example, watching/controlling remote security cameras).
Indeed, that will probably be a use-case (as well as baby-monitors which send mostly audio). Actually, both of those are a weird case where buffering frames at the sender for later media-on-demand is at least somewhat desirable...
But since they're both the norm and the harder case, it makes sense to optimize for them.
Agreed.
In steady-state, both flows should be living near the congestion point. Typically (though not always), the bottlenecks will be either access links (typically upstreams) or wifi links.
(Both being potential bottlenecks, of course; and passing through both unfortunately being a too-common case.)
Both will have large amounts of traffic (on the order of say 40-300 packets per second). Adding more packets for per-packet acks would certainly add to congestion and reduce available bandwidth.
Quite true. :^( But we should recall that adding more packets _during_ congestion is what we really want to avoid. Thus, rather than optimize for the fewest packets when there is no congestion, we'd do better to optimize for the fewest packets when there _is_ congestion. Recalling the ARPAnet "RFNM" (Request For Next Message): there was no such thing as "congestion collapse" in the original ARPAnet design, because senders waited for the (reliable) return of the RFNM before sending more traffic. (We can't reproduce that model with unreliable packets, but the principle of reducing the sending rate whenever feedback is _not_ received can almost completely avoid "congestion collapse".)
It would add much more to the packet rate than to the bandwidth used,
Indeed -- if we use separate packets for the feedback...
which on an access link may not make a large difference, but on a WiFi link (in line with the comments by Jim Gettys and Van Jacobson at IETF) may cause significant reduction in throughput.
(Fundamentally, both WiFi and DOCSIS upstream transmissions must wait for a sending window. The window has a minimum size: thus for "small" packets, it approaches "pure per-packet" overhead.)
This WiFi issue (if confirmed) might be a reason to consider either piggybacking feedback and/or reducing feedback frequency, at least in stable situations.
Piggybacking on existing audio packets is essentially free. In the absence of media packets in the opposite direction, a "heartbeat" of, say, half an average RTT is probably appropriate. -- John Leslie <john@jlc.net>
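John's piggyback-or-heartbeat policy can be sketched as a tiny scheduling decision; the function name, parameters, and half-RTT cadence (his suggested value) are illustrative assumptions:

```python
def next_feedback_time(now_ms, next_media_ms, rtt_ms):
    """Decide when and how to send the next feedback: piggyback on the
    next outgoing media packet if one is due before the heartbeat
    deadline (essentially free), otherwise fall back to a standalone
    heartbeat roughly every half RTT. next_media_ms is None when no
    media is flowing in the feedback direction."""
    heartbeat_ms = now_ms + rtt_ms / 2
    if next_media_ms is not None and next_media_ms <= heartbeat_ms:
        return next_media_ms, "piggyback"
    return heartbeat_ms, "heartbeat"
```

The design choice here is that feedback is never delayed beyond the heartbeat deadline, so the sender's "no feedback received" trigger stays meaningful even when the reverse media flow goes idle.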

On 08/15/2012 07:30 PM, John Leslie wrote:
Randell Jesup <randell-ietf@jesup.org> wrote:

Two-way flows will be the norm, though not the only use-case for RMCAT.

Yes.

(For example, watching/controlling remote security cameras).

Indeed, that will probably be a use-case (as well as baby monitors, which send mostly audio). Actually, both of those are a weird case where buffering frames at the sender for later media-on-demand is at least somewhat desirable...

Baby monitors are probably the canonical example of wanting media delivered in only one direction most of the time.
participants (9)

- Bill Ver Steeg (versteb)
- Colin Perkins
- Harald Alvestrand
- John Leslie
- Michael Welzl
- Mo Zanaty (mzanaty)
- Piers O'Hanlon
- Randell Jesup
- Stefan Holmer