
Hi, comments inline; apologies for the delay in response.

On Fri, Oct 28, 2011 at 16:18, Randell Jesup <randell-ietf@jesup.org> wrote:
On 10/28/2011 6:00 AM, Varun Singh wrote:
Just to confirm: the receiver calculates the rate per stream and then adds them up: CT_combined = R_audio + R_video + R_data, and the audio and video channels calculate the rate as described in the proposal.
Or it calculates a total rate directly - or it makes a wild-assed guess. :-) We're not specifying how it gets these numbers here (in the normative wording). And if running them separately, one will always 'notice' congestion before the others, so you need to think about how you'd merge the individual reports - doing it as you mention (with the straight algorithm in Harald's draft) would likely lead to not backing off far enough until the other channels noticed the congestion, if they did. If, as I suggested, you use the 'slope' of the incoming packet rate to estimate how far under/over bandwidth you are, then even with Harald's algorithm you could apply that slope correction factor across all the estimates. Without it, you could apply a correction factor based on recent state changes in the individual readings; more complex than just adding them.
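A rough sketch of what that combining step might look like (purely illustrative, not from the draft; the names StreamEstimate and slope_correction and the clamping values are made up):

    from dataclasses import dataclass

    @dataclass
    class StreamEstimate:
        rate_bps: float    # receiver-side rate estimate for this stream
        rate_slope: float  # recent change in the incoming packet rate (bps/s)

    def slope_correction(streams, window_s=1.0):
        """Turn the combined slope into a rough over/under-bandwidth factor."""
        total_rate = sum(s.rate_bps for s in streams)
        total_slope = sum(s.rate_slope for s in streams)
        # A negative combined slope suggests we are over bandwidth; scale the
        # combined estimate down proportionally, clamped to a sane range.
        factor = 1.0 + (total_slope * window_s) / max(total_rate, 1.0)
        return max(0.5, min(1.0, factor))

    def combined_estimate(audio, video, data):
        streams = [audio, video, data]
        ct_combined = sum(s.rate_bps for s in streams)  # R_audio + R_video + R_data
        return ct_combined * slope_correction(streams)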
The sender uses the receiver's CT_combined and the current sending rate of each channel to re-allocate the distribution. Is the aim to try to get the distribution similar to what the receiver envisioned, or is the sender free to do whatever?
In this case the sender has no idea what the receiver envisioned - but even if it did, I'd say it's free to do whatever. If the receiver wants to control individual channels more (no guarantees), it should use individual TMMBR reports.
So is using individual TMMBR reports acceptable? If it is possible, then the above text should reflect that; if not, then the above text is okay.
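For illustration only, one way a sender might do the re-allocation discussed above is to keep the current proportions; the draft leaves the sender free to pick any other policy (e.g. protect audio first). The function name and channel labels below are invented:

    def reallocate(ct_combined, current_rates):
        """current_rates: e.g. {'audio': bps, 'video': bps, 'data': bps}"""
        total = sum(current_rates.values())
        if total <= 0:
            # Nothing is currently being sent; split evenly as a starting point.
            share = ct_combined / len(current_rates)
            return {name: share for name in current_rates}
        return {name: ct_combined * rate / total
                for name, rate in current_rates.items()}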
TMMBR, which requests a sending rate for a single SSRC flow. [ This is roughly equivalent to b=CT:xxx ]
We may want to give the estimation algorithm the option to include or exclude the data-channel bandwidth, but it SHOULD include it.
Is the data sent using the same congestion control mechanism?
We would like it to be included or at least interactive with the algorithm, but that's not currently required and we can't guarantee it will happen. SCTP does allow for variant CC algorithms, so we certainly have the option if we can work it out.
Alright.
5. The receiver SHOULD attempt to minimize the number of bandwidth reports when there is little or no change, while reporting quickly when there is a significant change.
Will we propose an algorithm, or a lower or upper bound for this? For example: no more often than once per RTT even if the 5% rule allows it, or send an early report when loss, inter-packet delay, etc. exceeds a given threshold.
My inclination would be to not put any limit on it. Even the RTT thing isn't necessarily a good idea; I may have a more accurate estimate shortly after sending a preliminary report. I wouldn't (as a sender) increase my sending rate more than once per RTT (actually probably slower than that), but I might decrease it in less than an RTT - think satellite with 1s RTT...
I agree that to update the preliminary report, the endpoint may have to send it in less than an RTT.
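A sketch of the reporting rule being discussed, with made-up parameters (a ~5% change threshold, a maximum report interval, and no once-per-RTT floor for decreases); nothing here is normative:

    import time

    class ReportScheduler:
        def __init__(self, change_threshold=0.05, max_interval_s=5.0):
            self.change_threshold = change_threshold  # the "5% rule"
            self.max_interval_s = max_interval_s      # still report occasionally
            self.last_estimate = None
            self.last_sent = 0.0

        def should_report(self, estimate_bps, congestion_signal=False):
            now = time.monotonic()
            if self.last_estimate is None:
                send = True
            elif congestion_signal and estimate_bps < self.last_estimate:
                send = True  # decreases may go out in less than an RTT
            else:
                change = (abs(estimate_bps - self.last_estimate)
                          / max(self.last_estimate, 1.0))
                send = (change > self.change_threshold
                        or now - self.last_sent > self.max_interval_s)
            if send:
                self.last_estimate = estimate_bps
                self.last_sent = now
            return send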
6. Congestion control MUST work even if there are no media channels, or if the media channels are inactive in one or both directions.
What does "work" mean if there is no data?
MUST be enabled?
6. Data channels MUST be congestion-controlled even if there are no media channels, or if the media channels are inactive in one or both directions.
Ok.
7. The congestion control algorithm SHOULD attempt to keep the total bandwidth controlled so as to minimize the media-stream end-to-end delays between the participants.
Not sure I understand this. If I do understand it, I suggest rewriting it as:
7. The congestion control algorithm SHOULD attempt to minimize the media-stream end-to-end delays between the participants, by controlling bandwidth appropriately.
The receiver doesn't know the end-to-end delay; the RTT is calculated at the sender. So is the sender making this decision, or the receiver?
Almost by definition the sender. The receiver is reporting data for the sender to use. (You can consider the receiver part of the algorithm, of course.)
Probably adding "sending-side" before "congestion control" would clarify it.
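As a purely illustrative example of sender-side "controlling bandwidth to keep delay low", a sender could back off its target rate when the measured RTT grows well above its recent minimum (a sign of queue build-up) and probe up slowly otherwise; the thresholds and gains below are placeholders, not anything from the draft:

    def adjust_target_rate(target_bps, rtt_s, min_rtt_s,
                           backoff=0.85, probe=1.02, slack_s=0.05):
        queuing_delay = rtt_s - min_rtt_s
        if queuing_delay > slack_s:
            return target_bps * backoff  # queues building: reduce the rate
        return target_bps * probe        # delay is low: probe for more bandwidth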
8. When receiving a [ insert new AVPF message here ], the sender shall attempt to comply with the overall bandwidth requirements by adjusting parameters it can control, such as codec bitrates and modes, and how much data is sent on the data channels.
Suggest "shall" -> "may".
For any CC algorithm to work, it really *should* try. It may not be able to comply, the algorithm may include some smoothing, and the sender may have additional information, so it's not MUST, but wouldn't MAY be too weak?
Does the application know a priori how much data would be sent on the data channel? It is likely that the sender would want to use all of the signaled bandwidth for audio and video (especially if the data channel is used for IM, which may only be used sporadically).
It may or may not know ahead of time. It controls how much data IS sent, however.
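To illustrate (one possible policy only, not something the draft mandates): the sender could clamp what it controls, giving media its targets first and letting the data channel take whatever budget remains, matching the point that the sender controls how much data IS sent even without knowing the demand a priori:

    def apply_overall_bound(bound_bps, audio_target, video_target):
        audio = min(audio_target, bound_bps)
        video = min(video_target, max(0.0, bound_bps - audio))
        data_budget = max(0.0, bound_bps - audio - video)  # data gets the rest
        return {'audio': audio, 'video': video, 'data': data_budget}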
Okay.

--
http://www.netlab.tkk.fi/~varun/