
Hi all,

I've been giving this some more thought, and I now think it is a mistake to try to build an RTP-over-UDP congestion control mechanism with logic at the receiver that plans when to send feedback and minimizes it accordingly. The reason is that you want to sample the path often enough anyway, and that you may have SCTP operating on the data stream in parallel too. That can give you a situation where SCTP produces a steady stream of ACKs, which the UDP-based algorithm ignores while it is at the same time struggling to reduce its own feedback. All of this feedback is really only about the congestion status of the path anyway; in other words, the SCTP ACKs already give you all the information you need.

So my proposal would be:

- use SCTP for everything;
- add per-stream congestion control to it, entirely on the sender side;
- use some other RTCWeb-based signalling to negotiate the congestion control mechanism, in the style of DCCP;
- let all streams benefit from SCTP's ACKs. If we need to reduce the amount of feedback anyway, we could do it with a means that is related to transport, rather than with the RTCP rule, which has nothing to do with congestion control: ACK-CC (RFC 5690) would give a good basis for that (sketch 2 below my signature). (On a side note, I don't think you break any RTP rules regarding the amount of feedback when you run RTP over SCTP...)

This way, with all the congestion control logic happening on the sender side, it would be much easier to manage the joint congestion control behavior across streams, i.e. to control fairness among them, as desired by this group (sketch 1 below tries to make this concrete). Note that this can yield MUCH more benefit than meets the eye: e.g., if an ongoing transfer is already in congestion avoidance, a newly starting transfer across the same bottleneck (which you'll have to assume for RTCWeb anyway, correctly or not) can skip slow start. In a prototypical example documented in:

Michael Welzl, Florian Niederbacher, Stein Gjessing: "Beneficial Transparent Deployment of SCTP: the Missing Pieces", IEEE GlobeCom 2011, 5-9 December 2011, Houston, Texas (fig. 7)

...we transferred two files across a testbed where we used the (at least then, very large) default Linux queue length (txqueuelen). The transfer time of the shorter file was reduced by almost 4 seconds, at no significant cost to the other transfer.

In such a setup, a mechanism based on LEDBAT could perhaps be used (in a similar style as in the current proposal, using the TCP equation as a lower limit, and maybe tuning some parameters to be a bit more aggressive?), giving us the benefit that LEDBAT has already been tested (sketch 3 below).

What do y'all think?

Cheers,
Michael
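
PS: to make this less abstract, I've appended three rough sketches in Python. All names and numbers are mine and purely illustrative; none of this is a real SCTP API, it's just the shape of the mechanisms I have in mind.

Sketch 1 -- sender-side coupled congestion control. One controller owns the aggregate window, SCTP SACKs drive it on behalf of all streams, and a stream that joins while the aggregate is already in congestion avoidance skips slow start and simply takes its weighted share:

    MSS = 1452                  # bytes per packet; an assumed value
    IW = 4 * MSS                # initial window

    class CoupledCC:
        """One congestion controller shared by all streams of an association."""

        def __init__(self):
            self.weights = {}               # stream id -> priority weight
            self.cwnd = IW                  # aggregate window, bytes
            self.ssthresh = float("inf")

        def add_stream(self, sid, weight=1.0):
            self.weights[sid] = weight
            if self.cwnd < self.ssthresh:
                # Aggregate still in slow start: grow it the normal way.
                self.cwnd += IW
            # else: the aggregate is already in congestion avoidance, so the
            # new stream just takes its weighted share of the existing window
            # (via share() below), i.e. it skips slow start entirely.

        def share(self, sid):
            """Weighted share of the aggregate window for one stream."""
            return self.cwnd * self.weights[sid] / sum(self.weights.values())

        def on_sack(self, acked_bytes):
            # SCTP SACKs are the single ACK clock for every stream.
            if self.cwnd < self.ssthresh:
                self.cwnd += min(acked_bytes, MSS)      # slow start
            else:
                self.cwnd += MSS * MSS / self.cwnd      # congestion avoidance

        def on_loss(self):
            self.ssthresh = max(self.cwnd / 2, 2 * MSS)
            self.cwnd = self.ssthresh

For example, if a file transfer joins a call whose aggregate is already in congestion avoidance with equal weights, share() immediately hands it half the window, which is exactly the slow-start-skipping effect from the GlobeCom paper.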
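Sketch 2 -- reducing feedback at the transport level. This only shows the AIMD flavor of an ACK Ratio mechanism (as in DCCP, RFC 4341, which RFC 5690 builds on for TCP); it is deliberately much simpler than either RFC:

    class AckRatio:
        """Sender-side control of how often the receiver should ACK."""

        def __init__(self, cwnd_pkts):
            self.cwnd_pkts = cwnd_pkts  # congestion window in packets
            self.ratio = 2              # receiver ACKs every ratio-th packet

        def on_ack_loss(self):
            # ACKs were lost or ECN-marked: the reverse path is congested,
            # so ask the receiver to send feedback less often.
            self.ratio = min(self.ratio * 2, max(self.cwnd_pkts // 2, 1))

        def on_clean_window(self):
            # A full window without ACK loss: cautiously probe back
            # towards more frequent feedback.
            self.ratio = max(self.ratio - 1, 1)

The point is only that the feedback frequency is adapted by a transport mechanism reacting to reverse-path congestion, instead of by an RTCP bandwidth rule that knows nothing about it.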
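Sketch 3 -- LEDBAT with the TCP equation as a lower limit. The window reacts to one-way queuing delay as in LEDBAT's linear controller, but the resulting rate is never allowed below the RFC 5348 (TFRC) TCP equation, which is what should make it a bit more aggressive than plain LEDBAT; all constants are again just illustrative:

    from math import sqrt

    TARGET = 0.100              # LEDBAT target queuing delay, seconds
    GAIN = 1.0
    MSS_B = 1452.0              # bytes

    def ledbat_cwnd(cwnd, bytes_acked, queuing_delay):
        """One simplified LEDBAT window update: grow while measured
        one-way queuing delay is below TARGET, shrink above it."""
        off_target = (TARGET - queuing_delay) / TARGET
        return max(cwnd + GAIN * off_target * bytes_acked * MSS_B / cwnd,
                   MSS_B)

    def tcp_equation(s, rtt, p, b=1):
        """TCP throughput equation of RFC 5348, in bytes per second:
        X = s / (R*sqrt(2bp/3) + t_RTO*(3*sqrt(3bp/8))*p*(1+32p^2))."""
        if p <= 0:
            return float("inf")
        t_rto = 4 * rtt
        return s / (rtt * sqrt(2 * b * p / 3)
                    + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))

    def send_rate(cwnd, rtt, loss_rate):
        """Delay-based rate, floored by the TCP equation: the 'TCP
        equation as a lower limit' idea from the current proposal."""
        return max(cwnd / rtt, tcp_equation(MSS_B, rtt, loss_rate))
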