Network Working Group                                             X. Zhu
Internet-Draft                                             Cisco Systems
Intended status: Experimental                                     R. Pan
Expires: June 7, 2018                                   Pensando Systems
                                                              M. Ramalho
                                                                 S. Mena
                                                                P. Jones
                                                                   J. Fu
                                                           Cisco Systems
                                                             S. D'Aronco
                                                                    EPFL
                                                        December 4, 2017
NADA: A Unified Congestion Control Scheme for Real-Time Media
draft-ietf-rmcat-nada-06
This document describes NADA (network-assisted dynamic adaptation), a novel congestion control scheme for interactive real-time media applications, such as video conferencing. In the proposed scheme, the sender regulates its sending rate based on either implicit or explicit congestion signaling, in a unified approach. The scheme can benefit from explicit congestion notification (ECN) markings from network nodes. It also maintains consistent sender behavior in the absence of such markings, by reacting to queuing delays and packet losses instead.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on June 7, 2018.
Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Interactive real-time media applications introduce a unique set of challenges for congestion control. Unlike TCP, the mechanism used for real-time media needs to adapt quickly to instantaneous bandwidth changes, accommodate fluctuations in the output of video encoder rate control, and cause low queuing delay over the network. An ideal scheme should also make effective use of all types of congestion signals, including packet loss, queuing delay, and explicit congestion notification (ECN) [RFC3168] markings. The requirements for the congestion control algorithm are outlined in [I-D.ietf-rmcat-cc-requirements].
This document describes an experimental congestion control scheme called network-assisted dynamic adaptation (NADA). The NADA design benefits from explicit congestion control signals (e.g., ECN markings) from the network, yet also operates when only implicit congestion indicators (delay and/or loss) are available. Such a unified sender behavior distinguishes NADA from other congestion control schemes for real-time media. In addition, its core congestion control algorithm is designed to guarantee stability for path round-trip-times (RTTs) below a prescribed bound (e.g., 250ms with default parameter choices). It further supports weighted bandwidth sharing among competing video flows with different priorities. The signaling mechanism consists of standard RTP timestamp [RFC3550] and RTCP feedback reports with non-standard messages.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
Figure 1 shows the end-to-end system for real-time media transport that NADA operates in. Note that there also exist network nodes along the reverse (potentially uncongested) path that the RTCP feedback reports traverse. Those network nodes are not shown in the figure for the sake of brevity.
    +---------+  r_vin  +--------+        +--------+     +----------+
    |  Media  |<--------|  RTP   |        |Network |     |   RTP    |
    | Encoder |========>| Sender |=======>|  Node  |====>| Receiver |
    +---------+  r_vout +--------+ r_send +--------+     +----------+
                           /|\                                |
                            |                                 |
                            +---------------------------------+
                                   RTCP Feedback Report
Figure 1: System Overview
Like TCP-Friendly Rate Control (TFRC) [Floyd-CCR00] [RFC5348], NADA is a rate-based congestion control algorithm. In its simplest form, the sender reacts to the collection of network congestion indicators in the form of an aggregated congestion signal, and operates in one of two modes: accelerated ramp-up, when the network is deemed underutilized, and gradual rate update otherwise.
This section introduces the list of mathematical notations and describes the core congestion control algorithm at the sender and receiver, respectively. Additional details on recommended practical implementations are described in Section 5.1 and Section 5.2.
This section summarizes the list of variables and parameters used in the NADA algorithm.
   +--------------+-------------------------------------------------+
   | Notation     | Variable Name                                   |
   +--------------+-------------------------------------------------+
   | t_curr       | Current timestamp                               |
   | t_last       | Last time sending/receiving a feedback          |
   | delta        | Observed interval between current and previous  |
   |              | feedback reports: delta = t_curr - t_last       |
   | r_ref        | Reference rate based on network congestion      |
   | r_send       | Sending rate                                    |
   | r_recv       | Receiving rate                                  |
   | r_vin        | Target rate for video encoder                   |
   | r_vout       | Output rate from video encoder                  |
   | d_base       | Estimated baseline delay                        |
   | d_fwd        | Measured and filtered one-way delay             |
   | d_queue      | Estimated queueing delay                        |
   | d_tilde      | Equivalent delay after non-linear warping       |
   | p_mark       | Estimated packet ECN marking ratio              |
   | p_loss       | Estimated packet loss ratio                     |
   | x_curr       | Aggregate congestion signal                     |
   | x_prev       | Previous value of aggregate congestion signal   |
   | x_diff       | Change in aggregate congestion signal w.r.t.    |
   |              | its previous value: x_diff = x_curr - x_prev    |
   | rmode        | Rate update mode: (0 = accelerated ramp-up;     |
   |              | 1 = gradual update)                             |
   | gamma        | Rate increase multiplier in accelerated ramp-up |
   |              | mode                                            |
   | loss_int     | Measured average loss interval in packet count  |
   | loss_exp     | Threshold value for setting the last observed   |
   |              | packet loss to expiration                       |
   | rtt          | Estimated round-trip-time at sender             |
   | buffer_len   | Rate shaping buffer occupancy measured in bytes |
   +--------------+-------------------------------------------------+
Figure 2: List of variables.
   +--------------+----------------------------------+----------------+
   | Notation     | Parameter Name                   | Default Value  |
   +--------------+----------------------------------+----------------+
   | PRIO         | Weight of priority of the flow   | 1.0            |
   | RMIN         | Minimum rate of application      | 150 Kbps       |
   |              | supported by media encoder       |                |
   | RMAX         | Maximum rate of application      | 1.5 Mbps       |
   |              | supported by media encoder       |                |
   | XREF         | Reference congestion level       | 10ms           |
   | KAPPA        | Scaling parameter for gradual    | 0.5            |
   |              | rate update calculation          |                |
   | ETA          | Scaling parameter for gradual    | 2.0            |
   |              | rate update calculation          |                |
   | TAU          | Upper bound of RTT in gradual    | 500ms          |
   |              | rate update calculation          |                |
   | DELTA        | Target feedback interval         | 100ms          |
   +..............+..................................+................+
   | LOGWIN       | Observation window in time for   | 500ms          |
   |              | calculating packet summary       |                |
   |              | statistics at receiver           |                |
   | QEPS         | Threshold for determining queuing| 10ms           |
   |              | delay build-up at receiver       |                |
   | DFILT        | Bound on filtering delay         | 120ms          |
   | GAMMA_MAX    | Upper bound on rate increase     | 0.5            |
   |              | ratio for accelerated ramp-up    |                |
   | QBOUND       | Upper bound on self-inflicted    | 50ms           |
   |              | queuing delay during ramp-up     |                |
   +..............+..................................+................+
   | MULTILOSS    | Multiplier for self-scaling the  | 7.0            |
   |              | expiration threshold of the last |                |
   |              | observed loss (loss_exp) based on|                |
   |              | measured average loss interval   |                |
   |              | (loss_int)                       |                |
   | QTH          | Delay threshold for invoking     | 50ms           |
   |              | non-linear warping               |                |
   | LAMBDA       | Scaling parameter in the         | 0.5            |
   |              | exponent of non-linear warping   |                |
   +..............+..................................+................+
   | PLRREF       | Reference packet loss ratio      | 0.01           |
   | PMRREF       | Reference packet marking ratio   | 0.01           |
   | DLOSS        | Reference delay penalty for loss | 10ms           |
   |              | when packet loss ratio is at     |                |
   |              | PLRREF                           |                |
   | DMARK        | Reference delay penalty for ECN  | 2ms            |
   |              | marking when packet marking      |                |
   |              | is at PMRREF                     |                |
   +..............+..................................+................+
   | FPS          | Frame rate of incoming video     | 30             |
   | BETA_S       | Scaling parameter for modulating | 0.1            |
   |              | outgoing sending rate            |                |
   | BETA_V       | Scaling parameter for modulating | 0.1            |
   |              | video encoder target rate        |                |
   | ALPHA        | Smoothing factor in exponential  | 0.1            |
   |              | smoothing of packet loss and     |                |
   |              | marking ratios                   |                |
   +--------------+----------------------------------+----------------+
Figure 3: List of algorithm parameters.
The receiver-side algorithm can be outlined as below:
   On initialization:
       set d_base = +INFINITY
       set p_loss = 0
       set p_mark = 0
       set r_recv = 0
       set both t_last and t_curr as current time

   On receiving a media packet:
       obtain current timestamp t_curr from system clock
       obtain from packet header sending time stamp t_sent
       obtain one-way delay measurement: d_fwd = t_curr - t_sent
       update baseline delay: d_base = min(d_base, d_fwd)
       update queuing delay: d_queue = d_fwd - d_base
       update packet loss ratio estimate p_loss
       update packet marking ratio estimate p_mark
       update measurement of receiving rate r_recv

   On time to send a new feedback report (t_curr - t_last > DELTA):
       calculate non-linear warping of delay d_tilde if packet loss exists
       calculate current aggregate congestion signal x_curr
       determine mode of rate adaptation for sender: rmode
       send RTCP feedback report containing values of:
           rmode, x_curr, and r_recv
       update t_last = t_curr
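The per-packet delay tracking steps above can be sketched in Python as follows. This is a minimal illustration, not part of the specification; class and method names are invented for the example.

```python
INF = float('inf')

class NadaReceiverState:
    """Minimal sketch of the receiver-side delay tracking."""
    def __init__(self):
        self.d_base = INF  # baseline delay starts at +INFINITY

    def on_packet(self, t_sent, t_curr):
        """Process one media packet; return the estimated queuing delay."""
        d_fwd = t_curr - t_sent                # one-way delay sample
        self.d_base = min(self.d_base, d_fwd)  # update baseline delay
        return d_fwd - self.d_base             # d_queue = d_fwd - d_base
```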
In order for a delay-based flow to hold its ground when competing against loss-based flows (e.g., loss-based TCP), it is important to distinguish between different levels of observed queuing delay. For instance, over wired connections, a moderate queuing delay value below 100ms is likely self-inflicted or induced by other delay-based flows, whereas a high queuing delay value of several hundreds of milliseconds may indicate the presence of a loss-based flow that does not refrain from increased delay.
If the last observed packet loss is within the expiration window of loss_exp (measured in terms of packet counts), the estimated queuing delay follows a non-linear warping:
               / d_queue,                  if d_queue < QTH;
               |
   d_tilde =  <                                                      (1)
               |                  (d_queue-QTH)
               \ QTH exp(-LAMBDA ---------------) , otherwise.
                                       QTH
In (1), the queuing delay value is unchanged when it is below the first threshold QTH; otherwise it is scaled down following a non-linear curve. This non-linear warping is inspired by the delay-adaptive congestion window backoff policy in [Budzisz-TON11], so as to "gradually nudge" the controller to operate based on loss-induced congestion signals when competing against loss-based flows. The exact form of the non-linear function has been simplified with respect to [Budzisz-TON11]. The value of the threshold QTH may need to be tuned for different operational environments.
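The warping in (1) can be sketched with the default QTH and LAMBDA values from Figure 3 (the function name is illustrative):

```python
import math

QTH = 50.0     # delay threshold for non-linear warping, in ms
LAMBDA = 0.5   # scaling parameter in the exponent

def warp_delay(d_queue):
    """Return d_tilde, the equivalent delay after non-linear warping (ms)."""
    if d_queue < QTH:
        return d_queue
    # Above the threshold, the delay is scaled down along an
    # exponentially decaying curve.
    return QTH * math.exp(-LAMBDA * (d_queue - QTH) / QTH)
```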
The value of loss_exp is configured to self-scale with the average packet loss interval loss_int with a multiplier MULTILOSS:
loss_exp = MULTILOSS * loss_int.
Estimation of the average loss interval loss_int, in turn, follows Section 5.4 of the TCP Friendly Rate Control (TFRC) protocol [RFC5348].
In practice, it is recommended to linearly interpolate between the warped (d_tilde) and non-warped (d_queue) values of the queuing delay during the transitional period lasting for the duration of loss_int.
The aggregate congestion signal is:
x_curr = d_tilde + DMARK*(p_mark/PMRREF)^2 + DLOSS*(p_loss/PLRREF)^2. (2)
Here, DMARK is the prescribed reference delay penalty associated with ECN markings at the reference marking ratio of PMRREF; DLOSS is the prescribed reference delay penalty associated with packet losses at the reference packet loss ratio of PLRREF. The values of DLOSS and DMARK do not depend on configurations at the network node. Since ECN-enabled active queue management schemes typically mark a packet before dropping it, the value of DLOSS SHOULD be higher than that of DMARK. Furthermore, the values of DLOSS and DMARK need to be set consistently across all NADA flows for them to compete fairly.
In the absence of packet marking and losses, the value of x_curr reduces to the observed queuing delay d_queue. In that case the NADA algorithm operates in the regime of delay-based adaptation.
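Equation (2) can be sketched in Python with the default penalty parameters from Figure 3 (names are illustrative; all delay quantities in ms):

```python
DMARK = 2.0    # reference delay penalty for ECN marking, ms
DLOSS = 10.0   # reference delay penalty for loss, ms
PMRREF = 0.01  # reference packet marking ratio
PLRREF = 0.01  # reference packet loss ratio

def aggregate_congestion_signal(d_tilde, p_mark, p_loss):
    """Combine delay, marking, and loss into one equivalent delay (ms)."""
    return (d_tilde
            + DMARK * (p_mark / PMRREF) ** 2
            + DLOSS * (p_loss / PLRREF) ** 2)
```

With zero loss and marking the result reduces to the (warped) queuing delay, matching the delay-based regime described above.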
Given observed per-packet delay and loss information, the receiver is also in a good position to determine whether the network is underutilized and recommend the corresponding rate adaptation mode for the sender. The criteria for operating in accelerated ramp-up mode are:

   o  No recent packet losses or ECN markings within the observation
      window LOGWIN; and

   o  No build-up of queuing delay: d_queue remains below the threshold
      QEPS for all delay samples within the observation window LOGWIN.
Otherwise the algorithm operates in gradual update mode.
The sender-side algorithm is outlined as follows:
   on initialization:
       set r_ref = RMIN
       set rtt = 0
       set x_prev = 0
       set t_last and t_curr as current system clock time

   on receiving feedback report:
       obtain current timestamp from system clock: t_curr
       obtain values of rmode, x_curr, and r_recv from feedback report
       update estimation of rtt
       measure feedback interval: delta = t_curr - t_last
       if rmode == 0:
           update r_ref following accelerated ramp-up rules
       else:
           update r_ref following gradual update rules
       clip rate r_ref within the range of [RMIN, RMAX]
       x_prev = x_curr
       t_last = t_curr
In accelerated ramp-up mode, the rate r_ref is updated as follows:
                                   QBOUND
       gamma = min(GAMMA_MAX, -----------------)                     (3)
                               rtt+DELTA+DFILT

       r_ref = max(r_ref, (1+gamma) r_recv)                          (4)
The rate increase multiplier gamma is calculated as a function of upper bound of self-inflicted queuing delay (QBOUND), round-trip-time (rtt), target feedback interval (DELTA) and bound on filtering delay for calculating d_queue (DFILT). It has a maximum value of GAMMA_MAX. The rationale behind (3)-(4) is that the longer it takes for the sender to observe self-inflicted queuing delay build-up, the more conservative the sender should be in increasing its rate, hence the smaller the rate increase multiplier.
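Equations (3)-(4) can be sketched as follows, assuming rates in bps and delays in ms, with the default parameters from Figure 3 (the function name is illustrative):

```python
QBOUND = 50.0     # upper bound on self-inflicted queuing delay, ms
GAMMA_MAX = 0.5   # upper bound on rate increase ratio
DELTA = 100.0     # target feedback interval, ms
DFILT = 120.0     # bound on filtering delay, ms

def rampup_rate(r_ref, r_recv, rtt):
    """Accelerated ramp-up: equations (3)-(4). Rates in bps, rtt in ms."""
    # Longer delay before self-inflicted queue build-up is observed
    # => smaller multiplier => more conservative increase.
    gamma = min(GAMMA_MAX, QBOUND / (rtt + DELTA + DFILT))
    return max(r_ref, (1.0 + gamma) * r_recv)
```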
In gradual update mode, the rate r_ref is updated as:
       x_offset = x_curr - PRIO*XREF*RMAX/r_ref                      (5)

       x_diff   = x_curr - x_prev                                    (6)

                              delta    x_offset
       r_ref = r_ref - KAPPA*-------*----------*r_ref
                               TAU       TAU

                                   x_diff
                     - KAPPA*ETA*---------*r_ref                     (7)
                                    TAU
The rate changes in proportion to the previous rate decision. It is affected by two terms: offset of the aggregate congestion signal from its value at equilibrium (x_offset) and its change (x_diff). Calculation of x_offset depends on maximum rate of the flow (RMAX), its weight of priority (PRIO), as well as a reference congestion signal (XREF). The value of XREF is chosen so that the maximum rate of RMAX can be achieved when the observed congestion signal level is below PRIO*XREF.
At equilibrium, the aggregated congestion signal stabilizes at x_curr = PRIO*XREF*RMAX/r_ref. This ensures that when multiple flows share the same bottleneck and observe a common value of x_curr, their rates at equilibrium will be proportional to their respective priority levels (PRIO) and maximum rates (RMAX). Values of RMIN and RMAX will be provided by the media codec, as specified in [I-D.ietf-rmcat-cc-codec-interactions]. In the absence of such information, the NADA sender will choose a default value of 0 for RMIN and 2 Mbps for RMAX.
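A sketch of the gradual update rules (5)-(7) with the default parameters from Figure 3; at equilibrium (x_curr = PRIO*XREF*RMAX/r_ref and x_diff = 0) the rate stays unchanged, as noted above. The function name is illustrative.

```python
KAPPA = 0.5    # scaling parameter for gradual rate update
ETA = 2.0      # scaling parameter for gradual rate update
TAU = 500.0    # upper bound of RTT in gradual update, ms
PRIO = 1.0     # priority weight of the flow
XREF = 10.0    # reference congestion level, ms
RMAX = 1.5e6   # maximum rate of application, bps

def gradual_update(r_ref, x_curr, x_prev, delta):
    """Gradual update mode: equations (5)-(7). Rates in bps, delays in ms."""
    x_offset = x_curr - PRIO * XREF * RMAX / r_ref   # offset from equilibrium
    x_diff = x_curr - x_prev                          # recent change
    return (r_ref
            - KAPPA * (delta / TAU) * (x_offset / TAU) * r_ref
            - KAPPA * ETA * (x_diff / TAU) * r_ref)
```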
As mentioned in the sender-side algorithm, the final rate is clipped within the dynamic range specified by the application:
       r_ref = min(r_ref, RMAX)                                      (8)

       r_ref = max(r_ref, RMIN)                                      (9)
The above operations ignore many practical issues such as clock synchronization between sender and receiver, filtering of noise in delay measurements, and base delay expiration. These will be addressed in Section 5.
The receiver continuously monitors end-to-end per-packet statistics in terms of delay, loss, and/or ECN marking ratios. It then aggregates all forms of congestion indicators into the form of an equivalent delay and periodically reports this back to the sender. In addition, the receiver tracks the receiving rate of the flow and includes that in the feedback message.
The delay estimation process in NADA follows a similar approach as in earlier delay-based congestion control schemes, such as LEDBAT. Instead of relying on RTP timestamps, the NADA sender generates its own timestamp based on local system clock and embeds that information in the transport packet header. The NADA receiver estimates the forward delay as having a constant base delay component plus a time varying queuing delay component. The base delay is estimated as the minimum value of one-way delay observed over a relatively long period (e.g., tens of minutes), whereas the individual queuing delay value is taken to be the difference between one-way delay and base delay. By re-estimating the base delay periodically, one can avoid the potential issue of base delay expiration, whereby an earlier measured base delay value is no longer valid due to underlying route changes. All delay estimations are based on sender timestamps with a recommended granularity of 100 microseconds or higher.
The individual sample values of queuing delay should be further filtered against various non-congestion-induced noise, such as spikes due to processing "hiccups" at the network nodes. Therefore, instead of simply calculating the value of base delay using d_base = min(d_base, d_fwd), as expressed in Section 5.1, the current implementation employs a minimum filter with a window size of 15 samples over per-packet queuing delay values.
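The 15-sample minimum filter can be sketched with a bounded deque (the class name is illustrative):

```python
from collections import deque

class MinFilter:
    """Sliding minimum filter over per-packet queuing delay samples."""
    def __init__(self, window=15):
        # maxlen automatically evicts the oldest sample once full
        self.samples = deque(maxlen=window)

    def update(self, d_queue):
        """Add a new queuing delay sample; return the filtered value."""
        self.samples.append(d_queue)
        return min(self.samples)
```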
The receiver detects packet losses via gaps in the RTP sequence numbers of received packets. Packets arriving out-of-order are discarded, and count towards losses. The instantaneous packet loss ratio p_inst is estimated as the ratio between the number of missing packets over the number of total transmitted packets within the recent observation window LOGWIN. The packet loss ratio p_loss is obtained after exponential smoothing:
p_loss = ALPHA*p_inst + (1-ALPHA)*p_loss. (10)
The filtered result is reported back to the sender as the observed packet loss ratio p_loss.
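Equation (10) amounts to a one-line exponential smoothing helper (illustrative):

```python
ALPHA = 0.1  # smoothing factor from Figure 3

def smooth_loss(p_loss, p_inst):
    """Equation (10): exponentially smoothed packet loss ratio."""
    return ALPHA * p_inst + (1.0 - ALPHA) * p_loss
```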
Estimation of packet marking ratio p_mark follows the same procedure as above. It is assumed that ECN marking information at the IP header can be passed to the receiving endpoint, e.g., by following the mechanism described in [RFC6679].
It is fairly straightforward to estimate the receiving rate r_recv. NADA maintains a recent observation window with a time span of LOGWIN, and simply divides the total size of packets arriving during that window by the time span. The receiving rate (r_recv) is included as part of the feedback report.
Figure 4 provides a detailed view of the NADA sender. Upon receipt of an RTCP feedback report from the receiver, the NADA sender calculates the reference rate r_ref as specified in Section 4.3. It further adjusts both the target rate for the live video encoder r_vin and the sending rate r_send over the network based on the updated value of r_ref and rate shaping buffer occupancy buffer_len.
The NADA sender behavior stays the same in the presence of all types of congestion indicators: delay, loss, and ECN marking. This unified approach allows a graceful transition of the scheme as the network shifts dynamically between light and heavy congestion levels.
                      +----------------+
                      |   Calculate    | <---- RTCP report
                      | Reference Rate |
                      +----------------+
                              | r_ref
                 +------------+-------------+
                 |                          |
                \|/                        \|/
       +-----------------+         +---------------+
       | Calculate Video |         |   Calculate   |
       |   Target Rate   |         |  Sending Rate |
       +-----------------+         +---------------+
                |                     /|\      /|\
          r_vin |          buffer_len  |        |
                |              +-------+        | r_send
               \|/             |               \|/
       +----------+            |
       |  Video   |  r_vout  -----------+
       | Encoder  |--------> ||||||||| =================>
       +----------+          -----------+    RTP packets
                          Rate Shaping Buffer
Figure 4: NADA Sender Structure
The operation of the live video encoder is out of the scope of the design for the congestion control scheme in NADA. Instead, its behavior is treated as a black box.
A rate shaping buffer is employed to absorb any instantaneous mismatch between encoder rate output r_vout and regulated sending rate r_send. Its current level of occupancy is measured in bytes and is denoted as buffer_len.
A large rate shaping buffer contributes to higher end-to-end delay, which may harm the performance of real-time media communications. Therefore, the sender has a strong incentive to prevent the rate shaping buffer from building up. The mechanisms adopted are:
The target rate for the live video encoder deviates from the network congestion control rate r_ref based on the level of occupancy in the rate shaping buffer:
r_vin = r_ref - BETA_V*8*buffer_len*FPS. (11)
The actual sending rate r_send is regulated in a similar fashion:
r_send = r_ref + BETA_S*8*buffer_len*FPS. (12)
In (11) and (12), the first term indicates the rate calculated from network congestion feedback alone. The second term indicates the influence of the rate shaping buffer. A large rate shaping buffer nudges the encoder target rate slightly below -- and the sending rate slightly above -- the reference rate r_ref.
Intuitively, the amount of extra rate offset needed to completely drain the rate shaping buffer within the duration of a single video frame is given by 8*buffer_len*FPS, where FPS stands for the frame rate of the video. The scaling parameters BETA_V and BETA_S can be tuned to balance between the competing goals of maintaining a small rate shaping buffer and deviating from the reference rate point.
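Equations (11)-(12) can be sketched together, with buffer_len in bytes and rates in bps; parameter values follow the defaults in Figure 3 and the function name is illustrative:

```python
BETA_V = 0.1   # scaling for the video encoder target rate
BETA_S = 0.1   # scaling for the outgoing sending rate
FPS = 30.0     # frame rate of the incoming video

def shaped_rates(r_ref, buffer_len):
    """Equations (11)-(12): derive encoder target rate r_vin and
    sending rate r_send (bps) from r_ref and buffer occupancy (bytes)."""
    # Extra rate needed to drain the buffer within one video frame.
    offset = 8.0 * buffer_len * FPS
    r_vin = r_ref - BETA_V * offset    # nudge encoder slightly below r_ref
    r_send = r_ref + BETA_S * offset   # nudge sender slightly above r_ref
    return r_vin, r_send
```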
The following list of information is required for NADA congestion control to function properly:
The above list of information can be accommodated by 48 bits, or 6 bytes, in total. The choice of the feedback message interval DELTA is discussed in Section 6.3. A target feedback interval of DELTA=100ms is recommended.
The current design works with relative one-way-delay (OWD) as the main indication of congestion. The value of the relative OWD is obtained by maintaining the minimum value of observed OWD over a relatively long time horizon and subtracting that out from the observed absolute OWD value. Such an approach cancels out the fixed difference between the sender and receiver clocks. It has been widely adopted by other delay-based congestion control approaches such as [RFC6817]. As discussed in [RFC6817], the time horizon for tracking the minimum OWD needs to be chosen with care: it must be long enough for an opportunity to observe the minimum OWD with zero standing queue along the path, and sufficiently short so as to timely reflect "true" changes in minimum OWD introduced by route changes and other rare events.
The potential drawback in relying on relative OWD as the congestion signal is that when multiple flows share the same bottleneck, the flow arriving late at the network experiencing a non-empty queue may mistakenly consider the standing queuing delay as part of the fixed path propagation delay. This will lead to slightly unfair bandwidth sharing among the flows.
Alternatively, one could move the per-packet statistical handling to the sender instead and use relative round-trip-time (RTT) in lieu of relative OWD, assuming that per-packet acknowledgements are available. The main drawback of an RTT-based approach is the noise in the measured delay in the reverse direction.
Note that the choice of either delay metric (relative OWD vs. RTT) involves no change in the proposed rate adaptation algorithm. Therefore, comparing the pros and cons regarding which delay metric to adopt can be kept as an orthogonal direction of investigation.
Like other delay-based congestion control schemes, performance of NADA depends on the accuracy of its delay measurement and estimation module. Appendix A in [RFC6817] provides an extensive discussion on this aspect.
The current recommended practice of applying a minimum filter with a window size of 15 samples suffices in guarding against processing delay outliers observed in wired connections. For wireless connections with a higher packet delay variation (PDV), more sophisticated techniques for de-noising, outlier rejection, and trend analysis may be needed.
More sophisticated methods in packet loss ratio calculation, such as that adopted by [Floyd-CCR00], will likely be beneficial. These alternatives are currently under investigation.
In the gradual rate update mode, the parameter TAU indicates the upper bound of round-trip-time (RTT) in the feedback control loop. Typically, the observed feedback interval delta is close to the target feedback interval DELTA, and the relative ratio of delta/TAU versus ETA dictates the relative strength of influence from the aggregate congestion signal offset term (x_offset) versus its recent change (x_diff), respectively. These two terms are analogous to the integral and proportional terms in a proportional-integral (PI) controller. The recommended choice of TAU=500ms, DELTA=100ms and ETA=2.0 corresponds to a relative ratio of 1:10 between the gains of the integral and proportional terms. Consequently, the rate adaptation is mostly driven by the change in the congestion signal, with a long-term shift towards its equilibrium value driven by the offset term. Finally, the scaling parameter KAPPA determines the overall speed of the adaptation and needs to strike a balance between responsiveness and stability.
The choice of the target feedback interval DELTA needs to strike the right balance between timely feedback and low RTCP feedback message counts. A target feedback interval of DELTA=100ms is recommended, corresponding to a feedback bandwidth of 16Kbps with 200 bytes per feedback message --- approximately 1.6% overhead for a 1 Mbps flow. Furthermore, both simulation studies and frequency-domain analysis have established that a feedback interval below 250ms (i.e., more frequently than 4 feedback messages per second) will not break up the feedback control loop of NADA congestion control.
In calculating the non-linear warping of delay in (1), the current design uses a fixed value of QTH for determining whether to perform the non-linear warping. Its value may need to be tuned for different operational environments (e.g., over wired vs. wireless connections). It is possible to adapt its value based on past observed patterns of queuing delay in the presence of packet losses. It needs to be noted that the non-linear warping mechanism may lead to multiple NADA streams getting stuck in loss-based mode when competing against each other.
In calculating the aggregate congestion signal x_curr, the choice of DMARK and DLOSS influences the steady-state packet loss/marking ratio experienced by the flow at a given available bandwidth. Higher values of DMARK and DLOSS result in lower steady-state loss/marking ratios, but are more susceptible to the impact of individual packet loss/marking events. While the values of DMARK and DLOSS are fixed and predetermined in the current design, a scheme for automatically tuning these values based on desired bandwidth sharing behavior in the presence of other competing loss-based flows (e.g., loss-based TCP) is under investigation.
In the current design, the aggregated congestion signal x_curr is calculated at the receiver, keeping the sender operation completely independent of the form of actual network congestion indications (delay, loss, or marking). Alternatively, one can move the logic of (1) and (2) to the sender. Such an approach requires slightly higher overhead in the feedback messages, which would then contain individual fields for queuing delay (d_queue), packet loss ratio (p_loss), packet marking ratio (p_mark), receiving rate (r_recv), and recommended rate adaptation mode (rmode).
One nice property of NADA is the consistent video endpoint behavior irrespective of network node variations. This facilitates gradual, incremental adoption of the scheme.
To start off with, the proposed congestion control mechanism can be implemented without any explicit support from the network, and relies solely on observed one-way delay measurements and packet loss ratios as implicit congestion signals.
When ECN is enabled at the network nodes with RED-based marking, the receiver can fold its observations of ECN markings into the calculation of the equivalent delay. The sender can react to these explicit congestion signals without any modification.
Ultimately, networks equipped with proactive marking based on token bucket level metering can reap the additional benefits of zero standing queues and lower end-to-end delay and work seamlessly with existing senders and receivers.
The NADA scheme has been implemented in [ns-2] and [ns-3] simulation platforms. Extensive ns-2 simulation evaluations of an earlier version of the draft are documented in [Zhu-PV13]. Evaluation results of the current draft over several test cases in [I-D.ietf-rmcat-eval-test] have been presented at recent IETF meetings [IETF-90][IETF-91]. Evaluation results of the current draft over several test cases in [I-D.ietf-rmcat-wireless-tests] have been presented at [IETF-93]. An open source implementation of NADA as part of a ns-3 module is available at [ns3-rmcat].
The scheme has also been implemented and evaluated in a lab setting as described in [IETF-90]. Preliminary evaluation results of NADA in single-flow and multi-flow scenarios have been presented in [IETF-91].
NADA has been extensively evaluated under various test scenarios, including the collection of test cases specified by [I-D.ietf-rmcat-eval-test] and the subset of WiFi-based test cases in [I-D.ietf-rmcat-wireless-tests]. Additional evaluations have been carried out to characterize how NADA interacts with various active queue management (AQM) schemes such as RED, CoDel, and PIE. Most of these evaluations have been carried out in simulators. A few key test cases have also been evaluated in implementations embedded in video conferencing clients.
Further experiments are suggested for the following scenarios:
This document makes no request of IANA.
The authors would like to thank Randell Jesup, Luca De Cicco, Piers O'Hanlon, Ingemar Johansson, Stefan Holmer, Cesar Ilharco Magalhaes, Safiqul Islam, Michael Welzl, Mirja Kuhlewind, Karen Elisabeth Egede Nielsen, Julius Flohr, Roland Bless, and Andreas Smas for their various valuable review comments and feedback. Thanks to Charles Ganzhorn for contributing to the testbed-based evaluation of NADA during an early stage of its development.
[RFC2119] | Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997. |
[RFC3168] | Ramakrishnan, K., Floyd, S. and D. Black, "The Addition of Explicit Congestion Notification (ECN) to IP", RFC 3168, DOI 10.17487/RFC3168, September 2001. |
[RFC3550] | Schulzrinne, H., Casner, S., Frederick, R. and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550, July 2003. |
[RFC5348] | Floyd, S., Handley, M., Padhye, J. and J. Widmer, "TCP Friendly Rate Control (TFRC): Protocol Specification", RFC 5348, DOI 10.17487/RFC5348, September 2008. |
[RFC6679] | Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P. and K. Carlberg, "Explicit Congestion Notification (ECN) for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679, August 2012. |
NADA can work with different network queue management schemes and does not assume any specific network node operation. As an example, this appendix describes three variants of queue management behavior at the network node, leading to either implicit or explicit congestion signals. It needs to be acknowledged that NADA has not yet been tested with non-probabilistic ECN marking behaviors.
In all three flavors described below, the network queue operates with the simple first-in-first-out (FIFO) principle. There is no need to maintain per-flow state. The system can scale easily with a large number of video flows and at high link capacity.
In a conventional network with drop-tail or RED queues, congestion is inferred from the estimation of end-to-end delay and/or packet loss. Packet drops at the queue are detected at the receiver and contribute to the calculation of the aggregated congestion signal x_curr. No special action is required at the network node.
In this mode, the network node randomly marks the ECN field in the IP packet header following the Random Early Detection (RED) algorithm. Calculation of the marking probability involves the following steps:
   on packet arrival:
       update smoothed queue size q_avg as:
           q_avg = w*q + (1-w)*q_avg

       calculate marking probability p as:

            / 0,                      if q_avg < q_lo;
            |
            |        q_avg - q_lo
       p = <  p_max*--------------,   if q_lo <= q_avg < q_hi;
            |        q_hi - q_lo
            |
            \ 1,                      if q_avg >= q_hi.
Here, q_lo and q_hi correspond to the low and high thresholds of queue occupancy. The maximum marking probability is p_max.
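The piecewise marking probability above can be sketched as follows, with the smoothed queue size q_avg passed in (the function name and parameter order are illustrative):

```python
def red_marking_probability(q_avg, q_lo, q_hi, p_max):
    """RED-style marking probability from the smoothed queue size."""
    if q_avg < q_lo:
        return 0.0           # below the low threshold: never mark
    if q_avg >= q_hi:
        return 1.0           # above the high threshold: always mark
    # In between: probability grows linearly up to p_max.
    return p_max * (q_avg - q_lo) / (q_hi - q_lo)
```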
The ECN marking events will contribute to the calculation of an equivalent delay x_curr at the receiver. No changes are required at the sender.
Advanced network nodes may support random early marking based on a token bucket algorithm originally designed for Pre-Congestion Notification (PCN). In this mode, the explicit congestion notification (ECN) field in the IP header of packets is marked randomly, with the marking probability calculated from the state of the token bucket. The target link utilization is set at 90%; the marking probability is designed to grow linearly with the token bucket depletion when it varies between 1/3 and 2/3 of the full token bucket limit.
Calculation of the marking probability involves the following steps:
   upon packet arrival:
       meter packet against token bucket (r, b);
       update token level b_tk;
       calculate the marking probability as:

            / 0,                      if b-b_tk < b_lo;
            |
            |         b-b_tk-b_lo
       p = <  p_max*--------------,   if b_lo <= b-b_tk < b_hi;
            |          b_hi-b_lo
            |
            \ 1,                      if b-b_tk >= b_hi.
Here, the token bucket lower and upper limits are denoted by b_lo and b_hi, respectively. The parameter b indicates the size of the token bucket. The parameter r is chosen to be below capacity, resulting in slight under-utilization of the link. The maximum marking probability is p_max.
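The token-bucket marking probability can be sketched the same way, where b - b_tk measures how far the bucket has drained (names are illustrative):

```python
def pcn_marking_probability(b, b_tk, b_lo, b_hi, p_max):
    """Token-bucket-based marking probability; b is the bucket size,
    b_tk the current token level."""
    depth = b - b_tk         # bucket depletion
    if depth < b_lo:
        return 0.0           # lightly loaded: never mark
    if depth >= b_hi:
        return 1.0           # heavily loaded: always mark
    # In between: probability grows linearly up to p_max.
    return p_max * (depth - b_lo) / (b_hi - b_lo)
```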
The ECN marking events will contribute to the calculation of an equivalent delay x_curr at the receiver. No changes are required at the sender. The virtual queuing mechanism from the PCN-based marking algorithm will lead to additional benefits such as zero standing queues.