DetNet                                                           N. Finn
Internet-Draft                               Huawei Technologies Co. Ltd
Intended status: Standards Track                         J-Y. Le Boudec
Expires: January 3, 2019                                  E. Mohammadpour
                                                                    EPFL
                                                                B. Varga
                                                               J. Farkas
                                                                Ericsson
                                                            July 2, 2018
DetNet Bounded Latency
draft-finn-detnet-bounded-latency-01
This document presents a parameterized timing model for Deterministic Networking so that existing and future standards can achieve bounded latency and zero congestion loss.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 3, 2019.
Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
The ability for IETF Deterministic Networking (DetNet) or IEEE 802.1 Time-Sensitive Networking (TSN) to provide the DetNet services of bounded latency and zero congestion loss depends upon A) configuring and allocating network resources for the exclusive use of DetNet/TSN flows; B) identifying, in the data plane, the resources to be utilized by any given packet; and C) the detailed behavior of those resources, especially transmission queue selection, so that latency bounds can be reliably assured. Thus, DetNet is an example of the IntServ Guaranteed Quality of Service [RFC2212].
As explained in [I-D.ietf-detnet-architecture], DetNet flows are characterized by 1) a maximum bandwidth, guaranteed either by the transmitter or by strict input metering; and 2) a requirement for a guaranteed worst-case end-to-end latency. That latency guarantee, in turn, provides the opportunity for the network to supply enough buffer space to guarantee zero congestion loss. To be of use to the applications identified in [I-D.ietf-detnet-use-cases], it must be possible to calculate, before the transmission of a DetNet flow commences, both the worst-case end-to-end network latency, and the amount of buffer space required at each hop to ensure against congestion loss.
Rather than defining, in great detail, specific mechanisms to be used to control packet transmission at each output port, this document presents a timing model for sources, destinations, and the network nodes that relay packets. The parameters specified in this model:
Using the model presented in this document, it should be possible for an implementor, user, or standards development organization to select a particular set of QoS algorithms for each device in a DetNet network, and to select a resource reservation algorithm for that network, so that those elements can work together to provide the DetNet service.
This document does not specify any resource reservation protocol or server. It does not describe all of the requirements for that protocol or server. It does describe a set of requirements for resource reservation algorithms and for QoS algorithms that, if met, will enable them to work together.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
The lowercase forms with an initial capital "Must", "Must Not", "Shall", "Shall Not", "Should", "Should Not", "May", and "Optional" in this document are to be interpreted in the sense defined in [RFC2119], but are used where the normative behavior is defined in documents published by SDOs other than the IETF.
This document uses the terms defined in [I-D.ietf-detnet-architecture].
The bounded latency model assumes the use of the following paradigm for provisioning a particular DetNet flow:
This paradigm can be static and/or dynamic, and can be implemented using peer-to-peer protocols or with a central server model. In some situations, backtracking and recursing through this list may be necessary.
Issues such as un-provisioning a DetNet flow in favor of another when resources are scarce are not considered. How the path to be taken by a DetNet flow is chosen is not considered in this document.
[Suggestion: This is the introduction to network calculus. The starting point is a model in which a relay system is a black box.]
[NWF I think that at least some of this will be useful. We won't know until we see what J-Y has to say in Section 4.2. I'm especially interested in whether J-Y thinks that the "output delay" in Figure 1 is useful in determining the number of buffers needed in the next hop. It is possible that we can define the parameters we need without this section.]
In Figure 1 we see a breakdown of the per-hop latency experienced by a packet passing through a relay system, in terms that are suitable for computing both hop-by-hop latency and per-hop buffer requirements.
      DetNet relay node A          DetNet relay node B
   +-------------------+        +-------------------+
   |    Reg.   Queue   |        |    Reg.   Queue   |
   |   +-+-+  +-+-+-+  |        |   +-+-+  +-+-+-+  |
-->+   | | |  | | | |  +------->+   | | |  | | | |  +--->
   |   +-+-+  +-+-+-+  |        |   +-+-+  +-+-+-+  |
   |                   |        |                   |
   +-------------------+        +-------------------+
   |<->|<-->|<---->|<->|<------>|<->|<-->|<---->|<->|<--
   2,3   4     5    6      1    2,3   4     5    6   1  2,3

   1: Output delay      3: Preemption delay   5: Regulation delay
   2: Link delay        4: Processing delay   6: Queuing delay
Figure 1: Timing model for DetNet or TSN
In Figure 1, we see two DetNet relay nodes (typically, bridges or routers), with a wired link between them. In this model, the only queues we deal with explicitly are attached to the output port; other queues are modeled as variations in the other delay times. (E.g., an input queue could be modeled as either a variation in the link delay (2) or the processing delay (4).) There are six delays that a packet can experience from hop to hop.
Not shown in Figure 1 are the other output queues that we presume are also attached to the same output port as the queue shown, and against which that queue competes for transmission opportunities.
The initial and final measurement point in this analysis (that is, the definition of a "hop") is the point at which a packet is selected for output. In general, any queue selection method that is suitable for use in a DetNet network includes a detailed specification as to exactly when packets are selected for transmission. If all of the delays 1-4 are constant, then any variation in the time at which packets are inserted into a queue depends entirely on the timing of packet selection in the previous node. If the delays 1-4 are not constant, then additional buffers are required in the queue to absorb these variations. Thus:
End-to-end latency bounds can be computed using the delay model in Section 4.3. Here it is important to be aware that for several queuing mechanisms, the worst-case end-to-end delay is less than the sum of the per-hop worst-case delays. An end-to-end latency bound for one DetNet flow can be computed as

   end_to_end_latency_bound = non_queuing_latency + queuing_latency
The two terms in the above formula are computed as follows. First, at the h-th hop along the path of this DetNet flow, obtain an upper bound per-hop_non_queuing_latency[h] on the sum of the delays 1, 2, 3 and 4 of Figure 1. These upper bounds are expected to depend on the specific technology of the node at the h-th hop, but not on the T-SPEC of this DetNet flow. Then set non_queuing_latency = the sum of per-hop_non_queuing_latency[h] over all hops h.
Second, compute queuing_latency as an upper bound to the sum of the queuing delays along the path. The value of queuing_latency depends on the T-SPEC of this flow and possibly of other flows in the network, as well as the specifics of the queuing mechanisms deployed along the path of this flow.
For several queuing mechanisms, queuing_latency is less than the sum of upper bounds on the queuing delays (5,6) at every hop. Section 5.1 gives examples of such practical computations.
For other queuing mechanisms, the only available value of queuing_latency is the sum of the per-hop queuing delay bounds. In such cases, the computation of per-hop queuing delay bounds must account for the fact that the T-SPEC of a DetNet flow is no longer satisfied at the ingress of a hop, since burstiness increases as a flow traverses DetNet nodes.
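As an illustration only, the following Python sketch computes a bound of this form from assumed inputs; the function and parameter names are invented for the example, and a real computation would take the per-hop bounds and the queuing latency bound from the specific QoS and resource reservation algorithms in use.

   def end_to_end_latency_bound(per_hop_non_queuing_latency, queuing_latency):
       # per_hop_non_queuing_latency: upper bounds on the sum of delays
       # 1, 2, 3 and 4 of Figure 1, one entry per hop h.
       # queuing_latency: upper bound on the sum of the queuing delays
       # (5, 6) along the path; for some queuing mechanisms this is
       # tighter than the sum of per-hop queuing delay bounds.
       non_queuing_latency = sum(per_hop_non_queuing_latency)
       return non_queuing_latency + queuing_latency

   # Example: three hops with 10 microseconds of non-queuing delay each
   # and a 50 microsecond bound on the total queuing delay.
   bound = end_to_end_latency_bound([10e-6, 10e-6, 10e-6], 50e-6)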
[[ JYLB: THIS IS WHERE DETAILS OF END-TO-END LATENCY COMPUTATION ARE GIVEN FOR PER-FLOW QUEUING]]
Figure 2 shows an example of a network with 5 nodes, each of which uses the queuing model of Section 7.3. An end-to-end delay bound for flow f of a given AVB class (A or B), traversing from node 1 to node 5, is calculated as follows:

   end_to_end_delay_bound = C12 + C23 + C34 + S4
In the above formula, Cij is a bound on the aggregate response time of the AVB FIFO queue with CBS (Credit-Based Shaper) in node i and the interleaved regulator of node j, and S4 is a bound on the response time of the AVB FIFO queue with CBS in node 4 for flow f. In fact, using the delay definitions in Section 4.3, Cij is a bound on the sum of the delays 1, 2, 3, 6 of node i and 4, 5 of node j. Similarly, S4 is a bound on the sum of the delays 1, 2, 3, 6 of node 4. The details of the calculation of these response time bounds can be found in [TSNwithATS].
                     f
       ----------------------------->
    +---+   +---+   +---+   +---+   +---+
    | 1 |---| 2 |---| 3 |---| 4 |---| 5 |
    +---+   +---+   +---+   +---+   +---+
       \__C12_/\__C23_/\__C34_/\_S4_/
Figure 2: End-to-end latency computation example
REMARK: The end-to-end delay bound calculated this way is a much tighter upper bound than the one obtained by adding the delay bounds of each node in the path of a flow [TSNwithATS].
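As an illustration of the computation for the example of Figure 2, the following Python sketch sums the pairwise bounds; the numeric values are placeholders and are not taken from [TSNwithATS].

   def example_end_to_end_bound(c12, c23, c34, s4):
       # Each Cij bounds delays 1, 2, 3, 6 of node i plus 4, 5 of node j;
       # S4 bounds delays 1, 2, 3, 6 of node 4, so the sum covers the
       # whole path of flow f from node 1 to node 5.
       return c12 + c23 + c34 + s4

   bound_f = example_end_to_end_bound(c12=120e-6, c23=130e-6,
                                      c34=125e-6, s4=80e-6)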
When the input rate to an output queue exceeds the output rate for a sufficient length of time, the queue must overflow. This is congestion loss, and this is what deterministic networking seeks to avoid.
To avoid congestion losses, an upper bound on the backlog present in the queue of Figure 1 must be computed during path computation. This bound depends on the set of flows that use this queue, the details of the specific queuing mechanism and an upper bound on the processing delay (4). The queue must contain the packet in transmission plus all other packets that are waiting to be selected for output.
A conservative backlog bound, that applies to all systems, can be derived as follows.
The backlog bound is counted in data units (bytes, or words of multiple bytes) that are relevant for buffer allocation. For every class, we need one buffer space for the packet in transmission, plus space for the packets that are waiting to be selected for output. Excluding transmission and preemption times, a packet waits in the queue, from the reception of its last bit, for a duration equal to the processing delay (4) plus the queuing delays (5,6).
Let
Then a bound on the backlog of traffic of all classes in the queue at this output port is
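As an illustration of a bound of this general shape, the following Python sketch reserves one maximum-size packet per class for the packet in transmission, plus the data that can arrive from all inputs while a packet waits for the processing delay (4) and the queuing delays (5, 6); the parameter names and the exact form are assumptions made for the example, not the normative bound.

   def conservative_backlog_bound(max_packet_per_class, total_input_rate,
                                  processing_delay_bound, queuing_delay_bound):
       # max_packet_per_class: bytes, one entry per class using this port
       # total_input_rate: bytes per second arriving from all input ports
       # delay bounds: seconds, upper bounds on delay 4 and delays (5, 6)
       in_transmission = sum(max_packet_per_class.values())
       waiting = total_input_rate * (processing_delay_bound
                                     + queuing_delay_bound)
       return in_transmission + waiting

   # Example: two classes with 1500-byte packets, 500 Mbyte/s aggregate
   # input, 10 microsecond processing bound, 100 microsecond queuing bound.
   bound_bytes = conservative_backlog_bound({"A": 1500, "B": 1500},
                                            500e6, 10e-6, 100e-6)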
[[ JYLB: THIS IS WHERE DETAILS OF END-TO-END LATENCY COMPUTATION ARE GIVEN FOR PER-FLOW QUEUING AND FOR TSN WITH ATS]]
Sophisticated QoS mechanisms are available in Layer 3 (L3); see, e.g., [RFC7806] for an overview. In general, we assume that "Layer 3" queues, shapers, meters, etc., are instantiated hierarchically above the "Layer 2" queuing mechanisms, among which packets compete for opportunities to be transmitted on a physical (or sometimes, logical) medium. These "Layer 2 queuing mechanisms" are not the province solely of bridges; they are an essential part of any DetNet relay node. As illustrated by numerous implementation examples, some of the "Layer 3" mechanisms described in documents such as [RFC7806] are often integrated, in an implementation, with the "Layer 2" mechanisms also implemented in the same system. An integrated model is needed in order to successfully predict the interactions among the different queuing mechanisms needed in a network carrying both DetNet flows and non-DetNet flows.
Figure 3 shows the (very simple) model for the flow of packets through the queues of an IEEE 802.1Q bridge. Packets are assigned to a class of service. The classes of service are mapped to some number of physical FIFO queues. IEEE 802.1Q allows a maximum of 8 classes of service, but it is more common to implement 2 or 4 queues on most ports.
                     |
      +--------------V---------------+
      | Class of Service Assignment  |
      +--+-------+---------------+---+
         |       |               |
      +--V--+ +--V--+         +--V--+
      |Class| |Class|         |Class|
      |  0  | |  1  |  . . .  |  n  |
      |queue| |queue|         |queue|
      +--+--+ +--+--+         +--+--+
         |       |               |
      +--V-------V---------------V--+
      |    Transmission selection   |
      +--------------+--------------+
                     |
                     V
Figure 3: IEEE 802.1Q Queuing Model: Data flow
Some relevant mechanisms are hidden in this figure, and are performed in the "Class n queue" box:
The Class of Service Assignment function can be quite complex, since the introduction of [IEEE802.1Qci]. In addition to the Layer 2 priority expressed in the 802.1Q VLAN tag, a bridge can utilize any of the following information to assign a packet to a particular class of service (queue):
The "Transmission selection" function decides which queue is to transfer its oldest packet to the output port when a transmission opportunity arises.
Figure 3 must be modified if the output port supports preemption ([IEEE8021Qbu] and [IEEE8023br]). This modification is shown in Figure 4.
                                     |
   +------------------------------V------------------------------+
   |                 Class of Service Assignment                 |
   +--+-------+-------+-------+-------+-------+-------+-------+--+
      |       |       |       |       |       |       |       |
   +--V--+ +--V--+ +--V--+ +--V--+ +--V--+ +--V--+ +--V--+ +--V--+
   |Class| |Class| |Class| |Class| |Class| |Class| |Class| |Class|
   |  a  | |  b  | |  c  | |  d  | |  e  | |  f  | |  g  | |  h  |
   |queue| |queue| |queue| |queue| |queue| |queue| |queue| |queue|
   +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
      |       |       |       +-+     |       |       |       |
      |       |       |         |     |       |       |       |
   +--V-------V-------V------+ +V-----V-------V-------V-------V--+
   | Interrupted xmit select | |     Preempting xmit select      | 802.1
   +-------------+-----------+ +----------------+----------------+
                 |                               |           ======
   +-------------V-----------+ +----------------V----------------+
   |     Preemptible MAC     | |           Express MAC           | 802.3
   +--------+----------------+ +----------------+----------------+
            |                                   |
   +--------V-----------------------------------V----------------+
   |                      MAC merge sublayer                      |
   +--------------------------+----------------------------------+
                              |
   +--------------------------V----------------------------------+
   |                 PHY (unaware of preemption)                  |
   +--------------------------+----------------------------------+
                              |
                              V
Figure 4: IEEE 802.1Q Queuing Model: Data flow with preemption
From Figure 4, we can see that, in the IEEE 802 model, the preemption feature is modeled as consisting of two MAC/PHY stacks, one for packets that can be interrupted, and one for packets that can interrupt the interruptible packets. The Class of Service (queue) determines which packets are which. In Figure 4, the classes of service are marked "a, b, ..." instead of with numbers, in order to avoid any implication about which numeric Layer 2 priority values correspond to preemptible or preempting queues. Although it shows three queues going to the preemptible MAC/PHY, any assignment is possible.
In Figure 5, we expand the "Transmission selection" function of Figure 4.
Figure 5 does NOT show the data path. It shows an example of a configuration of the IEEE 802.1Q transmission selection box shown in Figure 3 and Figure 4. Each queue m presents a "Class m Ready" signal. These signals go through various logic, filters, and state machines, until a single queue's "not empty" signal is chosen for presentation to the underlying MAC/PHY. When the MAC/PHY is ready to take another output packet, then a packet is selected from the one queue (if any) whose signal manages to pass all the way through the transmission selection function.
   +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
   |Class| |Class| |Class| |Class| |Class| |Class| |Class| |Class|
   |  1  | |  0  | |  4  | |  5  | |  6  | |  7  | |  2  | |  3  |
   |Ready| |Ready| |Ready| |Ready| |Ready| |Ready| |Ready| |Ready|
   +--+--+ +--+--+ +--+--+ +-XXX-+ +--+--+ +--+--+ +--+--+ +--+--+
      |       |       |               |       |       |       |
      |    +--V--+ +--V--+ +--+--+ +--V--+    |    +--V--+ +--V--+
      |    |Prio.| |Prio.| |Prio.| |Prio.|    |    |Sha- | |Sha- |
      |    |  0  | |  4  | |  5  | |  6  |    |    | per | | per |
      |    | PFC | | PFC | | PFC | | PFC |    |    |  A  | |  B  |
      |    +--+--+ +--+--+ +-XXX-+ +-XXX-+    |    +--+--+ +-XXX-+
      |       |       |                       |       |
   +--V--+ +--V--+ +--V--+ +--+--+ +--+--+ +--V--+ +--V--+ +--+--+
   |Time | |Time | |Time | |Time | |Time | |Time | |Time | |Time |
   | Gate| | Gate| | Gate| | Gate| | Gate| | Gate| | Gate| | Gate|
   |  1  | |  0  | |  4  | |  5  | |  6  | |  7  | |  2  | |  3  |
   +--+--+ +-XXX-+ +--+--+ +--+--+ +-XXX-+ +--+--+ +-XXX-+ +--+--+
      |               |                       |
   +--V-------+-------V-------+--+            |
   |802.1Q Enhanced Transmission |            |
   | Selection (ETS) = Weighted  |            |
   |    Fair Queuing (WFQ)       |            |
   +--+-------+------XXX------+--+            |
      |                                       |
   +--V-------+-------+-------+-------+-------V-------+-------+--+
   |          Strict Priority selection (rightmost first)        |
   +-XXX------+-------+-------+-------+-------+-------+-------+--+
                                  |
                                  V
Figure 5: 802.1Q Transmission Selection
The following explanatory notes apply to Figure 5
Consider a network with a set of nodes (switches and hosts) along with a set of flows between hosts. Hosts are sources or destinations of flows. There are four types of flows, namely, control-data traffic (CDT), class A, class B, and best effort (BE), in decreasing order of priority. Flows of classes A and B are together referred to as AVB flows. A subset of TSN functions is assumed, as described next.
It is also assumed that contention occurs only at the output port of a TSN node. Each node output port performs per-class scheduling with eight classes: one for CDT, one for class A traffic, one for class B traffic, and five for BE traffic, denoted BE0-BE4 (following the TSN standard). In addition, each node output port also performs per-flow regulation for AVB flows using an interleaved regulator (IR), called Asynchronous Traffic Shaper (ATS) in TSN. Thus, at each output port of a node, there is one interleaved regulator per input port and per class. The detailed picture of the scheduling and regulation architecture at a node output port is given in Figure 6. The packets received at a node input port for a given class are enqueued in the respective interleaved regulator at the output port. Then, the packets from all the flows, including CDT and BE flows, are enqueued in a class-based FIFO system (CBFS) [TSNwithATS].
         +--+   +--+ +--+   +--+
         |  |   |  | |  |   |  |
         |IR|   |IR| |IR|   |IR|
         |  |   |  | |  |   |  |
         +-++XXX++-+ +-++XXX++-+
           |     |     |     |
 +-----+ +-v-XXX-v-+ +-v-XXX-v-+ +-----+ +-----+ +-----+ +-----+ +-----+
 |     | |         | |         | |Class| |Class| |Class| |Class| |Class|
 | CDT | | Class A | | Class B | | BE4 | | BE3 | | BE2 | | BE1 | | BE0 |
 |     | |         | |         | |     | |     | |     | |     | |     |
 +--+--+ +----+----+ +----+----+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+
    |         |           |         |       |       |       |       |
    |       +-v-+       +-v-+       |       |       |       |       |
    |       |CBS|       |CBS|       |       |       |       |       |
    |       +-+-+       +-+-+       |       |       |       |       |
    |         |           |         |       |       |       |       |
 +--v---------v-----------v---------v-------V-------v-------v-------v--+
 |                       Strict Priority selection                     |
 +----------------------------------+----------------------------------+
                                    |
                                    V
Figure 6: Architecture of one TSN node output port with interleaved regulators (IRs)
The CBFS includes two CBS subsystems, one for each of the classes A and B. The CBS serves a packet from a class according to the available credit for that class. The credit for each class A or B increases based on the idle slope and decreases based on the send slope, both of which are parameters of the CBS. The CDT and BE0-BE4 flows in the CBFS are served by separate FIFO subsystems. Then, packets from all flows are served by a transmission selection subsystem that serves packets from each class based on its priority. All subsystems are non-preemptive. Guarantees for AVB traffic can be provided only if the CDT traffic is bounded; it is assumed that the CDT traffic has an affine arrival curve r t + b in each node, i.e., the number of bits entering a node within a time interval t is bounded by r t + b.
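As an illustration, the following Python sketch shows the credit evolution assumed in this description; it is not the normative CBS state machine of IEEE 802.1Q, and the parameter names are chosen for the example.

   def cbs_update_credit(credit, dt, transmitting, idle_slope, send_slope):
       # credit: current credit of the class (bits)
       # dt: elapsed time (seconds); idle_slope > 0, send_slope < 0 (bits/s)
       if transmitting:
           return credit + send_slope * dt   # credit decreases while sending
       return credit + idle_slope * dt       # credit increases while waiting

   def cbs_may_start_transmission(credit):
       # a queued class A or B packet is served only with non-negative credit
       return credit >= 0.0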
[[ EM: THE FOLLOWING PARAGRAPH SHOULD BE ALIGNED WITH Section 8.2. ]]
Additionally, it is assumed that flows are regulated at their source, according to either leaky bucket (LB) or length rate quotient (LRQ). The LB-type regulation forces flow f to conform to an arrival curve r_f t + b_f. The LRQ-type regulation with rate r_f ensures that the time separation between two consecutive packets of sizes l_n and l_n+1 is at least l_n/r_f. Note that if flow f is LRQ-regulated, it satisfies an arrival curve constraint r_f t + L_f, where L_f is its maximum packet size (but the converse may not hold); for an LRQ-regulated flow, b_f = L_f. At the source hosts, the traffic satisfies its regulation constraint, i.e., the delay due to the interleaved regulator at hosts is ignored.
At each switch implementing an interleaved regulator, packets of multiple flows are processed in one FIFO queue; the packet at the head of the queue is regulated based on its regulation constraints; it is released at the earliest time at which this is possible without violating the constraint. The regulation type and parameters for a flow are the same at its source and at all switches along its path.
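As an illustration, the following Python sketch shows the head-of-line behavior of such an interleaved regulator with LRQ-regulated flows; it is an assumption-laden example, not the normative ATS algorithm, and the class and method names are invented for the example.

   class InterleavedRegulator:
       def __init__(self, rate_of):
           self.fifo = []            # (flow_id, length, arrival_time)
           self.last_release = {}    # flow_id -> (release_time, length)
           self.rate_of = rate_of    # flow_id -> LRQ rate r_f

       def eligibility_time(self, flow_id, arrival_time):
           # LRQ rule: at least last_length / r_f after the flow's last release
           if flow_id not in self.last_release:
               return arrival_time
           t_last, l_last = self.last_release[flow_id]
           return max(arrival_time, t_last + l_last / self.rate_of[flow_id])

       def release_head(self, now):
           # Only the packet at the head of the shared FIFO is examined;
           # packets behind it wait even if they would already be eligible.
           if not self.fifo:
               return None
           flow_id, length, arrival_time = self.fifo[0]
           if now >= self.eligibility_time(flow_id, arrival_time):
               self.fifo.pop(0)
               self.last_release[flow_id] = (now, length)
               return flow_id, length
           return None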
[[NWF More sections that discuss specific models]]
[[NWF This section talks about the parameters that must be passed hop-by-hop (T-SPEC? F-SPEC?) by a resource reservation protocol.]]
[I-D.ietf-detnet-architecture]
           Finn, N. and P. Thubert, "Deterministic Networking Architecture", Internet-Draft draft-ietf-detnet-architecture-00, September 2016.

[I-D.ietf-detnet-dp-alt]
           Korhonen, J., Farkas, J., Mirsky, G., Thubert, P., Zhuangyan, Z., and L. Berger, "DetNet Data Plane Protocol and Solution Alternatives", Internet-Draft draft-ietf-detnet-dp-alt-00, October 2016.

[I-D.ietf-detnet-use-cases]
           Grossman, E., "Deterministic Networking Use Cases", Internet-Draft draft-ietf-detnet-use-cases-17, June 2018.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC2212]  Shenker, S., Partridge, C., and R. Guerin, "Specification of Guaranteed Quality of Service", RFC 2212, DOI 10.17487/RFC2212, September 1997.

[RFC4364]  Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks (VPNs)", RFC 4364, DOI 10.17487/RFC4364, February 2006.

[RFC6658]  Bryant, S., Martini, L., Swallow, G., and A. Malis, "Packet Pseudowire Encapsulation over an MPLS PSN", RFC 6658, DOI 10.17487/RFC6658, July 2012.

[RFC7806]  Baker, F. and R. Pan, "On Queuing, Marking, and Dropping", RFC 7806, DOI 10.17487/RFC7806, April 2016.