Transport Services (tsv)                                  B. Briscoe, Ed.
Internet-Draft                                        Simula Research Lab
Intended status: Informational                             K. De Schepper
Expires: December 5, 2016                                  Nokia Bell Labs
                                                          M. Bagnulo Braun
                                          Universidad Carlos III de Madrid
                                                              June 3, 2016
Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service: Problem Statement
draft-briscoe-tsvwg-aqm-tcpm-rmcat-l4s-problem-00
This document motivates a new service that the Internet could provide to eventually replace best efforts for all traffic: Low Latency, Low Loss, Scalable throughput (L4S). It is becoming common for all (or most) of the applications a user runs at any one time to require low latency, but the only solution the IETF can offer for ultra-low queuing latency is Diffserv, which only offers low latency for some packets at the expense of others. Diffserv has also proved hard to deploy widely end-to-end.
In contrast, a zero-config incrementally deployable solution has been demonstrated that keeps average queuing delay under a millisecond for all applications even under very heavy load, and it keeps congestion loss to zero. At the same time it solves the long-running problem with the scalability of TCP throughput. Even with high-capacity broadband access, the resulting performance under load is remarkably and consistently improved for applications such as interactive video, conversational video, voice, Web, gaming, instant messaging, remote desktop and cloud-based apps. This document explains the underlying problems that have been preventing the Internet from enjoying such performance improvements. It then outlines the parts necessary for a solution and the steps that will be needed to standardize them.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on December 5, 2016.
Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
It is increasingly common for all of a user's applications at any one time to require low delay: interactive Web, Web services, voice, conversational video, interactive video, instant messaging, online gaming, remote desktop and cloud-based applications. In the last decade or so, much has been done to reduce propagation delay by placing caches or servers closer to users. However, queuing remains a major, albeit intermittent, component of latency. Low loss is also important because, for interactive applications, losses translate into delays.
It has been demonstrated that, once access network bit rate reaches levels now common in the developed world, increasing capacity offers diminishing returns if latency (delay) is not addressed. Differentiated services (Diffserv) offers Expedited Forwarding [RFC3246] for some packets at the expense of others, but this is not applicable when all (or most) of a user's applications require low latency.
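As a rough, hedged illustration of this diminishing-returns effect (the numbers and the simple slow-start model below are illustrative assumptions, not results from the cited measurement studies), consider fetching a small Web object:

   # Back-of-envelope sketch (Python), assuming an idealised slow-start
   # with an initial window of 10 segments and no queuing delay.
   import math

   def fetch_time(size_bytes, rate_bps, rtt_s, init_cwnd=10, mss=1448):
       pkts = math.ceil(size_bytes / mss)
       rounds, sent, cwnd = 0, 0, init_cwnd
       while sent < pkts:            # count slow-start rounds
           sent += cwnd
           cwnd *= 2
           rounds += 1
       return rounds * rtt_s + size_bytes * 8 / rate_bps

   for rate_mbps in (40, 400):       # ten-fold capacity increase
       t = fetch_time(100_000, rate_mbps * 1e6, rtt_s=0.030)
       print(rate_mbps, "Mb/s:", round(t * 1000), "ms")
   # 40 Mb/s -> ~110 ms; 400 Mb/s -> ~92 ms: ten times the capacity
   # saves only ~18 ms, because the 3 round trips (90 ms) dominate.

In this sketch, every millisecond of queuing delay adds directly to each of those round trips, which is why removing queuing delay matters more than adding yet more capacity.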
Therefore, the goal is an Internet service with ultra-Low queueing Latency, ultra-Low Loss and Scalable throughput (L4S) - for all traffic. Having motivated the goal of 'L4S for all', this document enumerates the problems that have to be overcome to reach it.
It must be said that queuing delay only degrades performance infrequently [Hohlfeld14]. It only occurs when a large enough capacity-seeking (e.g. TCP) flow is running alongside the user's traffic in the bottleneck link, which is typically in the access network, or when the low latency application is itself a large capacity-seeking flow (e.g. interactive video). At these times, the performance improvement must be so remarkable that network operators will be motivated to deploy it.
Active Queue Management (AQM) is part of the solution to queuing under load. AQM improves performance for all traffic, but there is a limit to how much queuing delay can be reduced by changing the network alone, without addressing the root of the problem.
The root of the problem is the presence of standard TCP congestion control (Reno [RFC5681]) or compatible variants (e.g. TCP Cubic [I-D.ietf-tcpm-cubic]). We shall call this family of congestion controls 'Classic' TCP. It has been demonstrated that if the sending host replaces Classic TCP with a 'Scalable' alternative, when a suitable AQM is deployed in the network the performance under load of all the above interactive applications can be stunningly improved - even in comparison to a state-of-the-art AQM such as fq_CoDel [I-D.ietf-aqm-fq-codel] or PIE [I-D.ietf-aqm-pie].
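The essential difference can be sketched as follows (a minimal illustration in Python of the two styles of congestion response, assuming simplified DCTCP-like behaviour; it is not a specification of any of the algorithms named above):

   # 'Classic' response: one large multiplicative decrease per congestion
   # event, so at high rates events must become very rare and the sawtooth
   # (and hence the queue needed to absorb it) grows.
   def classic_on_congestion_event(cwnd):
       return max(cwnd * 0.5, 2.0)

   # 'Scalable' response (DCTCP-like): track the fraction of ECN-marked
   # packets in a moving average 'alpha' and reduce cwnd in proportion,
   # once per RTT, keeping the signalling frequency roughly constant as
   # the flow rate scales.
   def scalable_per_rtt(cwnd, alpha, marked_fraction, g=1.0 / 16):
       alpha = (1 - g) * alpha + g * marked_fraction
       cwnd = max(cwnd * (1 - alpha / 2), 2.0)
       return cwnd, alpha

With the scalable response, the network can hold the queue extremely shallow and signal congestion with frequent, fine-grained ECN marks, instead of building a deep queue and delivering rare, drastic loss or mark events.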
It has been convincingly demonstrated [DCttH15] that it is possible to deploy such an L4S service alongside the existing best efforts service so that all of a user's applications can shift to it when their stack is updated. Access networks are typically designed with one link as the bottleneck for each site (which might be a home, small enterprise or mobile device), so deployment at a single node should give nearly all the benefit. Although the main incremental deployment problem has been solved, and the remaining work seems straightforward, there may need to be changes in approach during the process of engineering a complete solution.
There are three main parts to the L4S approach (illustrated in Fig {ToDo: ASCII art of slide 9 from https://riteproject.files.wordpress.com/2015/10/1604-l4s-bar-bof.pdf}):
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. In this document, these words will appear with that interpretation only when in ALL CAPS. Lower case uses of these words are not to be interpreted as carrying RFC-2119 significance.
Then specific interworking aspects of the following three component parts will need to be defined (a rough sketch of how they might combine at a bottleneck follows the list below):
{ToDo: /Why/ the various elements are necessary:}
ECN rather than drop
Packet identifier (pretty obvious why)
Scalable congestion notification (host behaviour)
Semi-permeable membrane (network behaviour)
{We will probably move some of the text in the bullets under "The Technology Problem" to here, e.g. why you need capacity shared across the semi-permeable membrane.}
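As a rough sketch of how these parts might combine at a bottleneck (loosely modelled on the coupled dual-queue idea in [I-D.briscoe-aqm-dualq-coupled] and the ECT(1) identifier proposed in [I-D.briscoe-tsvwg-ecn-l4s-id]; the parameter values and helper structure are illustrative assumptions, not the normative mechanism):

   # Illustrative Python sketch of a 'semi-permeable membrane': L4S
   # packets get a shallow queue with immediate ECN marking, while the
   # marking probability is coupled to the Classic AQM so that capacity
   # is still shared across the membrane.
   from random import random
   from collections import deque

   K = 2.0                              # illustrative coupling factor

   def classify(ecn):
       # Packet identifier: e.g. the ECT(1) codepoint
       return "L4S" if ecn == "ECT(1)" else "Classic"

   def enqueue(pkt, l4s_q, classic_q, p_base):
       # p_base: base congestion level, e.g. from a PI controller on the
       # Classic queue (hypothetical input for this sketch)
       if classify(pkt["ecn"]) == "L4S":
           if random() < min(K * p_base, 1.0):
               pkt["ecn"] = "CE"         # ECN rather than drop
           l4s_q.append(pkt)
           return "L4S"
       if random() < p_base ** 2:        # squared coupling balances a
           return "drop"                 # 1/p rate against a 1/sqrt(p) rate
       classic_q.append(pkt)
       return "Classic"

   l4s_q, classic_q = deque(), deque()
   print(enqueue({"ecn": "ECT(1)"}, l4s_q, classic_q, p_base=0.05))

A scheduler (not shown) would then give the L4S queue enough priority to keep its delay low, while the coupling prevents either class of traffic from starving the other.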
All the following approaches address some part of the same problem space as L4S. In each case, it is shown that L4S complements them or improves on them, rather than being a mutually exclusive alternative:
A transport layer that solves the current latency issues will provide new service, product and application opportunities.
If applications can rely on minimal queues in the network, they can focus on reducing their own latency by minimizing only the application send queue. The following existing applications will immediately experience a better quality of experience in the best efforts class:
The lower transport layer latency will also allow more interactive application functions to be offloaded to the cloud. If last-minute interactions have to be handled locally, more data must be sent over the link; when all interactive processing can be done in the cloud, only the information to be rendered to the end user needs to be sent. This will enable applications such as:
Lower network layers can then also be further optimized for low latency and stable throughput. Today this is not cost-efficient, because the largest part of the traffic (Classic best efforts) needs large queues anyway (up to several hundreds of milliseconds) for Classic congestion control to work correctly. While the technology to support low latency with reliable throughput (even over mobile links) is known and feasible, it is not currently considered economically relevant, because best efforts traffic can absorb any burst, delay or throughput variation without end-users experiencing any difference from normal day-to-day operation, due to the limitations of Classic congestion control.
{ToDo: Just bullets below - text to be added by those interested in various use-cases}
Different types of access network: DSL, cable, mobile
The challenges and opportunities with radio links: cellular, Wifi
Private networks of heterogeneous data centres (DC interconnect, multi-tenant cloud, etc)
Different types of transport/app: elastic (TCP/SCTP); real-time (RTP, RMCAT); query (DNS/LDAP).
Avoiding reliance on middleboxes to enable encryption/privacy (because the L4S approach does not look deeper than IP in the network).
This specification contains no IANA considerations.
Because the L4S service can serve all traffic that is using the capacity of a link, it should not be necessary to police access to the L4S service. In contrast, Diffserv has to use traffic policers to limit how much traffic can access each service, otherwise it does not work. In turn, traffic policers require traffic contracts between users and networks and between networks. Because L4S will lack all this management complexity, it is more likely to work end-to-end.
During early deployment (and perhaps always), some networks will not offer the L4S service. These networks do not need to police or re-mark L4S traffic - they just forward it unchanged as best efforts traffic, as they would already forward traffic with ECT(1) today. At a bottleneck, such networks will introduce some queuing and dropping. When a scalable congestion control detects a drop, it has to respond as if it were a Classic congestion control, and there will then be no interworking problems.
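A minimal sketch of such a fallback, extending the scalable response sketched earlier (assumed behaviour for illustration; the precise rules are left to the transport specifications):

   # On loss, respond like a Classic sender; otherwise keep the scalable
   # proportional response to ECN marks (illustrative only).
   def per_rtt_update(cwnd, alpha, marked_fraction, loss_detected,
                      g=1.0 / 16):
       if loss_detected:
           # A drop implies a non-L4S bottleneck or severe overload:
           # fall back to a Reno/Cubic-like multiplicative decrease.
           return max(cwnd * 0.5, 2.0), alpha
       alpha = (1 - g) * alpha + g * marked_fraction
       return max(cwnd * (1 - alpha / 2), 2.0), alpha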
Certain network operators might choose to restrict access to the L4S class, perhaps only to customers who have paid a premium. In the packet classifier, they could identify such customers using some other field (e.g. source address range), and just ignore the L4S identifier for non-paying customers. This will ensure that the L4S identifier survives end-to-end even though the service does not have to be supported at every hop. Such arrangements would only require simple registered/not-registered packet classification, rather than the complex application-specific traffic contracts of Diffserv.
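For illustration, such a classifier might look like the sketch below (the address range and the notion of a static registry are purely hypothetical examples for this document, not part of any specification):

   # Registered/not-registered check in front of the L4S classifier.
   from ipaddress import ip_address, ip_network

   REGISTERED = [ip_network("192.0.2.0/25")]   # hypothetical premium range

   def select_treatment(src_ip, ecn):
       l4s_marked = (ecn == "ECT(1)")          # identifier survives end-to-end
       registered = any(ip_address(src_ip) in n for n in REGISTERED)
       if l4s_marked and registered:
           return "L4S queue"
       return "Classic queue"                  # identifier ignored, not re-marked

   print(select_treatment("192.0.2.10", "ECT(1)"))    # -> L4S queue
   print(select_treatment("198.51.100.7", "ECT(1)"))  # -> Classic queue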
The L4S service does rely on self-constraint - not in terms of limiting capacity usage, but in terms of limiting burstiness. It is believed that standardisation of dynamic behaviour (cf. TCP slow-start) and self-interest will be sufficient to prevent transports from sending excessive bursts of L4S traffic, given the application's own latency will suffer most from such behaviour.
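For example (a simplified sketch of one well-known way to limit burstiness, namely pacing the congestion window over the RTT; real stacks would typically do this in the kernel rather than with application-level sleeps):

   import time

   def paced_send(send_one, num_packets, cwnd_pkts, rtt_s):
       # Spread one congestion window over one RTT instead of sending it
       # back-to-back at line rate.
       interval = rtt_s / cwnd_pkts
       for seq in range(num_packets):
           send_one(seq)
           time.sleep(interval)

   # toy usage: cwnd of 10 packets at 30 ms RTT -> one packet every 3 ms
   paced_send(lambda seq: None, num_packets=20, cwnd_pkts=10, rtt_s=0.030)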
Whether burst policing becomes necessary remains to be seen. Without it, there will be potential for attacks on the low latency of the L4S service. However it may only be necessary to apply such policing reactively, e.g. punitively targeted at any deployments of new bursty malware.
{ToDo: Paraphrase discussion from ecn-l4s-id}
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997.

[RFC3168]  Ramakrishnan, K., Floyd, S. and D. Black, "The Addition of
           Explicit Congestion Notification (ECN) to IP", RFC 3168,
           DOI 10.17487/RFC3168, September 2001.

[RFC4774]  Floyd, S., "Specifying Alternate Semantics for the Explicit
           Congestion Notification (ECN) Field", BCP 124, RFC 4774,
           DOI 10.17487/RFC4774, November 2006.

[RFC6679]  Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P.
           and K. Carlberg, "Explicit Congestion Notification (ECN)
           for RTP over UDP", RFC 6679, DOI 10.17487/RFC6679,
           August 2012.
This list of requirements was produced at an ad hoc meeting during IETF-93 in Prague (July 2015). The list prioritised features that would need to be added to DCTCP to make it safe for use on the public Internet alongside existing non-DCTCP traffic. It also includes features to improve the performance of DCTCP in the wider range of conditions found on the public Internet.
The table is too wide for the ASCII draft format, so it has been split into two, with a common column of row index numbers on the left.
 #   | Requirement                       | Reference
-----+-----------------------------------+--------------------------------
 0   | ARCHITECTURE                      |
 1   | L4S IDENTIFIER                    | [I-D.briscoe-tsvwg-ecn-l4s-id]
 2   | DUAL QUEUE AQM                    | [I-D.briscoe-aqm-dualq-coupled]
     | SCALABLE TRANSPORT SAFETY         |
     | ADDITIONS                         |
 3-1 | Fall back to Reno/Cubic on loss   | [I-D.ietf-tcpm-dctcp]
 3-2 | TCP ECN Feedback                  | [I-D.ietf-tcpm-accurate-ecn]
 3-4 | Scaling TCP's Congestion Window   |
     | for Small Round Trip Times        |
 3-5 | Reduce RTT-dependence             |
 3-6 | Smooth ECN feedback over own RTT  |
 3-7 | Fall back to Reno/Cubic if        |
     | classic ECN bottleneck detected   |
     | SCALABLE TRANSPORT PERFORMANCE    |
     | ENHANCEMENTS                      |
 3-8 | Faster-than-additive increase     |
 3-9 | Less drastic exit from slow-start |
 #   | WG          | TCP | DCTCP | DCTCP-bis | TCP    | SCTP   | RMCAT
     |             |     |       |           | Prague | Prague | Prague
-----+-------------+-----+-------+-----------+--------+--------+-------
 0   | tsvwg?      | Y   | Y     | Y         | Y      | Y      | Y
 1   | tsvwg?      | Y   | Y     | Y         | Y      |        |
 2   | aqm?        | n/a | n/a   | n/a       | n/a    | n/a    | n/a
 3-1 | tcpm        | Y   | Y     | Y         | Y      | Y      |
 3-2 | tcpm        | Y   | Y     | Y         | Y      | n/a    | n/a
 3-4 | tcpm        | Y   | Y     | Y         | Y      | Y      | ?
 3-5 | tcpm/iccrg? | Y   | Y     | Y         | ?      |        |
 3-6 | tcpm/iccrg? | ?   | Y     | Y         | Y      | ?      |
 3-7 | tcpm/iccrg? | Y   | Y     | ?         |        |        |
 3-8 | tcpm/iccrg? | Y   | Y     | Y         | ?      |        |
 3-9 | tcpm/iccrg? | Y   | Y     | Y         | ?      |        |