Internet Engineering Task Force                               M. Scharf
Internet-Draft                                 Alcatel-Lucent Bell Labs
Intended status: Informational                                  A. Ford
Expires: December 09, 2011                          Roke Manor Research
                                                           June 07, 2011
MPTCP Application Interface Considerations
draft-ietf-mptcp-api-02
Multipath TCP (MPTCP) adds the capability of using multiple paths to a regular TCP session. Even though it is designed to be totally backward compatible with applications, the data transport differs from regular TCP, and there are several additional degrees of freedom that applications may wish to exploit. This document summarizes the impact that MPTCP may have on applications, such as changes in performance. Furthermore, it discusses compatibility issues of MPTCP in combination with non-MPTCP-aware applications. Finally, the document describes a basic application interface for MPTCP-aware applications that provides access to multipath address information and a level of control equivalent to regular TCP.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on December 09, 2011.
Copyright (c) 2011 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Multipath TCP adds the capability of using multiple paths to a regular TCP session [RFC0793]. The motivations for this extension include increasing throughput, overall resource utilisation, and resilience to network failure, and these motivations are discussed, along with high-level design decisions, as part of the Multipath TCP architecture [RFC6182]. The MPTCP protocol [I-D.ietf-mptcp-multiaddressed] offers the same reliable, in-order, byte-stream transport as TCP, and is designed to be backward compatible with both applications and the network layer. It requires support inside the network stack of both endpoints.
This document first presents the impacts that MPTCP may have on applications, such as performance changes compared to regular TCP. Second, it defines the interoperation of MPTCP and applications that are unaware of the multipath transport. MPTCP is designed to be usable without any application changes, but some compatibility issues have to be taken into account. Third, this memo specifies a basic Application Programming Interface (API) for MPTCP-aware applications. The API presented here is an extension to the regular TCP API to allow an MPTCP-aware application the equivalent level of control and access to information of an MPTCP connection that would be possible with the standard TCP API on a regular TCP connection.
An advanced API for MPTCP is outside the scope of this document. Such an advanced API could offer a more fine-grained control over multipath transport functions and policies. The appendix includes a brief, non-compulsory list of potential features of such an advanced API.
The de facto standard API for TCP/IP applications is the "sockets" interface. This document provides an abstract definition of MPTCP-specific extensions to this interface. These are operations that can be used by an application to get or set additional MPTCP-specific information on a socket, in order to provide an equivalent level of information and control over MPTCP as exists for an application using regular TCP. It is up to the applications, high-level programming languages, or libraries to decide whether to use these optional extensions. For instance, an application may want to turn on or off the MPTCP mechanism for certain data transfers, or limit its use to certain interfaces. The abstract specification is in line with the Posix standard [POSIX] as much as possible.
There are also various related extensions of the sockets interface: [I-D.ietf-shim6-multihome-shim-api] specifies sockets API extensions for a multihoming shim layer. The API enables interactions between applications and the multihoming shim layer for advanced locator management and for access to information about failure detection and path exploration. Experimental extensions to the sockets API are also defined for the Host Identity Protocol (HIP) [I-D.ietf-hip-native-api] in order to manage the bindings of identifiers and locators. Further related API extensions exist for IPv6 [RFC3542], Mobile IP [RFC4584], and SCTP [I-D.ietf-tsvwg-sctpsocket]. There can be interactions or incompatibilities of these APIs with MPTCP, which are discussed later in this document.
Some network stack implementations, especially on mobile devices, have centralized connection managers or other higher-level APIs to solve multi-interface issues, as surveyed in [I-D.ietf-mif-current-practices]. Their interaction with MPTCP is outside the scope of this note.
The target readers of this document are application developers whose software may benefit significantly from MPTCP. This document also provides the necessary information for developers of MPTCP to implement the API in a TCP/IP network stack.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
This document uses the MPTCP terminology introduced in [I-D.ietf-mptcp-multiaddressed].
Concerning the API towards applications, the following terms are distinguished:
This section discusses the impact that the use of MPTCP will have on applications, in comparison to what may be expected from the use of regular TCP.
One of the key goals of adding multipath capability to TCP is to improve the performance of a transport connection by load distribution over separate subflows across potentially disjoint paths. Furthermore, it is an explicit goal of MPTCP that it should not provide a worse performing connection than would have existed through the use of single-path TCP. A corresponding congestion control algorithm is described in [I-D.ietf-mptcp-congestion]. The following sections summarize the performance impact of MPTCP as seen by an application.
The most obvious performance improvement that will be gained with the use of MPTCP is an increase in throughput, since MPTCP will pool more than one path (where available) between two endpoints. This will provide greater bandwidth for an application. If there are shared bottlenecks between the flows, then the congestion control algorithms will ensure that load is evenly spread amongst regular and multipath TCP sessions, so that no end user receives worse performance than single-path TCP.
This performance increase additionally means that an MPTCP session could achieve throughput that is greater than the capacity of a single interface on the device. If an application makes assumptions about the interfaces in use based on the observed throughput (or vice versa), it must take this into account (although an MPTCP implementation must always respect an application's request for a particular interface).
Furthermore, the flexibility of MPTCP to add and remove subflows as paths change availability could lead to a greater variation, and more frequent change, in connection bandwidth. Applications that adapt to available bandwidth (such as video and audio streaming) may need to adjust some of their assumptions to most effectively take this into account.
The transport of MPTCP signaling information results in a small overhead. If multiple subflows share the same bottleneck, this overhead slightly reduces the capacity that is available for data transport. Yet, this potential reduction of throughput will be negligible in many usage scenarios, and the protocol contains optimisations in its design so that this overhead is minimal.
If the delays on the constituent subflows of an MPTCP connection differ, the jitter perceivable to an application may appear higher as the data is spread across the subflows. Although MPTCP will ensure in-order delivery to the application, the application must be able to cope with the data delivery being burstier than may be usual with single-path TCP. Since burstiness is commonplace on the Internet today, it is unlikely that applications will suffer from such an impact on the traffic profile, but application authors may wish to consider this in future development.
In addition, applications that make round trip time (RTT) estimates at the application level may have some issues. Whilst the average delay calculated will be accurate, whether this is useful for an application will depend on what it requires this information for. If a new application wishes to derive such information, it should consider how multiple subflows may affect its measurements, and thus how it may wish to respond. In such a case, an application may wish to express its scheduling preferences, as described later in this document.
The use of multiple subflows simultaneously means that, if one should fail, all traffic will move to the remaining subflow(s), and additionally any lost packets can be retransmitted on these subflows.
Subflow failure may be caused by issues within the network, which an application would be unaware of, or by interface failure on the node. An application may, under certain circumstances, be in a position to be aware of such a failure (e.g. by radio signal strength, or simply an interface-enabled flag), but it must not make assumptions about an MPTCP connection's stability based on this. An MPTCP implementation must never override an application's request for a given interface, however, so the cases where this issue may be applicable are limited.
MPTCP has been designed in order to pass through the majority of middleboxes. Empirical evidence suggests that new TCP options can successfully be used on most paths in the Internet. Nevertheless, some middleboxes may still refuse to pass MPTCP messages due to the presence of TCP options, or they may strip TCP options. If this is the case, MPTCP should fall back to regular TCP. Although this will not create a problem for the application (its communication will be set up either way), there may be additional (and indeed, user-perceivable) delay while the first handshake fails. Therefore, an alternative approach could be to try both MPTCP and regular TCP connection attempts at the same time, and respond to whichever replies first (or apply a timeout on the MPTCP attempt while having the regular TCP SYN/ACK ready to reply to, thus reducing the setup delay by an RTT), in a similar fashion to the "Happy Eyeballs" proposal for IPv6 [I-D.ietf-v6ops-happy-eyeballs].
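The following sketch illustrates how such a parallel connection attempt could look with the sockets interface. It is a minimal example under stated assumptions, not a definitive implementation: TCP_MULTIPATH_ENABLE is the abstract symbol introduced later in this document, the SO_ERROR check needed to distinguish connect success from failure is omitted, and a real implementation would add timeouts and error handling.

   /* Sketch: race an MPTCP-enabled and a regular TCP connect() against
    * the same server, in the spirit of "Happy Eyeballs".  Error
    * handling and the SO_ERROR check after select() are omitted. */
   #include <sys/socket.h>
   #include <sys/select.h>
   #include <netinet/in.h>
   #include <netinet/tcp.h>
   #include <fcntl.h>
   #include <unistd.h>

   static int start_connect(const struct sockaddr_in *dst, int use_mptcp)
   {
       int s = socket(AF_INET, SOCK_STREAM, 0);
       if (s < 0)
           return -1;
   #ifdef TCP_MULTIPATH_ENABLE          /* abstract name, defined later */
       setsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_ENABLE,
                  &use_mptcp, sizeof(use_mptcp));
   #endif
       fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);
       connect(s, (const struct sockaddr *)dst, sizeof(*dst));
       return s;                        /* connect is now in progress */
   }

   int happy_connect(const struct sockaddr_in *dst)
   {
       int a = start_connect(dst, 1);   /* MPTCP attempt */
       int b = start_connect(dst, 0);   /* regular TCP attempt */
       fd_set wfds;
       FD_ZERO(&wfds);
       FD_SET(a, &wfds);
       FD_SET(b, &wfds);
       if (select((a > b ? a : b) + 1, NULL, &wfds, NULL, NULL) <= 0) {
           close(a);
           close(b);
           return -1;
       }
       int winner = FD_ISSET(a, &wfds) ? a : b;
       close(winner == a ? b : a);      /* keep whichever completed first */
       return winner;
   }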
An MPTCP implementation can learn the rate of MPTCP connection attempt successes or failures to particular hosts or networks, and on particular interfaces, and could therefore learn heuristics of when and when not to use MPTCP. A detailed discussion of the various fallback mechanisms, for failures occurring at different points in the connection, is presented in [I-D.ietf-mptcp-multiaddressed].
There may also be middleboxes that transparently change the length of content. If such middleboxes are present, MPTCP's reassembly of the byte stream in the receiver is difficult. Still, MPTCP can detect such middleboxes and then fall back to regular TCP. An overview of the impact of middleboxes is presented in [RFC6182] and MPTCP's mechanisms to work around these are presented and discussed in [I-D.ietf-mptcp-multiaddressed].
MPTCP can also have other unexpected implications. For instance, intrusion detection systems could be triggered. A full analysis of MPTCP's impact on such middleboxes is for further study after deployment experiments.
In regular TCP, there is a one-to-one mapping of the socket interface to a flow through a network. Since MPTCP can make use of multiple subflows, applications cannot implicitly rely on this one-to-one mapping any more. Applications that require the transport along a single path can disable the use of MPTCP as described later in this document. Examples include monitoring tools that want to measure the available bandwidth on a path, or routing protocols such as BGP that require the use of a specific link.
Furthermore, an implementation may choose to persist an MPTCP connection even if an IP address is no longer allocated to the host, depending on the policy concerning the first subflow (fate-sharing, see Section 4.2.2). In this case, the IP address exposed to an MPTCP-unaware application can differ from the addresses actually being used by MPTCP. It is even possible that an IP address gets assigned to another host during the lifetime of an MPTCP connection.
The support for multiple IP addresses within one MPTCP connection can result in additional security vulnerabilities, such as possibilities for attackers to hijack connections. The protocol design of MPTCP minimizes this risk. An attacker on one of the paths can cause harm, but this is hardly an additional security risk compared to single-path TCP, which is vulnerable to man-in-the-middle attacks, too. A detailed threat analysis of MPTCP is published in [RFC6181].
MPTCP is an extension of TCP, but it is designed to be backward compatible for legacy applications. TCP interacts with other parts of the network stack through different interfaces. The de facto standard API between TCP and applications is the sockets interface. The position of MPTCP in the protocol stack is illustrated in Figure 1.
   +-------------------------------+
   |           Application         |
   +-------------------------------+
                  ^ |
   ~~~~~~~~~~~~~~~|~|~ Socket Interface ~~~~~~~~~~~~~~~
                  | v
   +-------------------------------+
   |             MPTCP             |
   + - - - - - - - + - - - - - - - +
   | Subflow (TCP) | Subflow (TCP) |
   +-------------------------------+
   |       IP      |      IP       |
   +-------------------------------+

            Figure 1: MPTCP protocol stack
In general, MPTCP can affect all interfaces that make assumptions about the coupling of a TCP connection to a single IP address and TCP port pair, to one sockets endpoint, to one network interface, or to a given path through the network.
This means that there are two classes of applications:

o  Legacy applications: applications that use the existing sockets API without any changes and are therefore unaware of the multipath transport.

o  MPTCP-aware applications: applications that use an extended API in order to obtain additional information about, and a certain level of control over, the MPTCP connection.
The following paragraphs discuss to what extent MPTCP affects legacy applications using the existing sockets API. The existing sockets API implies that applications deal with data structures that store, amongst other information, the IP addresses and TCP port numbers of a TCP connection. A design objective of MPTCP is that legacy applications can continue to use the established sockets API without any changes. However, in MPTCP there is a one-to-many mapping between the socket endpoint and the subflows. This has several subtle implications for legacy applications using sockets API functions.
During binding, an application can either select a specific address, or bind to INADDR_ANY. Furthermore, on some systems other socket options (e.g., SO_BINDTODEVICE) can be used to bind to a specific interface. If an application uses a specific address or binds to a specific interface, then MPTCP MUST respect this and not interfere with the application's choices. The binding to a specific address or interface implies that the application is not aware of MPTCP and will disable the use of MPTCP on this connection. An application that wishes to bind to a specific set of addresses with MPTCP must use multipath-aware calls to achieve this (as described in Section 5.3.3).
If an application binds to INADDR_ANY, it is assumed that the application does not care which addresses to use locally. In this case, a local policy MAY allow MPTCP to automatically set up multiple subflows on such a connection.
The basic sockets API for MPTCP-aware applications makes it possible to express further preferences in an MPTCP-compatible way (e.g., binding to a subset of interfaces only).
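As an illustration of the two binding styles described above, the following minimal sketch shows a bind to INADDR_ANY, which leaves the stack free to use multiple local addresses, and a bind to one specific address, which restricts the connection to single-path TCP. The helper functions and the IPv4-only focus are assumptions of this sketch; error handling is omitted.

   /* Sketch: binding styles and their effect on MPTCP. */
   #include <stdint.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <arpa/inet.h>

   int bind_any(int s, uint16_t port)        /* MPTCP MAY be used */
   {
       struct sockaddr_in a;
       memset(&a, 0, sizeof(a));
       a.sin_family = AF_INET;
       a.sin_addr.s_addr = htonl(INADDR_ANY); /* no address preference */
       a.sin_port = htons(port);
       return bind(s, (struct sockaddr *)&a, sizeof(a));
   }

   int bind_single(int s, in_addr_t addr, uint16_t port) /* single path */
   {
       struct sockaddr_in a;
       memset(&a, 0, sizeof(a));
       a.sin_family = AF_INET;
       a.sin_addr.s_addr = addr;              /* one specific address   */
       a.sin_port = htons(port);              /* disables MPTCP usage   */
       return bind(s, (struct sockaddr *)&a, sizeof(a));
   }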
Applications can use the getpeername() or getsockname() functions in order to retrieve the IP address of the peer or of the local socket. These functions can be used for various purposes, including security mechanisms, geo-location, or interface checks. The socket API was designed with an assumption that a socket is using just one address, and since this address is visible to the application, the application may assume that the information provided by the functions is the same during the lifetime of a connection. However, in MPTCP, unlike in TCP, there is a one-to-many mapping of a connection to subflows, and subflows can be added and removed while the connection continues to exist. Therefore, MPTCP cannot expose addresses by getpeername() or getsockname() that are both valid and constant during the connection's lifetime.
This problem is addressed as follows: If used by a legacy application, the MPTCP stack MUST always return the addresses of the first subflow of an MPTCP connection, in all circumstances, even if that particular subflow is no longer in use.
As this address may not be valid any more if the first subflow is closed, the MPTCP stack MAY close the whole MPTCP connection if the first subflow is closed (i.e. fate sharing between the initial subflow and the MPTCP connection as a whole). Whether to close the whole MPTCP connection by default SHOULD be controlled by a local policy. Further experiments are needed to investigate its implications.
The functions getpeername() and getsockname() SHOULD also always return the addresses of the first subflow if the socket is used by an MPTCP-aware application, in order to be consistent with MPTCP-unaware applications, and, e.g., also with SCTP. Instead of getpeername() or getsockname(), MPTCP-aware applications can use new API calls, documented later, in order to retrieve the full list of address pairs for the subflows in use.
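The consequence for an application that calls getpeername() on an MPTCP connection is sketched below: the address printed is that of the first subflow, which is not necessarily an address currently carrying data. The helper function is hypothetical and IPv4-only for brevity.

   /* Sketch: getpeername() on an MPTCP connection returns the peer
    * address of the first subflow.  Error handling is minimal. */
   #include <stdio.h>
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <arpa/inet.h>

   void print_peer(int s)
   {
       struct sockaddr_storage ss;
       socklen_t len = sizeof(ss);
       char buf[INET6_ADDRSTRLEN];

       if (getpeername(s, (struct sockaddr *)&ss, &len) == 0 &&
           ss.ss_family == AF_INET) {
           struct sockaddr_in *sin = (struct sockaddr_in *)&ss;
           inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
           /* Address and port of the first subflow, not necessarily of
            * the subflow(s) currently carrying data. */
           printf("peer (first subflow): %s:%u\n", buf,
                  (unsigned)ntohs(sin->sin_port));
       }
   }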
The existing sockets API includes options that modify the behavior of sockets and their underlying communications protocols. Various socket options exist at the socket, TCP, and IP levels. The value of an option can usually be set by the setsockopt() system function, and the getsockopt() function retrieves the current value. In general, the existing sockets interface functions cannot configure each MPTCP subflow individually. In order to be backward compatible, existing APIs therefore SHOULD apply to all subflows within one connection, as far as possible.
One commonly used TCP socket option (TCP_NODELAY) disables the Nagle algorithm as described in [RFC1122]. This option is also specified in the Posix standard [POSIX]. Applications can use this option in combination with MPTCP in exactly the same way. It then SHOULD disable the Nagle algorithm for the MPTCP connection, i.e., for all subflows.
In addition, the MPTCP protocol instance MAY use a different path scheduler algorithm if TCP_NODELAY is present. For instance, it could use an algorithm that is optimized for latency-sensitive traffic. Specific algorithms are outside the scope of this document.
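As a simple illustration, the following sketch disables the Nagle algorithm on an MPTCP socket with the unchanged TCP_NODELAY option; with MPTCP the setting then applies to the connection as a whole, i.e., to all subflows.

   /* Sketch: TCP_NODELAY on an MPTCP connection. */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <netinet/tcp.h>

   int disable_nagle(int s)
   {
       int on = 1;
       /* Same call as for regular TCP; with MPTCP it SHOULD disable
        * the Nagle algorithm on every subflow. */
       return setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
   }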
Applications can explicitly configure send and receive buffer sizes by the sockets API (SO_SNDBUF, SO_RCVBUF). These socket options can also be used in combination with MPTCP and then affect the buffer size of the MPTCP connection. However, when defining buffer sizes, application programmers should take into account that the transport over several subflows requires a certain amount of buffer for resequencing in the receiver. MPTCP may also require more storage space in the sender, in particular, if retransmissions are sent over more than one path. In addition, very small send buffers may prevent MPTCP from efficiently scheduling data over different subflows. Therefore, it does not make sense to use MPTCP in combination with small send or receive buffers.
An MPTCP implementation MAY set a lower bound for send and receive buffers and treat a small buffer size request as an implicit request not to use MPTCP.
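The sketch below shows how an application could request larger buffers before connecting, following the guidance above that multipath transfer needs additional space for resequencing and retransmissions. The buffer sizes are purely illustrative and not a recommendation of this document.

   /* Sketch: requesting larger socket buffers for an MPTCP connection. */
   #include <sys/socket.h>

   int set_buffers(int s)
   {
       int snd = 512 * 1024;   /* example value only, not a recommendation */
       int rcv = 512 * 1024;
       if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &snd, sizeof(snd)) < 0)
           return -1;
       return setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof(rcv));
   }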
Some network stacks also provide other implementation-specific socket options or interfaces that affect TCP's behavior. If a network stack supports MPTCP, it must be ensured that these options and interfaces do not interfere with the use of MPTCP.
It is up to a local policy at the end system whether a network stack should automatically enable MPTCP for sockets even if there is no explicit sign of MPTCP awareness of the corresponding application. Such a choice may be under the control of the user through system preferences.
The enabling of MPTCP, either by application or by system defaults, does not necessarily mean that MPTCP will always be used. Both endpoints must support MPTCP, and at least one endpoint must have multiple addresses, for MPTCP to be used. Even if those requirements are met, however, MPTCP may not be immediately used on a connection. It may make sense for multiple paths to be brought into operation only after a given period of time, or if the connection is saturated.
While applications can use MPTCP with the unmodified sockets API, multipath transport results in many degrees of freedom. MPTCP manages the data transport over different subflows automatically. By default, this is transparent to the application, but an application could use an additional API to interface with the MPTCP layer and to control important aspects of the MPTCP implementation's behaviour.
This document describes a basic MPTCP API. The API contains a minimum set of functions that provide an equivalent level of control and information as exists for regular TCP. It maintains backward compatibility with legacy applications.
An advanced MPTCP API is outside the scope of this document. The basic API does not allow a sender or a receiver to express preferences about the management of paths or the scheduling of data, even if this can have a significant performance impact and if an MPTCP implementation could benefit from additional guidance by applications. A list of potential further API extensions is provided in the appendix. The specification of such an advanced API is for further study and may partly be implementation-specific.
MPTCP mainly affects the sending of data. Therefore, the basic API only affects the sender side of a data transfer. A receiver may also have preferences about data transfer choices, and it may have performance requirements, too. Yet, the configuration of such preferences is outside of the scope of the basic API.
Because of the importance of the sockets interface there are several fundamental design objectives for the basic interface between MPTCP and applications:
The following is a list of the core requirements for the basic API:

o  Turn on/off the MPTCP mechanism: an application must be able to request that MPTCP is enabled or disabled for a connection.

o  Restrict MPTCP to binding to a given set of addresses or interfaces.

o  Obtain the addresses currently used by the MPTCP subflows, both locally and at the peer.

o  Extract a unique, locally meaningful identifier for the MPTCP connection.
The first requirement is the most important one, since some applications could benefit a lot from MPTCP, but there are also cases in which it hardly makes sense. The existing sockets API provides similar mechanisms to enable or disable advanced TCP features. The second requirement corresponds to the binding of addresses with the bind() socket call, or, e.g., explicit device bindings with a SO_BINDTODEVICE option. The third requirement ensures that there is an equivalent to getpeername() or getsockname() that is able to deal with more than one subflow. Finally, it should be possible for the application to retrieve a unique connection identifier (local to the endpoint on which it is running) for the MPTCP connection. This is equivalent to using the (address, port) pair for a connection identifier in single-path TCP, which is no longer static in MPTCP.
An application can continue to use getpeername() or getsockname() in addition to the basic MPTCP API. In that case, both functions return the corresponding addresses of the first subflow, as already explained.
The abstract, basic MPTCP API consists of a set of new values that are associated with an MPTCP socket. Such values may be used for changing properties of an MPTCP connection, or retrieving information. These values could be accessed by new symbols on existing calls such as setsockopt() and getsockopt(), or could be implemented as entirely new function calls. This implementation decision is out of scope for this document. The following list presents symbolic names for these MPTCP socket settings.
Table 1 shows a list of the abstract socket operations for the basic configuration of MPTCP. The first column gives the symbolic name of the operation. The second and third columns indicate whether the operation provides values to be read ("Get") or takes values to configure ("Set"). The fourth column lists the type of data associated with this operation.
   +-------------------------+-----+-----+----------------------------+
   | Name                    | Get | Set | Data type                  |
   +-------------------------+-----+-----+----------------------------+
   | TCP_MULTIPATH_ENABLE    |  o  |  o  | boolean                    |
   | TCP_MULTIPATH_ADD       |     |  o  | list of addresses          |
   | TCP_MULTIPATH_REMOVE    |     |  o  | list of addresses          |
   | TCP_MULTIPATH_SUBFLOWS  |  o  |     | list of pairs of addresses |
   | TCP_MULTIPATH_CONNID    |  o  |     | 32-bit integer             |
   +-------------------------+-----+-----+----------------------------+

                 Table 1: MPTCP Socket Operations
There are restrictions on when these new socket operations can be used:
An application can explicitly indicate multipath capability by setting TCP_MULTIPATH_ENABLE to a value larger than 0. In this case, the MPTCP implementation SHOULD try to negotiate MPTCP for that connection. Note that multipath transport will not necessarily be enabled, as it requires multiple addresses and support in the other end-system and potentially also on middleboxes.
An application can disable MPTCP by setting TCP_MULTIPATH_ENABLE to a value of 0. In that case, MPTCP MUST NOT be used on that connection.
After connection establishment, an application can get the value of TCP_MULTIPATH_ENABLE. A value of 0 then means lack of MPTCP support. Any value equal to or larger than 1 means that MPTCP is supported.
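The following sketch shows one way the abstract TCP_MULTIPATH_ENABLE value could be mapped onto the setsockopt()/getsockopt() calls of the sockets interface. The option name and its numeric value, as well as the use of the IPPROTO_TCP level, are assumptions of this sketch; this document does not mandate a concrete encoding.

   /* Sketch: requesting MPTCP before connection setup and checking
    * afterwards whether it was negotiated.  TCP_MULTIPATH_ENABLE is the
    * abstract name used in this document; the placeholder value below
    * is hypothetical and would be defined by an MPTCP-capable stack. */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <netinet/tcp.h>

   #ifndef TCP_MULTIPATH_ENABLE
   #define TCP_MULTIPATH_ENABLE 0x100    /* hypothetical option value */
   #endif

   int request_mptcp(int s)              /* call before connect()/listen() */
   {
       int enable = 1;                   /* > 0: try to negotiate MPTCP */
       return setsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_ENABLE,
                         &enable, sizeof(enable));
   }

   int mptcp_in_use(int s)               /* call after connection setup */
   {
       int val = 0;
       socklen_t len = sizeof(val);
       if (getsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_ENABLE, &val, &len) < 0)
           return -1;
       return val >= 1;                  /* 0 means MPTCP is not in use */
   }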
As an alternative to setting an explicit value, an application could also use a new, separate address family called AF_MULTIPATH [I-D.sarolahti-mptcp-af-multipath]. This separate address family can be used to exchange multiple addresses between an application and the standard sockets API, and additionally acts as an explicit indication that an application is MPTCP-aware, i.e., that it can deal with the semantic changes of the sockets API, in particular concerning getpeername() and getsockname(). The usage of AF_MULTIPATH is also more flexible with respect to multipath transport over IPv4, IPv6, or both in parallel [I-D.sarolahti-mptcp-af-multipath].
Before connection establishment, an application can use the TCP_MULTIPATH_ADD socket operation to indicate a set of local IP addresses that MPTCP may bind to. The parameter of the function is a list of addresses in a corresponding data structure. By extension, this operation will also control the list of addresses that can be advertised to the peer via MPTCP signalling.
An application MAY also indicate a TCP port number that MPTCP should bind to for a given address. The port number MAY be different to the one used by existing subflows. If no port number is provided by the application, the port number is automatically selected by the MPTCP implementation, and will usually be the same across all subflows.
This operation can also be used to modify the address list in use during the lifetime of an MPTCP connection. In this case, it is used to indicate a set of additional local addresses that the MPTCP connection can make use of, and which can be signalled to the peer. It should be noted that this signal is only a hint, and an MPTCP implementation MAY only use a subset of the addresses.
The TCP_MULTIPATH_REMOVE operation can be used to remove a (set of) local addresses from an MPTCP connection. MPTCP MUST close any corresponding subflows (i.e. those using the local address that is no longer present), and signal the removal of the address to the peer. If alternative paths are available using the supplied address list but MPTCP is not currently using them, an MPTCP implementation SHOULD establish alternative subflows before undertaking the address removal.
It should be remembered that these operations SHOULD support both IPv4 and IPv6 addresses, potentially in the same call.
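Since the data structure for passing address lists is not defined by this abstract API, the following sketch simply uses an array of sockaddr_storage entries, which can hold IPv4 and IPv6 addresses (and port numbers) in the same call. Both the structure and the mapping of the symbolic names onto setsockopt() are assumptions of this sketch.

   /* Sketch: adding and removing local addresses of an MPTCP
    * connection.  The placeholder option values are hypothetical and
    * would be defined by an MPTCP-capable stack. */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <netinet/tcp.h>

   #ifndef TCP_MULTIPATH_ADD
   #define TCP_MULTIPATH_ADD    0x101    /* hypothetical option value */
   #define TCP_MULTIPATH_REMOVE 0x102    /* hypothetical option value */
   #endif

   /* Indicate additional local addresses (IPv4 and/or IPv6) that MPTCP
    * may use for new subflows and advertise to the peer. */
   int mptcp_add_addrs(int s, const struct sockaddr_storage *addrs,
                       unsigned count)
   {
       return setsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_ADD, addrs,
                         (socklen_t)(count * sizeof(addrs[0])));
   }

   /* Withdraw local addresses; the stack closes the corresponding
    * subflows and signals the removal to the peer. */
   int mptcp_remove_addrs(int s, const struct sockaddr_storage *addrs,
                          unsigned count)
   {
       return setsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_REMOVE, addrs,
                         (socklen_t)(count * sizeof(addrs[0])));
   }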
An application can get a list of the addresses used by the currently established subflows by means of the read-only TCP_MULTIPATH_SUBFLOWS operation. The return value is a list of pairs of tuples of IP address and TCP port number. In one pair, the first tuple refers to the local IP address and the local TCP port, and the second one to the remote IP address and remote TCP port used by the subflow. The list MUST only include established subflows. Both addresses in each pair MUST be either IPv4 or IPv6.
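The sketch below retrieves the address pairs of the established subflows. The pair structure is hypothetical; the abstract API only states that a list of (local, remote) address and port pairs is returned, and a concrete implementation may use a different layout.

   /* Sketch: querying the read-only TCP_MULTIPATH_SUBFLOWS value.  The
    * structure layout and option value are hypothetical. */
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <netinet/tcp.h>

   #ifndef TCP_MULTIPATH_SUBFLOWS
   #define TCP_MULTIPATH_SUBFLOWS 0x103  /* hypothetical option value */
   #endif

   struct mptcp_subflow_pair {           /* assumed layout of one pair */
       struct sockaddr_storage local;    /* local address and port */
       struct sockaddr_storage remote;   /* remote address and port */
   };

   /* Returns the number of established subflows, or -1 on error. */
   int mptcp_get_subflows(int s, struct mptcp_subflow_pair *pairs,
                          unsigned max)
   {
       socklen_t len = (socklen_t)(max * sizeof(pairs[0]));
       if (getsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_SUBFLOWS,
                      pairs, &len) < 0)
           return -1;
       return (int)(len / sizeof(pairs[0]));
   }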
An application that wants a unique identifier for the connection, analogous to an (address, port) pair in regular TCP, can query the TCP_MULTIPATH_CONNID value to get a local connection identifier for the MPTCP connection.
This is a 32-bit number, and SHOULD be the same as the local connection identifier sent in the MPTCP handshake.
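Retrieving the identifier could then look as follows; again, the mapping of the abstract TCP_MULTIPATH_CONNID value onto a getsockopt() call, and the option value, are assumptions of this sketch.

   /* Sketch: reading the 32-bit local connection identifier. */
   #include <stdint.h>
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <netinet/tcp.h>

   #ifndef TCP_MULTIPATH_CONNID
   #define TCP_MULTIPATH_CONNID 0x104    /* hypothetical option value */
   #endif

   int mptcp_get_connid(int s, uint32_t *id)
   {
       socklen_t len = sizeof(*id);
       return getsockopt(s, IPPROTO_TCP, TCP_MULTIPATH_CONNID, id, &len);
   }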
For dealing with multihoming, several socket API extensions have been defined for SCTP [I-D.ietf-tsvwg-sctpsocket]. As MPTCP realizes multipath transport from and to multihomed end systems, some of these interface function calls are actually applicable to MPTCP in a similar way.
API developers MAY wish to integrate SCTP and MPTCP calls to provide a consistent interface to the application. Yet, it must be emphasized that the transport service provided by MPTCP is different to SCTP, and this is why not all SCTP API functions can be mapped directly to MPTCP. Furthermore, a network stack implementing MPTCP does not necessarily support SCTP and its specific socket interface extensions. This is why the basic API of MPTCP defines additional socket options only, which are a backward compatible extension of TCP's application interface. An integration with the SCTP API is outside the scope of the basic API.
The use of MPTCP can interact with various related sockets API extensions. The use of a multihoming shim layer conflicts with multipath transport such as MPTCP or SCTP [I-D.ietf-shim6-multihome-shim-api]. Care should be taken that MPTCP is not used in combination with the overlapping features of these other APIs:
In order to avoid any conflict, multiaddressed MPTCP SHOULD NOT be enabled if a network stack uses SHIM6, HIP, or Mobile IPv6. Furthermore, applications should not try to use both the MPTCP API and another multihoming or mobility layer API.
It is possible, however, that some of the MPTCP functionality, such as congestion control, could be used in a SHIM6 or HIP environment. Such operation is outside the scope of this document.
In multihomed or multiaddressed environments, there are various issues that are not specific to MPTCP, but have to be considered, too. These problems are summarized in [I-D.ietf-mif-problem-statement].
Specifically, there can be interactions with DNS. Whilst it is expected that an application will iterate over the list of addresses returned from a call such as getaddrinfo(), MPTCP itself MUST NOT make any assumptions about multiple A or AAAA records from the same DNS query referring to the same host, as it is possible that multiple addresses refer to multiple servers for load balancing purposes.
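For reference, the sketch below shows the usual getaddrinfo() iteration as performed by an application: each returned address is tried in turn as a potentially distinct server, and no assumption is made that several A or AAAA records denote one multihomed MPTCP host.

   /* Sketch: standard getaddrinfo() iteration over candidate servers. */
   #include <netdb.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <unistd.h>

   int connect_to(const char *host, const char *service)
   {
       struct addrinfo hints, *res, *ai;
       int s = -1;

       memset(&hints, 0, sizeof(hints));
       hints.ai_family = AF_UNSPEC;          /* IPv4 or IPv6 */
       hints.ai_socktype = SOCK_STREAM;
       if (getaddrinfo(host, service, &hints, &res) != 0)
           return -1;

       for (ai = res; ai != NULL; ai = ai->ai_next) {
           s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
           if (s < 0)
               continue;
           if (connect(s, ai->ai_addr, ai->ai_addrlen) == 0)
               break;                        /* connected */
           close(s);
           s = -1;                           /* try the next address */
       }
       freeaddrinfo(res);
       return s;
   }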
Will be added in a later version of this document.
No IANA considerations.
This document discusses MPTCP's application implications and specifies a basic MPTCP API. For legacy applications, it is ensured that the existing sockets API continues to work. MPTCP-aware applications can use the basic MPTCP API that provides some control over the transport layer equivalent to regular TCP. A more fine-grained interaction between applications and MPTCP requires an advanced MPTCP API, which is not specified in this document.
The authors sincerely thank the following people for their helpful comments and reviews of the document: Costin Raiciu, Philip Eardley, Javier Ubillos, and Michael Tuexen.
Michael Scharf is supported by the German-Lab project (http://www.german-lab.de/) funded by the German Federal Ministry of Education and Research (BMBF). Alan Ford is supported by Trilogy (http://www.trilogy-project.org/), a research project (ICT-216372) partially funded by the European Community under its Seventh Framework Program. The views expressed here are those of the author(s) only. The European Commission is not liable for any use that may be made of the information in this document.
Multipath transport results in many degrees of freedom. The basic MPTCP API only defines a minimum set of the API extensions for the interface between the MPTCP layer and applications, which does not offer much control of the MPTCP implementation's behaviour. A future, advanced API could address further features of MPTCP and provide more control.
Applications that use TCP may have different requirements on the transport layer. While developers have become used to the characteristics of regular TCP, new opportunities created by MPTCP could allow the service provided to be optimised further. An advanced API could enable MPTCP-aware applications to specify preferences and control certain aspects of the behavior, in addition to the simple control provided by the basic interface. An advanced API could also address aspects that are completely out-of-scope of the basic API, for example, the question whether a receiving application could influence the sending policy.
Furthermore, an advanced MPTCP API could be part of a new overall interface between the network stack and applications that addresses other issues as well, such as the split between identifiers and locators. An API that does not use IP addresses (but instead, e.g., a connectbyname() function) would be useful for numerous purposes, independent of MPTCP.
This appendix documents a list of potential usage scenarios and requirements for the advanced API. The specification and implementation of a corresponding API is outside the scope of this document.
There are different MPTCP usage scenarios. An application that wishes to transmit bulk data will want MPTCP to provide a high throughput service immediately, through creating and maximising utilisation of all available subflows. This is the default MPTCP use case.
But at the other extreme, there are applications that are highly interactive, but require only a small amount of throughput, and these are optimally served by low latency and jitter stability. In such a situation, it would be preferable for the traffic to use only the lowest latency subflow (assuming it has sufficient capacity), maybe with one or two additional subflows for resilience and recovery purposes. The key challenge for such a strategy is that the delay on a path may fluctuate significantly and that just always selecting the path with the smallest delay might result in instability.
The choice between bulk data transport and latency-sensitive transport affects the scheduler in terms of whether traffic should be, by default, sent on one subflow or across several ones. Even if the total bandwidth required is less than that available on an individual path, it is desirable to spread this load to reduce stress on potential bottlenecks, and this is why this method should be the default for bulk data transport. However, that may not be optimal for applications that require latency/jitter stability.
In the case of the latter option, a further question arises: Should additional subflows be used whenever the primary subflow is overloaded, or only when the primary path fails (hot-standby)? In other words, is latency stability or bandwidth more important to the application? This results in two different options: Firstly, there is the single path which can overflow into an additional subflow; and secondly there is single-path with hot-standby, whereby an application may want an alternative backup subflow in order to improve resilience. In case that data delivery on the first subflow fails, the data transport could immediately be continued on the second subflow, which is idle otherwise.
Yet another complication is introduced with the potential that MPTCP introduces for changes in available bandwidth as the number of available subflows changes. Such jitter in bandwidth may prove confusing for some applications such as video or audio streaming that dynamically adapt codecs based on available bandwidth. Such applications may prefer MPTCP to attempt to provide a consistent bandwidth as far as is possible, and avoid maximising the use of all subflows.
A further, mostly orthogonal question is whether data should be duplicated over the different subflows, in particular if there is spare capacity. This could improve both the timeliness and reliability of data delivery.
In summary, there are at least three possible performance objectives for multipath transport (not necessarily disjoint):

o  High throughput, by using the capacity of all available subflows in parallel (the bulk data transfer case).

o  Low latency and jitter stability, e.g., by preferring the subflow with the smallest delay.

o  High reliability and resilience, e.g., by hot-standby subflows or by duplicating data across subflows.
In an advanced API, applications could provide high-level guidance to the MPTCP implementation concerning these performance requirements, for instance, which is considered to be the most important one. The MPTCP stack would then use internal mechanisms to fulfill this abstract indication of a desired service, as far as possible. This would both affect the assignment of data (including retransmissions) to existing subflows (e.g., 'use all in parallel', 'use as overflow', 'hot standby', 'duplicate traffic') as well as the decisions when to set up additional subflows to which addresses. In both cases different policies can exist, which can be expected to be implementation-specific.
Therefore, an advanced API could provide a mechanism by which applications can specify their high-level requirements in an implementation-independent way. One possibility would be to select one "application profile" out of a number of choices that characterize typical applications. Yet, as applications today do not have to inform TCP about their communication requirements, further study is needed to determine whether such an approach would be realistic.
Of course, independent of an advanced API, such functionality could also partly be achieved by MPTCP-internal heuristics that infer some application preferences e.g. from existing socket options, such as TCP_NODELAY. Whether this would be reliable, and indeed appropriate, is for further study, too.
The following is a list of potential requirements for an advanced MPTCP API beyond the features of the basic API. It is included here for information only:
An advanced API fulfilling these requirements would allow application developers to more specifically configure MPTCP. It could avoid suboptimal decisions of internal, implicit heuristics. However, it is unclear whether all of these requirements would have a significant benefit to applications, since they are going above and beyond what the existing API to regular TCP provides.
A subset of these functions might also be implemented system-wide or by other configuration mechanisms. These implementation details are left for further study.
The advanced API may also integrate or use the SCTP socket API. The following functions that are defined for SCTP have similar functionality to the basic MPTCP API:
The syntax and semantics of these functions are described in [I-D.ietf-tsvwg-sctpsocket].
A potential objective for the advanced API is to provide a consistent MPTCP and SCTP interface to the application. This is left for further study in this document.
Changes compared to version draft-ietf-mptcp-api-01:
Changes compared to version draft-ietf-mptcp-api-00:
Changes compared to version draft-scharf-mptcp-api-03:
Changes compared to version draft-scharf-mptcp-api-02:
Changes compared to version draft-scharf-mptcp-api-01:
Changes compared to version draft-scharf-mptcp-api-00: