SDNRG                                                     E. Haleplidis
Internet-Draft                                               S. Denazis
Intended status: Informational                     University of Patras
Expires: January 5, 2015                                 K. Pentikousis
                                                                   EICT
                                                          J. Hadi Salim
                                                      Mojatatu Networks
                                                               D. Meyer
                                                                Brocade
                                                         O. Koufopavlou
                                                   University of Patras
                                                            July 4, 2014
SDN Layers and Architecture Terminology
draft-haleplidis-sdnrg-layer-terminology-05
Software-Defined Networking (SDN) can in general be defined as a new approach for network programmability. Network programmability refers to the capacity to initialize, control, change, and manage network behavior dynamically via open interfaces, as opposed to relying on closed-box solutions and proprietarily defined interfaces. SDN emphasizes the role of software in running networks through the introduction of an abstraction for the data forwarding plane and, by doing so, separates it from the control plane. This separation allows faster innovation cycles at both planes, as experience has already shown. However, there is increasing confusion as to what exactly SDN is, what the layer structure in an SDN architecture is, and how layers interface with each other. This document aims to answer these questions and provide a concise reference document for SDNRG, in particular, and the SDN community, in general, based on relevant peer-reviewed literature and documents in the RFC series.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 5, 2015.
Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Software-Defined Networking (SDN) is a relevant new term for the programmable networks paradigm [PNSurvey99][OF08]. In short, SDN refers to the ability of software applications to program individual network devices dynamically and therefore control the behavior of the network as a whole [NV09]. Another view, found in [RFC7149], defines SDN as a set of techniques used to facilitate the design, delivery, and operation of network services in a deterministic, dynamic, and scalable manner.
A key element in SDN is the introduction of an abstraction between the (traditional) Forwarding and the Control planes in order to separate them and provide applications with the means necessary to programmatically control the network. The goal is to leverage this separation, and the associated programmability, in order to reduce complexity and enable faster innovation at both planes [A4D05].
Feamster et al. [SDNHistory] review the historical evolution of the programmable networks research area, starting with earlier efforts which date back to the 1980s. As the authors document, many of the ideas, concepts and concerns are applicable to the latest R&D in SDN, and SDN standardization we may add, and have been under extensive investigation and discussion in the research community for quite some time. For example, Rooney et al. [Tempest] discuss how to allow third-party access to the network without jeopardizing network integrity, or how to accommodate legacy networking solutions in their (then new) programmable environment. Further, the concept of separating the control and data planes, which is prominent in SDN, has been extensively discussed even prior to 1998 [Tempest][P1520], in SS7 networks [ITUSS7], Ipsilon Flow Switching [RFC1953][RFC2297] and ATM [ITUATM].
SDN research often focuses on varying aspects of programmability, and we are frequently confronted with conflicting points of view regarding what exactly SDN is. For instance, we find that for various reasons (e.g. work focusing on one domain and therefore not necessarily applicable as-is to other domains), certain well-accepted definitions do not correlate well with each other. For example, both OpenFlow [OpenFlow] and NETCONF [RFC6241] have been characterized as SDN interfaces, but they refer to control and management respectively.
This motivates us to consolidate the definitions of SDN in the literature and correlate them with earlier work in the IETF and the research community. Of particular interest, for example, is to determine which layers comprise the SDN architecture and which interfaces, along with their corresponding attributes, are best suited for use between them. As such, the aim of this document is not to standardize any particular layer or interface but rather to provide a concise reference document which reflects current approaches regarding the SDN layers architecture. We expect that this document will be useful to upcoming work in SDNRG as well as to future discussions within the SDN community as a whole.
This document aims to address the potential work item in the SDNRG charter named "Survey of SDN approaches and Taxonomies", fostering better understanding of prominent SDN technologies in a technology-impartial and business-agnostic manner. As such, we do not make any value statements nor discuss the applicability of any of the frameworks examined in this draft for any particular purpose. Instead, we document their characteristics and attributes and classify them, thus providing a taxonomy. Already there are a number of survey papers regarding SDN that discuss taxonomies such as [SLTSDN] and [SDNACS].
This document does not constitute a new IETF standard nor a new specification, and aims to receive rough consensus within SDNRG to be published in the IRTF Stream as per [RFC5743].
The remainder of this document is organized as follows. Section 2 explains the terminology used in this document. Section 3 introduces a high-level overview of current SDN architecture abstractions. Finally, Section 4 discusses how the SDN Layer Architecture relates to prominent SDN-enabling technologies.
This document uses the following terms:
Figure 1 provides a detailed high-level overview of the current SDN architecture abstractions. Note that in a particular implementation planes can be collocated with other planes or can be physically separated, as we discuss below.
SDN is based on the concept of separation between a controlled entity and a controller entity. The controller manipulates the controlled entity via an Interface. Interfaces, when local, are mostly API calls through some library or system call. However, such interfaces may be extended via some protocol definition, which may use local inter-process communication (IPC) or a protocol that could also act remotely; the protocol may be defined as an open standard or in a proprietary manner.
The concept of separation via IPC is explored in RINA [RINA], where the premise is that all network communication can be considered IPC, which allows a recursive approach to creating hierarchical network connections. The RINA approach has much in common with the SDN layer abstractions described here, as these layers can also be viewed as stacked hierarchically on top of each other as needed.
                  o--------------------------------o
                  |                                |
                  | +-------------+   +----------+ |
                  | | Application |   |  Service | |
                  | +-------------+   +----------+ |
                  |       Application Plane        |
                  o---------------Y----------------o
                                  |
    *-----------------------------Y---------------------------------*
    |           Network Services Abstraction Layer (NSAL)           |
    *------Y------------------------------------------------Y-------*
           |                                                |
           |               Service Interface                |
           |                                                |
    o------Y------------------o       o---------------------Y------o
    |      |    Control Plane |       | Management Plane    |      |
    | +----Y----+   +-----+   |       |  +-----+       +----Y----+ |
    | | Service |   | App |   |       |  | App |       | Service | |
    | +----Y----+   +--Y--+   |       |  +--Y--+       +----Y----+ |
    |      |           |      |       |     |               |      |
    | *----Y-----------Y----* |       | *---Y---------------Y----* |
    | | Control Abstraction | |       | | Management Abstraction | |
    | |     Layer (CAL)     | |       | |      Layer (MAL)       | |
    | *----------Y----------* |       | *----------Y-------------* |
    |            |            |       |            |               |
    o------------|------------o       o------------|---------------o
                 |                                 |
                 | CP                              | MP
                 | Southbound                      | Southbound
                 | Interface                       | Interface
                 |                                 |
    *------------Y---------------------------------Y----------------*
    |          Device and resource Abstraction Layer (DAL)          |
    *------------Y---------------------------------Y----------------*
    |            |                                 |                |
    |    o-------Y----------o   +-----+   o--------Y----------o     |
    |    | Forwarding Plane |   | App |   | Operational Plane |     |
    |    o------------------o   +-----+   o-------------------o     |
    |                       Network Device                          |
    +---------------------------------------------------------------+
Figure 1: SDN Layer Architecture
This document follows a network-device-centric approach: Control refers to the device packet-handling capability, while Management refers to the overall device operation aspects. We view a network device as a complex resource that contains, and is part of, multiple resources, similar to [DIOPR]. Resources can be simple, i.e., single components of a network device, such as a port or a queue of the device, or can be aggregated into complex resources, such as the network device itself.
The reader should keep in mind throughout this document that we make no distinction between "physical" and "virtual" resources, as we do not delve into implementation or performance aspects. In other words, a resource can be implemented fully in hardware, fully in software, or any hybrid combination in between. Further, we do not distinguish on whether a resource is implemented as an overlay or as a part/component of some other device. Finally, network device software can run on so-called "bare metal" or on a virtualized substrate.
SDN spans multiple planes as illustrated in Figure 1. Starting from the bottom part of the figure and moving towards the upper part, we identify the following planes:
All planes mentioned above are connected via Interfaces (as indicated with "Y" in Figure 1). An Interface may take multiple roles depending on whether the connected planes reside on the same (physical or virtual) device. If the respective planes are designed so that they do not have to reside in the same device, then the Interface can only take the form of a protocol. If the planes are co-located on the same device, then the Interface could be implemented via an open/proprietary protocol, an open/proprietary software inter-process communication API, or operating system kernel system calls.
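To illustrate this point, consider the following Python sketch (purely illustrative; the class and operation names are hypothetical and not drawn from any specification), which realizes the same abstract interface once as a local in-process API call and once as a toy protocol serialized over a socket:

   import json
   import socket

   class LocalInterface:
       """Planes co-located on the same device: a plain API call."""
       def __init__(self, device_state):
           self.device_state = device_state

       def set_port_state(self, port, state):
           self.device_state[port] = state   # direct in-process access

   class RemoteInterface:
       """Planes on separate devices: the interface must be a protocol."""
       def __init__(self, host, port):
           self.addr = (host, port)

       def set_port_state(self, port, state):
           msg = json.dumps({"op": "set", "port": port, "state": state})
           with socket.create_connection(self.addr) as conn:
               conn.sendall(msg.encode("utf-8"))  # serialized on the wire

Note that the caller is oblivious to which realization it uses; this is precisely the property that allows an interface to be specified independently of whether the planes are co-located.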
Applications, i.e. software programs that perform specific computations that consume services without providing access to other applications, can be implemented natively inside a plane or can span multiple planes. For instance, applications or services can span both the control and management plane and, thus, be able to use both the CPSI and MPSI. An example of such a case would be an application that uses both [OpenFlow] and [OF-CONFIG].
Services, i.e. software programs that provide APIs to other applications or services, can also be natively implemented in specific planes. Services that span multiple planes belong to the application plane as well.
While not shown in Figure 1, services, applications, and even entire planes can be placed in a recursive manner, thus providing overlay semantics to the model. For example, application plane services can provide, through the NSAL, services to other applications or services. Additional examples include virtual resources that are realized on top of physical resources, and hierarchical control plane controllers [KANDOO].
It must be noted, however, that in Figure 1 we present an abstract view of the various planes, which is devoid of implementation details. Many implementations tend to place the management plane on top of the control plane, which may be interpreted as having the control plane act as a service to the management plane. Traditionally, the control plane was tightly coupled with the network device and, when taken as a whole, was distributed network-wide. On the other hand, the management plane has traditionally been centralized and was responsible for managing the control plane and the devices. However, with the adoption of SDN principles, this distinction is no longer so clear-cut.
Additionally, this document considers four abstraction layers:
We observe that the view presented in this document is quite well-aligned with recently published work by the ONF; see [ONFArch]. A key difference, however, is that the ONF architecture does not include the management plane in its scope.
SDN-related activities have begun in many other SDOs, such as:
A Network Device is an entity that receives packets on its ports and performs one or more network functions on them. For example, the network device could forward a received packet, drop it, alter the packet header (or payload) and forward the packet, and so on. A Network Device is an aggregation of multiple resources such as ports, CPU, memory, and queues. Resources are either simple or can be aggregated to form complex resources that can be viewed as one resource. The Network Device is in itself a complex resource.
Network devices can be implemented in hardware or software and can be either physical or virtual. As already mentioned, this document makes no such distinction. Each network device has both a Forwarding Plane and an Operational Plane.
The Forwarding Plane, commonly referred to as the "data path", is responsible for handling and forwarding packets. The Forwarding Plane provides switching, routing, transformation, and filtering functions. Resources of the forwarding plane include, but are not limited to, filters, meters, markers, and classifiers.
The Operational Plane is responsible for the operational state of the network device, for instance, with respect to status of network ports and interfaces. Operational plane resources include, but are not limited to, memory, CPU, ports, interfaces and queues.
The Forwarding and the Operational Planes are exposed via the Device and resource Abstraction Layer (DAL), which may be expressed by one or more abstraction models. Examples of Forwarding Plane abstraction models are ForCES [RFC5812] and OpenFlow [OpenFlow]. Examples of the Operational Plane abstraction model include the ForCES model [RFC5812], the YANG model [RFC6020] and SNMP MIBs [RFC3418].
Examples of Network Devices include switches and routers. Additional examples include network elements that may operate at a layer above IP, such as firewalls, load balancers and video transcoders.
Note that applications can also reside in a network device. Examples of such applications include event monitoring, and handling (offloading) topology discovery or ARP [RFC0826] processing in the device itself instead of forwarding such traffic to the control plane.
The control plane is usually distributed and is mainly responsible for the configuration of the forwarding plane, using a Control Plane Southbound Interface (CPSI) with DAL as a point of reference. The control plane is responsible for instructing the forwarding plane about how to handle network packets.
Communication between control planes, colloquially referred to as the "east-west" interface, is usually implemented through gateway protocols like BGP [RFC4271]. However, the corresponding protocol messages are in fact exchanged in-band and subsequently redirected by the forwarding plane to the control plane for further processing. Examples in this category include [RCP], [SoftRouter] and [RouteFlow].
Control Plane functionalities usually include:
The CPSI is usually defined with the following characteristics:
Examples include fast and high-frequency flow or table updates, high throughput, and robustness for packet handling and events.
The CPSI can be implemented using a protocol, an API, or even inter-process communication. If the Control Plane and the Network Device are not collocated, then this interface is certainly a protocol. Examples of CPSIs are ForCES [RFC5810] and the OpenFlow protocol [OpenFlow].
The Control Abstraction Layer (CAL) provides access for control applications and services to the various CPSIs. The Control Plane may support more than one CPSI.
Control applications can use CAL to control a network device without providing any service to upper layers. Examples include applications that perform control functions, such as OSPF, BGP, etc.
Control Plane service examples include a virtual private LAN service, service tunnels, topology services, etc.
The Management Plane is usually centralized and aims to ensure that the network, which consists of network devices, is running optimally by communicating with the network devices' Operational Plane using a Management Plane Southbound Interface (MPSI) with DAL as a point of reference.
Management plane functionalities are typically initiated based on an overall network view and traditionally have been human-centric. However, lately, algorithms are replacing most human intervention. Management plane functionalities [FCAPS] [RFC3535] usually include:
Normally, the MPSI, in contrast to the CPSI, is not a time-critical interface and does not share the CPSI requirements.
The MPSI is typically closer to human interaction than the CPSI [RFC3535]; therefore, the MPSI usually has the following characteristics:
As an example of usability versus performance, we refer to the consensus of the 2002 IAB Workshop [RFC3535], as mentioned in [RFC6632]: textual configuration files should be able to contain international characters, human-readable strings should utilize UTF-8, and protocol elements should be in case-insensitive ASCII, even though such human-friendly formats require more processing capability to parse.
The MPSI can range from a protocol, to an API, or even inter-process communication. If the Management Plane is not embedded in the network device, the MPSI is certainly a protocol. Examples of MPSIs are ForCES [RFC5810], NETCONF [RFC6241], OVSDB [RFC7047] and SNMP [RFC3411].
The Management Abstraction Layer (MAL) provides access for management applications and services to the various MPSIs. The Management Plane may support more than one MPSI.
Management Applications can use MAL to manage the network device without providing any service to upper layers. Examples of management applications include network monitoring and fault detection and recovery applications.
Management Plane Services provide access to other services or applications above the Management Plane.
During the SDNRG meetings, as well as via the mailing list, one of the most commonly discussed topics with regard to this document was the clear distinction between control and management. We have identified the following characteristics, which, together or individually, may provide the necessary differentiation between the planes.
Timescale refers to how fast an application in the respective plane reacts to, or needs to manipulate, the forwarding or operational plane of the device. In general, the control plane needs to send updates very often, within the range of milliseconds, which requires high-bandwidth and low-latency links. In contrast, the management plane generally reacts on much slower timescales, e.g., minutes, hours, or even days, as in the case of changing the configuration state of the device, and thus does not need to be very efficient on the wire.
Another distinction discussed was that between ephemeral and persistent state. Ephemeral state is state that may have a very limited lifespan, such as routing decisions, and thus is usually associated with the control plane. On the other hand, persistent state is state that may have a longer, extended lifespan, ranging from hours to days and months, and is usually associated with the management plane. Persistent state is also usually associated with a data store for that state.
Before the advent of the centralized controller, the control plane was usually local to the device and distributed, whilst the management plane was usually centralized and remote from the device. However, as has been noted before, centralizing, or "logically centralizing", the controller tends to blur the distinction between the control and management planes with respect to locality.
An additional distinction was introduced at the IETF 89 meeting and refers to the CAP theorem.
The CAP theorem views a distributed computing system as composed of multiple computational resources (i.e., CPU, memory, storage) that are connected via a communications network and together perform a task. The theorem identifies three characteristics of distributed systems that are universally desirable:
In 2000, Eric Brewer [CAPBR] conjectured that a distributed system can satisfy any two of these guarantees at the same time, but not all three. This conjecture was later proven by Gilbert and Lynch [CAPGL] and is now usually called the CAP theorem.
Correctly forwarding a packet through a network is a computational problem. One of the major abstractions that SDN posits is that all network elements are computational resources that perform the single computational task of inspecting fields in an incoming packet and deciding how to forward it.
Since the task of forwarding a packet from network ingress to network egress is obviously carried out by a large number of forwarding elements, the network of forwarding devices is a distributed computational system. Hence, the CAP theorem applies to forwarding of packets.
In the context of the CAP theorem, control plane operations are usually local and fast (available), while management plane operations are usually centralized (consistent) and slow.
The CAP theorem provides insights on SDN performance. For example, with regard to locality, although not explicitly stated, the modern SDN philosophy of centralizing the controller stresses consistency. The controller acts as a consistent global database, and specific mechanisms ensure that a packet entering the network is handled consistently by all switches; in this respect, the controller acts like a management entity. The issue of tolerance to loss of connectivity to the controller is not addressed by the basic SDN model: when an SDN switch cannot reach its controller, the flow will be unavailable until the connection is restored. The use of multiple non-collocated SDN controllers has been proposed (e.g., by configuring the SDN switch with a list of controllers); this improves partition tolerance, but at the cost of a loss of absolute consistency.
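As a toy illustration of this trade-off, the Python sketch below (hypothetical names throughout; not an implementation of any actual switch or controller) shows a switch configured with an ordered list of controllers. Failing over to a backup controller improves partition tolerance, but a decision taken by a backup may not be consistent with the primary's global view:

   class Switch:
       """A switch configured with an ordered list of controllers."""
       def __init__(self, controllers):
           self.controllers = controllers

       def handle_table_miss(self, packet):
           for controller in self.controllers:
               try:
                   # The first reachable controller decides; its view
                   # may lag behind the primary's, so consistency is
                   # traded for partition tolerance.
                   return controller.decide(packet)
               except ConnectionError:
                   continue   # partition: try the next controller
           # Basic SDN model: no controller reachable, so the flow is
           # unavailable until connectivity is restored.
           return "unavailable"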
The Network Services Abstraction Layer (NSAL) provides access to services of the control, management, and application planes to other services and applications. Note that the term SAL is overloaded, as it is often used in several contexts ranging from system design to service-oriented architectures; we therefore prefix it with "Network" to emphasize that this term relates to Figure 1, and we map it accordingly in Section 4 to prominent SDN approaches.
Service Interfaces can take many forms pertaining to their specific requirements. Examples of service interfaces include, but are not limited to, RESTful APIs, open or proprietary protocols such as NETCONF, inter-process communication, CORBA interfaces, etc.
Two leading approaches for service interfaces are RESTful interfaces and RPC interfaces. Both follow a client-server architecture and use XML or JSON to pass messages, but each has some slightly different characteristics.
RESTful interfaces, designed in accordance with the Representational State Transfer (REST) design paradigm [REST], have the following characteristics:
Remote Procedure Call (RPC) interfaces, e.g., [RFC5531] and XML-RPC, have the following characteristics:
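As a rough, non-normative illustration of the stylistic difference between the two, the Python sketch below issues the same query in both styles; the controller address, resource path, and method name are invented for this example and do not refer to any real service:

   import json
   import urllib.request

   BASE = "http://controller.example.com:8080"

   # RESTful style: the URL names a resource and the HTTP verb is the
   # action to be performed on it.
   rest_req = urllib.request.Request(BASE + "/topology/nodes/1",
                                     method="GET")

   # RPC style: a single generic endpoint; the operation name and its
   # parameters travel inside the message body.
   rpc_body = json.dumps({"method": "getNode", "params": {"id": 1}})
   rpc_req = urllib.request.Request(
       BASE + "/rpc",
       data=rpc_body.encode("utf-8"),
       headers={"Content-Type": "application/json"},
       method="POST")

Either request could then be sent with urllib.request.urlopen(); the point is where the operation semantics live: in the URL and verb for REST, versus inside the message body for RPC.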
Applications and services that use services from the control and/or management plane form the Application Plane.
Additionally, services residing in the Application Plane may provide services to other services and applications that reside in the application plane via the service interface.
Examples of applications include network topology discovery, network provisioning, path reservation, etc.
We advocate that the SDN southbound interface should encompass both the CPSI and the MPSI.
The SDN northbound interface is implemented in the Network Services Abstraction Layer of Figure 1.
The above model can be used to describe in a concise manner all prominent SDN-enabling technologies, as we explain in the following subsections.
The IETF-standardized Forwarding and Control Element Separation (ForCES [RFC5810]) framework consists of one model and two protocols. ForCES separates the Forwarding from the Control Plane via an open interface, namely the ForCES protocol which operates on entities of the forwarding plane that have been modeled using the ForCES model.
The ForCES model is based on the fact that a network element is composed of numerous logically separate entities that cooperate to provide a given functionality (such as routing or IP switching) and yet appear as a normal integrated network element to external entities; the ForCES protocol is then used to transport information to and from these entities.
ForCES models the Forwarding Plane using Logical Functional Blocks (LFBs) which are connected in a graph, composing the Forwarding Element (FE). LFBs are described in an XML language, based on an XML schema.
LFB definitions include:
The ForCES model can be used to define LFBs from fine- to coarse-grained as needed, irrespective of whether they are physical or virtual.
The ForCES protocol is agnostic to the model and can be used to monitor, configure and control any ForCES-modeled element. The protocol has very simple commands: Set, Get and Del(ete). ForCES is a protocol designed for high throughput and fast updates.
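The following Python sketch conveys the flavor of the model; it is an informal illustration only, and the class and component names are hypothetical rather than taken from [RFC5812]. LFBs with configurable components are connected into a graph composing an FE, which is then manipulated with Set/Get-style commands:

   class LFB:
       """A Logical Functional Block with configurable components."""
       def __init__(self, name, components):
           self.name = name
           self.components = dict(components)
           self.successors = []           # edges of the LFB graph

       def connect(self, other):
           self.successors.append(other)

   class FE:
       """A Forwarding Element: a graph of LFBs."""
       def __init__(self, lfbs):
           self.lfbs = {lfb.name: lfb for lfb in lfbs}

       def set(self, lfb_name, component, value):    # ForCES-like Set
           self.lfbs[lfb_name].components[component] = value

       def get(self, lfb_name, component):           # ForCES-like Get
           return self.lfbs[lfb_name].components[component]

   classifier = LFB("Classifier", {"rules": []})
   forwarder = LFB("Forwarder", {"table": {}})
   classifier.connect(forwarder)       # graph: classify, then forward
   fe = FE([classifier, forwarder])
   fe.set("Forwarder", "table", {"192.0.2.0/24": "port1"})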
ForCES [RFC5810] can be mapped to the framework illustrated in Figure 1 as follows:
The Network Configuration Protocol (NETCONF) [RFC6241] is an IETF-standardized network management protocol [RFC6632]. NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices.
NETCONF protocol operations are realized as remote procedure calls (RPCs). The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. Recent studies, such as [ESNet] and [PENet], have shown that NETCONF performs better than SNMP [RFC3411].
Additionally, the YANG data modeling language [RFC6020] has been developed for specifying NETCONF data models and protocol operations. YANG is a data modeling language used to model configuration and state data manipulated by NETCONF, NETCONF remote procedure calls, and NETCONF notifications.
YANG models the hierarchical organization of data as a tree, in which each node has either a value or a set of child nodes. Additionally, YANG structures data models into modules and submodules, allowing reusability and augmentation. YANG models can describe constraints to be enforced on the data. Additionally, YANG has a set of built-in data types and allows custom-defined data types as well.
YANG allows the definition of NETCONF RPCs, allowing the protocol to support an extensible set of commands. For RPC definitions, the operation names, input parameters, and output parameters are defined using YANG data definition statements.
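As a brief illustration of NETCONF in use, the Python sketch below employs the third-party ncclient library to retrieve and edit configuration; the host, credentials, and configuration fragment are placeholders, and the device is assumed to support the candidate datastore (:candidate capability). The XML payload is a fragment modeled on the ietf-interfaces YANG module:

   from ncclient import manager

   CONFIG = """
   <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
       <interface>
         <name>eth0</name>
         <enabled>true</enabled>
       </interface>
     </interfaces>
   </config>"""

   with manager.connect(host="192.0.2.1", port=830,
                        username="admin", password="admin",
                        hostkey_verify=False) as m:
       running = m.get_config(source="running")      # <get-config> RPC
       m.edit_config(target="candidate", config=CONFIG)  # <edit-config>
       m.commit()                                    # activate the change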
NETCONF can be mapped to the framework illustrated in Figure 1 as follows:
[OpenFlow] is a framework originally developed at Stanford University and currently under active standards development by the Open Networking Foundation (ONF). Initially, the goal was to provide a way for researchers to run experimental protocols in a production network [OFSIGC]. OpenFlow provides a protocol with which a controller may manage a static model of an OpenFlow switch.
An OpenFlow switch consists of one or more flow tables, which perform packet lookups and, on a successful lookup, apply actions and forward packets; a group table; and an OpenFlow channel to an external controller. The switch communicates with the controller, which manages the switch, via the OpenFlow protocol.
OpenFlow has undergone many revisions. The current version is 1.4 [OpenFlow], which supports, amongst other features, multiple controllers for high availability and extensible protocol messages to support arbitrary match fields. Efforts to define OpenFlow 2.0 [PPIPP] are already underway, aiming to provide an abstract forwarding model for protocol independence and device programmability.
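The essence of the flow-table abstraction can be sketched in a few lines of Python; this is a deliberate simplification for illustration only, and the field names and actions are abbreviated rather than taken from the actual OpenFlow wire protocol:

   class FlowEntry:
       def __init__(self, priority, match, actions):
           self.priority = priority
           self.match = match        # header field -> required value
           self.actions = actions    # e.g., ["output:2"]

       def matches(self, packet):
           return all(packet.get(f) == v for f, v in self.match.items())

   class FlowTable:
       def __init__(self, entries):
           # Higher-priority entries are consulted first.
           self.entries = sorted(entries, key=lambda e: -e.priority)

       def lookup(self, packet):
           for entry in self.entries:
               if entry.matches(packet):
                   return entry.actions
           return ["send-to-controller"]   # table miss

   table = FlowTable([
       FlowEntry(10, {"ip_dst": "192.0.2.1"}, ["output:2"]),
       FlowEntry(0, {}, ["drop"]),         # wildcard, lowest priority
   ])
   print(table.lookup({"ip_dst": "192.0.2.1", "ip_src": "192.0.2.9"}))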
OpenFlow can be mapped to the framework illustrated in Figure 1 as follows:
I2RS is currently being developed by a recently established IETF working group. The intention is to provide a standard interface to the routing system for real-time or event-driven interaction through a collection of protocol-based control or management interfaces. Essentially, I2RS aims to make the routing information base (RIB) programmable, thus enabling new kinds of network provisioning and operation.
I2RS does not initially intend to create new interfaces, but rather leverage or extend existing ones and define informational models for the routing system. For example, the latest I2RS problem statement [I-D.ietf-i2rs-problem-statement] discusses previously-defined IETF protocols and data models such as ForCES, YANG, NETCONF, and SNMP.
Currently, the I2RS working group is developing an Information Model [I-D.ietf-i2rs-rib-info-model] with regard to the Network Services Abstraction Layer for the I2RS agent.
I2RS can be mapped to the framework illustrated in Figure 1 as follows:
Bidirectional Forwarding Detection (BFD) [RFC5880] is an IETF network protocol designed for detecting communication failures between two forwarding elements that are directly connected. It is intended to be implemented in some component of the forwarding engine of a system, in cases where the forwarding and control engines are separated.
BFD provides low-overhead detection of faults even on physical media that do not support failure detection of any kind, such as Ethernet, virtual circuits, tunnels and MPLS Label Switched Paths.
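The basic mechanism can be sketched as follows in Python; this is a simplification for illustration only, as real BFD negotiates transmit intervals and detection multipliers between the peers as specified in [RFC5880]:

   import time

   class BfdSession:
       def __init__(self, rx_interval, detect_mult):
           # Detection time: no packets for this long means failure.
           self.detection_time = rx_interval * detect_mult
           self.last_rx = time.monotonic()
           self.state = "Up"

       def on_control_packet(self):
           self.last_rx = time.monotonic()   # peer is alive

       def poll(self):
           if time.monotonic() - self.last_rx > self.detection_time:
               self.state = "Down"   # no packets within detection time
           return self.state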
BFD could be mapped to the framework illustrated in Figure 1 either as:
The Simple Network Management Protocol (SNMP) is an IETF management protocol, currently at its third version (SNMPv3), described in STD 62 (RFC 3417 [RFC3417], RFC 3412 [RFC3412], and RFC 3414 [RFC3414]). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects. SNMP exposes management data (managed objects) in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried and set by managing applications.
SNMP uses an extensible design for describing data, defined by management information bases (MIBs). MIBs describe the structure of the management data of a device subsystem. MIBs use a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by the Structure of Management Information Version 2 (SMIv2) [RFC2578].
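For illustration, the Python sketch below reads a single managed object (sysDescr from the SNMPv2-MIB) using the third-party pysnmp library (API as in pysnmp 4.x); the target address and community string are placeholders, and community-based SNMPv2c is used here for brevity rather than the SNMPv3 security machinery described above:

   from pysnmp.hlapi import (SnmpEngine, CommunityData,
                             UdpTransportTarget, ContextData,
                             ObjectType, ObjectIdentity, getCmd)

   iterator = getCmd(
       SnmpEngine(),
       CommunityData("public", mpModel=1),         # SNMPv2c
       UdpTransportTarget(("192.0.2.1", 161)),
       ContextData(),
       ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)))

   err_ind, err_stat, err_idx, var_binds = next(iterator)
   for var_bind in var_binds:
       # Each varbind pairs an OID with the value read from the agent.
       print(" = ".join(x.prettyPrint() for x in var_bind))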
SNMP could be mapped to the framework illustrated in Figure 1 as:
The authors would like to acknowledge Salvatore Loreto and Sudhir Modali for the initial discussion on the SDNRG mailing list as well as their draft-specific comments that helped put this document in a better shape.
Additionally the authors would like to acknowledge Russ White, Linda Dunbar, Robert Raszuk, Pedro Martinez-Julia, Lee Young, Yaakov Stein, Shivleela Arlimatti, Gurkan Deniz, Scott Brim, Carlos Pignataro, Ramki Krishnan, Roland Bless, Tim Copley, Francisco Javier Ros Munoz, Sriganesh Kini, Alan Clark, Erik Nordmark, Scott Mansfield, Dirk Kutscher, David E Mcdysan, Bhumip Khasnabish and Georgios Karagiannis for their critical comments and discussions at the IETF 88 and 89 meetings (and on the SDNRG mailing list), which we took into consideration while revising this document.
Special thanks to Yaakov Stein for providing text related to the CAP theorem and to Scott Mansfield for information regarding the ITU status on SDN.
This memo makes no requests to IANA.
TBD