Internet Engineering Task Force                         D. Joachimpillai
Internet-Draft                                                   Verizon
Intended status: Standards Track                           J. Hadi Salim
Expires: September 14, 2016                            Mojatatu Networks
                                                          March 13, 2016
ForCES Inter-FE LFB
draft-ietf-forces-interfelfb-03
This document describes how to extend the ForCES LFB topology across FEs by defining the Inter-FE LFB Class. The Inter-FE LFB Class provides the ability to pass data and metadata across FEs without needing any changes to the ForCES specification. The document focuses on Ethernet transport.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 14, 2016.
Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
This document reiterates the terminology defined in several ForCES documents ([RFC3746], [RFC5810], [RFC5811], [RFC5812], [RFC7391], and [RFC7408]) for the sake of contextual clarity.
In the ForCES architecture, a packet service can be modelled by composing a graph of one or more LFB instances. The reader is referred to the details in the ForCES Model [RFC5812].
The current ForCES model describes the processing within a single Forwarding Element (FE) in terms of Logical Functional Blocks (LFBs), including provision for the Control Element (CE) to establish and modify that processing sequence and the parameters of the individual LFBs.
Under some circumstances, it would be beneficial to be able to extend this view, and the resulting processing, across more than one FE. This may be done to achieve scale by splitting the processing across elements, or to utilize specialized hardware available on specific FEs.
Given that the ForCES architecture already provides for passing metadata between LFBs within an FE, it is imperative to define mechanisms that extend this existing feature and allow metadata to be passed between LFBs across FEs.
This document describes how to extend the LFB topology across FEs, i.e., inter-FE connectivity, without needing any changes to the ForCES definitions. It focuses on using Ethernet as the interconnect between FEs.
The scope of this document is to solve the challenge of passing ForCES defined metadata alongside packet data across FEs (be they physical or virtual) for the purpose of distributing the LFB processing.
To illustrate the problem scope, we present two use cases in which we start with a single FE running all of the LFB functionality and then split the processing across multiple FEs while achieving the same end goals.
A sample LFB topology, depicted in Figure 1, demonstrates a service graph for delivering a basic IPV4 forwarding service within one FE. For the purpose of illustration, the diagram shows LFB classes as graph nodes instead of multiple LFB class instances.
Since the illustration in Figure 1 is meant only as an exercise to showcase how data and metadata are sent downstream or upstream on a graph of LFB instances, it abstracts out the ports in both directions and refers to generic ingress and egress LFBs. Again, for illustration purposes, the diagram does not show exception or error paths. Also left out are details on Reverse Path Filtering, ECMP, multicast handling, etc. In other words, this is not meant to be a complete description of an IPV4 forwarding application; for a more complete example, please refer to the LFB library document [RFC6956].
The output of the ingress LFB(s) coming into the IPv4 Validator LFB will have both the IPV4 packets and, depending on the implementation, a variety of ingress metadata such as offsets into the different headers, any classification metadata, physical and virtual ports encountered, tunnelling information etc. These metadata are lumped together as "ingress metadata".
Once the IPV4 Validator vets the packet (for example, by ensuring that the TTL has not expired), it feeds the packet and inherited metadata into the IPV4 Unicast LPM LFB.
   Ingress LFB -> IPv4 Validator LFB -> IPv4 Ucast LPM LFB
              -> IPv4 NextHop LFB -> Egress LFB

   (The IPV4 packet is carried with ingress metadata throughout; the
   LPM LFB adds NHinfo metadata for the NextHop LFB, which in turn
   passes {ingress + NHdetails} metadata to the Egress LFB.)
Figure 1: Basic IPV4 packet service LFB topology
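To make the notion of a packet travelling together with its metadata more concrete, the following purely illustrative Python sketch shows one hypothetical in-memory view of a packet and its "ingress metadata" bundle. The metadata names and values are invented for the example and are not defined by this document.

   # Illustrative only: a hypothetical view of a packet plus the
   # "ingress metadata" that accompanies it between LFB instances
   # within one FE.  The ForCES model identifies each metadatum by a
   # numeric ID; the names below are stand-ins for such IDs.
   packet = bytes.fromhex("4500001c000040004011f97dc0a80001c0a80002")

   ingress_metadata = {
       "InPortID": 3,         # physical/virtual port the packet arrived on
       "L3HeaderOffset": 0,   # offset of the IPv4 header in 'packet'
       "TunnelInfo": None,    # e.g., outer encapsulation details, if any
   }

   # The IPv4 Validator LFB vets the packet (e.g., checks the TTL) and
   # then hands both the packet and the inherited metadata, unchanged,
   # to the IPv4 Unicast LPM LFB.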
The IPV4 unicast LPM LFB does a longest prefix match lookup on the IPV4 FIB using the destination IP address as a search key. The result is typically a next hop selector which is passed downstream as metadata.
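As a non-normative illustration of the behaviour just described, the following Python sketch performs a longest-prefix-match lookup on a toy FIB and yields a next-hop selector that would be carried downstream as the NHinfo metadatum. The FIB contents and selector values are invented for the example.

   # A minimal sketch (not part of this specification) of the IPv4
   # Unicast LPM behaviour: look up the destination address against the
   # FIB with a longest-prefix match and pass the result downstream as
   # metadata.
   import ipaddress

   fib = {
       ipaddress.ip_network("0.0.0.0/0"): 1,        # default -> selector 1
       ipaddress.ip_network("198.51.100.0/24"): 7,  # more specific -> 7
   }

   def lpm_lookup(dst_ip: str) -> int:
       addr = ipaddress.ip_address(dst_ip)
       best = max((net for net in fib if addr in net),
                  key=lambda net: net.prefixlen)
       return fib[best]

   nh_info = lpm_lookup("198.51.100.42")  # -> 7, carried as NHinfo metadata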
The NextHop LFB receives the IPv4 packet with associated next-hop info metadata. The NextHop LFB consumes the NHinfo metadata and derives from it a table index used to look up the next-hop table in order to find the appropriate egress information. The lookup result is used to build the next-hop details to be used downstream on the egress. This information may include any source and destination information (for our purposes, the MAC addresses to use) as well as egress ports. [Note: it is also at this LFB where the forwarding TTL decrement and IP checksum recalculation typically occur.]
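The following hedged, non-normative Python sketch shows the essence of that processing: the NHinfo metadatum indexes a hypothetical next-hop table, the result becomes the NHdetails metadata (egress port plus MAC addresses), and the IPv4 TTL is decremented with the header checksum recomputed. The table contents and MAC addresses are placeholders.

   import struct

   nexthop_table = {
       7: {"EgressPort": 2,
           "DstMAC": bytes.fromhex("0242ac110002"),
           "SrcMAC": bytes.fromhex("0242ac110001")},
   }

   def ipv4_checksum(header: bytes) -> int:
       # one's complement sum of all 16-bit words, then complemented
       s = sum(struct.unpack("!%dH" % (len(header) // 2), header))
       s = (s & 0xFFFF) + (s >> 16)
       s = (s & 0xFFFF) + (s >> 16)
       return ~s & 0xFFFF

   def nexthop_process(ip_header: bytearray, nh_info: int) -> dict:
       nh_details = nexthop_table[nh_info]   # NHinfo -> egress details
       ip_header[8] -= 1                     # decrement TTL (offset 8)
       ip_header[10:12] = b"\x00\x00"        # zero the checksum field
       ip_header[10:12] = struct.pack("!H", ipv4_checksum(bytes(ip_header)))
       return nh_details                     # passed downstream as metadata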
The details of the egress LFB are considered out of scope for this discussion. Suffice it to say that somewhere within or beyond the Egress LFB, the IPV4 packet will be sent out of a port (Ethernet, virtual or physical, etc.).
Figure 2 demonstrates one way the router LFB topology in Figure 1 may be split across two FEs (e.g., two ASICs). Figure 2 shows the LFB topology split across FEs after the IPV4 unicast LPM LFB.
   FE1: Ingress LFB -> IPv4 Validator LFB -> IPv4 Ucast LPM LFB
         |
         | IPv4 packet + {ingress + NHinfo} metadata crosses the FE boundary
         v
   FE2: IPv4 NextHop LFB -> Egress LFB
        (with {ingress + NHdetails} metadata between them)
Figure 2: Split IPV4 packet service LFB topology
Some proprietary interconnects (for example, Broadcom HiGig over XAUI [brcm-higig]) are known to exist that carry both the IPV4 packet and the related metadata between the IPV4 Unicast LFB and the IPV4 NextHop LFB across the two FEs.
This document defines the inter-FE LFB, a standard mechanism for encapsulating, generating, receiving, and decapsulating packets and associated metadata between FEs over Ethernet.
   FE1: Network Function 1 --(pkt + NF1 metadata)--> NF2
                           --(pkt + NF1/2 metadata)--> NF3 --> out
Figure 3: A Network Function Service Chain within one FE
In this section, we show an example of an arbitrary Network Function that is more coarse-grained in terms of functionality. Each Network Function may constitute more than one LFB.
The setup in Figure 3 is typical of most packet-processing boxes, where functions like DPI, NAT, Routing, etc. are connected in such a topology to deliver a packet-processing service to flows.
The setup in Figure 3 can instead be split across three FEs, as demonstrated in Figure 4. This could be motivated by scale-out reasons or because different vendors provide different pluggable functionality. The end result is to have the same packet service delivered to the different flows passing through.
   FE1: Network Function 1 --(pkt + NF1 metadata)-->
   FE2: NF2                --(pkt + NF1/2 metadata)-->
   FE3: NF3                --> out
Figure 4: A Network Function Service Chain Distributed Across Multiple FEs
We address the inter-FE connectivity requirements by defining the inter-FE LFB class. Using a standard LFB class definition implies no change to the basic ForCES architecture in the form of the core LFBs (FE Protocol or Object LFBs). This design choice was made after considering an alternative approach that would have required changes to both the FE Object capabilities (SupportedLFBs) and the LFBTopology component in order to describe the inter-FE connectivity capabilities as well as the runtime topology of the LFB instances.
The distributed LFB topology described in Figure 2 is re-illustrated in Figure 5 to show the topology location where the inter-FE LFB would fit in.
As can be observed in Figure 5, the same details passed between the IPV4 unicast LPM LFB and the IPV4 NH LFB are passed to the egress side of the Inter-FE LFB. This information is illustrated as a multiplicity of inputs into the egress Inter-FE LFB instance. Each input represents a unique set of selection information.
   FE1: Ingress LFB -> IPv4 Validator LFB -> IPv4 Ucast LPM LFB
        -> egress Inter-FE LFB (multiple inputs, each carrying the
           IPv4 packet + {ingress + NHinfo} metadata)
         |
         | Ethernet frame with the IPv4 packet data and metadata
         | {ingress + NHinfo + Inter-FE info}
         v
   FE2: ingress Inter-FE LFB (multiple inputs) -> IPv4 NextHop LFB
        -> Egress LFB (with {ingress + NHdetails} metadata between them)
Figure 5: Split IPV4 forwarding service with Inter-FE LFB
The egress side of the inter-FE LFB uses the received packet and metadata to select details for encapsulation when sending messages towards the selected neighboring FE. These details include what to communicate as the source and destination FEs (abstracted as MAC addresses, as described in Section 5.2); in addition, the original metadata may be passed along with the original IPV4 packet.
On the ingress side of the inter-FE LFB, the received packet and its associated metadata are used to decide how the packet graph continues. This includes which of the original metadata to restore and which next LFB class instance should continue the processing. In the illustrated Figure 5, an IPV4 NextHop LFB instance is selected, and the appropriate metadata is passed on to it.
The ingress side of the inter-FE LFB consumes some of the information passed and hands the IPV4 packet, along with the ingress and NHinfo metadata, to the IPV4 NextHop LFB, as was done earlier in both Figure 1 and Figure 2.
Section 5.1 describes some of the issues related to using Ethernet as the transport and how we mitigate them.
Section 5.2 defines a payload format that is to be used over Ethernet. An existing implementation of this specification on top of Linux Traffic Control [linux-tc] is described in [tc-ife].
Several issues that may arise from using direct Ethernet encapsulation need consideration.
Because we are adding data to existing Ethernet frames, MTU issues may arise. We recommend:
A raw packet arriving at the Inter-FE LFB (from upstream LFB class instances) may have a COS metadatum indicating how it should be treated from a Quality of Service perspective.
The resulting Ethernet frame will eventually be treated by a downstream LFB (typically a port LFB instance), and its COS mark will be honored in terms of priority. In other words, the presence of the Inter-FE LFB does not change the COS semantics.
It is noted that much of the traffic passing through an FE that utilizes the Inter-FE LFB is expected to be IP based, which is generally assumed to be congestion controlled and therefore does not need additional congestion control mechanisms [draft-ietf-tsvwg-rfc5405bis]. Traffic engineering SHOULD be performed when deploying inter-FE encapsulation, and the increase in packet overhead due to the additional encapsulated information should be considered as input to that traffic engineering exercise. Furthermore, the Inter-FE LFB MUST only be deployed within a single network (with a single network operator) or within networks of an adjacent set of cooperating network operators where traffic is managed to avoid congestion. Additional measures SHOULD be imposed to restrict the impact of inter-FE encapsulated traffic on other traffic; for example:
The Ethernet wire encapsulation is illustrated in Figure 6. The process that leads to this encapsulation is described in Section 6. The resulting frame is 32 bit aligned.
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                    Destination MAC Address                    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |    Destination MAC Address    |      Source MAC Address       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Source MAC Address                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      Inter-FE ethertype       |        Metadata length        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           TLV encoded Metadata ~~~..............~~            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           TLV encoded Metadata ~~~..............~~            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Original packet data ~~................~~           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 6: Packet format suggestion
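The following Python sketch shows one plausible way of laying down the frame of Figure 6. It is illustrative only: the ethertype value is a placeholder for the type requested in the IANA Considerations section, the TLV encoding (16-bit metadata ID, 16-bit value length, value) and the scope of the metadata-length field are assumptions of this sketch rather than definitions, and padding to keep the frame 32-bit aligned is omitted.

   import struct

   IFE_ETHERTYPE = 0xED3E   # placeholder value; the real one is assigned

   def encode_tlv(meta_id: int, value: bytes) -> bytes:
       # assumed TLV layout: 16-bit ID, 16-bit value length, value
       return struct.pack("!HH", meta_id, len(value)) + value

   def build_ife_frame(dst_mac: bytes, src_mac: bytes,
                       metadata: dict, packet: bytes) -> bytes:
       tlvs = b"".join(encode_tlv(mid, val) for mid, val in metadata.items())
       return (dst_mac + src_mac
               + struct.pack("!HH", IFE_ETHERTYPE, len(tlvs))
               + tlvs + packet)

   # Example use (all values invented):
   # frame = build_ife_frame(b"\x02" * 6, b"\x04" * 6,
   #                         {0x10: b"\x00\x01"}, b"...IPv4 packet...")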
The Ethernet header illustrated in Figure 6 has the following semantics:
The Ethernet inter-FE LFB has two LFB input port groups and three LFB output ports as shown in Figure 7.
The inter-FE LFB defines two components used to aid the processing described in Section 6.2.
                        +-----------------+  Inter-FE LFB
                        |                 |
    Encapsulated        |             OUT2+--> decapsulated Packet
    Ethernet Frame      |                 |    + metadata
    ------------------->|IngressInGroup   |
                        |                 |
    raw Packet +        |             OUT1+--> Encapsulated Ethernet
    Metadata            |                 |    Frame
    ------------------->|EgressInGroup    |
                        |                 |
                        |    EXCEPTIONOUT +--> ExceptionID, packet
                        |                 |    + metadata
                        +-----------------+
Figure 7: Inter-FE LFB
The Inter-FE LFB (instance) can be positioned at the egress of a source FE. Figure 5 illustrates an example source FE in the form of FE1. In such a case, an Inter-FE LFB instance receives, via an LFB port in the EgressInGroup port group, a raw packet and associated metadata from the preceding LFB instances. The input information is used to select how to generate and encapsulate the new frame. The set of all selections is stored in the LFB component IFETable, described further below. The processed, encapsulated Ethernet frame goes out on OUT1 to a downstream LFB instance when processing succeeds, or to the EXCEPTIONOUT port in the case of a failure.
The Inter-FE LFB (instance) can be positioned at the ingress of a receiving FE. Figure 5 illustrates an example destination FE in the form of FE2. In such a case, an Inter-FE LFB receives, via an LFB port in the IngressInGroup, an encapsulated Ethernet frame. Successful processing of the packet results in a raw packet with associated metadata IDs going downstream to the LFB connected on OUT2. On failure, the data is sent out EXCEPTIONOUT.
The egress Inter-FE LFB receives packet data and any accompanying Metadatum at an LFB port of the LFB instance's input port group labelled EgressInGroup.
The LFB implementation may use the incoming LFB port (within LFB port group EgressInGroup) to map to a table index used to lookup the IFETable table.
If the lookup is successful, the matched table row containing the InterFEinfo details is retrieved as the tuple {optional IFETYPE, optional StatId, Destination MAC address (DSTFE), Source MAC address (SRCFE), optional metafilters}. The metafilter list defines a whitelist of which metadata are to be passed to the neighboring FE. The inter-FE LFB will perform the following actions using the resulting tuple:
The resulting packet is sent to the next LFB instance connected to the OUT1 LFB-port; typically a port LFB.
In the case of a failed lookup, the original packet and associated metadata are sent out the EXCEPTIONOUT port with the exceptionID of EncapTableLookupFailed [RFC6956]. Note that the EXCEPTIONOUT LFB port is merely an abstraction, and an implementation may in fact drop packets as described above.
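A non-normative Python sketch of the egress-side behaviour described above follows; the IFETable contents, the mapping from LFB input port to table index, and the metadata IDs are all invented for illustration.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class IFEInfo:                          # mirrors the IFEInfo structure
       DSTFE: bytes                        # destination FE MAC address
       SRCFE: bytes                        # source FE MAC address
       IFETYPE: Optional[int] = None       # optional ethertype to use
       StatId: Optional[int] = None        # optional stats table index
       MetaFilterList: Optional[list] = None  # optional metadata whitelist

   ife_table = {                           # index -> IFETable row (toy data)
       0: IFEInfo(DSTFE=bytes(6), SRCFE=bytes(6), MetaFilterList=[0x10, 0x11]),
   }

   def egress_process(in_port: int, packet: bytes, metadata: dict):
       row = ife_table.get(in_port)        # e.g., LFB port mapped to index
       if row is None:                     # failed lookup -> EXCEPTIONOUT
           return ("EXCEPTIONOUT", "EncapTableLookupFailed", packet, metadata)
       if row.MetaFilterList is None:      # no whitelist: pass all metadata
           allowed = dict(metadata)
       else:                               # whitelist: keep only listed IDs
           allowed = {m: v for m, v in metadata.items()
                      if m in row.MetaFilterList}
       # The Ethernet frame itself would be composed as in the Figure 6
       # sketch, using row.DSTFE, row.SRCFE, and row.IFETYPE (if present).
       return ("OUT1", row.DSTFE, row.SRCFE, allowed, packet)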
An ingressing inter-FE LFB packet is recognized by inspecting the ethertype and, optionally, the destination and source MAC addresses. A matching packet is mapped to an LFB instance port in the IngressInGroup. The IFETable row entry matching the LFB instance port may optionally have programmed metadata filters. In such a case, the ingress processing should use the metadata filters as a whitelist of which metadata are to be allowed.
In the case of a processing failure in either the ingress or egress positioning of the LFB, the packet and metadata are sent out the EXCEPTIONOUT LFB port with the appropriate error ID. Note that the EXCEPTIONOUT LFB port is merely an abstraction, and an implementation may in fact drop packets as described above.
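The following non-normative Python sketch illustrates the ingress-side processing: recognize the frame by its ethertype, decapsulate the TLV-encoded metadata, apply the optional whitelist, and hand the raw packet plus metadata to OUT2. The TLV layout assumed here matches the earlier Figure 6 sketch and is illustrative only; the optional MAC-address check is omitted.

   import struct

   IFE_ETHERTYPE = 0xED3E   # placeholder; see IANA Considerations

   def ingress_process(frame: bytes, metafilter=None):
       ethertype, meta_len = struct.unpack("!HH", frame[12:16])
       if ethertype != IFE_ETHERTYPE:       # not an inter-FE frame
           return ("EXCEPTIONOUT", frame, {})
       metadata, off, end = {}, 16, 16 + meta_len
       while off < end:                     # walk the TLV-encoded metadata
           mid, mlen = struct.unpack("!HH", frame[off:off + 4])
           value = frame[off + 4:off + 4 + mlen]
           if metafilter is None or mid in metafilter:
               metadata[mid] = value        # keep only whitelisted metadata
           off += 4 + mlen
       return ("OUT2", frame[end:], metadata)  # raw packet + metadata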
There are two LFB components accessed by the CE. The reader is asked to refer to the definitions in Figure 8.
The first component, populated by the CE, is an array known as the IFETable table. The array rows are made up of the IFEInfo structure. The IFEInfo structure constitutes: an optional IFETYPE, an optionally present StatId, the destination MAC address (DSTFE), the source MAC address (SRCFE), and an optionally present array of allowed metadata IDs (MetaFilterList).
The second component (ID 2), populated by the FE and read by the CE, is an indexed array known as the IFEStats table. Each IFEStats row carries statistics information in the structure bstats.
A note about the StatId relationship between the IFETable and IFEStats tables: an implementation may choose to map between an IFETable row and an IFEStats table row using the StatId entry in the matching IFETable row; in that case, the IFETable StatId must be present. An alternative implementation may map an IFETable row to an IFEStats table row at provisioning time. Yet another alternative implementation may choose not to use the IFETable row StatId at all and instead use the IFETable row index as the IFEStats index. For these reasons, the StatId component is optional.
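The three alternatives above can be summarized in a small non-normative Python sketch; the function name and arguments are hypothetical.

   def stats_index(ife_row_index, stat_id=None, provisioned_map=None):
       if stat_id is not None:             # 1) explicit StatId in the row
           return stat_id
       if provisioned_map is not None:     # 2) mapping fixed at provisioning
           return provisioned_map[ife_row_index]
       return ife_row_index                # 3) reuse the IFETable row index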
<LFBLibrary xmlns="urn:ietf:params:xml:ns:forces:lfbmodel:1.1"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            provides="IFE">
  <frameDefs>
    <frameDef>
      <name>PacketAny</name>
      <synopsis>Arbitrary Packet</synopsis>
    </frameDef>
    <frameDef>
      <name>InterFEFrame</name>
      <synopsis>
         Ethernet Frame with encapsulated IFE information
      </synopsis>
    </frameDef>
  </frameDefs>

  <dataTypeDefs>
    <dataTypeDef>
      <name>bstats</name>
      <synopsis>Basic stats</synopsis>
      <struct>
        <component componentID="1">
          <name>bytes</name>
          <synopsis>The total number of bytes seen</synopsis>
          <typeRef>uint64</typeRef>
        </component>
        <component componentID="2">
          <name>packets</name>
          <synopsis>The total number of packets seen</synopsis>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="3">
          <name>errors</name>
          <synopsis>The total number of packets with errors</synopsis>
          <typeRef>uint32</typeRef>
        </component>
      </struct>
    </dataTypeDef>
    <dataTypeDef>
      <name>IFEInfo</name>
      <synopsis>Describing IFE table row Information</synopsis>
      <struct>
        <component componentID="1">
          <name>IFETYPE</name>
          <synopsis>
             the ethernet type to be used for outgoing IFE frame
          </synopsis>
          <optional/>
          <typeRef>uint16</typeRef>
        </component>
        <component componentID="2">
          <name>StatId</name>
          <synopsis>
             the Index into the stats table
          </synopsis>
          <optional/>
          <typeRef>uint32</typeRef>
        </component>
        <component componentID="3">
          <name>DSTFE</name>
          <synopsis>
             the destination MAC address of destination FE
          </synopsis>
          <typeRef>byte[6]</typeRef>
        </component>
        <component componentID="4">
          <name>SRCFE</name>
          <synopsis>
             the source MAC address used for the source FE
          </synopsis>
          <typeRef>byte[6]</typeRef>
        </component>
        <component componentID="5">
          <name>MetaFilterList</name>
          <synopsis>
             the allowed metadata filter table
          </synopsis>
          <optional/>
          <array type="variable-size">
            <typeRef>uint32</typeRef>
          </array>
        </component>
      </struct>
    </dataTypeDef>
  </dataTypeDefs>

  <LFBClassDefs>
    <LFBClassDef LFBClassID="18">
      <name>IFE</name>
      <synopsis>
         This LFB describes IFE connectivity parameterization
      </synopsis>
      <version>1.0</version>
      <inputPorts>
        <inputPort group="true">
          <name>EgressInGroup</name>
          <synopsis>
             The input port group of the egress side.
             It expects any type of Ethernet frame.
          </synopsis>
          <expectation>
            <frameExpected>
              <ref>PacketAny</ref>
            </frameExpected>
          </expectation>
        </inputPort>
        <inputPort group="true">
          <name>IngressInGroup</name>
          <synopsis>
             The input port group of the ingress side.
             It expects an interFE encapsulated Ethernet frame.
          </synopsis>
          <expectation>
            <frameExpected>
              <ref>InterFEFrame</ref>
            </frameExpected>
          </expectation>
        </inputPort>
      </inputPorts>
      <outputPorts>
        <outputPort>
          <name>OUT1</name>
          <synopsis>
             The output port of the egress side
          </synopsis>
          <product>
            <frameProduced>
              <ref>InterFEFrame</ref>
            </frameProduced>
          </product>
        </outputPort>
        <outputPort>
          <name>OUT2</name>
          <synopsis>
             The output port of the Ingress side
          </synopsis>
          <product>
            <frameProduced>
              <ref>PacketAny</ref>
            </frameProduced>
          </product>
        </outputPort>
        <outputPort>
          <name>EXCEPTIONOUT</name>
          <synopsis>
             The exception handling path
          </synopsis>
          <product>
            <frameProduced>
              <ref>PacketAny</ref>
            </frameProduced>
            <metadataProduced>
              <ref>ExceptionID</ref>
            </metadataProduced>
          </product>
        </outputPort>
      </outputPorts>
      <components>
        <component componentID="1" access="read-write">
          <name>IFETable</name>
          <synopsis>
             the table of all InterFE relations
          </synopsis>
          <array type="variable-size">
            <typeRef>IFEInfo</typeRef>
          </array>
        </component>
        <component componentID="2" access="read-only">
          <name>IFEStats</name>
          <synopsis>
             the stats corresponding to the IFETable table
          </synopsis>
          <typeRef>bstats</typeRef>
        </component>
      </components>
    </LFBClassDef>
  </LFBClassDefs>
</LFBLibrary>
Figure 8: Inter-FE LFB XML
The authors would like to thank Joel Halpern and Dave Hood for the stimulating discussions. Evangelos Haleplidis shepherded and contributed to improving this document. Alia Atlas was the AD sponsor of this document and did a tremendous job of critiquing it. The authors are grateful to Joel Halpern in his role as the Routing Area reviewer in shaping the content of this document. David Black put a lot of effort in making sure congestion control considerations are sane.
This memo includes one IANA request within the registry at https://www.iana.org/assignments/forces.
The request is for the sub-registry "Logical Functional Block (LFB) Class Names and Class Identifiers" to reserve the LFB class name IFE with LFB class identifier 18 and version 1.0.
   +------------+-----------+---------+------------------+-----------+
   | LFB Class  | LFB Class | LFB     | Description      | Reference |
   | Identifier | Name      | Version |                  |           |
   +------------+-----------+---------+------------------+-----------+
   | 18         | IFE       | 1.0     | An IFE LFB to    | This      |
   |            |           |         | standardize      | document  |
   |            |           |         | inter-FE LFB for |           |
   |            |           |         | ForCES Network   |           |
   |            |           |         | Elements         |           |
   +------------+-----------+---------+------------------+-----------+
This memo also includes a request for a new Ethernet protocol type, as described in Section 5.2.
The FEs involved in the Inter-FE LFB belong to the same Network Element (NE) and are within the scope of a single administrative Ethernet LAN private network. Trust of policy in the control and its treatment in the datapath already exists.
This document does not alter [RFC5812] or the ForCES protocol [RFC5810]. As such, it has no impact on their security considerations. This document simply defines the operational parameters and capabilities of an LFB that performs LFB class instance extensions across nodes under a single administrative control. This document does not attempt to analyze the presence or possibility of security interactions created by allowing LFB graph extension on packets. Any such issues, if they exist, should be resolved by the designers of the particular data path, i.e., they are not the responsibility of the general mechanism outlined in this document. One option for protecting the Ethernet transport is the use of IEEE 802.1AE Media Access Control Security [ieee8021ae], which provides encryption and authentication.
[RFC5810]  Doria, A., Hadi Salim, J., Haas, R., Khosravi, H., Wang, W.,
           Dong, L., Gopal, R., and J. Halpern, "Forwarding and Control
           Element Separation (ForCES) Protocol Specification",
           RFC 5810, DOI 10.17487/RFC5810, March 2010.

[RFC5811]  Hadi Salim, J. and K. Ogawa, "SCTP-Based Transport Mapping
           Layer (TML) for the Forwarding and Control Element
           Separation (ForCES) Protocol", RFC 5811,
           DOI 10.17487/RFC5811, March 2010.

[RFC5812]  Halpern, J. and J. Hadi Salim, "Forwarding and Control
           Element Separation (ForCES) Forwarding Element Model",
           RFC 5812, DOI 10.17487/RFC5812, March 2010.

[RFC7391]  Hadi Salim, J., "Forwarding and Control Element Separation
           (ForCES) Protocol Extensions", RFC 7391,
           DOI 10.17487/RFC7391, October 2014.

[RFC7408]  Haleplidis, E., "Forwarding and Control Element Separation
           (ForCES) Model Extension", RFC 7408, DOI 10.17487/RFC7408,
           November 2014.