Internet Engineering Task Force                      D.J. Joachimpillai
Internet-Draft                                                  Verizon
Intended status: Informational                         October 11, 2012
Expires: April 12, 2013
ForCES Inter-FE LFB
draft-joachimpillai-forces-interfelfb-00
Forwarding and Control Element Separation (ForCES) defines an architectural framework and associated protocols to standardize information exchange between the control plane and the forwarding plane in a ForCES Network Element (ForCES NE). RFC 5812 defines the ForCES Model, which provides a formal way to represent the capabilities, state, and configuration of forwarding elements (FEs) within the context of the ForCES protocol, so that control elements (CEs) can control the FEs accordingly. More specifically, the model describes the logical functions that are present in an FE, what capabilities these functions support, and how these functions are or can be interconnected.
At the moment the ForCES charter restricts the LFB topology to be within an FE. This document describes a non-intrusive way to extend the LFB topology across FEs.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 12, 2013.
Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
This document follows the terminology defined by the ForCES Model in [RFC5812]. The required definitions are repeated below for clarity.
In the ForCES architecture, a packet service can be modelled by composing a graph of one or more LFB instances. The reader is referred to the ForCES Model [RFC5812] for details.
The FEObject LFB capabilities as defined in the ForCES Model [RFC5812] include an array (SupportedLFBs) that contains the information about each supported LFB class. Each entry in the SupportedLFBs array describes an LFB class that the FE supports. In addition to indicating that the FE supports the class, FEs with modifiable LFB topologies include information about how LFBs of the specified class may be connected to other LFBs. This information describes which LFB classes the specified LFB class may succeed or precede in an LFB topology. The capability of an FE is advertised to the CE upon association.
The CE may create a packet service, describing the LFB instance graph connections, by updating the FEObject LFBTopology component. The created topology contains information about each inter-LFB link inside the FE, where each link is described by an LFBLinkType dataTypeDef. The LFBLinkType component contains sufficient information to precisely identify the end points of a link of a service graph.
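As a non-normative illustration of the kind of information carried per link, the Python sketch below models a fragment of an intra-FE service graph. The class and its field names are purely illustrative; they are not the LFBLinkType schema defined in [RFC5812].

   # Illustrative only: approximates the kind of information an inter-LFB
   # link carries; this is not the normative LFBLinkType schema of RFC 5812.
   from dataclasses import dataclass

   @dataclass
   class LFBLink:
       from_lfb_class: str     # source LFB class name
       from_lfb_instance: int  # source LFB instance ID within the FE
       from_port: str          # output port (group) on the source LFB
       to_lfb_class: str       # destination LFB class name
       to_lfb_instance: int    # destination LFB instance ID within the FE
       to_port: str            # input port (group) on the destination LFB

   # A fragment of an intra-FE service graph expressed as links.
   lfb_topology = [
       LFBLink("IPv4Validator", 1, "OUT", "IPv4UcastLPM", 1, "IN"),
       LFBLink("IPv4UcastLPM", 1, "OUT", "IPv4NextHop", 1, "IN"),
   ]

   for link in lfb_topology:
       print(f"{link.from_lfb_class}.{link.from_lfb_instance}:{link.from_port}"
             f" -> {link.to_lfb_class}.{link.to_lfb_instance}:{link.to_port}")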
Often there are requirements for the packet service graph to cross FE boundaries. This could stem from a desire to scale the service or from a need to interact with LFBs which reside in a separate FE (e.g. a lookaside interface to a shared TCAM, an interconnected chip, or functionality as coarse grained as an external NAT FE box being part of the service graph).
Given that the ForCES inter-LFB architecture calls for the ability to pass metadata between LFBs, it is imperative to define mechanisms that allow metadata to be passed between inter-FE LFBs (given that packet data passing is already taken care of).
The ForCES charter restricts the LFB links in a topology to be within a single FE (intra-FE connectivity), and as such both the relevant capabilities and component definitions in the FEObject LFB are restricted to that scope. This document describes extending the LFB topology across FEs, i.e. inter-FE connectivity, without needing any changes to the ForCES definitions.
The sample LFB topology in Figure 1 demonstrates a service graph for delivering a basic IPV4 forwarding service within one FE. Note: although the diagram shows LFB classes connecting in the graph, in reality it is a graph of LFB class instances that are inter-connected.
The illustration is meant only as an exercise to showcase how data and metadata are sent downstream or upstream on a graph of LFBs. For this reason, it abstracts out any ports in both directions and talks about a generic ingress and egress LFB. For illustration purposes, the diagram does not show exception or error paths. Also left out are details on Reverse Path Filtering, ECMP, multicast handling, etc. In other words, this is not meant to be a complete description of an IPV4 forwarding application; for a more complete example, please refer to the LFBlib document [XXX: ref here].
The output of the ingress LFB(s) coming into the IPv4 Validator LFB will have both the IPV4 packets and, depending on the implementation, a variety of ingress metadata such as offsets into the different headers, any classification metadata, physical and virtual ports encountered, tunnelling information etc. These metadata are lumped together as "ingress metadata".
Once the IPV4 validator vets the packet (for example, ensuring that the TTL has not expired), it feeds the packet and inherited metadata into the IPV4 unicast LPM LFB.
                 +-----------+             +-----------+
    IPV4 pkt     |           |  IPV4 pkt   |           |  IPV4 pkt    +---------+
   ------------->|   IPv4    |------------>|   IPv4    |------------->|  IPv4   |
    + ingress    | Validator |  + ingress  | Ucast LPM |  + ingress   | NextHop |
      metadata   |   LFB     |    metadata |   LFB     |    + NHinfo  |  LFB    |
                 +-----------+             +-----------+     metadata +----+----+
                                                                           |
                                               IPV4 pkt                    |
                 +---------+                   + {ingress + NHdetails}     |
                 | Egress  |                     metadata                  |
         <-------|  LFB    |<----------------------------------------------+
                 +---------+
Figure 1: Basic IPV4 packet service LFB topology
The IPV4 unicast LPM LFB does a longest prefix match lookup on the IPV4 FIB using the destination IP address as a search key. The result is typically a next hop selector which is passed downstream as metadata.
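The sketch below illustrates the longest prefix match step with a toy FIB; the routes and next hop selector values are made up for illustration and carry no significance beyond this example.

   # Non-normative sketch of a longest prefix match over a toy IPv4 FIB.
   # Routes and next-hop selector values are illustrative only.
   import ipaddress

   FIB = {
       ipaddress.ip_network("192.0.2.0/24"): 1,    # next hop selector 1
       ipaddress.ip_network("192.0.2.128/25"): 2,  # more specific route
       ipaddress.ip_network("0.0.0.0/0"): 0,       # default route
   }

   def lpm_lookup(dst_ip: str) -> int:
       """Return the next hop selector of the longest matching prefix."""
       dst = ipaddress.ip_address(dst_ip)
       best = max((net for net in FIB if dst in net),
                  key=lambda net: net.prefixlen)
       return FIB[best]

   print(lpm_lookup("192.0.2.200"))  # the /25 wins over the /24 -> prints 2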
The NextHop LFB receives the IPv4 packet with the associated next hop info metadata. The NextHop LFB consumes the NH info metadata and derives from it a table index with which to look up the next hop table in order to find the appropriate egress information. The lookup result is used to build the next hop details to be used downstream on the egress. This information may include any source and destination information (MAC addresses to use, in the case of ethernet) as well as egress ports. [Note: It is also at this LFB where, typically, the forwarding TTL decrement and IP checksum recalculation occur.]
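To make the next hop step concrete, the following non-normative sketch consumes a next hop selector, looks it up in a toy next hop table, and performs the TTL decrement and IPv4 header checksum recalculation mentioned in the note. The table contents and field names are assumptions for illustration only.

   # Non-normative sketch: use the NH selector to fetch egress details, then
   # decrement the TTL and recompute the IPv4 header checksum.
   import struct

   NEXTHOP_TABLE = {
       7: {"egress_port": 2,
           "dst_mac": "00:11:22:33:44:55",
           "src_mac": "00:aa:bb:cc:dd:ee"},
   }

   def ipv4_checksum(header: bytes) -> int:
       """Ones-complement sum over the header (checksum field already zeroed)."""
       total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
       while total > 0xFFFF:
           total = (total & 0xFFFF) + (total >> 16)
       return ~total & 0xFFFF

   def next_hop(ip_header: bytearray, nh_selector: int):
       egress = NEXTHOP_TABLE[nh_selector]   # egress port and MAC addresses
       ip_header[8] -= 1                     # TTL lives in byte 8
       ip_header[10:12] = b"\x00\x00"        # zero the checksum field
       ip_header[10:12] = struct.pack("!H", ipv4_checksum(bytes(ip_header)))
       return ip_header, egress

   # Minimal 20-byte IPv4 header: 192.0.2.1 -> 192.0.2.200, TTL 64.
   hdr = bytearray.fromhex("450000140001000040060000c0000201c00002c8")
   hdr, egress = next_hop(hdr, 7)
   print(hdr[8], egress["egress_port"])      # 63 2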
The details of the egress LFB are intentionally left out of this discussion. Suffice it to say that somewhere within or beyond the Egress LFB the IPV4 packet will be sent out on a port (ethernet, virtual or physical, etc).
Figure 2 demonstrates one way the LFB topology in Figure 1 may be split across two FEs (e.g. two ASICs). Figure 2 shows the LFB topology split across FEs after the IPV4 unicast LPM LFB.
   FE1
   +---------------------------------------------------------------+
   |  +---------+             +-----------+           +-----------+|
   |  | Ingress |  IPV4 pkt   |   IPv4    |  IPV4 pkt |   IPv4    ||
   |  |  LFB    |------------>| Validator |---------->| Ucast LPM ||
   |  |         |  + ingress  |   LFB     |  + ingress|   LFB     ||
   |  +----^----+    metadata +-----------+   metadata+-----+-----+|
   |       |                                                |      |
   +-------|------------------------------------------------|------+
           |                                                 |
                     IPv4 packet + {ingress + NHinfo} metadata
   FE2                                                       |
   +---------------------------------------------------------|-----+
   |                                                          V    |
   |      +---------+       IPV4 packet           +-----------+    |
   |      | Egress  |       + {ingress+NHdetails} |   IPv4    |    |
   |  <---|  LFB    |<----------------------------|  NextHop  |    |
   |      |         |         metadata            |   LFB     |    |
   |      +---------+                             +-----------+    |
   +----------------------------------------------------------------+
Figure 2: Split IPV4 packet service LFB topology
Some proprietary inter-connect (for example Broadcom HiGig over XAUI (XXX: ref needed)) may be used to carry both the IPV4 packet and the related metadata between the IPV4 Unicast LPM LFB and the IPV4 NextHop LFB across the two FEs.
We address the inter-FE connectivity by proposing an inter-FE LFB. Using an LFB implies no change to the basic ForCES architecture in the form of the core LFBs (the FE Protocol LFB or the FE Object LFB). This design choice was made after considering an alternative approach that would have required changes to both the FE Object capabilities (SupportedLFBs) as well as the LFBTopology component in order to describe the inter-FE connectivity capabilities as well as the runtime topology of the LFB instances.
The distributed LFB topology described in Figure 2 is re-illustrated in Figure 3 to show where in the topology the inter-FE LFB would fit.
As can be observed in Figure 3, the same details passed between the IPV4 unicast LPM LFB and the IPV4 NH LFB are passed to the egress side of the Inter-FE LFB. In addition, an index for the inter-FE LFB (InterFEid) is passed as metadata.
   FE1
   +---------------------------------------------------------------+
   |  +---------+             +-----------+           +-----------+|
   |  | Ingress |  IPV4 pkt   |   IPv4    |  IPV4 pkt |   IPv4    ||
   |  |  LFB    |------------>| Validator |---------->| Ucast LPM ||
   |  |         |  + ingress  |   LFB     |  + ingress|   LFB     ||
   |  +----^----+    metadata +-----------+   metadata+-----+-----+|
   |       |                                                |      |
   |       |               IPv4 pkt + metadata              |      |
   |       |      {ingress + NHinfo + InterFEid}            |      |
   |       |                                           +----V----+ |
   |       |                                           | InterFE | |
   |       |                                           | LFB     | |
   |       |                                           +----+----+ |
   +------------------------------------------------------|--------+
                            IPv4 packet and metadata       |
                        {ingress + NHinfo + InterFEinfo}   |
   FE2                                                     |
   +-------------------------------------------------------|-------+
   |                                                   +----V----+ |
   |                                                   | InterFE | |
   |                                                   | LFB     | |
   |                                                   +----+----+ |
   |                                                        |      |
   |                                IPv4 pkt + metadata     |      |
   |                                {ingress + NHinfo}      |      |
   |                                                        |      |
   |      +---------+       IPV4 packet           +---------V-+    |
   |      | Egress  |       + {ingress+NHdetails} |   IPv4    |    |
   |  <---|  LFB    |<----------------------------|  NextHop  |    |
   |      |         |         metadata            |   LFB     |    |
   |      +---------+                             +-----------+    |
   +----------------------------------------------------------------+
Figure 3: Split IPV4 forwarding service with Inter-FE LFB
The egress of the inter-FE LFB uses the received Inter-FE index (InterFEid metadata) to select the details for encapsulation towards the neighboring FE. These details, encapsulated as InterFEinfo metadata, include the source and destination FEIDs, as well as the destination LFB class and LFB instance to be used. In addition to the newly constructed metadata, the original metadata is left untouched and passed along with the IPV4 packet.
On the ingress side of the inter-FE LFB, the InterFEinfo metadata is used to decide the graph continuation, i.e. which LFB instance is to be passed the packet plus the original metadata; in this case, an IPV4 NextHop LFB instance.
The ingress side of the inter-FE LFB consumes the InterFEinfo metadata and passes on the IPV4 packet along with the ingress + NHinfo metadata to the IPV4 NextHop LFB, as was done earlier in both Figure 1 and Figure 2.
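A non-normative sketch of the metadata handling on either side of the inter-FE hop is given below. The metadata names, the table row layout, and the dictionaries are illustrative assumptions; the wire encoding itself is discussed separately below.

   # Non-normative sketch of the metadata handling across the inter-FE hop.
   # Metadata names and the NextFE row layout are illustrative assumptions.

   def interfe_egress(metadata: dict, nextfe_table: dict) -> dict:
       """Egress side: consume InterFEid, attach InterFEinfo, keep the rest."""
       out = dict(metadata)
       row = nextfe_table[out.pop("InterFEid")]   # index selected upstream
       out["InterFEinfo"] = dict(row)
       return out

   def interfe_ingress(metadata: dict):
       """Ingress side: consume InterFEinfo, pass the original metadata on."""
       out = dict(metadata)
       info = out.pop("InterFEinfo")              # picks the next LFB instance
       return info, out

   nextfe_table = {3: {"NEID": 1, "DstFEID": 2, "SrcFEID": 1,
                       "DstLFBClass": "IPv4NextHop", "DstLFBInstance": 1}}
   md = {"ingress": {"inport": 5}, "NHinfo": 7, "InterFEid": 3}
   on_wire = interfe_egress(md, nextfe_table)
   info, restored = interfe_ingress(on_wire)
   print(sorted(restored))                        # ['NHinfo', 'ingress']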
It is expected there will be several inter-FE transport possibilities. We describe the suggested encapsulation format (Figure 4), extended from the ForCES redirect packet format. We expect that, for any transport mechanism used, a description of how the different fields are encapsulated will be provided. We describe how ethernet encapsulation is used for this purpose in Section 4.2.1.
   +-- T = NESelector-TLV (optional)
   |        |
   |        +---- NEID
   |        |
   |        +---- Destination FEID
   |        |
   |        +---- Source FEID
   |
   +-- T = LFBSelector-TLV (optional)
   |        |
   |        +---- Destination LFB class
   |        |
   |        +---- Destination LFB Instance
   |
   +-- T = METADATA-TLV
   |        |
   |        +-- +-Meta Data ILV (I = metaid, L = length)
   |        |   |
   |        |   +----- V = Metadata value
   |        .
   |        .
   |        +-Meta Data ILV
   |
   +-- T = REDIRECTDATA-TLV
            |
            +-- Redirected packet Data
Figure 4: Packet format suggestion
The NESelector and LFBSelector are considered to be part of the InterFEinfo metadata described earlier. In some cases, either or both of these metadata may be left out of the encapsulation (by the inter-FE LFB implementation) if they are already implicitly defined. An example where that may make sense is the case of look-aside interfaces or proprietary hard-coded connections (such as the one shown in Figure 2). For this reason they are tagged as optional.
The METADATA and REDIRECTDATA TLV encapsulations are taken directly from [RFC5810] section 7.9.
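As a non-normative sketch, the following serializes the Figure 4 layout using ForCES-style TLVs and ILVs. The type code values are placeholders (nothing here is assigned), and the length/padding convention used is an assumption; the normative TLV and ILV encoding rules are those of [RFC5810].

   # Non-normative sketch of serializing the Figure 4 layout.
   # Type codes are unassigned placeholders; the "length covers the header,
   # value padded to 32 bits" convention is an assumption -- see RFC 5810 for
   # the normative TLV/ILV encoding rules.
   import struct

   T_NESELECTOR, T_LFBSELECTOR = 0x0101, 0x0102          # placeholder codes
   T_METADATA, T_REDIRECTDATA = 0x0103, 0x0104           # placeholder codes

   def tlv(t: int, value: bytes) -> bytes:
       pad = (-len(value)) % 4
       return struct.pack("!HH", t, 4 + len(value)) + value + b"\x00" * pad

   def ilv(i: int, value: bytes) -> bytes:
       pad = (-len(value)) % 4
       return struct.pack("!II", i, 8 + len(value)) + value + b"\x00" * pad

   def interfe_payload(neid, dst_feid, src_feid, lfb_class, lfb_inst,
                       metadata, packet):
       return b"".join([
           tlv(T_NESELECTOR, struct.pack("!III", neid, dst_feid, src_feid)),
           tlv(T_LFBSELECTOR, struct.pack("!II", lfb_class, lfb_inst)),
           tlv(T_METADATA, b"".join(ilv(m, v) for m, v in metadata)),
           tlv(T_REDIRECTDATA, packet),
       ])

   payload = interfe_payload(1, 2, 1, 18, 1,
                             [(0x10, struct.pack("!I", 7))],
                             b"<ipv4 packet bytes>")
   print(len(payload))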
Although it is expected that multiple transport encapsulations could be used, they are all expected to carry the format described in Figure 4 or, in the case of legacy formats, to be able to interpret the inter-FE configuration and translate it into proprietary or legacy formatting.
A variety of metadata passing encapsulations already exist which are proprietary or semi-standard by virtue of being widely deployed. These include NPF LA-1 (XXX: ref here), Broadcom HiGig/2 (XXX: ref here), as well as Interlaken (XXX: ref here).
In this section we describe a format that is to be used over Ethernet. It is expected that an ethernet type (to be defined) will be used to indicate that a wire format is carrying an inter-FE LFB packet.
XXX: The finer details of source and destination MAC address selection are left out for the next draft release. Also left out are any load balancing/multi-pathing activities across selections of destination FEs.
   *--+ Ethernet type = XXXX
      |
      +-- T = NESelector-TLV (optional)
      |        |
      |        +---- NEID
      |        |
      |        +---- Destination FEID
      |        |
      |        +---- Source FEID
      |
      +-- T = LFBSelector-TLV
      |        |
      |        +---- Destination LFB class
      |        |
      |        +---- Destination LFB Instance
      |
      +-- T = METADATA-TLV
      |        |
      |        +-- +-Meta Data ILV (I = metaid, L = length)
      |        |   |
      |        |   +----- V = Metadata value
      |        .
      |        .
      |        +-Meta Data ILV
      |
      +-- T = REDIRECTDATA-TLV
               |
               +-- Redirected packet Data
Figure 5: Packet format suggestion
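A non-normative sketch of the ethernet wrapping is shown below. Since the ethernet type is still to be defined ("XXXX" above), the sketch uses a value from the IEEE local-experimentation range purely as a stand-in, and it deliberately leaves MAC address selection to the caller (see the XXX note above).

   # Non-normative sketch: prepend an Ethernet header to the inter-FE payload
   # of Figure 5. The EtherType below is a local-experimentation stand-in for
   # the yet-to-be-defined value, and MAC selection is left to the caller.
   import struct

   ETHERTYPE_INTERFE = 0x88B5   # placeholder; the real value is to be defined

   def ethernet_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
       assert len(dst_mac) == 6 and len(src_mac) == 6
       return dst_mac + src_mac + struct.pack("!H", ETHERTYPE_INTERFE) + payload

   frame = ethernet_frame(bytes.fromhex("001122334455"),
                          bytes.fromhex("00aabbccddee"),
                          b"<inter-FE TLVs>")
   print(len(frame))   # 14-byte Ethernet header + payload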
There are several issues that may arise due to using direct ethernet encapsulation.
XXX: This issue will be addressed further in the next draft release. Suggestions welcome.
The inter-FE LFB has a single LFB input port and two LFB output ports.
                  +------------------+
                  |                  |
    Packet +      |              OUT +--> encapsulated Packet + metadata
   -------------->|                  |
    Metadata      |     EXCEPTIONOUT +--> Errorid, Packet + metadata
                  |                  |
                  +------------------+
Figure 6: Inter-FE LFB
The Inter-FE LFB receives a packet and metadata. The InterFEid metadatum MUST be present. The formatting of the received packet buffer and metadata is expected to be implementation specific. If the Inter-FE LFB is located at the ingress of the FE, then it is expected that some parsing LFB will already have formatted the packet and metadata into the native format.
The Inter-FE LFB uses the InterFEid metadatum to look up the NextFE table. The lookup result constitutes a table row which holds the InterFEinfo details, i.e. the tuple {NEID, Destination FEID, Source FEID, Destination LFB class, Destination LFB Instance}.
In the case of a successful lookup, for the ethernet formatting introduced here, the inter-FE LFB will create an ethernet frame with ethertype XXXX.
The resulting packet is sent to the LFB instance connected to the OUT LFB port.
In the case of failure, the packet and metadata are sent out the EXCEPTIONOUT LFB port with the proper error id (XXX: More description to be added).
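A non-normative sketch of the processing path just described is given below. The port names follow Figure 6; the error id, the table layout, and the encapsulate() stand-in are illustrative assumptions rather than definitions made by this draft.

   # Non-normative sketch of the Inter-FE LFB datapath. Port names follow
   # Figure 6; the error id, table layout, and encapsulate() stand-in are
   # illustrative assumptions.

   def encapsulate(row: dict, metadata: dict, packet: bytes) -> bytes:
       # Stand-in for the Figure 4/5 ethernet encapsulation of packet+metadata.
       return repr((row, metadata)).encode() + packet

   def interfe_lfb_process(packet: bytes, metadata: dict, nextfe_table: dict):
       interfeid = metadata.get("InterFEid")      # MUST be present
       row = nextfe_table.get(interfeid)
       if row is None:
           # Failure: packet + metadata out the EXCEPTIONOUT port, with an
           # error id (the error id space is still to be described).
           return "EXCEPTIONOUT", dict(metadata, errorid="lookup-failure"), packet
       # Success: the encapsulated result goes to whatever LFB instance is
       # wired to the OUT port.
       remaining = {k: v for k, v in metadata.items() if k != "InterFEid"}
       return "OUT", {}, encapsulate(row, remaining, packet)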
There is a single LFB component populated by the CE. The component is an array component known as the NextFE table. Each row of the table contains the columns {NEID, Destination FEID, Source FEID, Destination LFB class, Destination LFB Instance}. The table is looked up by a 32 bit index passed from an upstream LFB class instance in the form of the InterFEid metadatum.
The CE programs LFB instances in a service graph that require inter-FE connectivity with InterFEid values corresponding to the inter-FE LFB NextFE table entries to use.
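The sketch below shows, non-normatively, how the relationship between the InterFEid metadatum and the NextFE table rows might look once the CE has populated the table. Column names mirror the tuple above; all values are illustrative.

   # Non-normative sketch of a CE-populated NextFE table. Column names mirror
   # the tuple in the text; all values are illustrative.

   NEXTFE_TABLE = {
       # 32-bit InterFEid index -> row
       1: {"NEID": 1, "DstFEID": 2, "SrcFEID": 1,
           "DstLFBClass": "IPv4NextHop", "DstLFBInstance": 1},
       2: {"NEID": 1, "DstFEID": 3, "SrcFEID": 1,
           "DstLFBClass": "IPv4NextHop", "DstLFBInstance": 2},
   }

   # An upstream LFB instance that needs inter-FE connectivity is programmed
   # with InterFEid = 1; packets it emits carry that metadatum and therefore
   # select row 1 of the table above.
   print(NEXTFE_TABLE[1]["DstFEID"])   # 2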
XXX: If we support multiple encapsulation methods (other than ethernet), then we could use capabilities to advertise them as different possibilities. It is envisioned that the NextFE table row would then have a column indicating to the inter-FE LFB how to encapsulate the different matches.
TBA
TBA
The author would like to thank Jamal Hadi Salim for reviewing this draft and making suggestions to make it align better with the ForCES architecture.
This memo includes no request to IANA.
TBD
[RFC5810]  Doria, A., Hadi Salim, J., Haas, R., Khosravi, H., Wang, W., Dong, L., Gopal, R., and J. Halpern, "Forwarding and Control Element Separation (ForCES) Protocol Specification", RFC 5810, March 2010.

[RFC5812]  Halpern, J. and J. Hadi Salim, "Forwarding and Control Element Separation (ForCES) Forwarding Element Model", RFC 5812, March 2010.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.