Networking Working Group                                   T. Przygienda
Internet-Draft                                                  J. Drake
Intended status: Standards Track                                A. Atlas
Expires: July 28, 2017                                  Juniper Networks
                                                        January 24, 2017
RIFT: Routing in Fat Trees
draft-przygienda-rift-01
This document outlines a specialized, dynamic routing protocol for Clos and fat-tree network topologies. The protocol (1) deals with automatic construction of fat-tree topologies based on detection of links, (2) minimizes the amount of routing state held at each level, (3) automatically prunes the topology distribution exchanges to a sufficient subset of links, (4) supports automatic disaggregation of prefixes on link and node failures to prevent blackholing and suboptimal routing, (5) allows traffic steering and re-routing policies, and ultimately (6) provides mechanisms to synchronize a limited key-value data-store that can be used, after protocol convergence, to e.g. bootstrap higher levels of functionality on nodes.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on July 28, 2017.
Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Clos [CLOS] and Fat-Tree [FATTREE] topologies have gained prominence in today's networking, primarily as a result of the paradigm shift towards a centralized data-center based architecture that is poised to deliver a majority of computation and storage services in the future. The existing set of dynamic routing protocols was originally geared towards networks with an irregular topology and a low degree of connectivity, and consequently several attempts to adapt those have been made. Most successfully, BGP [RFC4271] [RFC7938] has been extended to this purpose, not so much due to its inherent suitability to solve the problem but rather because of the perceived capability to modify it "quicker" and the inherent difficulties of link-state [DIJKSTRA] based protocols in fulfilling certain of the resulting requirements.
In looking at the problem through the lens of its requirements, an optimal approach does not seem to be a simple modification of either a link-state (distributed computation) or distance-vector (diffused computation) approach, but rather a mixture of both, colloquially best described as 'link-state towards the spine' and 'distance vector towards the leafs'. The balance of this document details the resulting protocol.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
This section presents the terminology used in this document. It is assumed that the reader is thoroughly familiar with the terms and concepts used in OSPF [RFC2328] and IS-IS [RFC1142], as well as with the related graph-theoretical concepts of shortest path first (SPF) [DIJKSTRA] computation and directed acyclic graphs (DAG).
[Figure 1, rendered here in text form: Spine 21 and Spine 22 constitute Level 2. Node111 and Node112 (connected by an optional E/W link) together with Leaf111 and Leaf112 form PoD 1; Node121, Node122, Leaf121 and Leaf122 form PoD 2. The nodes sit at Level 1 and the leafs at Level 0. Each spine connects to all four Level 1 nodes; each Level 1 node connects to the two leafs of its own PoD; Leaf111 and Leaf112 are additionally joined by a leaf-to-leaf (L2L) link. The leafs advertise Prefix111, Prefix112, Prefix121 and Prefix122 respectively, and Leaf112 and Leaf121 both advertise a multihomed prefix. Default routes (0/0) point southbound at each level while all TIEs flow northbound; the compass rose marks north (N) up, south (S) down, and east-west (E/W) sideways.]
Figure 1: A two level spine-and-leaf topology
We will use this topology (commonly called a fat tree/network in modern DC considerations [VAHDAT08], as a homonym of the original definition of the term [FATTREE]) in all further considerations. It depicts a generic "fat-tree", and the concepts explained here using three levels carry by induction to further levels and higher degrees of connectivity.
[RFC7938] gives the original set of requirements augmented here based upon recent experience in the operation of fat-tree networks.
The following list represents possible requirements and requirements under discussion:
Derived from the above requirements, we present a detailed outline of a protocol optimized for Routing in Fat Trees (RIFT) that, in most abstract terms, has many properties of a modified link-state protocol [RFC2328] [RFC1142] when "pointing north" and of a path-vector [RFC4271] protocol when "pointing south". Albeit an unusual combination, it quite naturally exhibits the desirable properties we seek.
The novel property of RIFT is that it floods northbound "flat" link-state information so that each level understands the full topology of levels south of it. In contrast, in the southbound direction the protocol operates like a path vector protocol or rather a distance vector with implicit split horizon since the topology constraints make a diffused computation front propagating in all directions unnecessary.
All protocol elements are carried over UDP. LIE exchange happens over a well-known multicast address with a TTL of 1. The TIE exchange mechanism uses the address and port indicated by each node in the LIE exchange, with a TTL of 1 as well.
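For illustration only, a LIE transmitter along these lines could look as follows; note that this document does not assign the multicast group or port used below, so both values are placeholders:

   # Sketch of a LIE transmitter.  LIE_MCAST_GROUP and LIE_PORT are
   # placeholders; this document does not assign such values.
   import socket

   LIE_MCAST_GROUP = "224.0.0.120"   # placeholder group
   LIE_PORT = 9141                   # placeholder port

   def send_lie(serialized_lie: bytes) -> None:
       sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       # TTL of 1 keeps LIEs on the directly attached link only
       sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
       sock.sendto(serialized_lie, (LIE_MCAST_GROUP, LIE_PORT))
       sock.close()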
All packet formats are defined in Thrift or protobuf models.
Each node is provisioned with the level at which it is operating and its PoD. A default level and PoD of zero are assumed, meaning that leafs do not need to be configured with a level (or even PoD). Nodes in the spine are configured with a PoD of zero. This information is propagated in the LIEs exchanged. Adjacencies are formed if and only if
A node configured with any PoD membership MUST, after building its first northbound adjacency that makes it a participant in a PoD, advertise that PoD as part of its LIEs.
LIEs arriving with a TTL larger than 1 MUST be ignored.
LIE exchange uses a three-way handshake mechanism [RFC5303]. LIE packets contain nonces and may contain an SHA-1 hash [RFC6234] over the nonces and some of the LIE data, which prevents corruption and replay attacks. TIE flooding reuses those nonces to prevent mismatches and can use them for security purposes in case it is using QUIC [QUIC].
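As a sketch of that integrity mechanism (exactly which LIE fields are covered by the digest is a schema-level decision, so the serialized payload below is a stand-in):

   # Sketch: SHA-1 digest over the nonces and the protected subset
   # of the LIE; 'protected_lie_fields' stands for whatever
   # serialized fields are placed under the hash.
   import hashlib

   def lie_digest(my_nonce: bytes, peer_nonce: bytes,
                  protected_lie_fields: bytes) -> bytes:
       h = hashlib.sha1()
       h.update(my_nonce)
       h.update(peer_nonce)
       h.update(protected_lie_fields)
       return h.digest()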
Topology and reachability information in RIFT is conveyed by means of TIEs, which have a good amount of commonality with LSAs in OSPF. They contain sequence numbers, lifetimes and a type. Each type has a large identifying number space, and information is spread across possibly many TIEs of a certain type by means of a hash function that a node or deployment can individually determine. At one extreme of the scale, a single prefix per TIE leads to BGP-like behavior; at the other, dense packing into few TIEs leads to a more traditional IGP trade-off with fewer TIEs. An implementation may even rehash prefixes, at the cost of a significant amount of readvertisements of TIEs.
More information about the TIE structure can be found in the schema in Section 7.
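For illustration, the hash-based spreading of prefixes over TIE numbers could be sketched as below; the hash function and modulus are implementation choices, not mandated by this document:

   # Sketch: spread prefixes over prefix TIEs with a stable hash.
   # num_ties = 1 packs densely (traditional IGP trade-off); a large
   # num_ties approaches one prefix per TIE (BGP-like granularity).
   import zlib

   def tie_nr_for_prefix(prefix: str, num_ties: int) -> int:
       # crc32 is stable across runs, unlike Python's builtin hash()
       return zlib.crc32(prefix.encode()) % num_ties + 1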
As a central concept of RIFT, each node represents itself differently depending on the direction in which it is advertising information. More precisely, a spine node presents two different databases to its neighbors, depending on whether it advertises TIEs to the south or to the north/sideways. We call those differing TIE databases either south- or northbound (S-TIEs and N-TIEs), depending on the direction of distribution.
The N-TIEs hold all of the node's adjacencies, local prefixes and northbound policy-guided prefixes, while the S-TIEs hold only the node's neighbors, the default prefix with necessary disaggregated prefixes, and southbound policy-guided prefixes. We will explain this in further detail in Section 4.2.4 and Section 4.2.5.
As an example to illustrate databases holding both representations, consider the topology in Figure 1 with the optional link between Node111 and Node112 (so that flooding on an east-west link can be shown). This example assumes unnumbered interfaces. First, here are the TIEs generated by some nodes. For simplicity, the KeyValueElements and the PolicyGuidedPrefixesElements, which may be included in an S-TIE or an N-TIE, are not shown.
   Spine21 S-TIE:
     NodeElement(layer=2,
       neighbors((Node111, layer 1, cost 1), (Node112, layer 1, cost 1),
                 (Node121, layer 1, cost 1), (Node122, layer 1, cost 1)))
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

   Node111 S-TIE:
     NodeElement(layer=1,
       neighbors((Spine21, layer 2, cost 1), (Spine22, layer 2, cost 1),
                 (Node112, layer 1, cost 1), (Leaf111, layer 0, cost 1),
                 (Leaf112, layer 0, cost 1)))
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

   Node111 N-TIE:
     NodeLinkElement(layer=1,
       neighbors((Spine21, layer 2, cost 1, links(...)),
                 (Spine22, layer 2, cost 1, links(...)),
                 (Node112, layer 1, cost 1, links(...)),
                 (Leaf111, layer 0, cost 1, links(...)),
                 (Leaf112, layer 0, cost 1, links(...))))
     NorthPrefixesElement(prefixes(Node111.loopback))

   Node121 S-TIE:
     NodeElement(layer=1,
       neighbors((Spine21, layer 2, cost 1), (Spine22, layer 2, cost 1),
                 (Leaf121, layer 0, cost 1), (Leaf122, layer 0, cost 1)))
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

   Node121 N-TIE:
     NodeLinkElement(layer=1,
       neighbors((Spine21, layer 2, cost 1, links(...)),
                 (Spine22, layer 2, cost 1, links(...)),
                 (Leaf121, layer 0, cost 1, links(...)),
                 (Leaf122, layer 0, cost 1, links(...))))
     NorthPrefixesElement(prefixes(Node121.loopback))

   Leaf112 N-TIE:
     NodeLinkElement(layer=0,
       neighbors((Node111, layer 1, cost 1, links(...)),
                 (Node112, layer 1, cost 1, links(...))))
     NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112,
                                   Prefix_MH))
Figure 2: Example TIEs generated in a 2-level spine-and-leaf topology
The mechanism used to distribute TIEs is the well-known (albeit modified in several respects to address fat-tree requirements) flooding mechanism used by today's link-state protocols. Though initially more demanding to implement, it avoids many problems of the diffused-computation update style used by path-vector protocols. TIEs themselves are transported over UDP with the ports indicated in the LIE exchanges.
Once QUIC [QUIC] achieves the desired stability in deployments it may prove a valuable candidate for TIE transport.
Every N-TIE is flooded northbound, providing a node at a given level with the complete topology of the Clos or Fat Tree network underneath it, including all specific prefixes. This means that a packet received from a node at the same or lower level whose destination is covered by one of those specific prefixes may be routed directly towards the node advertising that prefix rather than sending the packet to a node at a higher level.
It should be noted that east-west links are included in N-TIE flooding; they need to be flooded in case the level above the current level is disconnected from one or more nodes in the current level, and the southbound SPF desires to use those links as backup when some switches in the spine become partitioned with respect to some PoDs.
A node's S-TIEs, consisting of a node's adjacencies and a default IP prefix, are flooded southbound in order to allow the nodes one level down to see the connectivity of the higher level as well as reachability to the rest of the fabric. In order to allow a disconnected node in a given level to receive the S-TIEs of other nodes at its level, every *Node* S-TIE is "reflected" northbound to the level from which it was received. A node does not send an S-TIE northbound if it is from the same or a lower level. No S-TIEs are propagated southbound.
Node S-TIE "reflection" allows to support disaggregation on failures describes in Section 4.2.4 and flooding reduction in Section 4.2.3.7.
Observe that a node does not reflood an S-TIE received from a lower level towards other southbound nodes, which has implications on the way TIREs are generated in the northbound direction. More specifically, a node describes in TIDEs sent on southbound interfaces both the N-TIEs and the S-TIEs it is holding (to ensure that the southbound node can repeat possibly missing TIEs it holds), while when sending TIREs in the northbound direction a node will request ONLY the S-TIEs of the very northbound peer it received the TIDEs from. A minimal sketch of this asymmetry follows.
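All names below are illustrative stand-ins for the local TIE database and header fields:

   # Sketch of the flooding-scope asymmetry; names are illustrative.
   def tide_headers_southbound(tie_db):
       # a southbound TIDE describes every TIE held, N and S alike,
       # so the southern node can repeat TIEs the sender is missing
       return list(tie_db.values())

   def tire_requests_northbound(tide_headers, peer_id):
       # a northbound TIRE requests ONLY the S-TIEs originated by
       # the very peer the TIDE was received from
       return [h for h in tide_headers
               if h.direction == "S" and h.originator == peer_id]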
As an example to illustrate these rules, consider using the topology in Figure 1, with the optional link between Node111 and Node 112, and the associated TIEs given in Figure 2. The flooding from particular nodes of the TIEs is given in Table 1.
   +------------------+----------+--------------------------------+
   | Router floods to | Neighbor | TIEs                           |
   +------------------+----------+--------------------------------+
   | Leaf111          | Node112  | Leaf111 N-TIE, Node111 S-TIE   |
   | Leaf111          | Node111  | Leaf111 N-TIE, Node112 S-TIE   |
   | Node111          | Leaf111  | Node111 S-TIE                  |
   | Node111          | Leaf112  | Node111 S-TIE                  |
   | Node111          | Node112  | Node111 S-TIE, Node111 N-TIE,  |
   |                  |          | Leaf111 N-TIE, Leaf112 N-TIE,  |
   |                  |          | Spine21 S-TIE, Spine22 S-TIE   |
   | Node111          | Spine21  | Node111 N-TIE, Node112 N-TIE,  |
   |                  |          | Leaf111 N-TIE, Leaf112 N-TIE,  |
   |                  |          | Spine22 S-TIE                  |
   | Node111          | Spine22  | Node111 N-TIE, Node112 N-TIE,  |
   |                  |          | Leaf111 N-TIE, Leaf112 N-TIE,  |
   |                  |          | Spine21 S-TIE                  |
   | Node121          | Leaf121  | Node121 S-TIE                  |
   | Node121          | Leaf122  | Node121 S-TIE                  |
   | Node121          | Spine21  | Node121 N-TIE, Leaf121 N-TIE,  |
   |                  |          | Leaf122 N-TIE, Spine22 S-TIE   |
   | Node121          | Spine22  | Node121 N-TIE, Leaf121 N-TIE,  |
   |                  |          | Leaf122 N-TIE, Spine21 S-TIE   |
   | Spine21          | Node111  | Spine21 S-TIE                  |
   | Spine21          | Node112  | Spine21 S-TIE                  |
   | Spine21          | Node121  | Spine21 S-TIE                  |
   | Spine21          | Node122  | Spine21 S-TIE                  |
   | Spine22          | Node111  | Spine22 S-TIE                  |
   | Spine22          | Node112  | Spine22 S-TIE                  |
   | Spine22          | Node121  | Spine22 S-TIE                  |
   | Spine22          | Node122  | Spine22 S-TIE                  |
   +------------------+----------+--------------------------------+

           Table 1: Flooding of TIEs in the example topology
The initial exchange of RIFT is modelled after IS-IS, with the TIDE being equivalent to a CSNP and the TIRE playing the role of a PSNP. The content of TIDEs in the north and south directions will obviously contain just the according database variant and reflect the flooding scopes defined.
RIFT does not purge information that has been distributed by the protocol. Purging mechanisms in other routing protocols have proven through many years of experience to be complex and fragile. Abundant amounts of memory are available today even on low-end platforms. If a node leaves the network, the information it advertised will age out, and all computations will deliver correct results due to the new information distributed by its adjacent nodes.
Once a RIFT node issues a TIE with an ID, it MUST preserve the ID in its database until it restarts, even if the TIE loses all content. The re-advertisement of an empty TIE fulfills the purpose of purging any information advertised in previous versions. The originator is free to not re-originate the according empty TIE again, or to originate an empty TIE with a relatively short lifetime, to prevent a large number of long-lived empty stubs from polluting the network. Each node will time out and clean up the according empty TIEs independently.
Several nodes can, but strictly only under the conditions defined below, run a hashing function based on the TIE originator value and partition flooding between them.
Steps for flooding reduction and partitioning:
Additional rules for flooding reduction and partitioning:
The mechanism can be implemented optionally in each node. The capability is carried in the node N-TIE.
Obviously, flooding reduction does NOT apply to self-originated TIEs. Observe further that all policy-guided information consists of self-originated TIEs.
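A sketch of such a partitioning function follows; the eligibility conditions come from the steps above, and all names are illustrative:

   # Sketch: deterministic partitioning of flooding duty among the
   # nodes eligible to repeat a TIE, keyed on the TIE originator.
   # Self-originated TIEs are always flooded and never partitioned.
   import zlib

   def i_repeat_this_tie(tie_originator: int,
                         eligible_repeaters: list,
                         my_system_id: int) -> bool:
       # every eligible repeater must evaluate the same sorted list,
       # otherwise some TIEs would be repeated by nobody
       ordered = sorted(eligible_repeaters)
       idx = zlib.crc32(str(tie_originator).encode()) % len(ordered)
       return ordered[idx] == my_system_id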
Under normal circumstances, a node's S-TIEs contain just its adjacencies, a default route and policy-guided prefixes. However, if a node detects that its default IP prefix covers one or more prefixes that are reachable through it but not through one or more other nodes at the same level, then it must explicitly advertise those prefixes in an S-TIE. Otherwise, some percentage of the northbound traffic for those prefixes would be sent to nodes without according reachability, causing that traffic to be blackholed. Even when not blackholing, the resulting forwarding could 'backhaul' packets through the higher level spines, clearly an undesirable condition affecting the blocking probabilities of the fabric.
We refer to the process of advertising additional prefixes as 'de-aggregation'.
A node determines the set of prefixes needing de-aggregation using the following steps:
To summarize the above in simplest terms: if a node detects that its default route encompasses prefixes for which one of the other nodes in its level has no possible next-hops in the level below, it has to disaggregate those prefixes to prevent blackholing or suboptimal routing. Hence a node X needs to determine whether it can reach a different set of south neighbors than the other nodes at the same level that are connected to it via at least one south or east-west neighbor. If it can, then prefix disaggregation may be required. If it cannot, then no prefix disaggregation is needed. An example of disaggregation is provided in Section 5.3.
A possible algorithm is described below:
A node X computes its SPF based upon the received N-TIEs. This results in a set of routes, each categorized by (prefix, path_distance, next-hop-set). Alternately, for clarity in the following procedure, these can be organized by next-hop-set as ( (next-hops), {(prefix, path_distance)}). If partial_neighbors isn't empty, then the following procedure describes how to identify prefixes to disaggregate.
   disaggregated_prefixes = { empty }
   nodes_same_layer = { empty }

   for each S-TIE
       if S-TIE.layer == X.layer
           add S-TIE.originator to nodes_same_layer
       end if
   end for

   for each next-hop-set NHS
       isolated_nodes = nodes_same_layer
       for each NH in NHS
           if NH in partial_neighbors
               isolated_nodes = intersection(isolated_nodes,
                                             partial_neighbors[NH].nodes)
           end if
       end for
       if isolated_nodes is not empty
           for each prefix using NHS
               add (prefix, distance) to disaggregated_prefixes
           end for
       end if
   end for

   copy disaggregated_prefixes to X's S-TIE
   if X's S-TIE is different
       schedule S-TIE for flooding
   end if
Figure 3: Computation to Disaggregate Prefixes
Each disaggregated prefix is sent with the accurate path_distance. This allows a node to send the same S-TIE to each south neighbor. The south neighbor which is connected to that prefix will thus have a shorter path.
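For concreteness, a direct Python rendering of the computation in Figure 3 could look as follows; the data shapes for the S-TIEs, partial_neighbors and the SPF results are illustrative assumptions, and the logic mirrors the figure verbatim:

   # Sketch of Figure 3.  'sties' holds the received node S-TIEs;
   # 'partial_neighbors' maps a south next-hop to the set of
   # same-layer nodes lacking an adjacency to it; 'routes' groups
   # the SPF results as {next_hop_set: [(prefix, distance)]}.
   def disaggregated_prefixes(X, sties, partial_neighbors, routes):
       nodes_same_layer = {s.originator for s in sties
                           if s.layer == X.layer}
       result = set()
       for nhs, prefixes in routes.items():
           isolated = set(nodes_same_layer)
           for nh in nhs:
               if nh in partial_neighbors:
                   isolated &= partial_neighbors[nh]
           if isolated:
               result.update(prefixes)
       return result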
Finally, to summarize the less obvious points:
In a fat tree, it can be sometimes desirable to guide traffic to particular destinations or keep specific flows to certain paths. In RIFT, this is done by using policy-guided prefixes with their associated communities. Each community is an abstract value whose meaning is determined by configuration. It is assumed that the fabric is under a single administrative control so that the meaning and intent of the communities is understood by all the nodes in the fabric. Any node can originate a policy-guided prefix.
Since RIFT uses distance vector concepts in a southbound direction, it is straightforward to add a policy-guided prefix to an S-TIE. For easier troubleshooting, the approach taken in RIFT is that a node's southbound policy-guided prefixes are sent in its S-TIE and the receiver does inbound filtering based on the associated communities (an egress policy is imaginable but could lead to different S-TIEs per neighbor, which is not considered in RIFT protocol procedures). A southbound policy-guided prefix can only use links in the south direction. If a PGP S-TIE is received on an east-west or northbound link, it must be discarded by ingress filtering.
Conceptually, a southbound policy-guided prefix guides traffic from the leaves up to at most the northmost layer. It is also necessary to have northbound policy-guided prefixes to guide traffic from the northmost layer down to the appropriate leaves. Therefore, RIFT includes northbound policy-guided prefixes in its N PGP-TIE and the receiver does inbound filtering based on the associated communities. A northbound policy-guided prefix can only use links in the northern direction. If an N PGP-TIE is received on an east-west or southbound link, it must be discarded by ingress filtering.
By separating southbound and northbound policy-guided prefixes and requiring that the cost associated with a PGP is strictly monotonically increasing at each hop, the path cannot loop. Because the costs are strictly increasing, it is not possible to have a loop between a northbound PGP and a southbound PGP. If east-west links were to be allowed, then looping could occur, and issues such as counting to infinity would have to be solved. If complete generality of path is desired - such as including east-west links and using both north and south links in arbitrary sequence - then a path-vector protocol or a similar solution must be considered.
If a node has received the same prefix, after ingress filtering, as a PGP in an S-TIE and in an N-TIE, then the node determines which policy-guided prefix to use based upon the advertised cost.
A policy-guided prefix is always preferred to a regular prefix, even if the policy-guided prefix has a larger cost.
The set of policy-guided prefixes received in a TIE is subject to ingress filtering and then regenerated to be sent out in the receiver's appropriate TIE. Both the ingress filtering and the regeneration use the communities associated with the policy-guided prefixes to determine the correct behavior. The cost on re-advertisement MUST increase in a strictly monotonic fashion.
When a node X receives a PGP S-TIE or N-TIE that is originated from a node Y which does not have an adjacency with X, such a TIE MUST be discarded. Similarly, if node Y is at the same layer as node X, then X MUST discard PGP S- and N-TIEs.
Next, policy can be applied to determine which policy-guided prefixes to accept. Since ingress filtering is chosen rather than egress filtering and per-neighbor PGPs, policy that applies to links is done at the receiver. Because the RIFT adjacency is between nodes and there may be parallel links between the two nodes, the policy-guided prefix is considered to start with the next-hop set that has all links to the originating node Y.
A policy-guided prefix has or is assigned the following attributes:
The specific action to apply based upon a community is deployment specific. Here are some examples of things that can be done with communities. A community is a 64-bit number M, which can be treated as a single field or as two 32-bit fields (S = M[0-31], T = M[32-63]) in these examples. For simplicity, the policy-guided prefix is referred to as P, the processing node as X and the originator as Y.
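Using that notation, splitting and assembling a community could be sketched as follows; whether M[0-31] denotes the high or the low half is a representation choice, taken here so that S corresponds to the schema's 'top' field and T to 'bottom':

   # Sketch: a community as a 64-bit number M, viewed either whole
   # or as two 32-bit fields (S high, T low in this rendering).
   def split_community(m: int) -> tuple:
       return (m >> 32) & 0xFFFFFFFF, m & 0xFFFFFFFF

   def make_community(s: int, t: int) -> int:
       return ((s & 0xFFFFFFFF) << 32) | (t & 0xFFFFFFFF)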
Once a policy-guided prefix has completed ingress filtering and policy, it is almost ready to store and use. It is still necessary to adjust the cost of the prefix to account for the link from the computing node X to the originating neighbor node Y.
There are three different policies that can be used:
The default policy is Minimum Unequal-Cost but well-known communities can be defined to get the other behaviors.
Regardless of the policy used, a node MUST store a PGP cost that is at least 1 greater than the PGP cost received. This enforces the strictly monotonically increasing condition that avoids loops.
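A minimal sketch of that rule, assuming the chosen policy has already produced a candidate cost for the X-Y link:

   # Sketch: enforce the strictly monotonically increasing PGP cost.
   # 'policy_cost' is whatever the chosen policy computed for the
   # X-Y link; the stored value may never be less than received + 1.
   def stored_pgp_cost(received_cost: int, policy_cost: int) -> int:
       return max(policy_cost, received_cost + 1)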
Two databases of PGPs - one from N-TIEs and one from S-TIEs - are stored. When a PGP is inserted into the appropriate database, the usual tiebreaking on cost is performed. Observe that a node retains all PGP TIEs due to normal flooding behavior, and hence the loss of the best prefix will lead to re-evaluation of the TIEs present and readvertisement of a new best PGP.
A node must regenerate policy-guided prefixes and retransmit them. The node has its database of southbound policy-guided prefixes to send in its S-TIE and its database of northbound policy-guided prefixes to send in its N-TIE.
Of course, a leaf does not need to regenerate southbound policy-guided prefixes.
PGPs may overlap with prefixes introduced by automatic de-aggregation. The topic is under further discussion. The break in connectivity that leads to infeasibility of a PGP is mirrored in adjacency tear-down and the according removal of such PGPs. Nevertheless, the underlying link-state flooding will likely react significantly faster than a hop-by-hop redistribution, and with that the preference for PGPs may cause intermittent blackholes.
A node has three sources of relevant information. A node knows the full topology south from the received N-TIEs. A node has the set of prefixes with associated distances and bandwidths from received S-TIEs. A node can also have a set of PGPs.
To compute reachability, a node conceptually runs a northbound and a southbound SPF, which we call N-SPF and S-SPF; when progressing Dijkstra, only adjacencies of the according direction are used. In both cases the N-TIEs are used to find the according adjacencies, but in the case of N-SPF the adjacent node's S-TIE is used to validate backlinks since, as in IS-IS or OSPF, unidirectional links are associated together to confirm bidirectional connectivity. This enforces the requirement that a packet traversing in a southbound direction must not take any northbound links, since a node has topological visibility only south of itself, and vice versa. As well, there are no links at the computing node's level that go to a northbound level and will find a backlink (except the node S-TIE provided by the next higher level). All paths computed in a southbound SPF will contain only east-west and southbound links.
A node runs a standard shortest path first (SPF) algorithm on this network graph. If a node's minimized cost is COST_INFINITY, then it is not reachable.
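A sketch of such a directional SPF follows; the adjacency callback is an illustrative assumption that yields neighbors in the allowed direction only, and only those already validated via backlinks:

   # Sketch: Dijkstra over directional adjacencies.  'adj(node, dir)'
   # yields (neighbor, cost) pairs in the allowed direction only and
   # only if the neighbor's TIE contains a backlink to 'node'.
   import heapq

   COST_INFINITY = 0x70000000

   def directional_spf(root, adj, direction):
       dist = {root: 0}
       heap = [(0, root)]
       while heap:
           d, u = heapq.heappop(heap)
           if d > dist.get(u, COST_INFINITY):
               continue   # stale heap entry
           for v, cost in adj(u, direction):
               nd = d + cost
               if nd < dist.get(v, COST_INFINITY):
                   dist[v] = nd
                   heapq.heappush(heap, (nd, v))
       # nodes absent from 'dist' (or at COST_INFINITY) are unreachable
       return dist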
Since neither computation can "loop" (with the exception of E-W links), it is relatively simple to compute non-equal-cost paths and "saturate" the fabric to the extent desired.
After the SPF is run, it is necessary to attach according prefixes. For S-SPF, prefixes from an N-TIE are attached to the originating node with that node's next-hop set and a distance equal to the prefix's cost plus the node's minimized path distance. The RIFT route database, a set of (prefix, type=spf, path_distance, next-hop set), accumulates these results.
In the case of N-SPF, prefixes from each S-TIE need to also be added to the RIFT route database. The N-SPF is really just a stub computation: the computing node needs simply to determine, for each prefix in an S-TIE that originated from an adjacent node, what next-hops to use to reach that node. Since there may be parallel links, the next-hops to use can be a set; the presence of the computing node in the associated node S-TIE is sufficient to verify that at least one link has bidirectional connectivity. The set of minimum cost next-hops from the computing node X to the originating adjacent node is determined.
Each prefix has its cost adjusted before being added into the RIFT route database. The cost of the prefix is set to the cost received plus the cost of the minimum cost next-hop to that neighbor. Then each prefix can be added into the RIFT route database with the next_hop_set; ties are broken based upon distance and type. Prefixes obtained from prefix N-TIEs (southbound routes) are preferred over prefixes from S-TIEs (northbound routes).
   for each S-TIE
       if S-TIE.layer > X.layer
           next_hop_set = set of minimum cost links to the
                          S-TIE.originator
           next_hop_cost = minimum cost link to S-TIE.originator
       end if
       for each prefix P in the S-TIE
           P.cost = P.cost + next_hop_cost
           if P not in route_database:
               add (P, type=DistVector, P.cost, next_hop_set) to
                   route_database
           end if
           if (P in route_database) and
              (route_database[P].type is not PolicyGuided):
               if route_database[P].cost > P.cost:
                   update route_database[P] with
                       (P, DistVector, P.cost, next_hop_set)
               else if route_database[P].cost == P.cost
                   update route_database[P] with (P, DistVector, P.cost,
                       merge(next_hop_set, route_database[P].next_hop_set))
               else
                   // Not preferred route so ignore
               end if
           end if
       end for
   end for
Figure 4: Adding Routes from S-TIE Prefixes
An exemplary implementation for node X follows:
Each policy-guided prefix P has its cost and next_hop_set already stored in the associated database, as specified in Section 4.2.5.3; the cost stored for the PGP is already updated to consider the cost of the link to the advertising neighbor. By definition, a policy-guided prefix is preferred to a regular prefix.
   for each policy-guided prefix P:
       if P not in route_database:
           add (P, type=PolicyGuided, P.cost, next_hop_set)
       end if
       if P in route_database:
           if (route_database[P].type is not PolicyGuided) or
              (route_database[P].cost > P.cost):
               update route_database[P] with
                   (P, PolicyGuided, P.cost, next_hop_set)
           else if route_database[P].cost == P.cost
               update route_database[P] with (P, PolicyGuided, P.cost,
                   merge(next_hop_set, route_database[P].next_hop_set))
           else
               // Not preferred route so ignore
           end if
       end if
   end for
Figure 5: Adding Routes from Policy-Guided Prefixes
The leaf node SHOULD set the 'overload' bit on its N-TIE, since if the spine nodes were to forward traffic not meant for the local node, the leaf node does not have the topology information to prevent a routing/forwarding loop.
The overload bit MUST be respected by all according reachability computations. A node with the overload bit set MUST NOT advertise any reachability prefixes southbound.
Since the leafs only see "one hop away", they do not need to run a full SPF but can simply gather prefix candidates from their neighbors and build the according routing table.
A leaf will have no N-TIEs except optionally from its east-west neighbors. A leaf will have S-TIEs from its neighbors.
Instead of creating a network graph from its N-TIEs and running an SPF, a leaf node can simply compute the minimum cost and next_hop_set to each neighbor by examining its local interfaces, determining bi-directionality from the associated N-TIE, and specifying the neighbor's next_hop_set and cost from the minimum cost local interfaces to that neighbor.
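A sketch of that leaf shortcut, with illustrative link and callback shapes:

   # Sketch: a leaf needs no SPF; it derives next hops directly from
   # its local interfaces, keeping only bidirectionally-verified
   # links.  'neighbor_ntie_confirms' checks the backlink.
   def leaf_neighbor_nexthops(local_links, neighbor_ntie_confirms):
       result = {}   # neighbor -> (min_cost, next_hop_set)
       for link in local_links:
           if not neighbor_ntie_confirms(link):
               continue   # no backlink in the neighbor's N-TIE
           cost, nbr = link.cost, link.neighbor
           best_cost, nhs = result.get(nbr, (None, set()))
           if best_cost is None or cost < best_cost:
               result[nbr] = (cost, {link})
           elif cost == best_cost:
               nhs.add(link)   # equal-cost parallel link
       return result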
Then a leaf attaches prefixes as in Section 4.2.6.1.1 as well as the policy-guided prefixes as in Section 4.2.6.1.2.
The protocol supports a southbound distribution of key-value pairs that can be used to e.g. distribute configuration information during topology bringup. The KV TIEs (which are always S-TIEs) can arrive from multiple nodes and hence need tie-breaking per key, using the following rules:
Observe that if a node goes down, the nodes south of it lose their adjacencies to it; with that, its KVs will be disregarded and, on tie-break changes, new KVs will be readvertised to prevent stale information being used by nodes further south. KV information is thus not the result of an independent computation by every node, but of a diffused computation.
This section describes RIFT deployment in the example topology without any node or link failures. We disregard flooding reduction for simplicity's sake.
As first step, the following bi-directional adjacencies will be created (and any other links that do not fulfill LIE rules in Section 4.2.2 disregarded):
Consequently, N-TIEs would be originated by Node 111 and Node 112 and each set would be sent to both Spine 21 and Spine 22. N-TIEs also would be originated by Leaf 111 (w/ Prefix 111) and Leaf 112 (w/ Prefix 112 and the multihomed prefix) and each set would be sent to Node 111 and Node 112. Node 111 and Node 112 would then flood these N-TIEs to Spine 21 and Spine 22.
Similarly, N-TIEs would be originated by Node 121 and Node 122 and each set would be sent to both Spine 21 and Spine 22. N-TIEs also would be originated by Leaf 121 (w/ Prefix 121 and the multihomed prefix) and Leaf 122 (w/ Prefix 122) and each set would be sent to Node 121 and Node 122. Node 121 and Node 122 would then flood these N-TIEs to Spine 21 and Spine 22.
At this point both Spine 21 and Spine 22, as well as any controller to which they are connected, would have the complete network topology. At the same time, Nodes 111/112/121/122 hold only the N-TIEs of level 0 of their respective PoD. Leafs hold only their own N-TIEs.
S-TIEs with adjacencies and a default IP prefix would then be originated by Spine 21 and Spine 22 and each would be flooded to Node 111, Node 112, Node 121, and Node 122. Node 111, Node 112, Node 121, and Node 122 would each send the S-TIE from Spine 21 to Spine 22 and the S-TIE from Spine 22 to Spine 21. (S-TIEs are reflected up to the level from which they are received, but they are NOT propagated southbound.)
An S-TIE with a default IP prefix would be originated by Node 111 and Node 112 and each would be sent to Leaf 111 and Leaf 112. Leaf 111 and Leaf 112 would each send the S-TIE from Node 111 to Node 112 and the S-TIE from Node 112 to Node 111.
Similarly, an S-TIE with a default IP prefix would be originated by Node 121 and Node 122 and each would be sent to Leaf 121 and Leaf 122. Leaf 121 and Leaf 122 would each send the S-TIE from Node 121 to Node 122 and the S-TIE from Node 122 to Node 121. At this point IP connectivity with the maximum possible ECMP has been established between the leafs, while constraining the amount of information held by each node to the minimum necessary for normal operation and dealing with failures.
.         |   |              |   |
.       +-+---+-+          +-+---+-+
.       |       |          |       |
.       |Node111|          |Node112|
.       +-+---+-+          ++----+-+
.         |   |             |    |
.         |   +---------------+  X
.         |                 | |  X  Failure
.         |   +-------------+ |  X
.         |   |               |  |
.       +-+---+-+          +--+--+-+
.       |       |          |       |
.       |Leaf111|          |Leaf112|
.       +-------+          +-------+
.         +                  +
.         Prefix111          Prefix112
Figure 6: Single Leaf link failure
In case of a failing leaf link between node 112 and leaf 112, the link-state information will cause recomputation of the necessary SPF, and the higher levels will stop forwarding towards prefix 112 through node 112. Only nodes 111 and 112, as well as both spines, will see control traffic. Leaf 111 will receive a new S-TIE from node 112 and reflect it back to node 111. Node 111 will de-aggregate Prefix 111 and Prefix 112, but we will not describe that further here since de-aggregation is emphasized in the next example. It is worth observing in this example, however, that if Leaf111 kept forwarding traffic towards Prefix112 using the south-bound default advertised by Node112, the traffic would end up on Spine21 and Spine22 and cross back into Pod1 via Node111. This is arguably not as bad as the blackholing present in the next example, but clearly undesirable. Fortunately, de-aggregation prevents this type of behavior except for a transitory period of time.
[Figure 7, rendered here in text form: the topology of Figure 1 in which both links from Spine 21 into PoD 2 (to Node121 and to Node122) have failed, severing Spine 21 from Prefix 121 (1.1/16 in this example, advertised by Leaf121). The S-TIE of Spine 21 reaches Spine 22 via reflection by Node111 and Node112. After disaggregation, Spine 22 advertises the specific 1.1/16 alongside the 0/0 default on its southbound links towards Node121 and Node122.]
Figure 7: Fabric partition
Figure 7 shows the arguably most catastrophic, but also the most interesting, case. Spine 21 is completely severed from access to Prefix 121 (we use 1.1/16 as the example in the figure) by a double link failure. However unlikely, if left unresolved, forwarding from leaf 111 and leaf 112 to Prefix 121 would suffer 50% blackholing based on the pure default route advertisements by spine 21 and spine 22.
The mechanism used to resolve this scenario hinges on the distribution of the southbound representation of spine 21, which is reflected by node 111 and node 112 to spine 22. Spine 22, having computed reachability to all prefixes in the network, advertises alongside the default route those prefixes that are reachable only via lower level neighbors to which Spine 21 does not show an adjacency. That results in node 111 and node 112 obtaining a longest-prefix match to Prefix 121 which leads through Spine 22, preventing blackholing through Spine 21, which still advertises only the 0/0 aggregate.
The Prefix 121 advertised by spine 22 does not have to be propagated further towards the leafs, since they do not benefit from this information. Hence the amount of flooding is restricted to spine 21 reissuing its S-TIEs and the reflection of those by node 111 and node 112. The resulting SPF in spine 22 issues new S-TIEs containing 1.1/16, again reflected by node 111 and node 112. None of the leafs become aware of the changes, and the failure is constrained strictly to the level that became partitioned.
To finish with an example of the resulting sets computed using notation introduced in Section 4.2.4, Spine 22 constructs the following sets:
With that and |H (for r=Prefix 121) and |H (for r=Prefix 122) being disjoint from |A (for Spine 21), Spine 22 will originate an S-TIE with Prefix 121 and Prefix 122, that is flooded to Nodes 111, 112, 121 and 122.
This document does not currently deal with leaf-to-leaf connections, but it can incorporate such an addition. Such extensions would lead to changed rules for LIE processing (forming E-W adjacencies would be allowed at the leaf level even if the PoDs of the two leafs differ; additionally, mechanisms such as modified computation or overload bits would be used to prevent "front-hauling" of traffic intended for the spines through the lower cost leaf-to-leaf links).
Losing full, flat topology information at every node will have an impact on some of the end-to-end network services. This is the price paid for minimal disturbance in case of failures and reduced flooding and memory requirements on nodes lower south in the level hierarchy.
Multi-Topology (MT) [RFC5120] and Multi-Instance (MI) [RFC6822] are used today in link-state routing protocols to provide the option of several instances on the same physical topology. RIFT supports this capability by carrying transport ports in the LIE protocol exchanges. Multiplexing of LIEs can be achieved by either choosing varying multicast addresses or different ports on the same address.
This section introduces the schema for information elements.
On schema changes that
the major version of the schema MUST increase. All other changes MUST increase the minor version within the same major version.
A Thrift serializer/deserializer MUST NOT discard optional, unknown fields but MUST preserve them and serialize them again when re-flooding.
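As a sketch of how a receiver might apply the versioning rules (assuming, as is conventional, that a major version mismatch makes packets mutually incomprehensible while minor differences remain compatible):

   # Sketch: accept a packet only when the major versions match;
   # minor version differences are tolerated since minor changes
   # are backward compatible per the rules above.
   CURRENT_MAJOR_VERSION = 1

   def version_acceptable(major: int, minor: int) -> bool:
       return major == CURRENT_MAJOR_VERSION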
//! Thrift file for RIFT, routing for fat trees
//! @note: all numbers are implementation coerced to unsigned
//!        versions using the highest bit

/// represents protocol major version
const i32 CURRENT_MAJOR_VERSION = 1
const i32 CURRENT_MINOR_VERSION = 0

typedef i64    SystemID
typedef i32    IPv4Address
/// this has to be of length long enough to accommodate prefix
typedef binary IPv6Address
typedef i16    UDPPortType
typedef i16    TIENrType
typedef i16    MTUSizeType
typedef i32    SeqNrType
typedef i32    LifeTimeType
typedef i16    LevelType
typedef i16    PodType
typedef i16    VersionType
typedef i32    MetricType
typedef i64    KeyIDType
typedef i32    LinkIDType
typedef string KeyNameType
typedef bool   TieDirectionType
typedef byte   PrefixLenType

const LevelType  DEFAULT_LEVEL     = 0
const PodType    DEFAULT_POD       = 0
const LinkIDType UNDEFINED_LINKID  = 0
const MetricType DEFAULT_DISTANCE  = 1
/// any distance larger than this will be considered infinity
const MetricType INFINITE_DISTANCE = 0x70000000
/// any element with 0 distance will be ignored
const MetricType INVALID_DISTANCE  = 0

/// RIFT packet header
struct PacketHeader {
    1: required VersionType major_version = CURRENT_MAJOR_VERSION;
    2: required VersionType minor_version = CURRENT_MINOR_VERSION;
    // this is the node sending the packet, in case of TIE/TIRE/TIDE
    // also the originator
    3: required SystemID    sender;
    4: optional LevelType   level = DEFAULT_LEVEL;
}

struct ProtocolPacket {
    1: required PacketHeader header;
    2: required Content      content;
}

union Content {
    1: optional LIE        hello;
    2: optional TIDEPacket tide;
    3: optional TIREPacket tire;
    4: optional TIEPacket  tie;
}

// serves as community for PGP
struct Community {
    1: required i32 top;
    2: required i32 bottom;
}

// @todo: flood header separately in UDP ?
//        to allow caching of TIEs while changing lifetime?
struct TIEPacket {
    1: required TIEHeader  header;
    2: required TIEElement element;
}

enum TIETypeType {
    Illegal         = 0,
    TIETypeMinValue = 1,
    NodeTIEType     = 2,
    PrefixTIEType   = 3,
    PGPrefixTIEType = 4,
    KeyValueTIEType = 5,
    TIETypeMaxValue = 6,
}

/// RIFT LIE packet
struct LIE {
    1: optional string      name;
    // UDP port to which we can flood TIEs, same address
    // as the hello TX this hello has been received on
    2: required UDPPortType flood_port;
    3: optional Neighbor    neighbor;
    4: optional PodType     pod = DEFAULT_POD;
    // this node's level is already included in the packet header
}

struct LinkID {
    1: required LinkIDType local_id;
    2: required LinkIDType remote_id;
    // more properties of the link can go in here
}

struct Neighbor {
    1: required SystemID    originator;
    2: required UDPPortType flood_port;
}

/// ID of a TIE
/// @note: TIEID space is a total order achieved by comparing the
///        elements in sequence defined
struct TIEID {
    /// indicates whether N or S-TIE, True > False
    1: required TieDirectionType northbound;
    2: required SystemID         originator;
    3: required TIETypeType      tietype;
    4: required TIENrType        tie_nr;
}

struct TIEHeader {
    2: required TIEID        tieid;
    3: required SeqNrType    seq_nr;
    // in seconds
    4: required LifeTimeType lifetime;
}

// sorted, otherwise protocol doesn't work properly
struct TIDEPacket {
    /// all 00s mark start
    1: required TIEID start_range;
    /// all FFs mark end
    2: required TIEID end_range;
    /// sorted list of headers
    3: required list<TIEHeader> headers;
}

struct TIREPacket {
    1: required set<TIEHeader> headers;
}

struct NodeNeighborsTIEElement {
    /// if neighbor systemID repeats in set or TIEs of same node
    /// the behavior is undefined
    1: required SystemID    neighbor;
    2: required LevelType   level;
    3: optional MetricType  cost = DEFAULT_DISTANCE;
    // can carry description of multiple
    // parallel links in a TIE
    4: optional set<LinkID> link_ids;
}

/// capabilities the node supports
struct NodeCapabilities {
    1: required bool flood_reduction = true;
}

/// flags the node sets
struct NodeFlags {
    1: required bool overload = false;
}

struct NodeTIEElement {
    1: required LevelType        level;
    2: optional NodeCapabilities capabilities;
    3: optional NodeFlags        flags;
    4: required set<NodeNeighborsTIEElement> neighbors;
}

struct IPv4PrefixType {
    1: required IPv4Address   address;
    2: required PrefixLenType prefixlen;
}

struct IPv6PrefixType {
    1: required IPv6Address   address;
    2: required PrefixLenType prefixlen;
}

union IPPrefixType {
    1: optional IPv4PrefixType ipv4prefix;
    2: optional IPv6PrefixType ipv6prefix;
}

struct PrefixWithMetric {
    1: required IPPrefixType prefix;
    2: optional MetricType   cost = 1;
}

struct PrefixTIEElement {
    /// if the same prefix repeats in multiple TIEs of same node
    /// or with different metrics, behavior is unspecified
    1: required set<PrefixWithMetric> prefixes;
}

struct KeyValue {
    1: required KeyIDType   keyid;
    2: optional KeyNameType key;
    3: optional string      value = "";
}

struct KeyValueTIEElement {
    /// if the same key repeats in multiple TIEs of same node
    /// or with different values, behavior is unspecified
    1: required set<KeyValue> keyvalues;
}

/// single element in a TIE
union TIEElement {
    /// hinges on enum TIETypeType::NodeTIEType
    1: optional NodeTIEElement     node;
    /// hinges on enum TIETypeType::PrefixTIEType
    2: optional PrefixTIEElement   prefixes;
    /// ignored in S-TIEs
    3: optional KeyValueTIEElement keyvalues;
    // @todo: policy guided prefixes
}
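For orientation, and assuming Python bindings generated from the above schema by the Thrift compiler (the module path, system ID and port below are purely illustrative), a LIE packet could be assembled as:

   # Sketch, assuming bindings generated by 'thrift --gen py' from
   # the schema above; module path, system ID and port illustrative.
   from rift.ttypes import ProtocolPacket, PacketHeader, Content, LIE

   lie_packet = ProtocolPacket(
       header=PacketHeader(sender=0x0200000000000001, level=0),
       content=Content(hello=LIE(name="leaf111", flood_port=10000,
                                 pod=1)),
   )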
Many thanks to Naiming Shen for some of the early discussions around the topic of using IGPs for routing in topologies related to Clos. Adrian Farrel and Jeffrey Zhang provided thoughtful comments that improved the readability of the document and found a good amount of corners where the light failed to shine.
[CLOS]     Yuan, X., "On Nonblocking Folded-Clos Networks in Computer
           Communication Environments", IEEE International Parallel &
           Distributed Processing Symposium, 2011.

[DIJKSTRA] Dijkstra, E., "A Note on Two Problems in Connexion with
           Graphs", Journal Numer. Math., 1959.

[DYNAMO]   De Candia, G. et al., "Dynamo: Amazon's Highly Available
           Key-value Store", ACM SIGOPS Symposium on Operating Systems
           Principles (SOSP '07), 2007.

[FATTREE]  Leiserson, C., "Fat-Trees: Universal Networks for
           Hardware-Efficient Supercomputing", 1985.

[QUIC]     Iyengar, J. et al., "QUIC: A UDP-Based Multiplexed and
           Secure Transport", 2016.

[VAHDAT08] Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable,
           Commodity Data Center Network Architecture", SIGCOMM, 2008.