RIFT Working Group                                       A. Przygienda, Ed.
Internet-Draft                                                      Juniper
Intended status: Standards Track                                  A. Sharma
Expires: September 11, 2020                                         Comcast
                                                                 P. Thubert
                                                                      Cisco
                                                                 B. Rijsman
                                                                 Individual
                                                               D. Afanasiev
                                                                     Yandex
                                                             March 10, 2020
RIFT: Routing in Fat Trees
draft-ietf-rift-rift-11
This document defines a specialized, dynamic routing protocol for Clos and fat-tree network topologies optimized towards minimization of configuration and operational complexity.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 11, 2020.
Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
This work is a product of the following individuals, all of whom are to be considered major contributors regardless of whether their names made it to the limited boilerplate author list or not.
Tony Przygienda, Ed.       Alankar Sharma        Pascal Thubert
Juniper Networks           Comcast               Cisco

Bruno Rijsman              Ilya Vershkov         Dmitry Afanasiev
Individual                 Mellanox              Yandex

Don Fedyk                  Alia Atlas            John Drake
Individual                 Individual            Juniper
Clos and Fat-Tree topologies have gained prominence in today's networking, primarily as a result of the paradigm shift towards a centralized data-center based architecture that is poised to deliver a majority of computation and storage services in the future. Today's routing protocols were originally geared towards networks with irregular topologies and a low degree of connectivity, but given that they were the only available options, several attempts have been made to apply them to Clos topologies. Most successfully, BGP [RFC7938] has been extended to this purpose, not so much due to its inherent suitability but rather because of the perceived capability to easily modify BGP and the inherent difficulties of link-state based protocols in optimizing topology exchange and converging quickly in large scale, densely meshed topologies. The incumbent protocols normally require extensive configuration or provisioning during bring-up and re-dimensioning. This tends to be viable only for organizations with the according networking operation skills and budgets. For many IP fabric builders a desirable protocol would be one that auto-configures itself and deals with failures and mis-configurations with a minimum of human intervention. Such a solution would allow local IP fabric bandwidth to be consumed in a "standard component" fashion, i.e. provisioned much faster and operated at much lower cost than today, much like compute or storage is consumed already.
Looking at the problem through the lens of data center requirements, RIFT addresses the challenges of IP fabric routing not through an incremental modification of either link-state (distributed computation) or distance-vector (diffused computation) protocols, but rather through a mixture of both, colloquially best described as "link-state towards the spine" and "distance vector towards the leaves". In other words, the "bottom" levels flood their link-state information in the "northern" direction, while each node under normal conditions generates a "default route" and floods it in the "southern" direction. This type of protocol naturally allows for highly desirable aggregation. Alas, such aggregation could blackhole traffic in cases of misconfiguration or while failures are being resolved, or even cause partial network partitioning, and this has to be addressed by an adequate mechanism. The approach RIFT takes is described in Section 4.2.5 and is based on automatic, sufficient disaggregation of prefixes in case of link and node failures.
For the visually oriented reader, Figure 1 presents a first-level, simplified view of the resulting information and routes on a RIFT fabric. The top of the fabric holds in its link-state database the nodes below it and the routes to them. The second row of each database table indicates that partial information about other nodes at the same level is available as well. The details of how this is achieved are postponed for the moment. Looking at the "bottom" of the fabric, the leaves, we see that their topology view is basically empty; under normal conditions they hold only a load-balanced default route to the next level.
The balance of this document details a dedicated IP fabric routing protocol, fills in the specification details and ultimately includes resulting security considerations.
[Figure: ASCII art of a small fabric. ToF nodes E and F at the top connect to nodes C and D, which connect to leaves A and B at the bottom. Database annotations beside each node show the resulting information: the ToF holds the southern topology [A,B,C,D] plus routes A/32 @ [C,D], B/32 @ [C,D], C/32 @ C and D/32 @ D; C and D hold [A,B] plus partial same-level information about each other, the default 0/0 @ [E,F] and routes A/32 @ A, B/32 @ B; the leaves hold only 0/0 @ [C,D].]
Figure 1: RIFT Information Distribution
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 8174.
This section presents the terminology used in this document. It is assumed that the reader is thoroughly familiar with the terms and concepts used in OSPF and IS-IS [ISO10589], as well as the corresponding graph-theoretical concepts of shortest path first (SPF) computation and DAGs.
[Figure: ASCII art of a three level topology. Level 2 holds ToF 21 and ToF 22; Level 1 holds Spine111 and Spine112 in PoD 1 (joined by an optional E/W link) and Spine121 and Spine122 in PoD 2; Level 0 holds Leaf111/Leaf112 (PoD 1) and Leaf121/Leaf122 (PoD 2) with attached Prefix111, Prefix112, Prefix121 and Prefix122, a multi-homed prefix reachable via Leaf112 and Leaf121, and an L2L link between Leaf111 and Leaf112. Arrows indicate all TIEs flowing north and 0/0 defaults flowing south; a compass rose marks the N/S/E/W orientation.]
Figure 2: A Three Level Spine-and-Leaf Topology
[Figure: ASCII art of a multi-plane topology. Four ToF nodes (ToF A1 and ToF A2 in one plane, ToF B1 and ToF B2 in another) sit above Spine111/Spine112 (PoD 1) and Spine121/Spine122 (PoD 2), each spine connecting only to the ToF nodes of one plane; Leaf111/Leaf112 and Leaf121/Leaf122 connect to the spines of their respective PoDs.]
Figure 3: Topology with Multiple Planes
We will use the topology in Figure 2 (commonly called a fat tree/network in modern IP fabric considerations [VAHDAT08], as a homonym to the original definition of the term) in all further considerations. This figure depicts a generic "single plane fat-tree", and the concepts explained using three levels apply by induction to further levels and higher degrees of connectivity. Further, this document will also deal with designs that provide only sparser connectivity and "partitioned spines", as shown in Figure 3 and explained further in Section 4.1.2.
We present here a detailed outline of a protocol optimized for Routing in Fat Trees (RIFT) that, in the most abstract terms, has many properties of a modified link-state protocol [RFC2328][ISO10589-Second-Edition] when distributing information northbound and of a distance vector [RFC4271] protocol when distributing information southbound. While this is an unusual combination, it does quite naturally exhibit the desirable properties we seek.
The most singular property of RIFT is that it floods flat link-state information northbound only, so that each level obtains the full topology of the levels south of it. Link-state information is, with some exceptions, never flooded East-West or back South again. Exceptions like south reflection are explained in detail in Section 4.2.5.1, and east-west flooding at the ToF level in multi-plane fabrics is outlined in Section 4.1.2. In the southbound direction, the protocol operates like a "fully summarizing, unidirectional" path vector protocol, or rather a distance vector with implicit split horizon. Routing information, normally just the default route, propagates one hop south and is "re-advertised" by nodes at the next lower level. However, RIFT uses flooding in the southern direction as well to avoid the overhead of building an update per adjacency. We omit describing the East-West direction for the moment.
Those information flow constraints create not only an anisotropic protocol (i.e. the information is not distributed "evenly" or "clumped" but summarized along the N-S gradient) but also a "smooth" information propagation where nodes do not receive the same information from multiple directions at the same time. Normally, accepting the same reachability on any link, without understanding its topological significance, forces tie-breaking on some kind of distance metric. And such tie-breaking ultimately leads, in hop-by-hop forwarding, to shortest paths only. In contrast to that, RIFT, under normal conditions, does not need to tie-break the same reachability information from multiple directions. Its computation principles (the south forwarding direction is always preferred) lead to valley-free forwarding behavior. And since valley-free routing is loop-free, it can use all feasible paths, which is another highly desirable property if available bandwidth should be utilized to the maximum extent possible.
To account for the "northern" and the "southern" information split, the link state database is partitioned accordingly into "north representation" and "south representation" TIEs. In simplest terms, the North TIEs contain a link state topology description of lower levels and the South TIEs carry simply default routes towards the level above. This oversimplified view will be refined gradually in the following sections while introducing protocol procedures and state machines at the same time.
This section will shed some light on the topologies RIFT addresses, including multi-plane fabrics and their implications. Readers that are only interested in single plane designs, i.e. all top-of-fabric nodes being topologically equal and initially connected to all the switches at the level below them, can skip the rest of Section 4.1.2 and the resulting Section 4.2.5.2 as well.
It is quite difficult to visualize multi-plane designs, which are effectively multi-dimensional switching matrices. To cope with that, we will introduce a methodology allowing us to depict the connectivity in two-dimensional pictures. Further, we will leverage the fact that we are dealing basically with stacked crossbar fabrics where ports align "on top of each other" in a regular fashion.
A word of caution to the reader: at this point it should be observed that the language used to describe Clos variations, especially in multi-plane designs, varies widely between sources. This description follows the terminology introduced in Section 3.1, and keeping it present is unavoidable in order to follow the rest of this section correctly.
This section describes the terminology and acronyms used in the rest of the text.
The typical topology for which RIFT is defined is built of P PoDs connected together by S ToF nodes. A PoD node has a number of ports called Radix. We consider half of them (K=Radix/2) as connecting host devices from the south, and the other half as connecting to interleaved PoD top-level switches to the north. The ratio K can be chosen differently without loss of generality when port speeds differ or the fabric is oversubscribed, but K=Radix/2 allows for a more readable representation whereby there are as many ports facing north as south on any intermediate node. We hence represent a node in a schematic fashion with ports "sticking out" to its north and south rather than by the usual real-world front faceplate designs of the day.
Figure 4 provides a view of a leaf node as seen from the north, i.e. showing ports that connect northbound. For lack of a better symbol, we have chosen to use the "o" as ASCII visualisation of a single port. In this example, K_LEAF has 6 ports. Observe that the number of PoDs is not related to Radix unless the ToF Nodes are constrained to be the same as the PoD nodes in a particular deployment.
[Figure: ASCII art of a leaf node. The top view shows a narrow box with six "o" symbols, each representing a physical port (Ethernet), for the example Radix = 12, K_LEAF = 6; side views below show the same node edge-on with its port connectors.]
Figure 4: A Leaf Node, K_LEAF=6
The Radix of a PoD's top node may be different from that of the leaf node, though, more often than not, the same type of node is used for both, effectively forming a square (K*K). In the general case, we could have switches with K_TOP southern ports on nodes at the top of the PoD which are not necessarily the same as K_LEAF. For instance, in the representations below, we pick a 6 port K_LEAF and an 8 port K_TOP. In order to form a crossbar, we need K_TOP Leaf Nodes as illustrated in Figure 5.
 +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
 |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
 +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
Figure 5: Southern View of a PoD, K_TOP=8
As further visualized in Figure 6 the K_TOP Leaf Nodes are fully interconnected with the K_LEAF PoD-top nodes, providing connectivity that can be represented as a crossbar when "looked at" from the north. The result is that, in the absence of a failure, a packet entering the PoD from the north on any port can be routed to any port in the south of the PoD and vice versa. And that is precisely why it makes sense to talk about a "switching matrix".
E<-*->W +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+ | | | | | | | | | | | | | | | | +--------------------------------------------------------+ | o o o o o o o o | +--------------------------------------------------------+ +--------------------------------------------------------+ | o o o o o o o o | +--------------------------------------------------------+ +--------------------------------------------------------+ | o o o o o o o o | +--------------------------------------------------------+ +--------------------------------------------------------+ | o o o o o o o o | +--------------------------------------------------------+ +--------------------------------------------------------+ | o o o o o o o o |<-+ +--------------------------------------------------------+ | +--------------------------------------------------------+ | | o o o o o o o o | | +--------------------------------------------------------+ | | | | | | | | | | | | | | | | | | +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+ | ^ | | | | ---------- --------------------- | +----- Leaf Node PoD top Node (Spine) --+ ---------- ---------------------
Figure 6: Northern View of a PoD's Spines, K_TOP=8
Side views of this PoD are illustrated in Figure 7 and Figure 8.
                      Connecting to Spine

    ||    ||    ||    ||    ||    ||    ||    ||
 +----------------------------------------------------------------+   N
 |                    PoD top Node seen sideways                   |   ^
 +----------------------------------------------------------------+   |
    ||    ||    ||    ||    ||    ||    ||    ||                      *
 +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+       |
 |    |  |    |  |    |  |    |  |    |  |    |  |    |  |    |       v
 +----+  +----+  +----+  +----+  +----+  +----+  +----+  +----+       S
    ||    ||    ||    ||    ||    ||    ||    ||

                      Connecting to Client nodes
Figure 7: Side View of a PoD, K_TOP=8, K_LEAF=6
                  Connecting to Spine

      ||      ||      ||      ||      ||      ||
   +----+  +----+  +----+  +----+  +----+  +----+                  N
   |    |  |    |  |    |  |    |  |    |  |    |  PoD top Nodes   ^
   +----+  +----+  +----+  +----+  +----+  +----+                  |
      ||      ||      ||      ||      ||      ||                   *
 +------------------------------------------------+                |
 |                Leaf seen sideways               |                v
 +------------------------------------------------+                S
      ||      ||      ||      ||      ||      ||

                  Connecting to Client nodes
Figure 8: Other Side View of a PoD, K_TOP=8, K_LEAF=6, 90° turn in E-W Plane
As a next step, let us observe that a resulting PoD can be abstracted as a bigger node with a radix of K_POD = K_TOP * K_LEAF, and that the design can recurse.
It is critical at this point that, before progressing further, the concept and the picture of "crossed crossbars" be clear; otherwise, the following considerations might be difficult to comprehend.
To continue, the PoDs are interconnected with each other through a Top-of-Fabric (ToF) node at the very top or the north edge of the fabric. The resulting ToF is NOT partitioned if, and only if (IFF), every PoD top level node (spine) is connected to every ToF node. This topology is also referred to as a single plane configuration and is quite popular due to its simplicity. In order to reach a 1:1 connectivity ratio between the ToF and the leaves, it follows that there are K_TOP ToF nodes, because each port of a ToP node connects to a different ToF node, and K_LEAF ToP nodes for the same reason. Consequently, it will take (P * K_LEAF) ports on a ToF node to connect to each of the K_LEAF ToP nodes of the P PoDs, as shown in Figure 9.
[Figure: ASCII art of a single plane design with 3 PoDs: a horizontal row of Top-of-Fabric nodes at the top connects through physical ports (Ethernet) to the PoD top level nodes (spines) of the PoDs drawn below.]
Figure 9: Fabric Spines and TOFs in Single Plane Design, 3 PoDs
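To make the arithmetic just derived concrete, the following non-normative sketch (with purely illustrative helper names) computes the single plane dimensions from P, K_LEAF and K_TOP:

   # Sketch: single-plane fabric sizing, assuming the 1:1 connectivity
   # ratio described above. Helper names are illustrative only.

   def single_plane_sizing(p_pods, k_leaf, k_top):
       """Return (number of ToF nodes, ports needed per ToF node)."""
       # Each ToP node has K_TOP northbound ports, each wired to a
       # distinct ToF node, hence K_TOP ToF nodes exist.
       tof_nodes = k_top
       # Every ToF node connects to all K_LEAF ToP nodes in all P PoDs.
       ports_per_tof = p_pods * k_leaf
       return tof_nodes, ports_per_tof

   # Example: 3 PoDs with K_LEAF=6, K_TOP=8 as in Figure 9 gives
   # 8 ToF nodes with 18 southbound ports each.
   print(single_plane_sizing(3, 6, 8))   # -> (8, 18)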
The top view can be collapsed into a third dimension where the hidden depth index represents the PoD number. We can then show one PoD as a class of PoDs and hence save one dimension in our representation. The Spine Node expands in the depth and vertical dimensions, whereas the PoD top level Nodes are constrained in the horizontal dimension. A port in the 2-D representation effectively represents the class of all the ports at the same position in all the PoDs that are projected into its position along the depth axis. This is shown in Figure 10.
[Figure: ASCII art collapsing the PoDs into a depth axis: a single row of PoD top level nodes (spines) is drawn in front, with the PoDs stacked "in depth" behind it, all hanging off a Top-of-Fabric node.]
Figure 10: Collapsed Northern View of a Fabric for Any Number of PoDs
As simple as the single plane deployment is, it introduces a limit due to the bound on the available radix of the ToF nodes, which has to be at least P * K_LEAF. Nevertheless, we will see that a distinct advantage of a connected or non-partitioned Top-of-Fabric is that all failures can be resolved by simple, non-transitive, positive disaggregation (i.e. nodes advertising more specific prefixes with the default to the level below them, which is however not propagated further down the fabric) as described in Section 4.2.5.1. In other words, non-partitioned ToF nodes can always reach nodes below or unambiguously withdraw the routes from PoDs they cannot reach. With this, positive disaggregation can heal all failures and still allow all the ToF nodes to see each other via south reflection. Disaggregation will be explained in further detail in Section 4.2.5.
In order to scale beyond the "single plane limit", the Top-of-Fabric can be partitioned into N identically wired planes, where N is an integer divisor of K_LEAF. The 1:1 ratio and the desired symmetry are still served, this time with (K_TOP * N) ToF nodes, each with (P * K_LEAF / N) ports. N=1 represents a non-partitioned Spine and N=K_LEAF a maximally partitioned Spine. Further, if R is any integer divisor of K_LEAF, then N=K_LEAF/R is a feasible number of planes and R a redundancy factor. It proves convenient for deployments to use a radix for the leaf nodes that is a power of 2, so they can pick a number of planes that is a lower power of 2. The example in Figure 11 splits the Spine into 2 planes with a redundancy factor R=3, meaning that there are 3 non-intersecting paths between any leaf node and any ToF node. A ToF node must have, in this case, at least 3*P ports, and be directly connected to 3 of the 6 PoD-ToP nodes (spines) in each PoD.
[Figure: ASCII art of the ToF level split into Plane 1 and Plane 2, each plane drawn as three rows (R=3) of eight interconnected port columns (K_TOP=8), with the Top-of-Fabric nodes running "across" the depth dimension.]
Figure 11: Northern View of a Multi-Plane ToF Level, K_LEAF=6, N=2
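The relationship between the number of planes N, the redundancy factor R and the required ToF ports can be captured in a similar non-normative sketch (helper names again illustrative):

   # Sketch: multi-plane ToF partitioning arithmetic from the text.
   # N must divide K_LEAF; R = K_LEAF / N is the redundancy factor.

   def plane_partitioning(p_pods, k_leaf, k_top, n_planes):
       assert k_leaf % n_planes == 0, "N must be an integer divisor of K_LEAF"
       redundancy = k_leaf // n_planes       # R: disjoint paths leaf<->ToF
       tof_nodes = k_top * n_planes          # ToF nodes across all planes
       ports_per_tof = p_pods * k_leaf // n_planes  # southbound ports per ToF
       return redundancy, tof_nodes, ports_per_tof

   # Figure 11 example: K_LEAF=6 split into N=2 planes yields R=3,
   # i.e. with 3 PoDs a ToF node needs at least 3*P = 9 ports.
   print(plane_partitioning(3, 6, 8, 2))   # -> (3, 16, 9)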
At the extreme end of the spectrum it is even possible to fully partition the spine with N = K_LEAF and R=1, while maintaining connectivity between each leaf node and each Top-of-Fabric node. In that case the ToF node connects to a single port per PoD, so it appears as a single port in the projected view represented in Figure 12. The number of ports required on the Spine Node is then greater than or equal to P, the number of PoDs.
[Figure: ASCII art of a maximally partitioned ToF level: six separate planes (Plane 1 through Plane 6), each drawn as a single row of eight boxes with one "o" port per class of PoDs; the legend calls out a ToF node and the class of PoDs.]
Figure 12: Northern View of a Maximally Partitioned ToF Level, R=1
As mentioned earlier, RIFT exhibits an anisotropic behaviour tailored for fabrics with a North/South orientation and a high level of interleaving paths. A non-partitioned fabric makes a total loss of connectivity between a Top-of-Fabric node at the north and a leaf node at the south a very rare but possible occurrence that is fully healed by positive disaggregation as described in Section 4.2.5.1. In large fabrics, or fabrics built from switches with a low radix, the ToF often ends up being partitioned into planes, which makes it more likely that a given leaf is reachable from only a subset of the ToF nodes. This makes some further considerations necessary.
We define a "Fallen Leaf" as a leaf that can be reached by only a subset, but not all, of Top-of-Fabric nodes due to missing connectivity. If R is the redundancy factor, then it takes at least R breakages to reach a "Fallen Leaf" situation.
In a maximally partitioned fabric, the redundancy factor is R= 1, so any breakage in the fabric may cause one or more fallen leaves. However, not all cases require disaggregation. The following cases do not require particular action in such scenario:
In a general manner, the mechanism of non-transitive positive disaggregation is sufficient when the disaggregating ToF nodes collectively connect to all the ToP nodes in the broken plane. This happens in the following case:
On the other hand, there is a need to disaggregate the routes to Fallen Leaves in a transitive fashion, all the way to the other leaves in the following cases:
For the sake of easy comprehension, let us roll the abstractions back into a simple example and observe that in Figure 3 the loss of the link between Spine 122 and Leaf 122 will make Leaf 122 a fallen leaf for Top-of-Fabric plane B. Worse, if the cabling was never present in the first place, plane B will not even be able to know that such a fallen leaf exists. Hence partitioning without further treatment results in two grave problems:
As illustrated later, and without further proof, the way to deal with fallen leaves in multi-plane designs, when aggregation is used, is for RIFT to require all the ToF nodes to share the same north topology database. This happens naturally in single plane designs by means of northbound flooding and south reflection, but needs additional considerations in multi-plane fabrics. To satisfy this, RIFT relies in multi-plane designs on a ring interconnection of the switches at the ToF level across the planes. Other solutions are possible, but they either need more cabling or end up having much longer flooding paths and/or single points of failure.
In detail, by reserving two ports on each Top-of-Fabric node it is possible to connect them together by interplane bi-directional rings as illustrated in Figure 13. The rings will be used to exchange full north topology information between planes. All ToFs having the same north topology allows, by means of the transitive, negative disaggregation described in Section 4.2.5.2, to efficiently fix any possible fallen leaf scenario. Somewhat as a side-effect, the exchange of information fulfills the requirement to present a full view of the fabric topology at the Top-of-Fabric level, without the need to collate it from multiple points through the additional complexity of technologies like [RFC7752].
[Figure: ASCII art of ToF planes A, B, ..., X drawn as stacked rows, with seven vertical rings (Rings 1 through 7) threading through the corresponding ports of every plane to interconnect the Top-of-Fabric nodes across the planes.]
Figure 13: Connecting Top-of-Fabric Nodes Across Planes by Rings
One consequence of the "Fallen Leaf" problem is that some prefixes attached to the fallen leaf become unreachable from some of the ToF nodes. RIFT proposes two methods to address this issue, the positive and the negative disaggregation. Both methods flood South TIEs to advertise the impacted prefix(es).
When used for the operation of disaggregation, a positive South TIE, as usual, indicates reachability to a prefix of given length and all addresses subsumed by it. In contrast, a negative route advertisement indicates that the origin cannot route to the advertised prefix.
The positive disaggregation is originated by a router that can still reach the advertised prefix, and the operation is not transitive. In other words, the receiver does not generate its own flooding south as a consequence of receiving positive disaggregation advertisements from a higher level node. The effect of a positive disaggregation is that the traffic to the impacted prefix will follow the longest match and will be limited to the northbound routers that advertised the more specific route.
In contrast, the negative disaggregation can be transitive, and is propagated south when all the possible routes have been advertised as negative exceptions. A negative route advertisement is only actionable when the negative prefix is aggregated by a positive route advertisement for a shorter prefix. In such a case, the negative advertisement "punches out a hole" in the positive route in the routing table, making the positive prefix reachable through the originator with the special consideration of the negative prefix removing certain next hop neighbors.
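The following is a minimal, non-normative sketch of this "hole punching", assuming a toy longest-prefix-match table; the data structures, node names and the covering default route are illustrative only, the normative computation being specified in Section 4.2.5.2:

   # Sketch: how a negative disaggregation advertisement "punches a
   # hole" into a covering positive route. Illustrative structures.
   import ipaddress

   positive_routes = {
       ipaddress.ip_network("0.0.0.0/0"): {"ToF21", "ToF22"},  # default
   }
   # Negative advertisement: ToF22 cannot reach 10.1.1.0/24.
   negative_routes = {
       ipaddress.ip_network("10.1.1.0/24"): {"ToF22"},
   }

   def next_hops(dest):
       dest = ipaddress.ip_address(dest)
       # Longest-prefix match over the positive routes.
       best = max((p for p in positive_routes if dest in p),
                  key=lambda p: p.prefixlen)
       hops = set(positive_routes[best])
       # A covering negative prefix removes the advertising neighbors
       # from the inherited next-hop set.
       for neg, losers in negative_routes.items():
           if dest in neg and neg.prefixlen >= best.prefixlen:
               hops -= losers
       return hops

   print(next_hops("10.1.1.7"))  # -> {'ToF21'}: hole punched in default
   print(next_hops("10.2.0.1"))  # -> {'ToF21', 'ToF22'}: unaffected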
When the ToF is not partitioned, the collective southern flooding of the positive disaggregation by the ToF nodes that can still reach the impacted prefix is in general enough to cover all the switches at the next level south, typically the ToP nodes. If all those switches are aware of the disaggregation, they collectively create a ceiling that intercepts all the traffic north and forwards it to the ToF nodes that advertised the more specific route. In that case, the positive disaggregation alone is sufficient to solve the fallen leaf problem.
On the other hand, when the fabric is partitioned in planes, the positive disaggregation from ToF nodes in different planes does not reach the ToP switches in the affected plane and cannot solve the fallen leaves problem. In other words, a breakage in a plane can only be solved in that plane. Also, the selection of the plane for a packet typically occurs at the leaf level, so the disaggregation must be transitive and reach all the leaves. In that case, the negative disaggregation is necessary. The details of the RIFT approach to dealing with fallen leaves in an optimal way are specified in Section 4.2.5.2.
This section specifies the protocol in a normative fashion by either prescriptive procedures or behavior defined by Finite State Machines (FSM).
Some FSM figures are provided as [DOT] description due to limitations of ASCII art.
"On Entry" actions on FSM state are performed every time and right before the according state is entered, i.e. after any transitions from previous state.
"On Exit" actions are performed every time and immediately when a state is exited, i.e. before any transitions towards target state are performed.
Any attempt to transition from a state towards another on reception of an event where no action is specified MUST be considered an unrecoverable error.
The FSMs and procedures are normative in the sense that an implementation MUST implement them either literally or an implementation MUST exhibit externally observable behavior that is identical to the execution of the specified FSMs.
Where an FSM representation is inconvenient, i.e. the amount of procedures and kept state exceeds the amount of transitions, we defer to a more procedural description on data structures.
All packet formats are defined in Thrift models in Appendix B.
The serialized model is carried in an envelope within a UDP frame that provides security and allows validation/modification of several important fields without de-serialization for performance and security reasons.
RIFT LIE exchange auto-discovers neighbors, negotiates ZTP parameters and discovers miscablings. It uses a three-way handshake mechanism which is a cleaned up version of [RFC5303]. Observe that, for easier comprehension, the terminology of one-way/two-way and three-way states does NOT align with the OSPF or ISIS FSMs albeit they use roughly the same mechanisms. The formation progresses under normal conditions from one-way to two-way and then three-way state, at which point it is ready to exchange TIEs per Section 4.2.3.
LIE exchange happens over a configured, administratively locally scoped [RFC2365] or otherwise well-known IPv4 multicast address and/or the link-local multicast scope [RFC4291] for IPv6 [RFC8200], using a configured or otherwise well-known destination UDP port defined in Appendix C.1. LIEs SHOULD be sent with an IPv4 Time to Live (TTL) / IPv6 Hop Limit (HL) of 1 to prevent RIFT information reaching beyond a single L3 next-hop in the topology. LIEs SHOULD be sent with network control precedence.
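As a non-normative illustration of these transport rules, the sketch below sends a serialized LIE over the IPv6 link-local multicast scope with Hop Limit 1; the multicast group and UDP port shown are placeholders only, the actual values being configured or taken from Appendix C.1:

   # Sketch: emitting a LIE over IPv6 link-local multicast with HL 1.
   # The group address and UDP port are placeholders, NOT the values
   # registered for RIFT.
   import socket

   LIE_GROUP_V6 = "ff02::abcd"   # placeholder multicast group
   LIE_UDP_PORT = 9140           # placeholder port

   def send_lie(ifindex: int, serialized_lie: bytes) -> None:
       s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
       # Hop Limit 1 keeps the LIE on the local link.
       s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, 1)
       # Select the outgoing interface for link-local multicast.
       s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, ifindex)
       s.sendto(serialized_lie, (LIE_GROUP_V6, LIE_UDP_PORT))
       s.close()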
Originating port of the LIE has no further significance other than identifying the origination point. LIEs are exchanged over all links running RIFT.
An implementation MAY listen and send LIEs on IPv4 and/or IPv6 multicast addresses. A node MUST NOT originate LIEs on an address family if it does not process received LIEs on that family. LIEs on same link are considered part of the same negotiation independent of the address family they arrive on. Observe further that the LIE source address may not identify the peer uniquely in unnumbered or link-local address cases so the response transmission MUST occur over the same interface the LIEs have been received on. A node MAY use any of the adjacency's source addresses it saw in LIEs on the specific interface during adjacency formation to send TIEs. That implies that an implementation MUST be ready to accept TIEs on all addresses it used as source of LIE frames.
A three-way adjacency over any address family implies support for IPv4 forwarding if the `v4_forwarding_capable` flag is set to true and a node can use [RFC5549] type of forwarding in such a situation. It is expected that the whole fabric supports the same type of forwarding of address families on all the links. Operation of a fabric where only some of the links are supporting forwarding on an address family and others do not is outside the scope of this specification.
The protocol does NOT support selective disabling of address families, disabling v4 forwarding capability or any local address changes in three-way state, i.e. if a link has entered three-way IPv4 and/or IPv6 with a neighbor on an adjacency and it wants to stop supporting one of the families or change any of its local addresses or stop v4 forwarding, it has to tear down and rebuild the adjacency. It also has to remove any information it stored about the adjacency such as LIE source addresses seen.
Unless ZTP as described in Section 4.2.7 is used, each node is provisioned with the level at which it is operating. It MAY be also provisioned with its PoD. If any of those values is undefined, then accordingly a default level and/or an "undefined" PoD are assumed. This means that leaves do not need to be configured at all if initial configuration values are all left at "undefined" value. Nodes above ToP MUST remain at "any" PoD value which has the same value as "undefined" PoD. This information is propagated in the LIEs exchanged.
Further definitions of leaf flags are found in Section 4.2.7, given that they have implications for the level values and adjacency formation described here.
A node tries to form a three-way adjacency if and only if
The rules checking PoD numbering MAY be disregarded by a node if PoD detection is undesirable or has to be ignored. This will not affect the correctness of the protocol, except preventing detection of certain miscabling cases.
A node configured with "undefined" PoD membership MUST, after building its first northbound three-way adjacency to a node in a defined PoD, advertise that PoD as part of its LIEs. In case that adjacency is lost, the node with the highest System ID and a defined PoD among all available northbound three-way adjacencies is chosen. That way the northmost defined PoD value (normally of the ToP nodes) can diffuse southbound towards the leaves, "forcing" the PoD value on any node with an "undefined" PoD.
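A non-normative sketch of this selection rule follows; the adjacency attributes and the encoding of an "undefined" PoD are illustrative only:

   # Sketch: choosing the PoD to advertise when the local PoD is
   # undefined. Adjacency objects and POD_UNDEFINED are hypothetical.

   POD_UNDEFINED = 0   # placeholder encoding for "undefined" PoD

   def pod_to_advertise(northbound_threeway_adjacencies):
       """Pick the defined PoD of the northbound three-way neighbor
       with the highest System ID, or stay undefined otherwise."""
       candidates = [adj for adj in northbound_threeway_adjacencies
                     if adj.pod != POD_UNDEFINED]
       if not candidates:
           return POD_UNDEFINED
       return max(candidates, key=lambda adj: adj.system_id).pod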
LIEs arriving with IPv4 Time to Live (TTL) / IPv6 Hop Limit (HL) larger than 1 MUST be ignored.
A node SHOULD NOT send out LIEs without defined level in the header but in certain scenarios it may be beneficial for trouble-shooting purposes.
This section specifies the precise, normative LIE FSM and can be omitted unless the reader is pursuing an implementation of the protocol.
Initial state is `OneWay`.
Event `MultipleNeighbors` occurs normally when more than two nodes see each other on the same link or a remote node is quickly reconfigured or rebooted without regressing to `OneWay` first. Each occurrence of the event SHOULD generate a clear notification to help operational deployments.
The machine sends LIEs on several transitions to accelerate adjacency bring-up without waiting for the timer tick.
Enter -> OneWay

State OneWay; on entry: CleanUp.

  Events processed in OneWay [actions], remaining in OneWay:
    HALChanged [StoreHAL], HALSChanged [StoreHALS], HATChanged [StoreHAT],
    HoldTimerExpired [-], InstanceNameMismatch [-],
    LevelChanged [UpdateLevel, PUSH SendLIE], LieReceived [ProcessLIE],
    MTUMismatch [-], NeighborAddressAdded [-], NeighborChangedAddress [-],
    NeighborChangedLevel [-], NeighborChangedMinorFields [-],
    NeighborDroppedReflection [-], PODMismatch [-], SendLIE [SendLIE],
    TimerTick [PUSH SendLIE], UnacceptableHeader [-],
    UpdateZTPOffer [SendOfferToZTPFSM]

  Transitions out of OneWay:
    NewNeighbor [PUSH SendLIE]              -> TwoWay
    ValidReflection [-]                     -> ThreeWay
    MultipleNeighbors [StartMulNeighTimer]  -> MultipleNeighborsWait
LIE FSM
State TwoWay.

  Events processed in TwoWay [actions], remaining in TwoWay:
    HALChanged [StoreHAL], HALSChanged [StoreHALS], HATChanged [StoreHAT],
    LIERcvd [ProcessLIE], SendLIE [SendLIE],
    TimerTick [PUSH SendLIE, IF HoldTimer expired PUSH HoldTimerExpired],
    UpdateZTPOffer [SendOfferToZTPFSM]

  Transitions out of TwoWay:
    HoldTimerExpired [-], InstanceNameMismatch [-],
    LevelChanged [StoreLevel], MTUMismatch [-],
    NeighborChangedAddress [-], NeighborChangedLevel [-],
    PODMismatch [-], UnacceptableHeader [-]   -> OneWay
    NewNeighbor [StartMulNeighTimer]          -> MultipleNeighborsWait
    MultipleNeighbors [StartMulNeighTimer]    -> MultipleNeighborsWait
    ValidReflection [-]                       -> ThreeWay
LIE FSM (continued)
State ThreeWay.

  Events processed in ThreeWay [actions], remaining in ThreeWay:
    HALChanged [StoreHAL], HALSChanged [StoreHALS], HATChanged [StoreHAT],
    LieReceived [ProcessLIE], SendLIE [SendLIE],
    TimerTick [PUSH SendLIE, IF HoldTimer expired PUSH HoldTimerExpired],
    UpdateZTPOffer [SendOfferToZTPFSM], ValidReflection [-]

  Transitions out of ThreeWay:
    NeighborDroppedReflection [-]             -> TwoWay
    HoldTimerExpired [-], InstanceNameMismatch [-],
    LevelChanged [UpdateLevel], MTUMismatch [-],
    NeighborChangedAddress [-], NeighborChangedLevel [-],
    PODMismatch [-], UnacceptableHeader [-]   -> OneWay
    MultipleNeighbors [StartMulNeighTimer]    -> MultipleNeighborsWait
LIE FSM (continued)
State MultipleNeighborsWait (entered from OneWay, TwoWay or ThreeWay).

  Events processed in MultipleNeighborsWait [actions], remaining in
  MultipleNeighborsWait:
    HALChanged [StoreHAL], HALSChanged [StoreHALS], HATChanged [StoreHAT],
    MultipleNeighbors [StartMultipleNeighborsTimer],
    TimerTick [IF MulNeighTimer expired PUSH MultipleNeighborsDone],
    UpdateZTPOffer [SendOfferToZTP]

  Transitions out of MultipleNeighborsWait:
    LevelChanged [StoreLevel]                 -> OneWay
    MultipleNeighborsDone [-]                 -> OneWay
LIE FSM (continued)
Events
Actions
The following words are used for well-known procedures:
Topology and reachability information in RIFT is conveyed by means of TIEs, which have a good amount of commonality with LSAs in OSPF.
The TIE exchange mechanism uses the port indicated by each node in the LIE exchange and, as destination, the interface on which the adjacency has been formed. It SHOULD use a TTL of 1 as well and set inter-network control precedence on the according packets.
TIEs contain sequence numbers, lifetimes and a type. Each type has an ample identifying number space, and information is spread across possibly many TIEs of a certain type by means of a hash function that a node or deployment can determine individually. One extreme design choice is a prefix per TIE, which leads to a more BGP-like behavior where small increments are advertised only on route changes, versus deploying with dense prefix packing into few TIEs, which leads to a more traditional IGP trade-off with fewer TIEs. An implementation may even rehash the prefix to TIE mapping at any time at the cost of a significant amount of re-advertisements of TIEs.
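As a non-normative illustration of such a deployment-determined hash, the sketch below maps prefixes onto a chosen number of Prefix TIEs; the hash function and modulus are purely local choices and not mandated by this specification:

   # Sketch: spreading prefixes over a set of Prefix TIEs with a hash.
   # One extreme is ties_in_use equal to the number of prefixes (one
   # prefix per TIE), the other is ties_in_use=1 (dense packing).
   import hashlib

   def tie_number_for_prefix(prefix: str, ties_in_use: int) -> int:
       digest = hashlib.sha256(prefix.encode()).digest()
       return int.from_bytes(digest[:4], "big") % ties_in_use

   # Re-hashing with a different ties_in_use re-shuffles the mapping
   # and hence forces re-advertisement of many TIEs.
   print(tie_number_for_prefix("10.1.0.0/16", ties_in_use=4))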
More information about the TIE structure can be found in the schema in Appendix B.
A central concept of RIFT is that each node represents itself differently depending on the direction in which it is advertising information. More precisely, a spine node represents two different databases over its adjacencies, depending on whether it advertises TIEs to the north or to the south/sideways. We call those differing TIE databases either south- or northbound (South TIEs and North TIEs), depending on the direction of distribution.
The North TIEs hold all of the node's adjacencies and local prefixes, while the South TIEs hold only the node's adjacencies, the default prefix with necessary disaggregated prefixes, and local prefixes. We will explain this in detail further in Section 4.2.5.
The TIE types are mostly symmetric in both directions and Table 2 provides a quick reference to main TIE types including direction and their function.
 +----------------------------+--------------------------------------+
 | TIE-Type                   | Content                              |
 +----------------------------+--------------------------------------+
 | Node North TIE             | node properties and adjacencies      |
 +----------------------------+--------------------------------------+
 | Node South TIE             | same content as node North TIE       |
 +----------------------------+--------------------------------------+
 | Prefix North TIE           | contains nodes' directly reachable   |
 |                            | prefixes                             |
 +----------------------------+--------------------------------------+
 | Prefix South TIE           | contains originated defaults and     |
 |                            | directly reachable prefixes          |
 +----------------------------+--------------------------------------+
 | Positive Disaggregation    | contains disaggregated prefixes      |
 | South TIE                  |                                      |
 +----------------------------+--------------------------------------+
 | Negative Disaggregation    | contains special, negatively         |
 | South TIE                  | disaggregated prefixes to support    |
 |                            | multi-plane designs                  |
 +----------------------------+--------------------------------------+
 | External Prefix North TIE  | contains external prefixes           |
 +----------------------------+--------------------------------------+
 | Key-Value North TIE        | contains nodes' northbound KVs       |
 +----------------------------+--------------------------------------+
 | Key-Value South TIE        | contains nodes' southbound KVs       |
 +----------------------------+--------------------------------------+
As an example illustrating databases holding both representations, consider the topology in Figure 2 with the optional link between Spine 111 and Spine 112 (so that the flooding on an East-West link can be shown). This example assumes unnumbered interfaces. First, here are the TIEs generated by some nodes. For simplicity, the key value elements that may be included in their South TIEs or North TIEs are not shown.
 ToF 21 South TIEs:
   Node South TIE:
     NodeElement(level=2,
       neighbors((Spine 111, level 1, cost 1),
                 (Spine 112, level 1, cost 1),
                 (Spine 121, level 1, cost 1),
                 (Spine 122, level 1, cost 1)))
   Prefix South TIE:
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

 Spine 111 South TIEs:
   Node South TIE:
     NodeElement(level=1,
       neighbors((ToF 21, level 2, cost 1, links(...)),
                 (ToF 22, level 2, cost 1, links(...)),
                 (Spine 112, level 1, cost 1, links(...)),
                 (Leaf111, level 0, cost 1, links(...)),
                 (Leaf112, level 0, cost 1, links(...))))
   Prefix South TIE:
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

 Spine 111 North TIEs:
   Node North TIE:
     NodeElement(level=1,
       neighbors((ToF 21, level 2, cost 1, links(...)),
                 (ToF 22, level 2, cost 1, links(...)),
                 (Spine 112, level 1, cost 1, links(...)),
                 (Leaf111, level 0, cost 1, links(...)),
                 (Leaf112, level 0, cost 1, links(...))))
   Prefix North TIE:
     NorthPrefixesElement(prefixes(Spine 111.loopback))

 Spine 121 South TIEs:
   Node South TIE:
     NodeElement(level=1,
       neighbors((ToF 21, level 2, cost 1),
                 (ToF 22, level 2, cost 1),
                 (Leaf121, level 0, cost 1),
                 (Leaf122, level 0, cost 1)))
   Prefix South TIE:
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

 Spine 121 North TIEs:
   Node North TIE:
     NodeElement(level=1,
       neighbors((ToF 21, level 2, cost 1, links(...)),
                 (ToF 22, level 2, cost 1, links(...)),
                 (Leaf121, level 0, cost 1, links(...)),
                 (Leaf122, level 0, cost 1, links(...))))
   Prefix North TIE:
     NorthPrefixesElement(prefixes(Spine 121.loopback))

 Leaf112 North TIEs:
   Node North TIE:
     NodeElement(level=0,
       neighbors((Spine 111, level 1, cost 1, links(...)),
                 (Spine 112, level 1, cost 1, links(...))))
   Prefix North TIE:
     NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112,
                                   Prefix_MH))

Figure 14: Example TIEs Generated in a 2 Level Spine-and-Leaf Topology
It may not be immediately obvious why the node South TIEs contain all the adjacencies of the according node. This will be necessary for the algorithms given in Section 4.2.3.9 and Section 4.3.6.
The mechanism used to distribute TIEs is the well-known (albeit modified in several respects to take advantage of fat tree topology) flooding mechanism used by today's link-state protocols. Although flooding is initially more demanding to implement, it avoids many problems with the update style used in diffused computations such as distance vector protocols. Since flooding tends to present an unscalable burden in large, densely meshed topologies (fat trees being unfortunately such a topology), we provide a close to optimal global flood reduction and load balancing optimization as a solution in Section 4.2.3.9.
As described before, TIEs themselves are transported over UDP with the ports indicated in the LIE exchanges and using the destination address on which the LIE adjacency has been formed. For unnumbered IPv4 interfaces the same considerations apply as in the equivalent OSPF case.
On reception of a TIE with an undefined level value in the packet header the node SHOULD issue a warning and indiscriminately discard the packet.
This section specifies the precise, normative flooding mechanism and can be omitted unless the reader is pursuing an implementation of the protocol.
Flooding Procedures are described in terms of a flooding state of an adjacency and resulting operations on it driven by packet arrivals. The FSM itself has basically just a single state and is not well suited to represent the behavior. An implementation MUST behave on the wire in the same way as the provided normative procedures of this paragraph.
RIFT does not specify any kind of flood rate limiting, since such specifications always assume particular points in available technology speeds and feeds, and those points are shifting at a faster and faster rate (the speed of light holding for the moment). The encoded packets provide hints to react accordingly to losses or overruns.
Flooding of all according topology exchange elements SHOULD be performed at the highest feasible rate, whereas the rate of transmission MUST be throttled by reacting to adequate features of the system, such as e.g. queue lengths or congestion indications in the protocol packets.
A node SHOULD NOT send out any topology information elements if the adjacency is not in a "three-way" state. No further tightening of this rule is possible due to possible link buffering and re-ordering of LIEs and TIEs/TIDEs/TIREs.
A node MUST drop any received TIEs/TIDEs/TIREs unless it is in three-way state.
TIDEs and TIREs MUST NOT be re-flooded the way TIEs of other nodes are; they MUST always be generated by the node itself and cross only to the neighboring node.
The structure contains conceptually the following elements. The word collection or queue indicates a set of elements that can be iterated:
The following words are used for well-known procedures operating on this structure:
The collections SHOULD be served with the following priorities if the system cannot process all the collections in real time:
`TIEID` and `TIEHeader` space forms a strict total order (modulo incomparable sequence numbers in the very unlikely event that can occur if a TIE is "stuck" in a part of a network while the originator reboots and reissues TIEs many times to the point its sequence# rolls over and forms incomparable distance to the "stuck" copy) which implies that a comparison relation is possible between two elements. With that it is implicitly possible to compare TIEs, TIEHeaders and TIEIDs to each other whereas the shortest viable key is always implied.
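A non-normative sketch of this total order as plain tuple comparison follows; the field names only loosely follow the schema in Appendix B, and "shortest viable key" means a bare TIEID compares on its own fields while a TIEHeader additionally compares the sequence number:

   # Sketch: the total order on TIEIDs/TIEHeaders as tuple comparison.
   from collections import namedtuple

   TIEID = namedtuple("TIEID", "direction originator tietype tie_nr")
   TIEHeader = namedtuple("TIEHeader", "tieid seq_nr")

   def tieid_key(tieid):
       return (tieid.direction, tieid.originator,
               tieid.tietype, tieid.tie_nr)

   def header_key(hdr):
       # Comparable only modulo "stuck" TIEs whose sequence numbers
       # have rolled over, as cautioned in the text above.
       return tieid_key(hdr.tieid) + (hdr.seq_nr,)

   a = TIEHeader(TIEID(1, 111, 2, 1), seq_nr=5)
   b = TIEHeader(TIEID(1, 111, 2, 1), seq_nr=7)
   print(header_key(a) < header_key(b))   # -> True: b is the newer TIE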
When generating and sending TIDEs an implementation SHOULD ensure that enough bandwidth is left to send elements of Floodstate structure.
As given by a timer constant, periodically generate TIDEs by:
The constant `TIRDEs_PER_PKT` SHOULD be generated and used by the implementation to limit the amount of TIE headers per TIDE so the sent TIDE PDU does not exceed interface MTU.
TIDE PDUs SHOULD be spaced on sending to prevent packet drops.
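A non-normative sketch of such TIDE batching follows; the `send_tide` callback and the header list are illustrative only:

   # Sketch: batching TIE headers into TIDE PDUs so each PDU stays
   # under the interface MTU, using the TIRDEs_PER_PKT limit above.

   def generate_tides(sorted_tie_headers, tirdes_per_pkt, send_tide):
       batch = []
       for header in sorted_tie_headers:      # ascending TIEID order
           batch.append(header)
           if len(batch) == tirdes_per_pkt:
               # the TIDE's start/end range covers the batch
               send_tide(start=batch[0], end=batch[-1], headers=batch)
               batch = []
       if batch:
           send_tide(start=batch[0], end=batch[-1], headers=batch)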
On reception of TIDEs the following processing is performed:
There is not much to say here. Elements from both TIES_REQ and TIES_ACK MUST be collected and sent out as fast as feasible as TIREs. When sending TIREs with elements from TIES_REQ, the `lifetime` field MUST be set to 0 to force reflooding from the neighbor even if the TIEs seem to be the same.
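A non-normative sketch of this TIRE generation follows, with illustrative header structures:

   # Sketch: draining TIES_REQ and TIES_ACK into a TIRE. A requested
   # header is sent with lifetime 0 to force the neighbor to re-flood
   # the TIE even if its copy looks identical.
   from collections import namedtuple

   TIEHeader = namedtuple("TIEHeader", "tieid seq_nr lifetime")

   def generate_tire(ties_req, ties_ack):
       entries = [hdr._replace(lifetime=0) for hdr in ties_req]  # requests
       entries += list(ties_ack)                        # acknowledgements
       return entries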
On reception of TIREs the following processing is performed:
On reception of TIEs the following processing is performed:
else
The Link State Database can be considered to be a switchboard that does not need any flooding procedures but can be given new versions of TIEs by a peer. In turn, a peer receives from the LSDB newer versions of TIEs received by other peers and processes them (without any filtering) just like receiving TIEs from its remote peer. This publisher model can be implemented in many ways.
On a periodic basis all TIEs with lifetime left > 0 MUST be sent out on the adjacency, removed from TIES_TX list and requeued onto TIES_RTX list.
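A non-normative sketch of this periodic transmission rule follows, with illustrative queue structures:

   # Sketch: periodic service of the transmission queue. TIEs still
   # alive are sent and moved to the retransmission list to await
   # acknowledgement. Queue objects and attributes are hypothetical.

   def service_tx(ties_tx, ties_rtx, send_tie):
       for tie in [t for t in ties_tx if t.lifetime > 0]:
           send_tie(tie)          # periodic (re)transmission
           ties_tx.remove(tie)    # off the TX queue...
           ties_rtx.append(tie)   # ...onto the RTX queue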
In a somewhat analogous fashion to link-local, area and domain flooding scopes, RIFT defines several complex "flooding scopes" depending on the direction and type of TIE propagated.
Every North TIE is flooded northbound, providing a node at a given level with the complete topology of the Clos or Fat Tree network that is reachable southwards of it, including all specific prefixes. This means that a packet received from a node at the same or lower level whose destination is covered by one of those specific prefixes will be routed directly towards the node advertising that prefix rather than sending the packet to a node at a higher level.
A node's Node South TIEs, consisting of all the node's adjacencies and prefix South TIEs limited to those related to the default IP prefix and disaggregated prefixes, are flooded southbound in order to allow the nodes one level down to see the connectivity of the higher level as well as reachability to the rest of the fabric. In order to allow an E-W disconnected node in a given level to receive the South TIEs of other nodes at its level, every *NODE* South TIE is "reflected" northbound to the level from which it was received. It should be noted that East-West links are included in South TIE flooding (except at the ToF level); those TIEs need to be flooded to satisfy the algorithms in Section 4.2.4. In that way nodes at the same level can learn about each other without a lower level, e.g. in the case of the leaf level. The precise, normative flooding scopes are given in Table 3. Those rules govern as well what SHOULD be included in TIDEs on the adjacency. Again, East-West flooding scopes are identical to South flooding scopes except in the case of ToF East-West links (rings), which are basically performing northbound flooding.
Node South TIE "south reflection" allows to support positive disaggregation on failures describes in Section 4.2.5 and flooding reduction in Section 4.2.3.9.
 +------------+----------------------+----------------------+---------------------+
 | Type /     | South                | North                | East-West           |
 | Direction  |                      |                      |                     |
 +------------+----------------------+----------------------+---------------------+
 | node South | flood if level of    | flood if level of    | flood only if this  |
 | TIE        | originator is equal  | originator is higher | node is not ToF     |
 |            | to this node         | than this node       |                     |
 +------------+----------------------+----------------------+---------------------+
 | non-node   | flood self-          | flood only if        | flood only if self- |
 | South TIE  | originated only      | neighbor is          | originated and this |
 |            |                      | originator of TIE    | node is not ToF     |
 +------------+----------------------+----------------------+---------------------+
 | all North  | never flood          | flood always         | flood only if this  |
 | TIEs       |                      |                      | node is ToF         |
 +------------+----------------------+----------------------+---------------------+
 | TIDE       | include at least all | include at least all | if this node is ToF |
 |            | non-self originated  | node South TIEs and  | then include all    |
 |            | North TIE headers    | all South TIEs       | North TIEs,         |
 |            | and self-originated  | originated by peer   | otherwise only      |
 |            | South TIE headers    | and all North TIEs   | self-originated     |
 |            | and node South TIEs  |                      | TIEs                |
 |            | of nodes at same     |                      |                     |
 |            | level                |                      |                     |
 +------------+----------------------+----------------------+---------------------+
 | TIRE as    | request all North    | request all South    | if this node is ToF |
 | Request    | TIEs and all peer's  | TIEs                 | then apply North    |
 |            | self-originated      |                      | scope rules,        |
 |            | TIEs and all node    |                      | otherwise South     |
 |            | South TIEs           |                      | scope rules         |
 +------------+----------------------+----------------------+---------------------+
 | TIRE as    | Ack all received     | Ack all received     | Ack all received    |
 | Ack        | TIEs                 | TIEs                 | TIEs                |
 +------------+----------------------+----------------------+---------------------+
If the TIDE includes additional TIE headers beside the ones specified, the receiving neighbor must apply the according filter to the received TIDE strictly and MUST NOT request the extra TIE headers that were not allowed by the flooding scope rules in its direction.
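Before turning to an example, the TIE rows of Table 3 can be rendered as a compact decision function; the Python below is a non-normative sketch with hypothetical `tie`, `me` and `neighbor` objects exposing levels, ToF status and TIE metadata:

   # Sketch of the Table 3 flooding scope rules for TIEs (TIDE and TIRE
   # rows omitted). Returns True if the TIE may be flooded in `direction`.
   def flood_allowed(tie, direction, me, neighbor=None):
       if tie.is_node_south_tie:
           if direction == "south":
               return tie.originator_level == me.level
           if direction == "north":
               return tie.originator_level > me.level
           return not me.is_top_of_fabric              # east-west
       if tie.is_south_tie:                            # non-node South TIE
           if direction == "south":
               return tie.originator == me.system_id   # self-originated only
           if direction == "north":
               return neighbor is not None and tie.originator == neighbor.system_id
           return tie.originator == me.system_id and not me.is_top_of_fabric
       # North TIEs
       if direction == "south":
           return False
       if direction == "north":
           return True
       return me.is_top_of_fabric                      # ToF east-west rings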
As an example to illustrate these rules, consider using the topology in Figure 2, with the optional link between spine 111 and spine 112, and the associated TIEs given in Figure 14. The flooding from particular nodes of the TIEs is given in Table 4.
Router | Floods to Neighbor | TIEs |
---|---|---|
Leaf111 | Spine 112 | Leaf111 North TIEs, Spine 111 node South TIE |
Leaf111 | Spine 111 | Leaf111 North TIEs, Spine 112 node South TIE |
Spine 111 | Leaf111 | Spine 111 South TIEs |
Spine 111 | Leaf112 | Spine 111 South TIEs |
Spine 111 | Spine 112 | Spine 111 South TIEs |
Spine 111 | ToF 21 | Spine 111 North TIEs, Leaf111 North TIEs, Leaf112 North TIEs, ToF 22 node South TIE |
Spine 111 | ToF 22 | Spine 111 North TIEs, Leaf111 North TIEs, Leaf112 North TIEs, ToF 21 node South TIE |
... | ... | ... |
ToF 21 | Spine 111 | ToF 21 South TIEs |
ToF 21 | Spine 112 | ToF 21 South TIEs |
ToF 21 | Spine 121 | ToF 21 South TIEs |
ToF 21 | Spine 122 | ToF 21 South TIEs |
... | ... | ... |
RIFT includes an optional ECN mechanism to prevent "flooding inrush" on restart or bring-up with many southbound neighbors. A node MAY set the according bit on its LIEs to indicate to the neighbor that it should temporarily flood node TIEs only to it. It SHOULD only set the bit in the southbound direction. The receiving node SHOULD accommodate the request, to lessen the flooding load on the affected node, if south of the sender, and SHOULD ignore the bit if northbound.
Obviously this mechanism is most useful in the southbound direction. The distribution of node TIEs guarantees correct behavior of algorithms like disaggregation or default route origination. The use of this bit, however, presents an inherent trade-off between processing load and convergence speed, since suppressing flooding of northbound prefixes from neighbors will lead to blackholes.
The initial exchange of RIFT is modeled after ISIS with TIDE being equivalent to CSNP and TIRE playing the role of PSNP. The content of TIDEs and TIREs is governed by Table 3.
When a node exits the network, if "unpurged", residual stale TIEs may exist in the network until their lifetimes expire (which in the case of RIFT is by default a rather long period, to prevent ongoing re-origination of TIEs in very large topologies). RIFT does, however, not have a "purging mechanism" in the traditional sense based on sending specialized "purge" packets; in other routing protocols such a mechanism has proven to be complex and fragile based on many years of experience. RIFT simply issues a new, empty version of the TIE with a short lifetime and relies on each node to age out and delete such a TIE copy independently. Abundant amounts of memory are available today even on low-end platforms, and hence keeping those relatively short-lived extra copies for a while is acceptable. The information will age out, and in the meantime all computations will deliver correct results if a node leaves the network, since the new information distributed by its adjacent nodes breaks the bi-directional connectivity checks in the different computations.
Once a RIFT node issues a TIE with an ID, it SHOULD preserve the ID as long as feasible (also when the protocol restarts), even if the TIE loses all content. The re-advertisement of an empty TIE fulfills the purpose of purging any information advertised in previous versions. The originator is free to not re-originate the according empty TIE again, or to originate an empty TIE with a relatively short lifetime to prevent a large number of long-lived empty stubs from polluting the network. Each node MUST timeout and clean up the according empty TIEs independently.
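A sketch of this "purge by empty TIE" behavior, with an illustrative short lifetime value that is an assumption and not a normative constant:

   # Sketch: supersede a TIE by flooding an empty version with a bumped
   # sequence number and a short lifetime; every node ages it out
   # independently. SHORT_LIFETIME and the TIE object are illustrative.
   SHORT_LIFETIME = 300  # seconds (assumption)

   def purge(tie):
       tie.elements = []                        # empty content overrides old versions
       tie.seq_nr += 1                          # higher sequence number wins
       tie.remaining_lifetime = SHORT_LIFETIME
       return tie                               # then flood as usual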
Upon restart a node MUST, as any link-state implementation, be prepared to receive TIEs with its own system ID and supersede them with equivalent, newly generated, empty TIEs with a higher sequence number. As above, the lifetime can be relatively short since it only needs to exceed the necessary propagation and processing delay by all the nodes that are within the TIE's flooding scope.
TIE sequence numbers are rolled over using the method described in Appendix A. The first sequence number of any spontaneously originated TIE (i.e. not originated to override a detected older copy in the network) MUST be a reasonably unpredictable random number in the interval [0, 2^30-1], which will prevent otherwise identical TIE headers from remaining "stuck" in the network with content different from the TIE originated after reboot. In traditional link-state protocols this is delegated to a 16-bit checksum on packet content. RIFT avoids this design due to the CPU burden presented by the computation of such checksums and the additional complications tied to the fact that the checksum must be "patched" into the packet after the computation, a difficult proposition in binary hand-crafted formats already and highly incompatible with model-based, serialized formats. The sequence number space is hence consciously chosen to be 64 bits wide to make the occurrence of a TIE with the same sequence number but different content as unlikely as, or even more unlikely than, the checksum method. To emulate the "checksum behavior" an implementation could e.g. choose to compute a 64-bit checksum over the packet content and use that as the first sequence number after reboot.
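Both initialization strategies can be sketched briefly; the checksum variant merely emulates the classic "checksum behavior" and is one possible choice, not a mandated one:

   import random
   import zlib

   def initial_seq_nr_random():
       # Spontaneously originated TIE: unpredictable value in [0, 2^30 - 1].
       return random.SystemRandom().randrange(2 ** 30)

   def initial_seq_nr_checksum(tie_content: bytes):
       # Post-reboot alternative: derive the first sequence number from the
       # content, here by gluing two CRC32s into 64 bits (an assumption;
       # any content-dependent 64-bit digest would do).
       return (zlib.crc32(tie_content) << 32) | zlib.crc32(tie_content[::-1])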
Under certain conditions nodes issue a default route in their South Prefix TIEs with costs as computed in Section 4.3.6.1.
A node X that
originates in its south prefix TIE such a default route IIF
The term "all other nodes at X's level" obviously describes just the nodes at the same level in the PoD with a viable lower level (otherwise the node South TIEs cannot be reflected and the nodes in e.g. PoD 1 and PoD 2 are "invisible" to each other).
A node originating a southbound default route MUST install a default discard route if it did not compute a default route during N-SPF.
Section 1.4 of the Optimized Link State Routing Protocol [RFC3626] (OLSR) introduces the concept of a "multipoint relay" (MPR) that minimizes the overhead of flooding messages in the network by reducing redundant retransmissions in the same region.
A similar technique is applied to RIFT to control northbound flooding. Important observations first:
In a fully connected Clos Network, this means that a node selects one arbitrary parent as FR and then a second one for redundancy. The computation can be kept relatively simple and completely distributed without any need for synchronization amongst nodes. In a "PoD" structure, where Level L+2 is partitioned into silos of equivalent grandparents that are only reachable from their respective parents, this means treating each silo as a fully connected Clos Network and solving the problem within the silo.
In terms of signaling, a node has enough information to select its set of FRs; this information is derived from the node's parents' Node South TIEs, which indicate the parent's reachable northbound adjacencies to its own parents, i.e. the node's grandparents. A node may send a LIE to a northbound neighbor with the optional boolean field `you_are_flood_repeater` set to false, to indicate that the northbound neighbor is not a flood repeater for the node that sent the LIE. In that case the northbound neighbor SHOULD NOT reflood northbound TIEs received from the node that sent the LIE. If `you_are_flood_repeater` is absent or set to true, then the northbound neighbor is a flood repeater for the node that sent the LIE and MUST reflood northbound TIEs received from that node.
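The receiver side of this signaling can be sketched as follows; field access on the hypothetical `lie` and `adjacency` objects is illustrative:

   # Sketch: absence of the optional field defaults to acting as a flood
   # repeater, per the rules above.
   def on_lie_received(adjacency, lie):
       flag = getattr(lie, "you_are_flood_repeater", None)
       adjacency.reflood_northbound = (flag is None) or bool(flag)

   def maybe_reflood_north(adjacency, northbound_tie):
       if adjacency.reflood_northbound:
           adjacency.flood(northbound_tie)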
This specification proposes a simple default algorithm that SHOULD be implemented and used by default on every RIFT node.
The algorithm consists of the following steps:
Additional rules for flooding reduction:
First, due to the distributed, asynchronous nature of ZTP, it can create temporary convergence anomalies where nodes at higher levels of the fabric temporarily see themselves lower than they belong. Since flooding can begin before ZTP is "finished", and in fact must do so given there is no global termination criterion, information may end up in the wrong levels. A special clause when changing level takes care of that.
More difficult is a condition where a node (e.g. a leaf) floods a TIE north towards its grandparent, then its parent reboots, in fact partitioning the grandparent from the leaf directly, and then the leaf itself reboots. That can leave the grandparent holding the "primary copy" of the leaf's TIE. Normally this condition is resolved easily by the leaf re-originating its TIE with a higher sequence number than it sees in northbound TIEs; here however, when the parent comes back it won't be able to obtain the leaf's North TIE from the grandparent easily, and with that the leaf may not issue the TIE with a higher sequence number that can reach the grandparent for a long time. Flooding procedures are extended to deal with the problem by means of special clauses that override the database of a lower level with headers of newer TIEs seen in TIDEs coming from the north.
A node has three possible sources of relevant information for reachability computation. A node knows the full topology south of it from the received North Node TIEs or alternately north of it from the South Node TIEs. A node has the set of prefixes with their associated distances and bandwidths from corresponding prefix TIEs.
To compute prefix reachability, a node conceptually runs a northbound and a southbound SPF. We call these N-SPF and S-SPF, denoting the direction in which the computation front is progressing.
Since neither computation can "loop", it is possible to compute non-equal-cost or even k-shortest paths and "saturate" the fabric to the extent desired but we use simple, familiar SPF algorithms and concepts here as example due to their prevalence in today's routing.
N-SPF MUST use exclusively northbound and East-West adjacencies in the computing node's node North TIEs (since, if the node is a leaf, it may not have generated a node South TIE) when starting SPF. Observe that N-SPF is really just a one-hop variety, since Node South TIEs are not re-flooded southbound beyond a single level (or East-West) and with that the computation cannot progress beyond adjacent nodes.
Once progressing, the computation uses the next higher level's node South TIEs to find the according adjacencies and verify backlink connectivity. Just as in the case of IS-IS or OSPF, two unidirectional links MUST be associated together to confirm bidirectional connectivity. Particular care MUST be paid that the Node TIEs contain not only the correct system IDs but matching levels as well.
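A sketch of such a backlink check, assuming hypothetical Node TIE objects listing adjacencies with neighbor system ID and level:

   # Sketch: a unidirectional adjacency A -> B is only usable if B's Node
   # TIE lists a reverse adjacency whose system ID *and* level both point
   # back at A, and A's view of B's level matches B's Node TIE.
   def backlink_ok(a_node_tie, b_node_tie):
       for adj in a_node_tie.adjacencies:
           if adj.neighbor_id != b_node_tie.system_id:
               continue
           if adj.neighbor_level != b_node_tie.level:
               continue
           for back in b_node_tie.adjacencies:
               if (back.neighbor_id == a_node_tie.system_id and
                       back.neighbor_level == a_node_tie.level):
                   return True
       return False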
Default route found when crossing an E-W link SHOULD be used IIF
This rule forms a "one-hop default route split-horizon" and prevents looping over default routes while allowing for "one-hop protection" of nodes that lost all northbound adjacencies except at Top-of-Fabric where the links are used exclusively to flood topology information in multi-plane designs.
Other south prefixes found when crossing E-W link MAY be used IIF
i.e. the E-W link can be used as a gateway of last resort for a specific prefix only. Using south prefixes across an E-W link can be beneficial, e.g. on automatic de-aggregation in pathological fabric partitioning scenarios.
A detailed example can be found in Section 5.4.
S-SPF MUST use exclusively the southbound adjacencies in the node South TIEs, i.e. progresses towards nodes at lower levels. Observe that E-W adjacencies are NEVER used in the computation. This enforces the requirement that a packet traversing in a southbound direction must never change its direction.
S-SPF MUST use northbound adjacencies in node North TIEs to verify backlink connectivity by checking for presence of the link beside correct SystemID and level.
Using south prefixes over horizontal links MAY occur if the N-SPF includes East-West adjacencies in computation. It can protect against pathological fabric partitioning cases that leave only paths to destinations that would necessitate multiple changes of forwarding direction between north and south.
E-W ToF links behave, in terms of the flooding scopes defined in Section 4.2.3.4, like northbound links and MUST be used exclusively for control plane information flooding. Even though a ToF node could be tempted to use those links during southbound SPF and carry traffic over them, this MUST NOT be attempted since it may lead, e.g. in anycast cases, to routing loops. An implementation MAY try to resolve the looping problem by following strictly tie-broken shortest paths along the ring only, but the details are outside this specification. And even then, the problem of properly provisioning the capacity of such links when they become traffic-bearing in failure cases is vexing.
Under normal circumstances, a node's South TIEs contain just the adjacencies and a default route. However, if a node detects that its default IP prefix covers one or more prefixes that are reachable through it but not through one or more other nodes at the same level, then it MUST explicitly advertise those prefixes in a South TIE. Otherwise, some percentage of the northbound traffic for those prefixes would be sent to nodes without according reachability, causing it to be black-holed. Even when not black-holing, the resulting forwarding could 'backhaul' packets through the higher level spines, clearly an undesirable condition affecting the blocking probabilities of the fabric.
We refer to the process of advertising additional prefixes southbound as 'positive de-aggregation' or 'positive dis-aggregation'. Such dis-aggregation is non-transitive, i.e. its effects are always contained to a single level of the fabric only. Naturally, multiple node or link failures can lead to several independent instances of positive dis-aggregation necessary to prevent looping or bow-tying the fabric.
A node determines the set of prefixes needing de-aggregation using the following steps:
To summarize the above in simplest terms: if a node detects that its default route encompasses prefixes for which one of the other nodes in its level has no possible next-hops in the level below, it has to disaggregate it to prevent black-holing or suboptimal routing through such nodes. Hence a node X needs to determine if it can reach a different set of south neighbors than other nodes at the same level, which are connected to it via at least one common south neighbor. If it can, then prefix disaggregation may be required. If it can't, then no prefix disaggregation is needed. An example of disaggregation is provided in Section 5.3.
A possible algorithm is described last:
A node X computes reachability to all nodes below it based upon the received North TIEs first. This results in a set of routes, each categorized by (prefix, path_distance, next-hop set). Alternately, for clarity in the following procedure, these can be organized by next-hop set as ( (next-hops), {(prefix, path_distance)}). If partial_neighbors isn't empty, then the following procedure describes how to identify prefixes to disaggregate.
   disaggregated_prefixes = { empty }
   nodes_same_level = { empty }

   for each South TIE
       if (South TIE.level == X.level and
           South TIE.originator shares at least one S-neighbor with X)
           add South TIE.originator to nodes_same_level
       end if
   end for

   for each next-hop-set NHS
       isolated_nodes = nodes_same_level
       for each NH in NHS
           if NH in partial_neighbors
               isolated_nodes = intersection(isolated_nodes,
                                             partial_neighbors[NH].nodes)
           end if
       end for
       if isolated_nodes is not empty
           for each prefix using NHS
               add (prefix, distance) to disaggregated_prefixes
           end for
       end if
   end for

   copy disaggregated_prefixes to X's South TIE
   if X's South TIE is different
       schedule South TIE for flooding
   end if
Figure 15: Computation of Disaggregated Prefixes
Each disaggregated prefix is sent with the according path_distance. This allows a node to send the same South TIE to each south neighbor. The south neighbor which is connected to that prefix will thus have a shorter path.
Finally, to summarize the less obvious points partially omitted in the algorithms to keep them more tractable:
In case positive disaggregation is triggered, and due to the very stable but unsynchronized nature of the algorithm, the nodes may issue the necessary disaggregated prefixes at different points in time. This can lead for a short time to an "incast" behavior where the first advertising router, based on the nature of longest prefix match, will attract all the traffic. An implementation MAY hence choose different strategies to address this behavior if needed.
To close this section it is worth observing that in a single plane ToF this disaggregation prevents blackholing up to (K_LEAF * P) link failures in terms of Section 4.1.2; in other terms, it takes at minimum that many link failures to partition the ToF into multiple planes.
As explained in Section 4.1.3, failures in a multi-plane Top-of-Fabric, or more than (K_LEAF * P) links failing in a single plane design, can generate fallen leaves. Such a scenario cannot be addressed by positive disaggregation alone and needs a further mechanism.
Let us return in this section to designs with multiple planes as shown in Figure 3. Figure 16 highlights how the ToF is cabled in the case of two planes by means of dual rings to distribute all the North TIEs within both planes. For people familiar with traditional link-state routing protocols, the ToF level can be considered equivalent to area 0 in OSPF or level-2 in IS-IS, which needs to be "connected" as well for the protocol to operate correctly.
.  ++==========++          ++==========++
.  II          II          II          II
.+----++--+ +----++--+ +----++--+ +----++--+
.|ToF   A1| |ToF   B1| |ToF   B2| |ToF   A2|
.++-+-++--+ ++-+-++--+ ++-+-++--+ ++-+-++--+
. | |  II    | |  II    | |  II    | |  II
. | |  ++==========++   | |  ++==========++
. | |        | |        | |        | |
.
. ~~~ Highlighted ToF of the previous multi-plane figure ~~
Figure 16: Topologically Connected Planes
As described in Section 4.1.3 failures in multi-plane fabrics can lead to blackholes which normal positive disaggregation cannot fix. The mechanism of negative, transitive disaggregation incorporated in RIFT provides the according solution.
A ToF node that discovers that it cannot reach a fallen leaf disaggregates all the prefixes of such leaves. It uses for that purpose negative prefix South TIEs that are, as usual, flooded southwards with the scope defined in Section 4.2.3.4.
Transitively, a node explicitly loses connectivity to a prefix when none of its children advertises it and when the prefix is negatively disaggregated by all of its parents. When that happens, the node originates the negative prefix further down south. Since the mechanism applies recursively southwards, the negative prefix may propagate transitively all the way down to the leaves. This is necessary since leaves connected to multiple planes by means of disjoint paths may have to choose the correct plane already at the very bottom of the fabric to make sure that they don't send traffic towards another leaf using a plane where it is "fallen", at which point a blackhole is unavoidable.
When the connectivity is restored, a node that disaggregated a prefix withdraws the negative disaggregation by the usual mechanism of re-advertising TIEs omitting the negative prefix.
So far, the document has omitted the description of the computation necessary to generate the correct set of negative prefixes. Negative prefixes can in fact be advertised due to two different triggers; we describe them consecutively.
The first origination reason is a computation that uses all the node North TIEs to build the set of all reachable nodes by running a reachability computation over the complete graph, including ToF links, with the node itself as root. This result is compared with the result of the normal southbound SPF as described in Section 4.2.4.2. The difference consists of the fallen leaves, and all their attached prefixes are advertised as negative prefixes southbound if the node does not see the prefix as reachable within the southbound SPF.
The second mechanism hinges on the understanding of how the negative prefixes are used within the computation as described in Figure 17. When attaching the negative prefixes, at a certain point in time the negative prefix may find itself with all the viable nodes from the shorter match next-hop pruned; in other words, all its northbound neighbors provided a negative prefix advertisement. This is the trigger to advertise this negative prefix transitively south, and it is normally caused by the node being in a plane where the prefix belongs to a fabric leaf that has "fallen" in this plane. Obviously, when one of the northbound switches withdraws its negative advertisement, the node has to withdraw its transitively provided negative prefix as well.
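The first trigger can be sketched as a plain set difference between two reachability computations; graphs are given as adjacency maps and all helper names are illustrative:

   from collections import deque

   def reachable(root, edges):
       """BFS reachability over an adjacency map {node: set(neighbors)}."""
       seen, todo = {root}, deque([root])
       while todo:
           for nbr in edges.get(todo.popleft(), ()):
               if nbr not in seen:
                   seen.add(nbr)
                   todo.append(nbr)
       return seen

   def fallen_leaves(self_id, full_graph, south_graph):
       # Nodes visible over the complete graph built from all node North
       # TIEs (including ToF ring links) but invisible to the normal
       # southbound SPF are the fallen leaves; their attached prefixes
       # are candidates for negative advertisement.
       return reachable(self_id, full_graph) - reachable(self_id, south_graph)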
After SPF is run, it is necessary to attach the resulting reachability information in the form of prefixes. For S-SPF, prefixes from a North TIE are attached to the originating node with that node's next-hop set and a distance equal to the prefix's cost plus the node's minimized path distance. The RIFT route database, a set of (prefix, prefix-type, attributes, path_distance, next-hop set), accumulates these results.
In the case of N-SPF, prefixes from each South TIE need to also be added to the RIFT route database. The N-SPF is really just a stub, so the computing node needs simply to determine, for each prefix in a South TIE that originated from an adjacent node, what next-hops to use to reach that node. Since there may be parallel links, the next-hops to use can be a set; presence of the computing node in the associated Node South TIE is sufficient to verify that at least one link has bidirectional connectivity. The set of minimum cost next-hops from the computing node X to the originating adjacent node is determined.
Each prefix has its cost adjusted before being added into the RIFT route database. The cost of the prefix is set to the cost received plus the cost of the minimum distance next-hop to that neighbor while taking into account its attributes such as mobility per Section 4.3.3. Then each prefix can be added into the RIFT route database with the next-hop set; ties are broken based upon type first and then distance and further on `PrefixAttributes` and only the best combination is used for forwarding. RIFT route preferences are normalized by the according Thrift model type.
An example implementation for node X follows:

   for each South TIE
       if South TIE.level > X.level
           next_hop_set = set of minimum cost links to the South TIE.originator
           next_hop_cost = minimum cost link to South TIE.originator
       end if
       for each prefix P in the South TIE
           P.cost = P.cost + next_hop_cost
           if P not in route_database:
               add (P, P.cost, P.type, P.attributes, next_hop_set)
                   to route_database
           end if
           if (P in route_database):
               if route_database[P].cost > P.cost or
                  route_database[P].type > P.type:
                   update route_database[P] with
                       (P, P.type, P.cost, P.attributes, next_hop_set)
               else if route_database[P].cost == P.cost and
                       route_database[P].type == P.type:
                   update route_database[P] with
                       (P, P.type, P.cost, P.attributes,
                        merge(next_hop_set, route_database[P].next_hop_set))
               else
                   // Not preferred route so ignore
               end if
           end if
       end for
   end for

Figure 17: Adding Routes from South TIE Positive and Negative Prefixes
After the positive prefixes are attached and tie-broken, negative prefixes are attached and used in the case of the northbound computation, ideally from the shortest length to the longest. The next-hop adjacencies for a negative prefix are inherited from the longest positive prefix that aggregates it, and subsequently the adjacencies to nodes that advertised the prefix negatively are removed.
The rule of inheritance MUST be maintained when the next-hop list for a prefix is modified, as the modification may affect the entries for matching negative prefixes of immediately longer prefix length. For instance, if a next-hop is added, then by inheritance it must be added to all the negative routes of immediately longer prefix length unless it is pruned due to a negative advertisement for the same next-hop. Similarly, if a next-hop is deleted for a given prefix, then it is deleted for all the immediately aggregated negative routes. This will recurse in the case of nested negative prefix aggregations.
The rule of inheritance must also be maintained when a new prefix of intermediate length is inserted, or when the immediately aggregating prefix is deleted from the routing table, making an even shorter aggregating prefix the one from which the negative routes now inherit their adjacencies. As the aggregating prefix changes, all the negative routes must be recomputed, and then again the process may recurse in case of nested negative prefix aggregations.
Although these operations can be computationally expensive, the overall load on devices in the network is low because these computations are not run very often, as positive route advertisements are always preferred over negative ones. This prevents recursion in most cases because positive reachability information never inherits next hops.
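The inheritance rule itself reduces to a small set computation; `rib` and `aggregate_of` below are hypothetical helpers mapping prefixes to next-hop sets and finding the longest aggregating shorter match:

   # Sketch: next hops of a negative route = next hops inherited from the
   # closest aggregating route, minus every next hop that advertised the
   # prefix negatively.
   def negative_route_nexthops(prefix, negative_advertisers, rib, aggregate_of):
       inherited = set(rib[aggregate_of(prefix)])
       remaining = inherited - set(negative_advertisers)
       # An empty remainder makes the route a discard route and, per the
       # transitivity rule, triggers negative advertisement further south.
       return remaining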
To make the negative disaggregation less abstract and provide an example let us consider a ToP node T1 with 4 ToF parents S1..S4 as represented in Figure 18:
   +----+       +----+       +----+       +----+          N
   | S1 |       | S2 |       | S3 |       | S4 |          ^
   +----+       +----+       +----+       +----+      W<  +  >E
     |            |            |            |             v
     |+-----------+            |            |             S
     ||+-----------------------+            |
     |||+-----------------------------------+
     ||||
   +----+
   | T1 |
   +----+
Figure 18: A ToP Node with 4 Parents
If all ToF nodes can reach all the prefixes in the network, with RIFT they will normally advertise a default route south. An abstract Routing Information Base (RIB), more commonly known as a routing table, stores all types of maintained routes, including the negative ones, and "tie-breaks" for the best one, whereas an abstract Forwarding Table (FIB) retains only the ultimately computed "positive" routing instructions. In T1, those tables would look as illustrated in Figure 19:
   +---------+
   | Default |
   +---------+
        |
        |     +--------+
        +---> | Via S1 |
        |     +--------+
        |
        |     +--------+
        +---> | Via S2 |
        |     +--------+
        |
        |     +--------+
        +---> | Via S3 |
        |     +--------+
        |
        |     +--------+
        +---> | Via S4 |
              +--------+
Figure 19: Abstract RIB
In case T1 receives a negative advertisement for prefix 2001:db8::/32 from S1 a negative route is stored in the RIB (indicated by a ~ sign), while the more specific routes to the complementing ToF nodes are installed in FIB. RIB and FIB in T1 now look as illustrated in Figure 20 and Figure 21, respectively:
   +---------+        +----------------+
   | Default | <----- | ~2001:db8::/32 |
   +---------+        +----------------+
        |                   |
        |     +--------+    |     +--------+
        +---> | Via S1 |    +---> | Via S1 |
        |     +--------+          +--------+
        |
        |     +--------+
        +---> | Via S2 |
        |     +--------+
        |
        |     +--------+
        +---> | Via S3 |
        |     +--------+
        |
        |     +--------+
        +---> | Via S4 |
              +--------+
Figure 20: Abstract RIB after Negative 2001:db8::/32 from S1
The negative 2001:db8::/32 prefix entry inherits from ::/0, so the positive more specific routes are the complements to S1 in the set of next-hops for the default route. That entry is composed of S2, S3, and S4, or, in other words, it uses all entries of the default route with a "hole punched" for S1 into them. These are the next-hops that are still available to reach 2001:db8::/32, now that S1 advertised that it will not forward 2001:db8::/32 anymore. Ultimately, those resulting next-hops are installed in the FIB for the more specific route to 2001:db8::/32 as illustrated below:
   +---------+        +---------------+
   | Default |        | 2001:db8::/32 |
   +---------+        +---------------+
        |                   |
        |     +--------+    |
        +---> | Via S1 |    |
        |     +--------+    |
        |                   |
        |     +--------+    |     +--------+
        +---> | Via S2 |    +---> | Via S2 |
        |     +--------+    |     +--------+
        |                   |
        |     +--------+    |     +--------+
        +---> | Via S3 |    +---> | Via S3 |
        |     +--------+    |     +--------+
        |                   |
        |     +--------+    |     +--------+
        +---> | Via S4 |    +---> | Via S4 |
              +--------+          +--------+
Figure 21: Abstract FIB after Negative 2001:db8::/32 from S1
To illustrate matters further let us consider T1 receiving a negative advertisement for prefix 2001:db8:1::/48 from S2, which is stored in RIB again. After the update, the RIB in T1 is illustrated in Figure 22:
   +---------+        +----------------+          +------------------+
   | Default | <----- | ~2001:db8::/32 | <------  | ~2001:db8:1::/48 |
   +---------+        +----------------+          +------------------+
        |                   |                          |
        |     +--------+    |     +--------+           |
        +---> | Via S1 |    +---> | Via S1 |           |
        |     +--------+          +--------+           |
        |                                              |
        |     +--------+                               |     +--------+
        +---> | Via S2 |                               +---> | Via S2 |
        |     +--------+                                     +--------+
        |
        |     +--------+
        +---> | Via S3 |
        |     +--------+
        |
        |     +--------+
        +---> | Via S4 |
              +--------+
Figure 22: Abstract RIB after Negative 2001:db8:1::/48 from S2
Negative 2001:db8:1::/48 inherits from 2001:db8::/32 now, so the positive more specific routes are the complements to S2 in the set of next hops for 2001:db8::/32, which are S3 and S4, or, in other words, all entries of the parent with the negative holes "punched in" again. After the update, the FIB in T1 shows as illustrated in Figure 23:
   +---------+       +---------------+       +-----------------+
   | Default |       | 2001:db8::/32 |       | 2001:db8:1::/48 |
   +---------+       +---------------+       +-----------------+
        |                  |                      |
        |     +--------+   |                      |
        +---> | Via S1 |   |                      |
        |     +--------+   |                      |
        |                  |                      |
        |     +--------+   |     +--------+       |
        +---> | Via S2 |   +---> | Via S2 |       |
        |     +--------+   |     +--------+       |
        |                  |                      |
        |     +--------+   |     +--------+       |     +--------+
        +---> | Via S3 |   +---> | Via S3 |       +---> | Via S3 |
        |     +--------+   |     +--------+       |     +--------+
        |                  |                      |
        |     +--------+   |     +--------+       |     +--------+
        +---> | Via S4 |   +---> | Via S4 |       +---> | Via S4 |
              +--------+         +--------+             +--------+
Figure 23: Abstract FIB after Negative 2001:db8:1::/48 from S2
Further, let us say that S3 stops advertising its service as default gateway. The entry is removed from the RIB as usual. In order to update the FIB, it is necessary to eliminate the S3 FIB entry for the default route, as well as the S3 entries in all the FIB entries that were created for negative routes pointing to the RIB entry being removed (::/0). This is done recursively for 2001:db8::/32 and then for 2001:db8:1::/48. The related FIB entries via S3 are removed, as illustrated in Figure 24.
   +---------+       +---------------+       +-----------------+
   | Default |       | 2001:db8::/32 |       | 2001:db8:1::/48 |
   +---------+       +---------------+       +-----------------+
        |                  |                      |
        |     +--------+   |                      |
        +---> | Via S1 |   |                      |
        |     +--------+   |                      |
        |                  |                      |
        |     +--------+   |     +--------+       |
        +---> | Via S2 |   +---> | Via S2 |       |
        |     +--------+   |     +--------+       |
        |                  |                      |
        |                  |                      |
        |                  |                      |
        |     +--------+   |     +--------+       |     +--------+
        +---> | Via S4 |   +---> | Via S4 |       +---> | Via S4 |
              +--------+         +--------+             +--------+
Figure 24: Abstract FIB after Loss of S3
Suppose that at this point in time S4 would also disaggregate prefix 2001:db8:1::/48. This would mean that the FIB entry for 2001:db8:1::/48 becomes a discard route, and that would be the signal for T1 to disaggregate prefix 2001:db8:1::/48 negatively in a transitive fashion with its own children.
Finally, let us look at the case where S3 becomes available again as a default gateway, and a negative advertisement is received from S4 about prefix 2001:db8:2::/48, as opposed to 2001:db8:1::/48. Again, a negative route is stored in the RIB, and the more specific routes via the complementing ToF nodes are installed in the FIB. Since 2001:db8:2::/48 inherits from 2001:db8::/32, the positive FIB routes are chosen by removing S4 from S2, S3, S4. The abstract FIB in T1 now shows as illustrated in Figure 25:
                                                  +-----------------+
                                                  | 2001:db8:2::/48 |
                                                  +-----------------+
                                                        |
   +---------+   +---------------+   +-----------------+
   | Default |   | 2001:db8::/32 |   | 2001:db8:1::/48 |
   +---------+   +---------------+   +-----------------+
        |              |                  |             |
        |  +--------+  |                  |             |  +--------+
        +->| Via S1 |  |                  |             +->| Via S2 |
        |  +--------+  |                  |             |  +--------+
        |              |                  |             |
        |  +--------+  |  +--------+      |             |  +--------+
        +->| Via S2 |  +->| Via S2 |      |             +->| Via S3 |
        |  +--------+  |  +--------+      |                +--------+
        |              |                  |
        |  +--------+  |  +--------+      |  +--------+
        +->| Via S3 |  +->| Via S3 |      +->| Via S3 |
        |  +--------+  |  +--------+      |  +--------+
        |              |                  |
        |  +--------+  |  +--------+      |  +--------+
        +->| Via S4 |  +->| Via S4 |      +->| Via S4 |
           +--------+     +--------+         +--------+
Figure 25: Abstract FIB after Negative 2001:db8:2::/48 from S4
Each RIFT node can operate in zero touch provisioning (ZTP) mode, i.e. it has no configuration (unless it is a Top-of-Fabric at the top of the topology, or it must operate in the topology as a leaf and/or support leaf-2-leaf procedures) and it will fully configure itself after being attached to the topology. Configured nodes and nodes operating in ZTP can be mixed and will form a valid topology if achievable.
The derivation of the level of each node happens based on offers received from its neighbors, whereas each node (with the possible exception of configured leaves) tries to attach at the highest possible point in the fabric. This guarantees that even if the diffusion front reaches a node from "below" faster than from "above", it will greedily abandon an already negotiated level derived from nodes topologically below it and properly peer with nodes above.
The fabric is very consciously numbered from the top to allow for PoDs of different heights and to minimize the amount of provisioning necessary, in this case just a TOP_OF_FABRIC flag on every node at the top of the fabric.
This section describes the necessary concepts and procedures for ZTP operation.
The interdependencies between the different flags and the configured level can be somewhat vexing at first and it may take multiple reads of the glossary to comprehend them.
RIFT nodes require a 64-bit SystemID which SHOULD be derived as an EUI-64 MA-L according to [EUI64]. The organizationally governed portion of this ID (24 bits) can be used to generate multiple IDs if required to indicate more than one RIFT instance.
As a matter of operational concern, the router MUST ensure that such an identifier does not change very frequently (or at least not without sending all its TIEs with fairly short lifetimes), since otherwise the network may be left with large amounts of stale TIEs in other nodes (though this is not necessarily a serious problem if the procedures described in Section 7 are implemented).
ZTP forces us to think about miscabled or unusually cabled fabrics and how such a topology can be forced into the "lattice" structure which a fabric represents (with further restrictions). Let us consider a necessary and sufficient physical cabling in Figure 26. We assume all nodes are in the same PoD.
.          +---+
.          | A |          s   = TOP_OF_FABRIC
.          | s |          l   = LEAF_ONLY
.          ++-++          l2l = LEAF_2_LEAF
.           | |
.        +--+ +--+
.        |       |
.     +--++     ++--+
.     | E |     | F |
.     |   +-+   |   +-----------+
.     ++--+ |   ++-++           |
.      |    |    | |            |
.      |    +-------+           |
.      |         | |            |
.      |         | +----+       |
.      |         |      |       |
.     ++-++     ++-++           |
.     | I +-----+ J |           |
.     |   |     |   +-+         |
.     ++-++     +--++ |         |
.      |           |  |         |
.      +---------+ |  +------+  |
.                | |         |  |
.      +-----------------+   |  |
.      |         | |     |   |  |
.     ++-++     ++-++           |
.     | X +-----+ Y +-----------+
.     |l2l|     | l |
.     +---+     +---+
Figure 26: Generic ZTP Cabling Considerations
First, we must anchor the "top" of the cabling, and that's what the TOP_OF_FABRIC flag at node A is for. Then things look smooth until we have to decide whether node Y is at the same level as I and J (and, as a consequence, X is south of it) or at the same level as X. This is unresolvable here until we "nail down the bottom" of the topology. To achieve that, we choose in this example to use the leaf flags in X and Y. In the case where Y does not have a leaf flag, it will try to elect the highest level offered and end up at the same level as I and J.
A node starting up with UNDEFINED_VALUE (i.e. without a CONFIGURED_LEVEL or any leaf or TOP_OF_FABRIC flag) MUST follow those additional procedures:
A node starting with LEVEL_VALUE being 0 (i.e. it assumes a leaf function by being configured with the appropriate flags or has a CONFIGURED_LEVEL of 0) MUST follow those additional procedures:
It MAY also follow modified procedures:
This section specifies the precise, normative ZTP FSM and can be omitted unless the reader is pursuing an implementation of the protocol.
Initial state is ComputeBestOffer.
                        Enter
                          |
                          v
               +------------------+
               | ComputeBestOffer |
               |                  |<----+
               | Entry:           |     | BetterHAL [LEVEL_COMPUTE]
               | [LEVEL_COMPUTE]  |     | BetterHAT [LEVEL_COMPUTE]
               |                  |     | ChangeLocalConfiguredLevel
               |                  |     |    [StoreConfigLevel,
               |                  |     |     LEVEL_COMPUTE]
               |                  |     | ChangeLocalHierarchyIndications
               |                  |     |    [StoreLeafFlags,
               |                  |     |     LEVEL_COMPUTE]
               |                  |     | LostHAT [LEVEL_COMPUTE]
               |                  |     | NeighborOffer [IF NoLevelOffered
               |                  |     |    THEN REMOVE_OFFER
               |                  |     |    ELSE IF OfferedLevel > Leaf
               |                  |     |         THEN UPDATE_OFFER
               |                  |     |         ELSE REMOVE_OFFER
               |                  |     | ShortTic [RemoveExpiredOffers]
               |                  |-----+
               |                  |
               |                  |<---------------------
               |                  |--------------------->  (UpdatingClients)
               |                  |    ComputationDone [-]
               +------------------+
                     ^   |
                     |   |  LostHAL [IF AnySouthBoundAdjacenciesPresent
                     |   |           THEN UpdateHoldDownTimerToNormalValue
                     |   |           ELSE FireHoldDownTimerImmediately]
                     |   V
                    (HoldingDown)
ZTP FSM
              (ComputeBestOffer)
                    |    ^
                    |    | ChangeLocalConfiguredLevel [StoreConfiguredLevel]
                    |    | ChangeLocalHierarchyIndications [StoreLeafFlags]
                    |    | HoldDownExpired [PURGE_OFFERS]
                    V    |
               +------------------+
               | HoldingDown      |
               |                  |<----+
               |                  |     | BetterHAL [-]
               |                  |     | BetterHAT [-]
               |                  |     | ComputationDone [-]
               |                  |     | LostHAL [-]
               |                  |     | LostHat [-]
               |                  |     | NeighborOffer [IF NoLevelOffered
               |                  |     |    THEN REMOVE_OFFER
               |                  |     |    ELSE IF OfferedLevel > Leaf
               |                  |     |         THEN UPDATE_OFFER
               |                  |     |         ELSE REMOVE_OFFER
               |                  |     | ShortTic [RemoveExpiredOffers,
               |                  |     |    IF HoldDownTimer expired
               |                  |     |    THEN PUSH HoldDownExpired]
               |                  |-----+
               +------------------+
                         ^
                         |
                  (UpdatingClients)
ZTP FSM (continued)
              (ComputeBestOffer)
                    |    ^
                    |    | BetterHAL [-]
                    |    | BetterHAT [-]
                    |    | LostHAT [-]
                    |    | ChangeLocalHierarchyIndications [StoreLeafFlags]
                    |    | ChangeLocalConfiguredLevel [StoreConfigLevel]
                    V    |
               +------------------+
               | UpdatingClients  |
               |                  |<----+
               | Entry:           |     |
               | [UpdateAllLIE-   |     | NeighborOffer [IF NoLevelOffered
               |  FSMsWith-       |     |    THEN REMOVE_OFFER
               |  Computation-    |     |    ELSE IF OfferedLevel > Leaf
               |  Results]        |     |         THEN UPDATE_OFFER
               |                  |     |         ELSE REMOVE_OFFER
               |                  |     | ShortTic [RemoveExpiredOffers]
               |                  |-----+
               +------------------+
                         |
                         | LostHAL [IF AnySouthBoundAdjacenciesPresent
                         |          THEN UpdateHoldDownTimerToNormalValue
                         |          ELSE FireHoldDownTimerImmediately]
                         V
                   (HoldingDown)
ZTP FSM (continued)
Events
Actions
The following words are used for well-known procedures:
The procedures defined in Section 4.2.7.4 will lead to the RIFT topology and levels depicted in Figure 27.
.          +---+
.          | As|
.          | 24|
.          ++-++
.           | |
.        +--+ +--+
.        |       |
.     +--++     ++--+
.     | E |     | F |
.     | 23+-+   | 23+-----------+
.     ++--+ |   ++-++           |
.      |    |    | |            |
.      |    +-------+           |
.      |         | |            |
.      |         | +----+       |
.      |         |      |       |
.     ++-++     ++-++           |
.     | I +-----+ J |           |
.     | 22|     | 22|           |
.     ++--+     +--++           |
.      |           |            |
.      +---------+ |            |
.                | |            |
.               ++-++    +---+  |
.               | X |    | Y +--+
.               | 0 |    | 0 |
.               +---+    +---+
Figure 27: Generic ZTP Topology Autoconfigured
If we imagine the LEAF_ONLY restriction on Y removed, however, the outcome would be very different and result in Figure 28. This basically demonstrates that auto-configuration makes miscabling detection harder and, with that, can lead to undesirable effects in cases where leaves are not "nailed down" by the accordingly configured flags and are arbitrarily cabled.
A node MAY analyze the outstanding level offers on its interfaces and generate warnings when its internal ruleset flags a possible miscabling. As an example, when a node sees ZTP level offers that differ by more than one level from its chosen level (with proper accounting for leaves being at level 0), this can indicate miscabling.
.          +---+
.          | As|
.          | 24|
.          ++-++
.           | |
.        +--+ +--+
.        |       |
.     +--++     ++--+
.     | E |     | F |
.     | 23+-+   | 23+-------+
.     ++--+ |   ++-++       |
.      |    |    | |        |
.      |    +-------+       |
.      |         | |        |
.      |         | +----+   |
.      |         |      |   |
.     ++-++     ++-++     +-+-+
.     | I +-----+ J +-----+ Y |
.     | 22|     | 22|     | 22|
.     ++-++     +--++     ++-++
.      | |         |       | |
.      | +-----------------+ |
.      |           |         |
.      +---------+ |         |
.                | |         |
.               ++-++        |
.               | X +--------+
.               | 0 |
.               +---+
Figure 28: Generic ZTP Topology Autoconfigured
The auto-configuration mechanism computes a global maximum of levels by diffusion. The achieved equilibrium can be disturbed massively by all nodes with the highest level either leaving or entering the domain (with some finer distinctions not explained further). It is therefore recommended that each node is multi-homed towards nodes with respective HAL offerings. Fortunately, this is the natural state of things for the topology variants considered in RIFT.
The overload bit MUST be respected by all necessary SPF computations. A node with the overload bit set SHOULD advertise all locally hosted prefixes both northbound and southbound, all other southbound prefixes SHOULD NOT be advertised.
Leaf nodes SHOULD set the overload bit on all originated Node TIEs. If spine nodes were to forward traffic not intended for the local node, the leaf node would not be able to prevent routing/forwarding loops as it does not have the necessary topology information to do so.
Leaf nodes only have visibility to directly connected nodes and therefore are not required to run "full" SPF computations. Instead, prefixes from neighboring nodes can be gathered to run a "partial" SPF computation in order to build the routing table.
Leaf nodes SHOULD only hold their own N-TIEs, and in cases of L2L implementations, the N-TIEs of their East/West neighbors. Leaf nodes MUST hold all S-TIEs from their neighbors.
Normally, a full network graph is created based on the local N-TIEs and the remote S-TIEs that a node receives from its neighbors, at which time the necessary SPF computations are performed. Instead, leaf nodes can simply compute the minimum cost and next-hop set of each leaf neighbor by examining their local adjacencies. Associated N-TIEs are used to determine bi-directionality and derive the next-hop set. Cost is then derived from the minimum cost of the local adjacency to the neighbor and the prefix cost.
Leaf nodes would then attach necessary prefixes as described in Section 4.2.6.
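A sketch of such a "partial" computation, with illustrative input shapes (per-neighbor adjacency lists and per-neighbor prefix advertisements):

   # Sketch: leaf route computation without a full SPF. adjacencies maps
   # neighbor -> [(ifindex, link_cost)]; neighbor_prefixes maps
   # neighbor -> [(prefix, prefix_cost)].
   def leaf_routes(adjacencies, neighbor_prefixes):
       routes = {}
       for neighbor, links in adjacencies.items():
           best = min(cost for _, cost in links)
           nhs = {ifx for ifx, cost in links if cost == best}
           for prefix, pcost in neighbor_prefixes.get(neighbor, []):
               total = best + pcost
               old = routes.get(prefix)
               if old is None or total < old[0]:
                   routes[prefix] = (total, set(nhs))
               elif total == old[0]:
                   old[1].update(nhs)   # ECMP across equal-cost neighbors
       return routes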
The RIFT control plane MUST maintain the real-time status of every prefix, to which port it is attached, and to which leaf node that port belongs. This is still true in cases of IP mobility where the point of attachment may change several times a second.
There are two classic approaches to explicitly maintain this information:
RIFT supports a hybrid approach by using an optional 'PrefixSequenceType' attribute (that we also call a 'monotonic clock') that consists of a timestamp and optional sequence number field. When this attribute is present (observe that per data schema the attribute itself is optional but in case it is included the 'timestamp' field is required):
All monotonic clock values MUST be compared to each other using the following rules:
For attachment changes that occur less frequently (e.g. once per second), the timestamp that the RIFT infrastructure captures should be enough to determine the most current discovery. If the point of attachment changes faster than the maximum drift of the timestamping mechanism (i.e. MAXIMUM_CLOCK_DELTA), then a sequence number SHOULD be used to enable necessary precision to determine currency.
The sequence counter in [RFC8505] is encoded as one octet and wraps around using Appendix A.
Within the resolution of MAXIMUM_CLOCK_DELTA, sequence counter values captured during 2 sequential iterations of the same timestamp SHOULD be comparable. This means that with default values, a node may move up to 127 times in a 200 millisecond period and the clocks will remain comparable. This allows the RIFT infrastructure to explicitly assert the most up-to-date advertisement.
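The comparison logic can be sketched as follows; the 200 millisecond MAXIMUM_CLOCK_DELTA value and the one-octet serial window are assumptions mirroring the prose for illustration, not values lifted from the schema:

   MAXIMUM_CLOCK_DELTA = 0.200  # seconds (assumption)

   def serial_gt(a, b, bits=8):
       """Serial-number 'greater than' for a wrapping one-octet counter."""
       half = 1 << (bits - 1)
       return a != b and ((a - b) % (1 << bits)) < half

   def newer(clock_a, clock_b):
       """True if clock_a = (timestamp, seq_or_None) is newer than clock_b."""
       (ts_a, seq_a), (ts_b, seq_b) = clock_a, clock_b
       if abs(ts_a - ts_b) > MAXIMUM_CLOCK_DELTA:
           return ts_a > ts_b              # timestamps alone are decisive
       if seq_a is None or seq_b is None:
           return ts_a > ts_b
       return serial_gt(seq_a, seq_b)      # disambiguate within clock drift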
A unicast prefix can be attached to at most one leaf, whereas an anycast prefix may be reachable via more than one leaf.
If a monotonic clock attribute is provided on the prefix, then the prefix with the `newest` clock value is strictly preferred. An anycast prefix either does not carry a clock, or all its clock attributes MUST be the same under the rules of Section 4.3.3.1.
Observe that it is important that in mobility events the leaf re-floods as quickly as possible the absence of the prefix that moved away.
Observe further that, without support for [RFC8505], movements on the fabric within intervals smaller than 100 msec will be seen as anycast.
RIFT is agnostic to any overlay technologies and their associated control and transports that run on top of it (e.g. VXLAN). It is expected that leaf nodes and possibly Top-of-Fabric nodes can perform necessary data plane encapsulation.
In the context of mobility, overlays provide another possible solution to avoid injecting mobile prefixes into the fabric as well as improving scalability of the deployment. It makes sense to consider overlays for mobility solutions in IP fabrics. As an example, a mobility protocol such as LISP may inform the ingress leaf of the location of the egress leaf in real time.
Another possibility is to consider mobility as an underlay service and support it in RIFT to an extent. The load on the fabric obviously grows with the amount of mobility, since a move forces flooding and computation on all nodes within the scope of the move, so tunneling from the leaf to the Top-of-Fabric may be desired to speed up convergence times.
RIFT supports the southbound distribution of key-value pairs that can be used to distribute information to facilitate higher levels of functionality (e.g. distribution of configuration information). KV South TIEs may arrive from multiple nodes, and therefore a node MUST execute the following tie-breaking rules for each key:
Consider that if a node goes down, nodes south of it will lose associated adjacencies causing them to disregard corresponding KVs. New KV South TIEs are advertised to prevent stale information being used by nodes that are farther south. KV advertisements southbound are not a result of independent computation by every node over the same set of South TIEs, but a diffused computation.
Certain use cases necessitate distribution of essential KV information that is generated by the leaves in the northbound direction. Such information is flooded in KV North TIEs. Since the originator of the KV North TIEs is preserved during flooding, overlapping keys MAY be used. However, to avoid further protocol complexity, the same tie-breaking rules as used in southbound distribution SHOULD be used.
RIFT MAY incorporate BFD to react quickly to link failures. In such a case, the following procedures are introduced:
A well understood problem in fabrics is that in case of link failures, it would be ideal to rebalance how much traffic is sent to switches in the next level based on available ingress and egress bandwidth.
RIFT supports a very light weight mechanism that can deal with the problem in an approximate way based on the fact that RIFT is loop-free.
Every RIFT node SHOULD compute the amount of northbound bandwidth available through each neighbor at the higher level and modify the distance received on the default route from that neighbor accordingly. Default routes with differing distances SHOULD be used to support weighted ECMP forwarding. We call such a distance Bandwidth Adjusted Distance, or BAD. This is best illustrated by a simple example.
.      100      x    100          100  MBits
.       |       x     |            |
.    +-+---+-+      +-+---+-+
.    |       |      |       |
.    |Spin111|      |Spin112|
.    +-+---+++      ++----+++
.     |x  ||         ||   ||
.     ||  |+---------------+  ||
.     ||  +---------------+|  ||
.     ||         ||        || ||
.     ||         ||        || ||
.    -----All Links 10 MBit-------
.     ||         ||        || ||
.     ||         ||        || ||
.     ||  +------------+|  || ||
.     ||  |+------------+  || ||
.     |x  ||               || ||
.    +-+---+++      +--++-+++
.    |       |      |       |
.    |Leaf111|      |Leaf112|
.    +-------+      +-------+
Figure 29: Balancing Bandwidth
Figure 29 depicts an example topology where links between leaf and spine nodes are 10 MBit/s and links from spine nodes northbound are 100 MBit/s. Consider a parallel link failure between Leaf 111 and Spine 111 and as a result, Leaf 111 wants to forward more traffic toward Spine 112. Additionally, we consider an uplink failure on Spine 111.
The local modification of the received default route distance from upper level is achieved by running a relatively simple algorithm where the bandwidth is weighted exponentially, while the distance on the default route represents a multiplier for the bandwidth weight for easy operational adjustments.
On a node L, use the Node TIEs to compute, for each non-overloaded northbound neighbor N, 3 values:
For all T_N_u determine the according M_N_u as log_2(next_power_2(T_N_u)) and determine MAX_M_N_u as maximum value of all such M_N_u values.
For each advertised default route from a node N modify the advertised distance D to BAD = D * (1 + MAX_M_N_u - M_N_u) and use BAD instead of distance D to weight balance default forwarding towards N.
For the example above, a simple table of values will help in understanding the concept. We assume that all default route distances are advertised with D=1 and that OVERSUBSCRIPTION_CONSTANT = 1.
Node | N | T_N_u | M_N_u | BAD |
---|---|---|---|---|
Leaf111 | Spine 111 | 110 | 7 | 2 |
Leaf111 | Spine 112 | 220 | 8 | 1 |
Leaf112 | Spine 111 | 120 | 7 | 2 |
Leaf112 | Spine 112 | 220 | 8 | 1 |
If a calculation produces a result exceeding the range of the type, e.g. bandwidth, the result is set to the highest possible value for that type.
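The table values can be reproduced with a few lines of Python; a worked sketch of the formula above, using Leaf111's view (110 Mbit/s of northbound bandwidth towards Spine 111 after the failures, 220 Mbit/s towards Spine 112):

   import math

   def next_power_2(x):
       return 1 << max(0, math.ceil(math.log2(x)))

   def bad(t_values, d=1):
       # BAD = D * (1 + MAX_M_N_u - M_N_u), M_N_u = log2(next_power_2(T_N_u))
       m = {n: int(math.log2(next_power_2(t))) for n, t in t_values.items()}
       max_m = max(m.values())
       return {n: d * (1 + max_m - m[n]) for n in t_values}

   print(bad({"Spine 111": 110, "Spine 112": 220}))
   # -> {'Spine 111': 2, 'Spine 112': 1}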
BAD SHOULD be only computed for default routes. A node MAY compute and use BAD for any disaggregated prefixes or other RIFT routes. A node MAY use a different algorithm to weight northbound traffic based on bandwidth. If a different algorithm is used, its successful behavior MUST NOT depend on uniformity of algorithm or synchronization of BAD computations across the fabric. E.g. it is conceivable that leaves could use real time link loads gathered by analytics to change the amount of traffic assigned to each default route next hop.
Furthermore, a change in available bandwidth will only affect, at most, two levels down in the fabric, i.e. the blast radius of bandwidth adjustments is constrained no matter the fabric's height.
Due to its loop free nature, during South SPF, a node MAY account for maximum available bandwidth on nodes in lower levels and modify the amount of traffic offered to the next level's southbound nodes. It is worth considering that such computations may be more effective if standardized, but do not have to be. As long as a packet continues to flow southbound, it will take some viable, loop-free path to reach its destination.
A node MAY advertise in its LIEs, a locally significant, downstream assigned, interface specific label. One use of such a label is a hop-by-hop encapsulation allowing forwarding planes to be easily distinguished among multiple RIFT instances.
RIFT implementations SHOULD support special East-West adjacencies between leaf nodes. Leaf nodes supporting these procedures MUST:
This will allow the E-W leaf nodes to exchange traffic strictly for the prefixes advertised in each other's north prefix TIEs (since the southbound computation will find the reverse direction in the other node's TIE and install its north prefixes).
Multi-Topology (MT)[RFC5120] and Multi-Instance (MI)[RFC8202] concepts are used today in link-state routing protocols to support several domains on the same physical topology. RIFT supports this capability by carrying transport ports in the LIE protocol exchanges. Multiplexing of LIEs can be achieved by either choosing varying multicast addresses or ports on the same address.
BFD interactions in Section 4.3.5 are implementation dependent when multiple RIFT instances run on the same link.
RIFT does not require that nodes have reachable addresses in the fabric, though it is clearly desirable for operational purposes. Under normal operating conditions this can be easily achieved by injecting the node's loopback address into North and South Prefix TIEs or other implementation specific mechanisms.
Special considerations arise when a node loses all northbound adjacencies, but is not at the top of the fabric. These are outside the scope of this document and could be discussed in a separate document.
Based on the rules defined in Section 4.2.4, Section 4.2.3.8 and given presence of E-W links, RIFT can provide a one-hop protection for nodes that lost all their northbound links. This can also be applied to multi-plane designs where complex link set failures occur at the Top-of-Fabric when links are exclusively used for flooding topology information. Section 5.4 outlines this behavior.
An inherent property of any security and ZTP architecture is the resulting trade-off in regard to integrity verification of the information distributed through the fabric vs. provisioning and auto-configuration requirements. At a minimum the security of an established adjacency should be ensured. The stricter the security model the more provisioning must take over the role of ZTP.
RIFT supports the following security models to allow for flexible control by the operator.
In order to support the cases mentioned above, RIFT implementations support, through operator control, mechanisms that allow for:
Operators may choose to only configure the level of each node, but not explicitly configure which connections are allowed. In this case, RIFT will only allow adjacencies to establish between nodes that are in adjacent levels. Operators with the lowest security requirements may not use any configuration to specify which connections are allowed. Nodes in such fabrics could rely fully on ZTP and only establish adjacencies between nodes in adjacent levels. Figure 30 illustrates the inherent tradeoffs between the different security models.
Some level of link quality verification may be required prior to an adjacency being used for forwarding. For example, an implementation may require that a BFD session comes up before advertising the adjacency.
For the cases outlined above, RIFT has two approaches to enforce that a local port is connected to the correct port on the correct remote node. One approach is to piggy-back on RIFT's authentication mechanism. Assuming the provisioning model (e.g. the YANG model) is flexible enough, operators can choose to provision a unique authentication key for:
The other approach is to rely on the system-id, port-id and level fields in the LIE message to validate an adjacency against the expected cabling topology, and optionally introduce some new rules in the FSM to allow the adjacency to come up if the expectations are met.
        ^                     /\                       |
       /|\                   /  \                      |
        |                   /    \                     |
        |                  /  PAM \                    |
   Increasing             /        \       Increasing  |
   Integrity             +----------+      Flexibility |
   &                    /    NAM     \     &           |
   Increasing          +--------------+    Less        |
   Provisioning       /      FAM       \   Configuration
                     +------------------+              |
                    / Level Provisioning \             |
                   +----------------------+           \|/
                  /   Zero Configuration   \           v
                 +--------------------------+
Figure 30: Security Model
RIFT Security goals are to ensure:
Message confidentiality is a non-goal.
The model in the previous section allows a range of security key types that are analogous to the various security association models. PAM and NAM allow security associations at the port or node level using symmetric or asymmetric keys that are pre-installed. FAM argues for security associations to be applied only at a group level or to be refined once the topology has been established. RIFT does not specify how security keys are installed or updated, though it does specify how the key can be used to achieve security goals.
The protocol has provisions for "weak" nonces to prevent replay attacks and includes authentication mechanisms comparable to [RFC5709] and [RFC7987].
RIFT MUST be carried in a mandatory secure envelope illustrated in Figure 31. Any value in the packet following a security fingerprint MUST be used only after the appropriate fingerprint has been validated.
Local configuration MAY allow for the envelope's integrity checks to be skipped.
   0                   1                   2                   3
   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1

   UDP Header:
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        Source Port            |    RIFT destination port      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        UDP Length             |        UDP Checksum           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Outer Security Envelope Header:
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        RIFT MAGIC             |        Packet Number          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |   Reserved    |  RIFT Major   | Outer Key ID  |  Fingerprint  |
   |               |  Version      |               |  Length       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                                                               |
   ~        Security Fingerprint covers all following content      ~
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      Weak Nonce Local         |      Weak Nonce Remote        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        Remaining TIE Lifetime (all 1s in case of LIE)         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   TIE Origin Security Envelope Header:
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        TIE Origin Key ID                      |  Fingerprint  |
   |                                               |  Length       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                                                               |
   ~        Security Fingerprint covers all following content      ~
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Serialized RIFT Model Object
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                                                               |
   ~          Serialized RIFT Model Object                         ~
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 31: Security Envelope
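As a non-normative illustration, the outer envelope of Figure 31 can be parsed as below (a minimal sketch in Python, assuming network byte order and a fingerprint length expressed in 32-bit words; all names are illustrative):

   # Minimal parsing sketch of the outer security envelope in
   # Figure 31. Error handling and fingerprint validation are elided.
   import struct

   def parse_outer_envelope(payload: bytes):
       magic, packet_nr, _reserved, major, key_id, fp_len = \
           struct.unpack_from('!HHBBBB', payload, 0)
       offset = 8
       fingerprint = payload[offset:offset + fp_len * 4]
       offset += fp_len * 4
       nonce_local, nonce_remote, remaining_lifetime = \
           struct.unpack_from('!HHI', payload, offset)
       offset += 8
       # Everything from the nonces onward is covered by the
       # fingerprint; it MUST be validated before any following
       # value is used.
       return (magic, packet_nr, major, key_id, fingerprint,
               nonce_local, nonce_remote, remaining_lifetime,
               payload[offset:])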
Observe that due to the schema migration rules per Appendix B, the contained model can always be decoded if the major version matches and the envelope integrity has been validated. Consequently, the description of a TIE is available so that it can be flooded properly, including TIEs of unknown types.
The protocol uses two 16-bit nonces to salt generated signatures. The term "nonce" is used somewhat loosely since RIFT nonces are not changed on every packet, as is common in cryptography. For efficiency purposes they are changed at a frequency high enough to defeat practical replay attack attempts. Therefore, we call them "weak" nonces.
Any implementation including RIFT security MUST generate and wrap around local nonces properly. When a nonce increment leads to the `undefined_nonce` value, the value MUST be incremented again immediately. All implementations MUST reflect the neighbor's nonces. An implementation SHOULD increment a chosen nonce on every LIE FSM transition that ends up in a state different from the previous one, and MUST increment its nonce at least every 5 minutes (such considerations allow for efficient implementations without opening a significant security risk). When flooding TIEs, the implementation MUST use recent (i.e. within the allowed difference) nonces reflected in the LIE exchange. The schema specifies the maximum allowable nonce value difference on a packet compared to the reflected nonces in the LIEs. Any packet received with nonces deviating more than the allowed delta MUST be discarded without further computation of signatures to prevent computation load attacks.
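A minimal, non-normative sketch of the nonce handling described above follows, using the schema constants `undefined_nonce` (0) and `maximum_valid_nonce_delta` (5); the wraparound-aware delta computation is one possible interpretation:

   # Weak nonce handling sketch: increment with 16-bit wraparound
   # skipping undefined_nonce, plus a wraparound-aware delta check
   # against the reflected nonce.
   UNDEFINED_NONCE = 0
   MAX_VALID_NONCE_DELTA = 5
   NONCE_BITS = 16

   def increment_nonce(nonce: int) -> int:
       nonce = (nonce + 1) & ((1 << NONCE_BITS) - 1)
       if nonce == UNDEFINED_NONCE:   # MUST be incremented again
           nonce = (nonce + 1) & ((1 << NONCE_BITS) - 1)
       return nonce

   def nonce_delta_acceptable(received: int, expected: int) -> bool:
       """True if `received` deviates from `expected` by no more than
       the allowed delta, accounting for 16-bit wraparound."""
       diff = (received - expected) & ((1 << NONCE_BITS) - 1)
       if diff >= (1 << (NONCE_BITS - 1)):   # interpret as signed
           diff -= (1 << NONCE_BITS)
       return abs(diff) <= MAX_VALID_NONCE_DELTA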
In cases where a secure implementation does not receive signatures or receives undefined nonces from a neighbor (indicating that it does not support or verify signatures), it is a matter of local policy as to how those packets are treated. A secure implementation MAY refuse forming an adjacency with an implementation that is not advertising signatures or valid nonces, or it MAY continue signing local packets while accepting a neighbor's packets without further security validation.
As a necessary exception, an implementation MUST advertise the remote nonce value as `undefined_nonce` when the FSM is not in two-way or three-way state and accept an `undefined_nonce` for its local nonce value on packets in any other state than three-way.
As optional optimization, an implementation MAY send one LIE with previously negotiated neighbor's nonce to try to speed up a neighbor's transition from three-way to one-way and MUST revert to sending `undefined_nonce` after that.
Protecting the flooding lifetime may lead to an excessive number of security fingerprint computations. To avoid this, the application generating the fingerprints for advertised TIEs MAY round the lifetime value down to the next `rounddown_lifetime_interval`. Such an optimization may, however, not be feasible in the presence of security hashes computed over advancing weak nonces.
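As a non-normative sketch, the rounding optimization quantizes the remaining lifetime before it enters the fingerprint computation, using the schema constant `rounddown_lifetime_interval`:

   # Quantize the remaining TIE lifetime so fingerprints need to be
   # recomputed at most once per rounddown interval (sketch only).
   ROUNDDOWN_LIFETIME_INTERVAL = 60  # seconds, from the schema

   def rounded_lifetime(remaining_lifetime: int) -> int:
       return (remaining_lifetime // ROUNDDOWN_LIFETIME_INTERVAL) \
              * ROUNDDOWN_LIFETIME_INTERVAL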
As outlined in Section 7, either a private shared key or a public/private key pair is used to authenticate the adjacency. Both the key distribution and the key synchronization methods are out of scope for this document. Both nodes in the adjacency MUST share the same keys, key type, and algorithm for a given key ID. Mismatched keys will not inter-operate as their security envelopes will be unverifiable.
Key roll-over while the adjacency is active MAY be supported. The specific mechanism is well documented in [RFC6518].
There is no mechanism to convert a security envelope for the same key ID from one algorithm to another once the envelope is operational. The recommended procedure to change to a new algorithm is to take the adjacency down, make the necessary changes, and bring the adjacency back up. Obviously, an implementation MAY choose to stop verifying the security envelope for the duration of the algorithm change to keep the adjacency up, but since this introduces a security vulnerability window, such a roll-over is NOT RECOMMENDED.
[Figure: a two-PoD Clos fabric with a compass rose (N/E/S/W). Level 2: ToF 21 and ToF 22, each connected to all four spines. Level 1: Spine 111 and Spine 112 in PoD 1, Spine 121 and Spine 122 in PoD 2. Level 0: Leaf 111, Leaf 112, Leaf 121 and Leaf 122, connected to the spines of their PoD and advertising Prefix 111, Prefix 112, Prefix 121 and Prefix 122 respectively; a multi-homed prefix is advertised by both Leaf 112 and Leaf 121. The ToF nodes flood all South TIEs, including default routes (0/0), southbound to the spines, and the spines flood default routes (0/0) southbound to their leaves.]
Figure 32: Normal Case Topology
This section describes a RIFT deployment in the example topology shown in Figure 32, without any node or link failures. We disregard flooding reduction for simplicity's sake and compress the node names in some cases to fit them into the figures better.
First, bi-directional adjacencies are established: each ToF node forms an adjacency with all four spines, and each spine forms adjacencies with both leaves of its PoD.
Leaf 111 and Leaf 112 originate North TIEs for Prefix 111 and Prefix 112 (respectively) to both Spine 111 and Spine 112 (Leaf 112 also originates a North TIE for the multi-homed prefix). Spine 111 and Spine 112 will then originate their own North TIEs, as well as flood the North TIEs received from Leaf 111 and Leaf 112 to both ToF 21 and ToF 22.
Similarly, Leaf 121 and Leaf 122 originate North TIEs for Prefix 121 and Prefix 122 (respectively) to Spine 121 and Spine 122 (Leaf 121 also originates a North TIE for the multi-homed prefix). Spine 121 and Spine 122 will then originate their own North TIEs, as well as flood the North TIEs received from Leaf 121 and Leaf 122 to both ToF 21 and ToF 22.
Spines hold only the level 0 North TIEs of their own PoD, and leaves hold only their own North TIEs. At this point, both ToF 21 and ToF 22 (as well as any northbound connected controllers) have the complete network topology.
ToF 21 and ToF 22 would then originate and flood South TIEs containing any established adjacencies and a default IP route to all spines. Spine 111, Spine 112, Spine 121, and Spine 122 will reflect all Node South TIEs received from ToF 21 to ToF 22, and all Node South TIEs from ToF 22 to ToF 21. South TIEs will not be re-propagated southbound.
South TIEs containing a default IP route are then originated by both Spine 111 and Spine 112 toward Leaf 111 and Leaf 112. Similarly, South TIEs containing a default IP route are originated by Spine 121 and Spine 122 toward Leaf 121 and Leaf 122.
At this point, IP connectivity across the maximum number of viable paths has been established for all leaves, with the routing information constrained to the minimum amount that allows for normal operation and redundancy.
[Figure: PoD 1 of the topology in Figure 32: Spine 111 and Spine 112 above Leaf 111 and Leaf 112 (advertising Prefix 111 and Prefix 112); the link between Spine 112 and Leaf 112 is marked as failed.]
Figure 33: Single Leaf Link Failure
In the event of a link failure between Spine 112 and Leaf 112, both nodes will originate new Node TIEs that contain their connected adjacencies, except for the one that just failed. Leaf 112 will send a Node North TIE to Spine 111. Spine 112 will send a Node North TIE to ToF 21 and ToF 22 as well as a new Node South TIE to Leaf 111 that will be reflected to Spine 111. Necessary SPF recomputation will occur, resulting in Spine 112 no longer being in the forwarding path for Prefix 112.
Spine 111 will also disaggregate Prefix 112 by sending a new Prefix South TIE to Leaf 111 and Leaf 112. Though we cover disaggregation in more detail in the following section, it is worth mentioning in this example as it further illustrates RIFT's blackhole mitigation mechanism. Consider that Leaf 111 has yet to receive the more specific (disaggregated) route from Spine 111. In such a scenario, traffic from Leaf 111 toward Prefix 112 may still follow the default route to Spine 112, causing it to traverse ToF 21 or ToF 22 and return back down via Spine 111. While this behavior is suboptimal, it is transient in nature and preferable to black-holing traffic.
[Figure: the topology of Figure 32 with a double link failure severing ToF 21 from PoD 2 (its links toward Spine 121 and Spine 122 are marked as failed). ToF 22 advertises the more specific route 1.1/16 (Prefix 121) in addition to the default route 0/0 toward the spines, while ToF 21 advertises only the default route 0/0. Leaves are at Level 0.]
Figure 34: Fabric Partition
Figure 34 shows one of the more catastrophic scenarios, where ToF 21 is completely severed from access to Prefix 121 due to a double link failure. If only default routes existed, this would result in 50% of the traffic from Leaf 111 and Leaf 112 toward Prefix 121 being black-holed.
The mechanism to resolve this scenario hinges on ToF 21's South TIEs being reflected from Spine 111 and Spine 112 to ToF 22. Once ToF 22 sees that Prefix 121 cannot be reached from ToF 21, it will begin to disaggregate Prefix 121 by advertising a more specific route (1.1/16), along with the default IP prefix route, to all spines (ToF 21 still sends only a default route). The result is Spine 111 and Spine 112 using the more specific route to Prefix 121 via ToF 22. All other prefixes continue to use the default IP prefix route toward both ToF 21 and ToF 22.
The more specific route for Prefix 121 being advertised by ToF 22 does not need to be propagated further south to the leaves, as they do not benefit from this information. Spine 111 and Spine 112 are only required to reflect the new Node South TIEs received from ToF 22 to ToF 21. In short, only the relevant nodes receive the relevant updates, thereby restricting the failure to the partitioned level rather than burdening the whole fabric with the flooding and recomputation of the new topology information.
To finish our example, the following table shows sets computed by ToF 22 using notation introduced in Section 4.2.5:
With that, and with |H (for r=Prefix 121) and |H (for r=Prefix 122) being disjoint from |A (for ToF 21), ToF 22 will originate a South TIE with Prefix 121 and Prefix 122, which will be flooded to all spines.
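A non-normative sketch of this disjointness test follows; the sets are stand-ins for the precise |H and |A definitions in Section 4.2.5, and the values merely mirror this example:

   # Sketch: a ToF node originates disaggregated South TIEs for
   # prefix r when |H(r), the set of nodes through which r remains
   # reachable, is disjoint from |A(n), the set reachable through the
   # other ToF node n. Set constructions are placeholders for the
   # exact definitions of Section 4.2.5.
   def needs_disaggregation(h_r: set, a_other_tof: set) -> bool:
       return h_r.isdisjoint(a_other_tof)

   # Mirroring the example: reachability of Prefix 121 as seen by
   # ToF 22 versus what remains reachable via the partitioned ToF 21.
   h_prefix_121 = {"Spine 121", "Spine 122"}   # illustrative values
   a_tof_21     = {"Spine 111", "Spine 112"}   # ToF 21 lost PoD 2
   assert needs_disaggregation(h_prefix_121, a_tof_21)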
[Figure: Level 1 nodes A01, A02 and A03 connected east-west among themselves, each multi-homed southbound to Level 0 nodes L01, L02 and L03. A01's sole northbound adjacency N1 is marked as failed, while A02 and A03 retain their northbound adjacencies N2 and N3.]
Figure 35: North Partitioned Router
Figure 35 shows a part of a fabric where level 1 is horizontally connected and A01 has lost its only northbound adjacency. Based on the N-SPF rules in Section 4.2.4.1, A01 will compute northbound reachability by using the link from A01 to A02. A02, however, will NOT use this link during N-SPF. The result is A01 utilizing the horizontal link for default route advertisement and unidirectional routing.
Furthermore, if A02 also loses its only northbound adjacency (N2), the situation changes: A01 no longer has northbound reachability, while it still sees A03's northbound adjacencies in the Node South TIEs reflected by the nodes south of it. As a result, A01 will no longer advertise its default route, in accordance with Section 4.2.3.8.
RIFT can be, and is intended to be, stretched to the lowest level in the IP fabric to integrate ToRs or even servers. Since those entities would run as leaves only, it is worth observing that a leaf-only version is significantly simpler to implement and requires far fewer resources.
Spine nodes will never act as Top of Fabric, and are therefore not required to run a full RIFT implementation. Specifically, spines do not need to perform negative disaggregation computation other than respecting northbound disaggregation advertised from the north.
[Figure: a three-level fabric with S0 and S1 on top, A0 and A1 in the middle, and L0 and L1 at the bottom, fully connected between adjacent levels; an additional shortcut link connects S0 directly to L0, bypassing the middle level.]
Figure 36: Level Shortcut
RIFT is not strictly limited to Clos topologies. The protocol only requires a sense of "compass rose directionality", achieved either through configuration or through derivation of levels. So, conceptually, leaf-2-leaf links and even shortcuts between levels could be included. Figure 36 depicts an example of a shortcut between levels. In this example, sub-optimal routing will occur when traffic is sent from L0 to L1 via S0's default route and back down through A0 or A1. In order to ensure that only default routes from A0 or A1 are used, all leaves would be required to install each other's routes.
While various technical and operational challenges may require the use of such modifications, discussion of those topics is outside the scope of this document.
An implementation MAY choose to originate more specific prefixes (P') southbound instead of only the default route (as described in Section 4.2.3.8). In such a scenario, all addresses carried within the RIFT domain MUST be contained within P'.
One can consider attack vectors where a router reboots many times while changing its system ID, polluting the network with many stale TIEs, or where TIEs are sent with very long lifetimes and not cleaned up when the routes vanish. Those attack vectors are not unique to RIFT. Given the large memory footprints available today, such attacks should be relatively benign. Otherwise, a node SHOULD implement a strategy of discarding the contents of all TIEs that were not present in the SPF tree over a certain, configurable period of time. Since the protocol, like all modern link-state protocols, is self-stabilizing and will advertise the presence of such TIEs to its neighbors, they can be re-requested if a computation finds that an adjacency has formed towards the system ID of the discarded TIEs.
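Such a discard strategy can be as simple as the following non-normative sketch, which assumes a hypothetical TIE database and a periodically recomputed set of system IDs present in the last SPF tree:

   # Sketch of a stale-TIE discard strategy: drop the content of TIEs
   # whose originator has been absent from the SPF tree for longer
   # than a configurable grace period. The database shape is
   # hypothetical.
   import time
   from dataclasses import dataclass, field

   GRACE_PERIOD_SECS = 3600.0  # configurable

   @dataclass
   class TIEEntry:
       originator: int
       element: object = None
       last_seen_in_spf: float = field(default_factory=time.monotonic)

   def purge_stale_ties(tie_db: list, spf_originators: set) -> None:
       now = time.monotonic()
       for entry in tie_db:
           if entry.originator in spf_originators:
               entry.last_seen_in_spf = now
           elif now - entry.last_seen_in_spf > GRACE_PERIOD_SECS:
               entry.element = None  # keep header, discard content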
Section 4.2.7 presents many attack vectors in untrusted environments, ranging from nodes that oscillate their level offers to nodes offering a three-way adjacency with the highest possible level value and a very long holdtime, trying to put themselves "on top of the lattice" and thereby gain access to the whole southbound topology. Session authentication mechanisms are necessary in environments where this is possible, and RIFT provides the security envelope to ensure this if so desired.
Traditional IGP protocols are vulnerable to lifetime modification and replay attacks that can be somewhat mitigated by using techniques like [RFC7987]. RIFT removes this attack vector by protecting the lifetime behind a signature computed over it and an additional nonce combination, which makes even the replay attack window very small and, for practical purposes, irrelevant, since the lifetime cannot be artificially shortened by the attacker.
The optional packet number is carried in the security envelope without any encryption protection and is hence vulnerable to replay and modification attacks. Contrary to the nonces, this number must change on every packet and would present a very high cryptographic load if signed. The attack vector the packet number presents is, however, relatively benign. Changing the packet number through a man-in-the-middle attack will only affect operational validation tools and possibly some performance optimizations on flooding. It is expected that an implementation detecting too many "fake losses" or "misorderings" due to attacks on the packet number would simply suppress its further processing.
A node observing a conversation on the wire can try to inject LIE packets using the outer key ID. Since it cannot generate valid hashes if it changes the integrity of the message, the only possible attack is a DoS due to excessive LIE validation.
A node can try to replay previously recorded LIEs with changed state, but the attack is hard to mount since the nonce combination must match the ongoing exchange; it is then limited to a single flap only, since both nodes will advance their nonces when the adjacency state changes. Even in the most unlikely case, the attack length is limited due to both sides periodically increasing their nonces.
A compromised node can attempt to generate "fake TIEs" using other nodes' TIE origin key identifiers. Albeit the ultimate validation of the origin fingerprint will fail in such scenarios and the TIEs will not progress further than the immediately peering nodes, the resulting denial of service attack seems unavoidable, since the TIE origin key ID is only protected by the node assumed here to be compromised.
It can be reasonably expected that, with the proliferation of RotH servers rather than dedicated networking devices, servers will represent a significant fraction of RIFT devices. Given their normally far wider software envelope and the access granted to them, such servers are also far more likely to be compromised and present an attack vector on the protocol. Hijacking of prefixes to attract traffic is a trust problem and cannot be addressed within the protocol if the trust model is breached, i.e. the server presents valid credentials to form an adjacency and issue TIEs. However, in a more devious way, servers can present DoS (or even DDoS) vectors by issuing too many LIE packets, flooding large amounts of North TIEs and attempting similar resource overrun attacks. A prudent implementation forming adjacencies to leaves should implement appropriate threshold mechanisms and raise warnings when e.g. a leaf is advertising an excessive number of TIEs.
This specification requests multicast address assignments and standard port numbers. Additionally, registries for the schema are requested, with suggested values provided that reflect the numbers allocated in the given schema.
This document requests the allocation, in the 'IPv4 Multicast Address Space' registry, of the suggested value 224.0.0.120 as 'ALL_V4_RIFT_ROUTERS' and, in the 'IPv6 Multicast Address Space' registry, of the suggested value FF02::A1F7 as 'ALL_V6_RIFT_ROUTERS'.
This document requests the allocation in the 'Service Name and Transport Protocol Port Number Registry' of the suggested value 914 on UDP for 'RIFT_LIES_PORT' and the suggested value 915 on UDP for 'RIFT_TIES_PORT'.
This section requests registries that help govern the schema via the usual IANA registry procedures. A top-level 'RIFT' registry should hold the registries requested in the following sections with their pre-defined values. IANA is requested to store the schema version introducing the allocated value as well as, optionally, its description when present. This will allow assigning different values to an entry depending on the schema version. Alternately, IANA is requested to consider a root RIFT/3 registry to store RIFT schema major version 3 values and may be requested in the future to create a RIFT/4 registry under that. In any case, IANA is requested to store the schema version in the entries, since that will allow distinguishing between minor versions within the same major schema version. All values not suggested are to be considered `Unassigned`. The range of every registry is a 16-bit integer. Allocation of new values is always performed via the `Expert Review` action.
Address family type.
Name | Value | Schema Version | Description |
---|---|---|---|
Illegal | 0 | 4.0 | |
AddressFamilyMinValue | 1 | 4.0 | |
IPv4 | 2 | 4.0 | |
IPv6 | 3 | 4.0 | |
AddressFamilyMaxValue | 4 | 4.0 |
Flags indicating node configuration in case of ZTP.
Name | Value | Schema Version | Description |
---|---|---|---|
leaf_only | 0 | 4.0 | |
leaf_only_and_leaf_2_leaf_procedures | 1 | 4.0 | |
top_of_fabric | 2 | 4.0 |
Timestamp per IEEE 802.1AS, all values MUST be interpreted in implementation as unsigned.
Name | Value | Schema Version | Description |
---|---|---|---|
AS_sec | 1 | 4.0 | |
AS_nsec | 2 | 4.0 |
IP address type.
Name | Value | Schema Version | Description |
---|---|---|---|
ipv4address | 1 | 4.0 | Content is IPv4 |
ipv6address | 2 | 4.0 | Content is IPv6 |
Prefix advertisement.
@note: for interface addresses, the protocol can propagate the address part beyond the subnet mask; on reachability computation that part has to be normalized. The non-significant bits can be used for operational purposes.
Name | Value | Schema Version | Description |
---|---|---|---|
ipv4prefix | 1 | 4.0 | |
ipv6prefix | 2 | 4.0 |
IPv4 prefix type.
Name | Value | Schema Version | Description |
---|---|---|---|
address | 1 | 4.0 | |
prefixlen | 2 | 4.0 |
IPv6 prefix type.
Name | Value | Schema Version | Description |
---|---|---|---|
address | 1 | 4.0 | |
prefixlen | 2 | 4.0 |
Sequence of a prefix in case of move.
Name | Value | Schema Version | Description |
---|---|---|---|
timestamp | 1 | 4.0 | |
transactionid | 2 | 4.0 | Transaction ID set by the client, e.g. in 6LoWPAN. |
RIFT route types.
@note: Route types MUST be ordered by their preference. PGP prefixes are most preferred, attracting traffic north (towards the spine) and then south. Normal prefixes attract traffic south (towards the leaves), i.e. a prefix in a NORTH PREFIX TIE is preferred over one in a SOUTH PREFIX TIE.
@note: The only purpose of those values is to introduce an ordering, whereas an implementation can internally choose any other values as long as the ordering is preserved.
Name | Value | Schema Version | Description |
---|---|---|---|
Illegal | 0 | 4.0 | |
RouteTypeMinValue | 1 | 4.0 | |
Discard | 2 | 4.0 | |
LocalPrefix | 3 | 4.0 | |
SouthPGPPrefix | 4 | 4.0 | |
NorthPGPPrefix | 5 | 4.0 | |
NorthPrefix | 6 | 4.0 | |
NorthExternalPrefix | 7 | 4.0 | |
SouthPrefix | 8 | 4.0 | |
SouthExternalPrefix | 9 | 4.0 | |
NegativeSouthPrefix | 10 | 4.0 | |
RouteTypeMaxValue | 11 | 4.0 |
Type of TIE.
This enum indicates what TIE type the TIE is carrying. In case the value is not known to the receiver, the TIE MUST be re-flooded. This allows for future extensions of the protocol within the same major schema, with types opaque to some nodes, UNLESS the flooding scope differs from the prefix TIE scope, in which case a major version revision MUST be performed.
Name | Value | Schema Version | Description |
---|---|---|---|
Illegal | 0 | 4.0 | |
TIETypeMinValue | 1 | 4.0 | |
NodeTIEType | 2 | 4.0 | |
PrefixTIEType | 3 | 4.0 | |
PositiveDisaggregationPrefixTIEType | 4 | 4.0 | |
NegativeDisaggregationPrefixTIEType | 5 | 4.0 | |
PGPrefixTIEType | 6 | 4.0 | |
KeyValueTIEType | 7 | 4.0 | |
ExternalPrefixTIEType | 8 | 4.0 | |
PositiveExternalDisaggregationPrefixTIEType | 9 | 4.0 | |
TIETypeMaxValue | 10 | 4.0 |
Direction of TIEs.
Name | Value | Schema Version | Description |
---|---|---|---|
Illegal | 0 | 4.0 | |
South | 1 | 4.0 | |
North | 2 | 4.0 | |
DirectionMaxValue | 3 | 4.0 |
Prefix community.
Name | Value | Schema Version | Description |
---|---|---|---|
top | 1 | 4.0 | Higher order bits |
bottom | 2 | 4.0 | Lower order bits |
Generic key value pairs.
Name | Value | Schema Version | Description |
---|---|---|---|
keyvalues | 1 | 4.0 |
RIFT LIE Packet.
@note: this node's level is already included on the packet header
Name | Value | Schema Version | Description |
---|---|---|---|
name | 1 | 4.0 | Node or adjacency name. |
local_id | 2 | 4.0 | Local link ID. |
flood_port | 3 | 4.0 | UDP port to which we can receive flooded TIEs. |
link_mtu_size | 4 | 4.0 | Layer 3 MTU, used to discover MTU mismatch. |
link_bandwidth | 5 | 4.0 | Local link bandwidth on the interface. |
neighbor | 6 | 4.0 | Reflects the neighbor once received to provide 3-way connectivity. |
pod | 7 | 4.0 | Node's PoD. |
node_capabilities | 10 | 4.0 | Node capabilities shown in LIE. The capabilities MUST match the capabilities shown in the Node TIEs, otherwise the behavior is unspecified. A node detecting the mismatch SHOULD generate according error. |
link_capabilities | 11 | 4.0 | Capabilities of this link. |
holdtime | 12 | 4.0 | Required holdtime of the adjacency, i.e. how much time MUST expire without LIE for the adjacency to drop. |
label | 13 | 4.0 | Unsolicited, downstream assigned locally significant label value for the adjacency. |
not_a_ztp_offer | 21 | 4.0 | Indicates that the level on the LIE MUST NOT be used to derive a ZTP level by the receiving node. |
you_are_flood_repeater | 22 | 4.0 | Indicates to northbound neighbor that it should be reflooding this node's North TIEs to achieve flood reduction and balancing for northbound flooding. To be ignored if received from a northbound adjacency. |
you_are_sending_too_quickly | 23 | 4.0 | Can be optionally set to indicate to neighbor that packet losses are seen on reception based on packet numbers or the rate is too high. The receiver SHOULD temporarily slow down flooding rates. |
instance_name | 24 | 4.0 | Instance name in case multiple RIFT instances are running on the same interface. |
Link capabilities.
Name | Value | Schema Version | Description |
---|---|---|---|
bfd | 1 | 4.0 | Indicates that the link is supporting BFD. |
v4_forwarding_capable | 2 | 4.0 | Indicates whether the interface will support v4 forwarding. |
LinkID pair describes one of parallel links between two nodes.
Name | Value | Schema Version | Description |
---|---|---|---|
local_id | 1 | 4.0 | Node-wide unique value for the local link. |
remote_id | 2 | 4.0 | Received remote link ID for this link. |
platform_interface_index | 10 | 4.0 | Describes the local interface index of the link. |
platform_interface_name | 11 | 4.0 | Describes the local interface name. |
trusted_outer_security_key | 12 | 4.0 | Indication whether the link is secured, i.e. protected by outer key, absence of this element means no indication, undefined outer key means not secured. |
bfd_up | 13 | 4.0 | Indication whether the link is protected by established BFD session. |
Neighbor structure.
Name | Value | Schema Version | Description |
---|---|---|---|
originator | 1 | 4.0 | System ID of the originator. |
remote_id | 2 | 4.0 | ID of remote side of the link. |
Capabilities the node supports.
@note: The schema may add to this field future capabilities to indicate whether it will support interpretation of future schema extensions on the same major revision. Such fields MUST be optional and have an implicit or explicit false default value. If a future capability changes route selection or generates blackholes if some nodes are not supporting it then a major version increment is unavoidable.
Name | Value | Schema Version | Description |
---|---|---|---|
protocol_minor_version | 1 | 4.0 | Must advertise supported minor version dialect that way. |
flood_reduction | 2 | 4.0 | Can this node participate in flood reduction. |
hierarchy_indications | 3 | 4.0 | Does this node restrict itself to be top-of-fabric or leaf only (in ZTP) and does it support leaf-2-leaf procedures. |
Indication flags of the node.
Name | Value | Schema Version | Description |
---|---|---|---|
overload | 1 | 4.0 | Indicates that node is in overload, do not transit traffic through it. |
neighbor of a node
Name | Value | Schema Version | Description |
---|---|---|---|
level | 1 | 4.0 | level of neighbor |
cost | 3 | 4.0 | Cost to neighbor. |
link_ids | 4 | 4.0 | can carry description of multiple parallel links in a TIE |
bandwidth | 5 | 4.0 | Total bandwidth to the neighbor; this will normally be the sum of the bandwidths of all the parallel links. |
Description of a node.
It may occur multiple times in different TIEs, but if either capabilities values do not match, or flags values do not match, or neighbors repeat with different values, the behavior is undefined and a warning SHOULD be generated. Neighbors can be distributed across multiple TIEs, however, if the sets are disjoint. Miscablings SHOULD be repeated in every node TIE, otherwise the behavior is undefined.
@note: Observe that absence of fields implies defined defaults.
Name | Value | Schema Version | Description |
---|---|---|---|
level | 1 | 4.0 | Level of the node. |
neighbors | 2 | 4.0 | Node's neighbors. If neighbor systemID repeats in other node TIEs of same node the behavior is undefined. |
capabilities | 3 | 4.0 | Capabilities of the node. |
flags | 4 | 4.0 | Flags of the node. |
name | 5 | 4.0 | Optional node name for easier operations. |
pod | 6 | 4.0 | PoD to which the node belongs. |
miscabled_links | 10 | 4.0 | If any local links are miscabled, the indication is flooded. |
Content of a RIFT packet.
Name | Value | Schema Version | Description |
---|---|---|---|
lie | 1 | 4.0 | |
tide | 2 | 4.0 | |
tire | 3 | 4.0 | |
tie | 4 | 4.0 |
Common RIFT packet header.
Name | Value | Schema Version | Description |
---|---|---|---|
major_version | 1 | 4.0 | Major version of protocol. |
minor_version | 2 | 4.0 | Minor version of protocol. |
sender | 3 | 4.0 | Node sending the packet, in case of LIE/TIRE/TIDE also the originator of it. |
level | 4 | 4.0 | Level of the node sending the packet, required on everything except LIEs. Lack of presence on LIEs indicates UNDEFINED_LEVEL and is used in ZTP procedures. |
Attributes of a prefix.
Name | Value | Schema Version | Description |
---|---|---|---|
metric | 2 | 4.0 | Distance of the prefix. |
tags | 3 | 4.0 | Generic unordered set of route tags, can be redistributed to other protocols or use within the context of real time analytics. |
monotonic_clock | 4 | 4.0 | Monotonic clock for mobile addresses. |
loopback | 6 | 4.0 | Indicates if the interface is a node loopback. |
directly_attached | 7 | 4.0 | Indicates that the prefix is directly attached, i.e. should be routed to even if the node is in overload. |
from_link | 10 | 4.0 | In case of locally originated prefixes, i.e. interface addresses this can describe which link the address belongs to. |
TIE carrying prefixes
Name | Value | Schema Version | Description |
---|---|---|---|
prefixes | 1 | 4.0 | Prefixes with the associated attributes. If the same prefix repeats in multiple TIEs of same node behavior is unspecified. |
RIFT packet structure.
Name | Value | Schema Version | Description |
---|---|---|---|
header | 1 | 4.0 | |
content | 2 | 4.0 |
TIDE with sorted TIE headers, if headers are unsorted, behavior is undefined.
Name | Value | Schema Version | Description |
---|---|---|---|
start_range | 1 | 4.0 | First TIE header in the tide packet. |
end_range | 2 | 4.0 | Last TIE header in the tide packet. |
headers | 3 | 4.0 | _Sorted_ list of headers. |
Single element in a TIE.
Schema enum `common.TIETypeType` in the TIEID indicates which elements MUST be present in the TIEElement. In case of a mismatch, the unexpected elements MUST be ignored. In case an expected element is missing from the TIE, an error MUST be reported and the TIE MUST be ignored.
This type can be extended with new optional elements for new `common.TIETypeType` values without breaking the major version, but if it is necessary to understand whether all nodes support the new type, a node capability must be added as well.
Name | Value | Schema Version | Description |
---|---|---|---|
node | 1 | 4.0 | Used in case of enum common.TIETypeType.NodeTIEType. |
prefixes | 2 | 4.0 | Used in case of enum common.TIETypeType.PrefixTIEType. |
positive_disaggregation_prefixes | 3 | 4.0 | Positive prefixes (always southbound). It MUST NOT be advertised within a North TIE and MUST be ignored otherwise. |
negative_disaggregation_prefixes | 5 | 4.0 | Transitive, negative prefixes (always southbound) which MUST be aggregated and propagated according to the specification southwards towards lower levels to heal pathological upper level partitioning, otherwise blackholes may occur in multiplane fabrics. It MUST NOT be advertised within a North TIE. |
external_prefixes | 6 | 4.0 | Externally reimported prefixes. |
positive_external_disaggregation_prefixes | 7 | 4.0 | Positive external disaggregated prefixes (always southbound). It MUST NOT be advertised within a North TIE and MUST be ignored otherwise. |
keyvalues | 9 | 4.0 | Key-Value store elements. |
Header of a TIE.
@note: TIEID space is a total order achieved by comparing the elements in sequence defined and comparing each value as an unsigned integer of according length.
@note: After sequence number the lifetime received on the envelope must be used for comparison before further fields.
@note: `origination_time` and `origination_lifetime` are disregarded for comparison purposes and carried purely for debugging/security purposes if present.
Name | Value | Schema Version | Description |
---|---|---|---|
tieid | 2 | 4.0 | ID of the tie. |
seq_nr | 3 | 4.0 | Sequence number of the tie. |
origination_time | 10 | 4.0 | Absolute timestamp when the TIE was generated. This can be used on fabrics with synchronized clock to prevent lifetime modification attacks. |
origination_lifetime | 12 | 4.0 | Original lifetime when the TIE was generated. This can be used on fabrics with synchronized clock to prevent lifetime modification attacks. |
Header of a TIE as described in TIRE/TIDE.
Name | Value | Schema Version | Description |
---|---|---|---|
header | 1 | 4.0 | |
remaining_lifetime | 2 | 4.0 | Remaining lifetime that expires down to 0 just like in ISIS. TIEs with lifetimes differing by less than `lifetime_diff2ignore` MUST be considered EQUAL. |
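A non-normative sketch of the resulting comparison order follows; seq_nr is compared first, and remaining lifetimes differing by less than `lifetime_diff2ignore` are treated as equal. Treating the copy with the larger remaining lifetime as newer is an illustrative assumption here:

   # Sketch: deciding which received copy of a TIE is newer. seq_nr
   # is compared first; remaining lifetimes differing by less than
   # lifetime_diff2ignore MUST be considered EQUAL.
   LIFETIME_DIFF2IGNORE = 400  # from the schema

   def compare_tie_copies(a_seq: int, a_lifetime: int,
                          b_seq: int, b_lifetime: int) -> int:
       """Return >0 if copy A is newer, <0 if copy B is newer,
       0 if the copies are to be considered equal."""
       if a_seq != b_seq:
           return 1 if a_seq > b_seq else -1
       if abs(a_lifetime - b_lifetime) < LIFETIME_DIFF2IGNORE:
           return 0
       return 1 if a_lifetime > b_lifetime else -1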
ID of a TIE.
@note: TIEID space is a total order achieved by comparing the elements in sequence defined and comparing each value as an unsigned integer of according length.
Name | Value | Schema Version | Description |
---|---|---|---|
direction | 1 | 4.0 | direction of TIE |
originator | 2 | 4.0 | indicates originator of the TIE |
tietype | 3 | 4.0 | type of the tie |
tie_nr | 4 | 4.0 | number of the tie |
TIE packet
Name | Value | Schema Version | Description |
---|---|---|---|
header | 1 | 4.0 | |
element | 2 | 4.0 |
TIRE packet
Name | Value | Schema Version | Description |
---|---|---|---|
headers | 1 | 4.0 |
A new routing protocol, in all its complexity, is not the product of a single parent but of a village, as the author list already shows. However, many more people provided input and fine-combed the specification based on their experience in the design, implementation or application of protocols in IP fabrics. This section makes an inevitably inadequate attempt at recording their contributions.
Many thanks to Naiming Shen for some of the early discussions around the topic of using IGPs for routing in topologies related to Clos. Russ White is to be especially acknowledged for the key conversation on epistemology that allowed tying current asynchronous distributed systems theory results to a modern protocol design presented in this scope. Adrian Farrel, Joel Halpern, Jeffrey Zhang, Krzysztof Szarkowicz, Nagendra Kumar, Melchior Aelmans, Kaushal Tank, Will Jones, Moin Ahmed and Jordan Head provided thoughtful comments that improved the readability of the document and found a good number of corners where the light failed to shine. Kris Price was the first to mention single router, single arm default considerations. Jeff Tantsura helped out with some initial thoughts on BFD interactions, while Jeff Haas corrected several misconceptions about BFD's finer points. Artur Makutunowicz pointed out many possible improvements and acted as a sounding board with regard to the modern protocol implementation techniques RIFT is exploring. Barak Gafni was the first to clearly formalize, on a (clean) napkin in Singapore, the problem of partitioned spine and fallen leaves, which led to the very important part of the specification centered around multiple Top-of-Fabric planes and negative disaggregation. Igor Gashinsky and others shared many thoughts on problems encountered in the design and operation of large-scale data center fabrics. Xu Benchong found a delicate error in the flooding procedures and a schema datatype size mismatch.
Last but not least, Alvaro Retana guided the undertaking by asking many necessary procedural and technical questions which not only improved the content but also laid out the track towards publication.
The only reasonable reference to a sequence number solution cleaner than [RFC1982] is given in [Wikipedia]. It basically converts the problem into two's complement arithmetic. Assuming straight two's complement subtraction on the bit-width of the sequence number, the according >: and =: relations are defined as:
    U_1, U_2 are 12-bit aligned unsigned version numbers

    D_f is ( U_1 - U_2 ) interpreted as two's complement signed 12-bits
    D_b is ( U_2 - U_1 ) interpreted as two's complement signed 12-bits

    U_1 >: U_2 IIF D_f > 0 AND D_b < 0
    U_1 =: U_2 IIF D_f = 0
The >: relationship is anti-symmetric but not transitive. Observe that this leaves >: undefined for numbers at the maximum two's complement distance, e.g. ( 0 and 0x800 ) in our 12-bit case, since D_f and D_b are then both -0x800.
A simple example of the relationship in case of 3-bit arithmetic follows, as tables indicating the D_f/D_b values and then the relationship of U_1 to U_2:
    U2 / U1    0    1    2    3    4    5    6    7
       0      +/+  +/-  +/-  +/-  -/-  -/+  -/+  -/+
       1      -/+  +/+  +/-  +/-  +/-  -/-  -/+  -/+
       2      -/+  -/+  +/+  +/-  +/-  +/-  -/-  -/+
       3      -/+  -/+  -/+  +/+  +/-  +/-  +/-  -/-
       4      -/-  -/+  -/+  -/+  +/+  +/-  +/-  +/-
       5      +/-  -/-  -/+  -/+  -/+  +/+  +/-  +/-
       6      +/-  +/-  -/-  -/+  -/+  -/+  +/+  +/-
       7      +/-  +/-  +/-  -/-  -/+  -/+  -/+  +/+
    U2 / U1    0    1    2    3    4    5    6    7
       0       =    >    >    >    ?    <    <    <
       1       <    =    >    >    >    ?    <    <
       2       <    <    =    >    >    >    ?    <
       3       <    <    <    =    >    >    >    ?
       4       ?    <    <    <    =    >    >    >
       5       >    ?    <    <    <    =    >    >
       6       >    >    ?    <    <    <    =    >
       7       >    >    >    ?    <    <    <    =
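The following non-normative sketch implements the comparison at an arbitrary bit width; running it with a width of 3 reproduces the table above ('?' marks the undefined maximum-distance case):

   # Two's complement sequence number comparison per the definitions
   # above. Returns '>', '<', '=' or '?' (undefined maximum distance).
   def seq_compare(u1: int, u2: int, bits: int = 12) -> str:
       mask = (1 << bits) - 1
       half = 1 << (bits - 1)

       def signed(v: int) -> int:
           v &= mask
           return v - (1 << bits) if v >= half else v

       d_f = signed(u1 - u2)   # forward difference
       d_b = signed(u2 - u1)   # backward difference
       if d_f == 0:
           return '='
       if d_f > 0 and d_b < 0:
           return '>'
       if d_f < 0 and d_b > 0:
           return '<'
       return '?'

   # Reproduce the 3-bit relationship table, one row per U2:
   for u2 in range(8):
       print(u2, [seq_compare(u1, u2, bits=3) for u1 in range(8)])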
This section introduces the schema for information elements. The IDL is Thrift.
On schema changes that are not backwards compatible, such as changing the type, number or requiredness of an existing field or removing a field, the major version of the schema MUST increase. All other changes MUST increase the minor version within the same major version.
The above set of rules guarantees that every decoder can process serialized content generated by a higher minor version of the schema, and with that the protocol can progress without a 'fork-lift'. Additionally, based on the propagated minor version in encoded content and added optional node capabilities, new TIE types or even de-facto mandatory fields can be introduced without progressing the major version, albeit only nodes supporting such new extensions would decode them. Given the model is encoded at the source and never re-encoded, flooding through nodes that do not understand new extensions will preserve the according fields.
Content serialized using a major version X is NOT expected to be decodable by any implementation using a decoder for a model with a major version lower than X.
Observe especially that introducing an optional field does not cause a major version increase even if the fields inside the structure are optional with defaults.
All signed integers, as forced by Thrift support, MUST be cast for internal purposes to equivalent unsigned values without discarding the signedness bit. An implementation SHOULD try to avoid using the signedness bit when generating values.
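As a minimal, non-normative sketch of this rule, a Thrift signed integer received off the wire can be reinterpreted as the unsigned value it carries without losing the top bit:

   # Reinterpret a Thrift signed integer as the unsigned value it
   # carries, preserving the sign bit as the most significant bit.
   def as_unsigned(value: int, bits: int) -> int:
       return value & ((1 << bits) - 1)

   # e.g. a SystemIDType (i64) of -1 carries the unsigned 2**64 - 1:
   assert as_unsigned(-1, 64) == 2**64 - 1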
The schema is normative.
/** Thrift file with common definitions for RIFT */

/** @note MUST be interpreted in implementation as unsigned 64 bits.
    The implementation SHOULD NOT use the MSB. */
typedef i64 SystemIDType
typedef i32 IPv4Address
/** this has to be long enough to accommodate prefix */
typedef binary IPv6Address
/** @note MUST be interpreted in implementation as unsigned */
typedef i16 UDPPortType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32 TIENrType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32 MTUSizeType
/** @note MUST be interpreted in implementation as unsigned
    rolling over number */
typedef i64 SeqNrType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32 LifeTimeInSecType
/** @note MUST be interpreted in implementation as unsigned */
typedef i8 LevelType
/** optional, recommended monotonically increasing number
    _per packet type per adjacency_ that can be used to detect
    losses/misordering/restarts.
    @note MUST be interpreted in implementation as unsigned
    rolling over number */
typedef i16 PacketNumberType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32 PodType
/** @note MUST be interpreted in implementation as unsigned.
    This is carried in the security envelope and MUST fit into
    8 bits. */
typedef i8 VersionType
/** @note MUST be interpreted in implementation as unsigned */
typedef i16 MinorVersionType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32 MetricType
/** @note MUST be interpreted in implementation as unsigned
    and unstructured */
typedef i64 RouteTagType
/** @note MUST be interpreted in implementation as unstructured
    label value */
typedef i32 LabelType
/** @note MUST be interpreted in implementation as unsigned */
typedef i32 BandwithInMegaBitsType
/** @note Key Value key ID type */
typedef string KeyIDType
/** node local, unique identification for a link (interface/tunnel
    etc. Basically anything RIFT runs on). This is kept at 32 bits
    so it aligns with BFD [RFC5880] discriminator size. */
typedef i32 LinkIDType
typedef string KeyNameType
typedef i8 PrefixLenType
/** timestamp in seconds since the epoch */
typedef i64 TimestampInSecsType
/** security nonce.
    @note MUST be interpreted in implementation as rolling over
    unsigned value */
typedef i16 NonceType
/** LIE FSM holdtime type */
typedef i16 TimeIntervalInSecType
/** Transaction ID type for prefix mobility as specified by RFC6550,
    value MUST be interpreted in implementation as unsigned */
typedef i8 PrefixTransactionIDType

/** Timestamp per IEEE 802.1AS, all values MUST be interpreted in
    implementation as unsigned. */
struct IEEE802_1ASTimeStampType {
    1: required i64 AS_sec;
    2: optional i32 AS_nsec;
}

/** generic counter type */
typedef i64 CounterType
/** Platform Interface Index type, i.e. index of interface on
    hardware, can be used e.g. with RFC5837 */
typedef i32 PlatformInterfaceIndex

/** Flags indicating node configuration in case of ZTP. */
enum HierarchyIndications {
    /** forces level to `leaf_level` and enables according procedures */
    leaf_only = 0,
    /** forces level to `leaf_level` and enables according procedures */
    leaf_only_and_leaf_2_leaf_procedures = 1,
    /** forces level to `top_of_fabric` and enables according
        procedures */
    top_of_fabric = 2,
}

const PacketNumberType undefined_packet_number = 0
/** This MUST be used when node is configured as top of fabric in
    ZTP. This is kept reasonably low to allow for fast ZTP
    convergence on failures. */
const LevelType top_of_fabric_level = 24
/** default bandwidth on a link */
const BandwithInMegaBitsType default_bandwidth = 100
/** fixed leaf level when ZTP is not used */
const LevelType leaf_level = 0
const LevelType default_level = leaf_level
const PodType default_pod = 0
const LinkIDType undefined_linkid = 0
/** default distance used */
const MetricType default_distance = 1
/** any distance larger than this will be considered infinity */
const MetricType infinite_distance = 0x7FFFFFFF
/** represents invalid distance */
const MetricType invalid_distance = 0
const bool overload_default = false
const bool flood_reduction_default = true
/** default LIE FSM holddown time */
const TimeIntervalInSecType default_lie_holdtime = 3
/** default ZTP FSM holddown time */
const TimeIntervalInSecType default_ztp_holdtime = 1
/** by default LIE levels are ZTP offers */
const bool default_not_a_ztp_offer = false
/** by default everyone is repeating flooding */
const bool default_you_are_flood_repeater = true
/** 0 is illegal for SystemID */
const SystemIDType IllegalSystemID = 0
/** empty set of nodes */
const set<SystemIDType> empty_set_of_nodeids = {}
/** default lifetime of TIE is one week */
const LifeTimeInSecType default_lifetime = 604800
/** default lifetime when TIEs are purged is 5 minutes */
const LifeTimeInSecType purge_lifetime = 300
/** round down interval when TIEs are sent with security hashes to
    prevent excessive computation. */
const LifeTimeInSecType rounddown_lifetime_interval = 60
/** any `TieHeader` that has a smaller lifetime difference than this
    constant is equal (if other fields equal). This constant MUST be
    larger than `purge_lifetime` to avoid retransmissions */
const LifeTimeInSecType lifetime_diff2ignore = 400

/** default UDP port to run LIEs on */
const UDPPortType default_lie_udp_port = 914
/** default UDP port to receive TIEs on, that can be peer specific */
const UDPPortType default_tie_udp_flood_port = 915

/** default MTU link size to use */
const MTUSizeType default_mtu_size = 1400
/** default link being BFD capable */
const bool bfd_default = true

/** undefined nonce, equivalent to missing nonce */
const NonceType undefined_nonce = 0;
/** outer security key id, MUST be interpreted in implementation
    as unsigned */
typedef i8 OuterSecurityKeyID
/** security key id, MUST be interpreted in implementation
    as unsigned */
typedef i32 TIESecurityKeyID
/** undefined key */
const TIESecurityKeyID undefined_securitykey_id = 0;
/** Maximum delta (negative or positive) that a mirrored nonce can
    deviate from local value to be considered valid. If nonces are
    changed every minute on both sides this opens statistically a
    `maximum_valid_nonce_delta` minutes window of identical LIEs,
    TIE, TI(x)E replays. The interval cannot be too small since LIE
    FSM may change states fairly quickly during ZTP without sending
    LIEs */
const i16 maximum_valid_nonce_delta = 5;

/** Direction of TIEs. */
enum TieDirectionType {
    Illegal = 0,
    South = 1,
    North = 2,
    DirectionMaxValue = 3,
}

/** Address family type. */
enum AddressFamilyType {
    Illegal = 0,
    AddressFamilyMinValue = 1,
    IPv4 = 2,
    IPv6 = 3,
    AddressFamilyMaxValue = 4,
}

/** IPv4 prefix type. */
struct IPv4PrefixType {
    1: required IPv4Address address;
    2: required PrefixLenType prefixlen;
}

/** IPv6 prefix type. */
struct IPv6PrefixType {
    1: required IPv6Address address;
    2: required PrefixLenType prefixlen;
}

/** IP address type. */
union IPAddressType {
    /** Content is IPv4 */
    1: optional IPv4Address ipv4address;
    /** Content is IPv6 */
    2: optional IPv6Address ipv6address;
}

/** Prefix advertisement.
    @note: for interface addresses the protocol can propagate the
    address part beyond the subnet mask and on reachability
    computation that has to be normalized. The non-significant bits
    can be used for operational purposes. */
union IPPrefixType {
    1: optional IPv4PrefixType ipv4prefix;
    2: optional IPv6PrefixType ipv6prefix;
}

/** Sequence of a prefix in case of move. */
struct PrefixSequenceType {
    1: required IEEE802_1ASTimeStampType timestamp;
    /** Transaction ID set by client in e.g. in 6LoWPAN. */
    2: optional PrefixTransactionIDType transactionid;
}

/** Type of TIE.
    This enum indicates what TIE type the TIE is carrying. In case
    the value is not known to the receiver, the TIE MUST be
    re-flooded. This allows for future extensions of the protocol
    within the same major schema with types opaque to some nodes
    UNLESS the flooding scope is not the same as prefix TIE, then a
    major version revision MUST be performed. */
enum TIETypeType {
    Illegal = 0,
    TIETypeMinValue = 1,
    /** first legal value */
    NodeTIEType = 2,
    PrefixTIEType = 3,
    PositiveDisaggregationPrefixTIEType = 4,
    NegativeDisaggregationPrefixTIEType = 5,
    PGPrefixTIEType = 6,
    KeyValueTIEType = 7,
    ExternalPrefixTIEType = 8,
    PositiveExternalDisaggregationPrefixTIEType = 9,
    TIETypeMaxValue = 10,
}

/** RIFT route types.
    @note: route types which MUST be ordered on their preference:
    PGP prefixes are most preferred, attracting traffic north
    (towards spine) and then south; normal prefixes are attracting
    traffic south (towards leafs), i.e. prefix in NORTH PREFIX TIE
    is preferred over SOUTH PREFIX TIE.
    @note: The only purpose of those values is to introduce an
    ordering, whereas an implementation can choose internally any
    other values as long as the ordering is preserved */
enum RouteType {
    Illegal = 0,
    RouteTypeMinValue = 1,
    /** First legal value. */
    /** Discard routes are most preferred */
    Discard = 2,
    /** Local prefixes are directly attached prefixes on the system
        such as e.g. interface routes. */
    LocalPrefix = 3,
    /** Advertised in S-TIEs */
    SouthPGPPrefix = 4,
    /** Advertised in N-TIEs */
    NorthPGPPrefix = 5,
    /** Advertised in N-TIEs */
    NorthPrefix = 6,
    /** Externally imported north */
    NorthExternalPrefix = 7,
    /** Advertised in S-TIEs, either normal prefix or positive
        disaggregation */
    SouthPrefix = 8,
    /** Externally imported south */
    SouthExternalPrefix = 9,
    /** Negative, transitive prefixes are least preferred */
    NegativeSouthPrefix = 10,
    RouteTypeMaxValue = 11,
}
/** Thrift file for packet encodings for RIFT */ include "common.thrift" /** Represents protocol encoding schema major version */ const common.VersionType protocol_major_version = 4 /** Represents protocol encoding schema minor version */ const common.MinorVersionType protocol_minor_version = 0 /** Common RIFT packet header. */ struct PacketHeader { /** Major version of protocol. */ 1: required common.VersionType major_version = protocol_major_version; /** Minor version of protocol. */ 2: required common.MinorVersionType minor_version = protocol_minor_version; /** Node sending the packet, in case of LIE/TIRE/TIDE also the originator of it. */ 3: required common.SystemIDType sender; /** Level of the node sending the packet, required on everything except LIEs. Lack of presence on LIEs indicates UNDEFINED_LEVEL and is used in ZTP procedures. */ 4: optional common.LevelType level; } /** Prefix community. */ struct Community { /** Higher order bits */ 1: required i32 top; /** Lower order bits */ 2: required i32 bottom; } /** Neighbor structure. */ struct Neighbor { /** System ID of the originator. */ 1: required common.SystemIDType originator; /** ID of remote side of the link. */ 2: required common.LinkIDType remote_id; } /** Capabilities the node supports. @note: The schema may add to this field future capabilities to indicate whether it will support interpretation of future schema extensions on the same major revision. Such fields MUST be optional and have an implicit or explicit false default value. If a future capability changes route selection or generates blackholes if some nodes are not supporting it then a major version increment is unavoidable. */ struct NodeCapabilities { /** Must advertise supported minor version dialect that way. */ 1: required common.MinorVersionType protocol_minor_version = protocol_minor_version; /** Can this node participate in flood reduction. */ 2: optional bool flood_reduction = common.flood_reduction_default; /** Does this node restrict itself to be top-of-fabric or leaf only (in ZTP) and does it support leaf-2-leaf procedures. */ 3: optional common.HierarchyIndications hierarchy_indications; } /** Link capabilities. */ struct LinkCapabilities { /** Indicates that the link is supporting BFD. */ 1: optional bool bfd = common.bfd_default; /** Indicates whether the interface will support v4 forwarding. @note: This MUST be set to true when LIEs from a v4 address are sent and MAY be set to true in LIEs on v6 address. If v4 and v6 LIEs indicate contradicting information the behavior is unspecified. */ 2: optional bool v4_forwarding_capable = true; } /** RIFT LIE Packet. @note: this node's level is already included on the packet header */ struct LIEPacket { /** Node or adjacency name. */ 1: optional string name; /** Local link ID. */ 2: required common.LinkIDType local_id; /** UDP port to which we can receive flooded TIEs. */ 3: required common.UDPPortType flood_port = common.default_tie_udp_flood_port; /** Layer 3 MTU, used to discover to mismatch. */ 4: optional common.MTUSizeType link_mtu_size = common.default_mtu_size; /** Local link bandwidth on the interface. */ 5: optional common.BandwithInMegaBitsType link_bandwidth = common.default_bandwidth; /** Reflects the neighbor once received to provide 3-way connectivity. */ 6: optional Neighbor neighbor; /** Node's PoD. */ 7: optional common.PodType pod = common.default_pod; /** Node capabilities shown in LIE. 
        The capabilities MUST match the capabilities shown in the
        Node TIEs, otherwise the behavior is unspecified. A node
        detecting the mismatch SHOULD generate an according error.
    */
    10: required NodeCapabilities node_capabilities;
    /** Capabilities of this link. */
    11: optional LinkCapabilities link_capabilities;
    /** Required holdtime of the adjacency, i.e. how much time MUST
        expire without receiving a LIE for the adjacency to drop.
    */
    12: required common.TimeIntervalInSecType holdtime = common.default_lie_holdtime;
    /** Unsolicited, downstream-assigned, locally significant label
        value for the adjacency.
    */
    13: optional common.LabelType label;
    /** Indicates that the level on the LIE MUST NOT be used to
        derive a ZTP level by the receiving node.
    */
    21: optional bool not_a_ztp_offer = common.default_not_a_ztp_offer;
    /** Indicates to the northbound neighbor that it should re-flood
        this node's N-TIEs to achieve flood reduction and balancing
        for northbound flooding. To be ignored if received from a
        northbound adjacency.
    */
    22: optional bool you_are_flood_repeater = common.default_you_are_flood_repeater;
    /** Can be optionally set to indicate to the neighbor that packet
        losses are seen on reception based on packet numbers or that
        the rate is too high. The receiver SHOULD temporarily slow
        down flooding rates.
    */
    23: optional bool you_are_sending_too_quickly = false;
    /** Instance name in case multiple RIFT instances are running on
        the same interface.
    */
    24: optional string instance_name;
}

/** LinkID pair describes one of parallel links between two nodes. */
struct LinkIDPair {
    /** Node-wide unique value for the local link. */
    1: required common.LinkIDType local_id;
    /** Received remote link ID for this link. */
    2: required common.LinkIDType remote_id;
    /** Describes the local interface index of the link. */
    10: optional common.PlatformInterfaceIndex platform_interface_index;
    /** Describes the local interface name. */
    11: optional string platform_interface_name;
    /** Indication whether the link is secured, i.e. protected by an
        outer key; absence of this element means no indication, an
        undefined outer key means not secured.
    */
    12: optional common.OuterSecurityKeyID trusted_outer_security_key;
    /** Indication whether the link is protected by an established
        BFD session.
    */
    13: optional bool bfd_up;
}

/** ID of a TIE.
    @note: TIEID space is a total order achieved by comparing the
           elements in the sequence defined and comparing each
           value as an unsigned integer of according length.
*/
struct TIEID {
    /** direction of TIE */
    1: required common.TieDirectionType direction;
    /** indicates originator of the TIE */
    2: required common.SystemIDType originator;
    /** type of the TIE */
    3: required common.TIETypeType tietype;
    /** number of the TIE */
    4: required common.TIENrType tie_nr;
}

/** Header of a TIE.
    @note: TIEID space is a total order achieved by comparing the
           elements in the sequence defined and comparing each
           value as an unsigned integer of according length.
    @note: After the sequence number, the lifetime received on the
           envelope must be used for comparison before further
           fields.
    @note: `origination_time` and `origination_lifetime` are
           disregarded for comparison purposes and, if present, are
           carried purely for debugging/security purposes.
*/
struct TIEHeader {
    /** ID of the TIE. */
    2: required TIEID tieid;
    /** Sequence number of the TIE. */
    3: required common.SeqNrType seq_nr;
    /** Absolute timestamp when the TIE was generated. This can be
        used on fabrics with synchronized clocks to prevent
        lifetime modification attacks.
    */
    10: optional common.IEEE802_1ASTimeStampType origination_time;
    /** Original lifetime when the TIE was generated. This can be
        used on fabrics with synchronized clocks to prevent
        lifetime modification attacks.
    */
    12: optional common.LifeTimeInSecType origination_lifetime;
}

/** Header of a TIE as described in TIRE/TIDE. */
struct TIEHeaderWithLifeTime {
    1: required TIEHeader header;
    /** Remaining lifetime that expires down to 0 just like in
        ISIS. TIEs with lifetimes differing by less than
        `lifetime_diff2ignore` MUST be considered EQUAL.
    */
    2: required common.LifeTimeInSecType remaining_lifetime;
}

/** TIDE with sorted TIE headers; if headers are unsorted, the
    behavior is undefined.
*/
struct TIDEPacket {
    /** First TIE header in the TIDE packet. */
    1: required TIEID start_range;
    /** Last TIE header in the TIDE packet. */
    2: required TIEID end_range;
    /** _Sorted_ list of headers. */
    3: required list<TIEHeaderWithLifeTime> headers;
}

/** TIRE packet */
struct TIREPacket {
    1: required set<TIEHeaderWithLifeTime> headers;
}

/** neighbor of a node */
struct NodeNeighborsTIEElement {
    /** level of neighbor */
    1: required common.LevelType level;
    /** Cost to neighbor.
        @note: All parallel links to the same node incur the same
               cost; in case the neighbor has multiple parallel
               links at different cost, the largest distance
               (highest numerical value) MUST be advertised.
        @note: Any neighbor with cost <= 0 MUST be ignored in
               computations.
    */
    3: optional common.MetricType cost = common.default_distance;
    /** can carry description of multiple parallel links in a TIE */
    4: optional set<LinkIDPair> link_ids;
    /** Total bandwidth to neighbor; this will normally be the sum
        of the bandwidths of all the parallel links.
    */
    5: optional common.BandwithInMegaBitsType bandwidth = common.default_bandwidth;
}

/** Indication flags of the node. */
struct NodeFlags {
    /** Indicates that the node is in overload; do not transit
        traffic through it.
    */
    1: optional bool overload = common.overload_default;
}

/** Description of a node. It may occur multiple times in different
    TIEs but if either

      *  capabilities values do not match or
      *  flags values do not match or
      *  neighbors repeat with different values

    the behavior is undefined and a warning SHOULD be generated.
    Neighbors can be distributed across multiple TIEs, however,
    only if the sets are disjoint. Miscablings SHOULD be repeated
    in every node TIE, otherwise the behavior is undefined.
    @note: Observe that absence of fields implies defined defaults.
*/
struct NodeTIEElement {
    /** Level of the node. */
    1: required common.LevelType level;
    /** Node's neighbors. If a neighbor systemID repeats in other
        node TIEs of the same node the behavior is undefined.
    */
    2: required map<common.SystemIDType, NodeNeighborsTIEElement> neighbors;
    /** Capabilities of the node. */
    3: required NodeCapabilities capabilities;
    /** Flags of the node. */
    4: optional NodeFlags flags;
    /** Optional node name for easier operations. */
    5: optional string name;
    /** PoD to which the node belongs. */
    6: optional common.PodType pod;
    /** optional startup time of the node */
    7: optional common.TimestampInSecsType startup_time;
    /** If any local links are miscabled, the indication is
        flooded.
    */
    10: optional set<common.LinkIDType> miscabled_links;
}

/** Attributes of a prefix. */
struct PrefixAttributes {
    /** Distance of the prefix. */
    2: required common.MetricType metric = common.default_distance;
    /** Generic unordered set of route tags; can be redistributed
        to other protocols or used within the context of real-time
        analytics.
    */
    3: optional set<common.RouteTagType> tags;
    /** Monotonic clock for mobile addresses. */
    4: optional common.PrefixSequenceType monotonic_clock;
    /** Indicates if the interface is a node loopback. */
    6: optional bool loopback = false;
    /** Indicates that the prefix is directly attached, i.e. should
        be routed to even if the node is in overload.
    */
    7: optional bool directly_attached = true;
    /** In case of locally originated prefixes, i.e. interface
        addresses, this can describe which link the address belongs
        to.
    */
    10: optional common.LinkIDType from_link;
}

/** TIE carrying prefixes */
struct PrefixTIEElement {
    /** Prefixes with the associated attributes. If the same prefix
        repeats in multiple TIEs of the same node the behavior is
        unspecified.
    */
    1: required map<common.IPPrefixType, PrefixAttributes> prefixes;
}

/** Generic key-value pairs. */
struct KeyValueTIEElement {
    /** @note: If the same key repeats in multiple TIEs of the same
               node or with different values, the behavior is
               unspecified.
    */
    1: required map<common.KeyIDType, string> keyvalues;
}

/** Single element in a TIE. Schema enum `common.TIETypeType` in
    TIEID indicates which elements MUST be present in the
    TIEElement. In case of a mismatch the unexpected elements MUST
    be ignored. In case of a missing expected element in the TIE an
    error MUST be reported and the TIE MUST be ignored.

    This type can be extended with new optional elements for new
    `common.TIETypeType` values without breaking the major version,
    but if it is necessary to understand whether all nodes support
    the new type a node capability must be added as well.
*/
union TIEElement {
    /** Used in case of enum common.TIETypeType.NodeTIEType. */
    1: optional NodeTIEElement node;
    /** Used in case of enum common.TIETypeType.PrefixTIEType. */
    2: optional PrefixTIEElement prefixes;
    /** Positive prefixes (always southbound). It MUST NOT be
        advertised within a North TIE and MUST be ignored
        otherwise.
    */
    3: optional PrefixTIEElement positive_disaggregation_prefixes;
    /** Transitive, negative prefixes (always southbound) which
        MUST be aggregated and propagated according to the
        specification southwards towards lower levels to heal
        pathological upper-level partitioning, otherwise blackholes
        may occur in multiplane fabrics. It MUST NOT be advertised
        within a North TIE.
    */
    5: optional PrefixTIEElement negative_disaggregation_prefixes;
    /** Externally reimported prefixes. */
    6: optional PrefixTIEElement external_prefixes;
    /** Positive external disaggregated prefixes (always
        southbound). It MUST NOT be advertised within a North TIE
        and MUST be ignored otherwise.
    */
    7: optional PrefixTIEElement positive_external_disaggregation_prefixes;
    /** Key-Value store elements. */
    9: optional KeyValueTIEElement keyvalues;
}

/** TIE packet */
struct TIEPacket {
    1: required TIEHeader header;
    2: required TIEElement element;
}

/** Content of a RIFT packet. */
union PacketContent {
    1: optional LIEPacket lie;
    2: optional TIDEPacket tide;
    3: optional TIREPacket tire;
    4: optional TIEPacket tie;
}

/** RIFT packet structure. */
struct ProtocolPacket {
    1: required PacketHeader header;
    2: required PacketContent content;
}
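The comparison rules embedded in the schema comments above (the TIEID total order and the "sequence number first, then envelope lifetime" ordering of TIE instances) translate directly into code. The following non-normative Python sketch illustrates them; the concrete value chosen for LIFETIME_DIFF2IGNORE is an illustrative assumption standing in for the schema's `lifetime_diff2ignore` constant.

   from dataclasses import dataclass

   # Illustrative stand-in for the schema's `lifetime_diff2ignore`
   # constant; the concrete value is defined in the schema files.
   LIFETIME_DIFF2IGNORE = 300

   @dataclass(frozen=True)
   class TIEID:
       direction: int   # common.TieDirectionType
       originator: int  # common.SystemIDType
       tietype: int     # common.TIETypeType
       tie_nr: int      # common.TIENrType

       def key(self):
           # Total order: compare the elements in the sequence
           # defined, each treated as an unsigned integer.
           return (self.direction, self.originator,
                   self.tietype, self.tie_nr)

   def compare_tie_instances(a_seq_nr, a_lifetime,
                             b_seq_nr, b_lifetime):
       """Return -1, 0 or 1 deciding which instance of the same TIE
       is newer: sequence number first, then the lifetime received
       on the envelope, where lifetimes differing by less than
       LIFETIME_DIFF2IGNORE MUST be considered EQUAL."""
       if a_seq_nr != b_seq_nr:
           return -1 if a_seq_nr < b_seq_nr else 1
       if abs(a_lifetime - b_lifetime) < LIFETIME_DIFF2IGNORE:
           return 0
       return -1 if a_lifetime < b_lifetime else 1

   # E.g. producing the sorted header list required by TIDEPacket:
   #   sorted(tie_ids, key=TIEID.key)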
This section gathers the constants provided in the schema files and in this document.
Name | Type | Value
---|---|---
LIE IPv4 Multicast Address | Default Value, Configurable | 224.0.0.120 ("all-rift-routers", to be assigned in the IPv4 Multicast Address Space Registry, Local Network Control Block)
LIE IPv6 Multicast Address | Default Value, Configurable | FF02::A1F7 ("all-rift-routers", to be assigned in the IPv6 Multicast Address Assignments)
LIE Destination Port | Default Value, Configurable | 914 |
Level value for TOP_OF_FABRIC flag | Constant | 24 |
Default LIE Holdtime | Default Value, Configurable | 3 seconds |
TIE Retransmission Interval | Default Value | 1 second |
TIDE Generation Interval | Default Value, Configurable | 5 seconds |
MIN_TIEID signifies start of TIDEs | Constant | TIE Key with minimal values: TIEID(originator=0, tietype=TIETypeMinValue, tie_nr=0, direction=South) |
MAX_TIEID signifies end of TIDEs | Constant | TIE Key with maximal values: TIEID(originator=MAX_UINT64, tietype=TIETypeMaxValue, tie_nr=MAX_UINT64, direction=North) |
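As a usage illustration of the first three rows of the table, the following non-normative Python sketch joins the default LIE IPv4 multicast group on the default port and waits for a packet; the variable names are assumptions, and both values are configurable as the table notes.

   import socket
   import struct

   # Defaults from the constants table above; both configurable.
   RIFT_LIE_MCAST_V4 = "224.0.0.120"
   RIFT_LIE_PORT = 914

   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                        socket.IPPROTO_UDP)
   sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
   sock.bind(("", RIFT_LIE_PORT))

   # Join the all-rift-routers group on the default interface.
   mreq = struct.pack("4s4s",
                      socket.inet_aton(RIFT_LIE_MCAST_V4),
                      socket.inet_aton("0.0.0.0"))
   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

   packet, source = sock.recvfrom(65535)  # serialized LIE envelope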