In dense topologies (such as data center fabrics based on Clos or butterfly designs, though not limited to those; any large topology, or one with a relatively high degree of connectivity, qualifies), IGP flooding mechanisms originally designed for sparse topologies can "overflood": they generate many identical copies of the same information arriving at a given node from different devices. This normally results in longer convergence times and higher resource utilization to process and discard the superfluous copies. Flooding algorithm extensions that restrict the amount of flooding performed can be constructed and can reduce resource utilization significantly while improving convergence performance.¶
This document describes one such flooding modification, based on previous art and optimized for operational considerations; it is described in detail in Section 2.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 27 October 2025.¶
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The following section describes a distributed algorithm similar to, and based on, those implemented in OSPF to support mobile ad hoc networks, as described in [RFC5449] and [RFC5614]. These solutions have been widely implemented and deployed.¶
Laboratory tests based on a well-known open source codebase show that modifications similar to the algorithm presented here significantly reduce flooding in a large-scale emulated butterfly network topology. Under unmodified flooding procedures, intermediate systems in a 2,500-node butterfly network receive, on average, 40 copies of any changed LSP fragment. With the changes described in this document, those systems received, on average, two copies of any changed LSP fragment; in many cases, only a single copy of each changed LSP was received and processed per node. In terms of performance, overall convergence times were roughly cut in half. Experiments with other topologies, including Clos networks using another implementation, show similar performance, and simulations of the extension indicate significant reductions in flooding volumes.¶
An early version of the mechanisms described here has been implemented in the FRRouting open source routing stack as part of the `fabricd` daemon, and the described modification has been implemented by commercial vendors.¶
This section describes in detail the modifications to the IS-IS flooding process that reduce the full topology to a dominating connected set of links used for flooding. At the same time, it balances the remaining flooding across all links in the topology to prevent hot spots.¶
The following spine-and-leaf fabric will be used in the description of the introduced modifications.¶
+====+ +====+ +====+ +====+ +====+ +====+
| 1A | | 1B | | 1C | | 1D | | 1E | | 1F |  (T0)
+====+ +====+ +====+ +====+ +====+ +====+

+====+ +====+ +====+ +====+ +====+ +====+
| 2A | | 2B | | 2C | | 2D | | 2E | | 2F |  (T1)
+====+ +====+ +====+ +====+ +====+ +====+

+====+ +====+ +====+ +====+ +====+ +====+
| 3A | | 3B | | 3C | | 3D | | 3E | | 3F |  (T2)
+====+ +====+ +====+ +====+ +====+ +====+

+====+ +====+ +====+ +====+ +====+ +====+
| 4A | | 4B | | 4C | | 4D | | 4E | | 4F |  (T1)
+====+ +====+ +====+ +====+ +====+ +====+

+====+ +====+ +====+ +====+ +====+ +====+
| 5A | | 5B | | 5C | | 5D | | 5E | | 5F |  (T0)
+====+ +====+ +====+ +====+ +====+ +====+
For readability, the figure above does not show the connections between devices. The reader should assume that each device in a given layer is connected to every device in the layer above it, in a butterfly network fashion. For instance, 5A is connected to each of 4A through 4F, and each of 4A through 4F is in turn connected to each of 3A through 3F.¶
The tiers or stages of the fabric are marked for easier reference. An alternate representation of this topology is a "folded Clos", with T2 being the "top of the fabric" and T0 representing the leaves.¶
The simplest way to conceive of the solution presented here is in two stages:¶
The first stage is best explained through an illustration. In the network above, if 5A transmits a modified Link State Protocol Data Unit (LSP) to 4A-4F, each of the nodes 4A-4F will, in turn, flood this modified LSP to 3A (for instance). As a result, 3A will receive 6 copies of the modified LSP, while only one copy is necessary for the intermediate systems shown to converge on the same view of the topology. If 4A-4F could determine that all of them will flood identical copies of the modified LSP to 3A, it would be possible for all of them except one to decide not to flood the changed LSP to 3A.¶
The technique used in this draft to determine such a flooding group is for each intermediate system to calculate a special SPT (shortest-path spanning tree) from the point of view of the transmitting neighbor. As a next step, by setting the metric of all links to 1 and truncating the SPT to two hops, the local IS can find the group of neighbors it will flood any changed LSP towards, and the set of intermediate systems (not necessarily neighbors) that will also flood to this same set of neighbors. If every intermediate system in the flooding set performs this same calculation, they will all obtain the same flooding group.¶
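As a non-normative sketch only (assuming a simple adjacency map derived from the link state database; all names and types below are illustrative, not part of the specification), the truncated computation amounts to a breadth-first search of depth two, rooted at the transmitting neighbor, with every link metric treated as 1:¶

use std::collections::{HashMap, VecDeque};

// Hypothetical adjacency view derived from the link state database:
// system ID -> list of directly connected system IDs.
type SystemId = [u8; 6];
type Adjacencies = HashMap<SystemId, Vec<SystemId>>;

// Special SPT from the point of view of the transmitting neighbor (TN):
// with every metric set to 1 and the tree truncated at two hops, this is
// simply a breadth-first search of depth two.
fn truncated_spt(lsdb: &Adjacencies, tn: SystemId) -> HashMap<SystemId, u8> {
    let mut dist: HashMap<SystemId, u8> = HashMap::new();
    let mut queue = VecDeque::new();
    dist.insert(tn, 0);
    queue.push_back(tn);
    while let Some(node) = queue.pop_front() {
        let d = dist[&node];
        if d == 2 {
            continue; // truncate the tree two hops away from TN
        }
        for next in lsdb.get(&node).into_iter().flatten() {
            if !dist.contains_key(next) {
                dist.insert(*next, d + 1);
                queue.push_back(*next);
            }
        }
    }
    // Nodes at distance 1 and 2 from TN provide the raw material for the
    // neighbor and flooding-group sets used by the normative algorithm.
    dist
}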
Once such a flooding group is determined, the members of the flooding group each (independently) choose which of the members should re-flood the received information. A common hash function is used across a set of shared variables so that each member of the group comes to the same conclusion as to the designated flooding nodes. The group member selected in this way to flood the changed LSP does so normally; the remaining group members initially suppress flooding of the LSP.¶
Each IS calculates the special, truncated SPT separately and determines independently, based on a common hash function, which IS should flood any changed LSPs. Because these calculations are performed using a shared view of the network (the common link state database) and a shared hash function, each member of the flooding group will make the same decision under converged conditions. In the transitory state, when nodes potentially have different views of the topology, the mechanism may either overflood or, in the worse case, not flood enough; a 'quick-patching' mechanism for the latter case is introduced later, and the database will ultimately converge due to periodic CSNP origination per normal protocol operation.¶
The second stage is simpler, consisting of a single rule: do not flood modified LSPs along the shortest path towards the origin of the modified LSP. This rule relies on the observation that any IS between the origin of the modified LSP and the local IS should receive the modified LSP from some other IS closer to the source of the modified LSP. It is worth observing that if all the nodes that should be designated to flood within a peer group are pruned by the second stage, the receiving node is at the `tail-end` of the flooding chain and no further flooding is necessary. Also, per normal protocol procedures, flooding back to the node from which the LSP was received is not performed.¶
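A non-normative sketch of this second-stage rule follows; the hop-count distances and all names are assumptions made for illustration. A neighbor is pruned from the flooding set when it lies on a shortest path from the local IS back towards the originator of the LSP:¶

use std::collections::HashMap;

type SystemId = [u8; 6];

// Non-normative: drop from the flooding set any neighbor that lies on a
// shortest path from the local IS towards the originator of the LSP.
// The hop-count distances are assumed to be precomputed elsewhere.
fn prune_towards_origin(
    neighbors: &[SystemId],
    dist_to_origin: &HashMap<SystemId, u32>, // hops from each node to the originator
    local_dist: u32,                         // hops from the local IS to the originator
) -> Vec<SystemId> {
    neighbors
        .iter()
        .copied()
        .filter(|n| match dist_to_origin.get(n) {
            // A neighbor one hop closer to the originator sits on a
            // shortest path towards the origin: do not flood to it.
            Some(&d) => d + 1 != local_dist,
            None => true,
        })
        .collect()
}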
This section provides a normative description of the specification. Any node implementing this solution MUST exhibit external behavior that conforms to the algorithms provided.¶
Each intermediate system determines whether it should re-flood LSPs as described below. When a modified LSP arrives from a Transmitting Neighbor (TN), the following algorithm yields the necessary decision:¶
Step 1: Build the Two-Hop List (THL) and Remote Neighbor's List (RNL) of nodes:¶
For each IS that is two hops away (has a metric of 2 in the truncated SPT) from TN:¶
Step 2: Sort nodes in RNL by system IDs, from the least value to the greatest.¶
Step 3: Take as the initial value the fragment number shifted 3 bits to the right. Calculate a number H by walking the bytes of the system ID of the originator, without the pseudonode byte, starting from the highest byte in network order: XOR each byte with the current value and, after each byte, rotate the current value to the left by 4 bits. Finally, shift the result to the right by 4 bits. Appendix A provides a reference implementation.¶
The shifting of the fragment number puts 8 sequential fragments onto the same flooding tree to minimize re-ordering, and the subsequent XOR'ing and rotating of the originator's system ID (without the pseudonode byte) maximizes the amount of entropy obtained, yielding a good hash value. RNum is the number of nodes in the RNL. Consequently, set N to H modulo RNum (N = H MOD RNum), so that N is less than the number of members of RNL. (Footnote 1: this allows for some balancing of LSPs coming from the same system ID.)¶
Step 4: Start with the Nth member of RNL, where N is the zero-based index into the sorted members of RNL (index value zero is assigned to the IS with the lowest system ID):¶
This description leans towards clarity rather than optimal performance when implemented.¶
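Purely as a non-normative illustration of Steps 2 through 4 in the single re-flooder case, the selection can be sketched as follows. `balancing_hash` is the reference function from Appendix A; the surrounding types and names are assumptions made for this sketch only.¶

// Non-normative sketch of Steps 2 through 4 for the single re-flooder case.
// balancing_hash() is the reference function from Appendix A; the types and
// names here are assumptions made purely for illustration.
fn select_designated_flooder(
    mut rnl: Vec<[u8; 6]>,  // RNL: the candidate re-flooders
    originator_id: [u8; 6], // system ID of the originator, without the pseudonode byte
    fragment_nr: u8,        // LSP fragment number
) -> Option<[u8; 6]> {
    if rnl.is_empty() {
        return None;
    }
    // Step 2: sort the RNL by system ID, from the least value to the greatest.
    rnl.sort();
    // Step 3: hash the originator's system ID and fragment number, then
    // reduce modulo RNum, the number of nodes in the RNL.
    let h = balancing_hash(originator_id, fragment_nr);
    let n = (h as usize) % rnl.len();
    // Step 4: start with the Nth member (zero-based, lowest system ID at
    // index zero); with a single re-flooder this is the member chosen.
    Some(rnl[n])
}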
It is possible that during initial convergence, or in some failure modes, flooding will be incomplete due to the optimizations outlined. Specifically, if a re-flooder fails, or is somehow disconnected from all the links across which it should be re-flooding, an LSP could be only partially distributed through the topology. To speed up convergence under such partition failures (observe that periodic CSNPs will converge the topology under any circumstances, though at a slower pace), an intermediate system which does not re-flood a specific LSP (or fragment) SHOULD:¶
A node deploying this algorithm on point-to-point links MUST send CSNPs on such links. This does not represent a dramatic change, given that most deployed implementations today already exhibit this behavior to prevent possible slow synchronization of the IS-IS database across such links and to provide additional periodic consistency guarantees.¶
The calculations described here may seem complex, which might lead the reader to conclude that the cost of calculation is so much higher than the cost of flooding that this optimization is counter-productive. First, the description provided here is designed for clarity rather than optimal calculation. Second, many of the involved calculations can easily be performed in advance and stored, rather than being performed for each LSP occurrence and each neighbor. Optimized versions of the process described here have been implemented and do result in strong convergence speed gains.¶
An implementation in a node MAY choose, independently of others, to provide a configurable parameter that allows more than one node in RNL to re-flood; e.g., it may re-flood even if it is not the member that would be chosen from the RNL, if double coverage of THL is required. The modifications to the algorithm are simple enough not to require further text.¶
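One possible arrangement for such double coverage, shown purely as a hypothetical sketch and not mandated by this document, is to let the member at index N and the next member of the sorted RNL both re-flood:¶

// Hypothetical, not mandated by this document: one simple arrangement for
// double coverage of the THL is to let the member at index N and the next
// member of the sorted RNL both re-flood.
fn select_flooders_double_coverage(
    sorted_rnl: &[[u8; 6]], // RNL already sorted by system ID
    h: u32,                 // output of balancing_hash()
) -> Vec<[u8; 6]> {
    if sorted_rnl.is_empty() {
        return Vec::new();
    }
    let n = (h as usize) % sorted_rnl.len();
    let mut flooders = vec![sorted_rnl[n]];
    if sorted_rnl.len() > 1 {
        // Second re-flooder: the next member, wrapping around the list.
        flooders.push(sorted_rnl[(n + 1) % sorted_rnl.len()]);
    }
    flooders
}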
An implementation should pay particular attention to the case where a stale LSP with a higher sequence number persists in the network after the originator reboots and starts with a lower sequence number; this case must still work correctly. Though the flooding of an LSP back to its originator is suppressed by this extension, the normal PSNP and CSNP procedures should correctly trigger re-origination by the source with a higher sequence number.¶
The flooding extension introduced in this document exhibits many desirable properties important for large production IS-IS networks.¶
This extension is designed to be deployable without initial configuration. Rather than confining the reduced flooding to a single tree or a small number of trees, it uses information about the origin of the fragment to spread the load across the whole topology.¶
This document outlines a flooding algorithm modification to the IS-IS protocol that is most useful at large scale or in high-density network topologies. The extension does not present any new attack vectors, even if nodes mount a Byzantine attack by signalling that they run the extension while still following standard behavior. As always, IS-IS implementations SHOULD implement IS-IS cryptographic authentication, as described in [RFC5304], and should enable other security measures in accordance with best common practices for the IS-IS protocol.¶
IANA is requested to allocate a TBD value in the 1-127 range, with the description "Modified MANET Reduction" and this document as the specification reference, in the "IGP Algorithm Type For Computing Flooding Topology" registry.¶
The following people have contributed to this draft or provided valuable comments and are mentioned in no particular order: Abhishek Kumar, Nikos Triantafillis, Ivan Pepelnjak, Christian Franke, Hannes Gredler, Les Ginsberg, Naiming Shen, Uma Chunduri, Nick Russo, Tony Li, Acee Lindem and Rodny Molina.¶
fn balancing_hash(systemid: [u8; 6], fragment_nr: u8) -> u32 {
    // Start from the fragment number shifted right by 3 bits, so that
    // 8 sequential fragments map onto the same flooding tree.
    let mut h: u32 = ((fragment_nr & 0xff) >> 3) as _;
    // Walk the system ID from the highest network-order byte: XOR each
    // byte into the current value, then shift the value left by 4 bits.
    h = systemid.iter().fold(h, |prev_value, byte| {
        (prev_value ^ (*byte as u32)) << 4
    });
    // Undo the final 4-bit shift.
    h >> 4
}
The hash has been designed to fit into an unsigned 32-bit integer while generating maximum entropy. When the hash is calculated, the following reference results are expected. The "modulo RNum" result is given for further reference.¶
ID: 01 02 03 04 05 06  fragment: 00  check 0x00123456 -> 0x00123456  modulo %(2,3,4,5,6) [0, 0, 2, 1, 0]
ID: 01 02 03 04 05 06  fragment: 07  check 0x00123456 -> 0x00123456  modulo %(2,3,4,5,6) [0, 0, 2, 1, 0]
ID: 01 02 03 04 05 06  fragment: 08  check 0x00023456 -> 0x00023456  modulo %(2,3,4,5,6) [0, 2, 2, 0, 2]
ID: 00 01 02 03 04 05  fragment: 07  check 0x00012345 -> 0x00012345  modulo %(2,3,4,5,6) [1, 0, 1, 0, 3]
ID: 01 02 03 04 05 07  fragment: 08  check 0x00023457 -> 0x00023457  modulo %(2,3,4,5,6) [1, 0, 3, 1, 3]
ID: FF 05 04 03 02 01  fragment: 00  check 0x0FF54321 -> 0x0FF54321  modulo %(2,3,4,5,6) [1, 0, 1, 0, 3]
ID: FF 05 04 03 02 01  fragment: FE  check 0x0E054321 -> 0x0E054321  modulo %(2,3,4,5,6) [1, 2, 1, 4, 5]
ID: FF 05 04 03 02 01  fragment: FD  check 0x0E054321 -> 0x0E054321  modulo %(2,3,4,5,6) [1, 2, 1, 4, 5]
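As a usage sketch (not part of the reference implementation), the function above can be checked against the first vector in the table:¶

fn main() {
    // First reference vector above: system ID 01 02 03 04 05 06 with
    // fragment 0 hashes to 0x00123456; with an RNL of four members the
    // modulo-4 result selects index 2.
    let h = balancing_hash([0x01, 0x02, 0x03, 0x04, 0x05, 0x06], 0x00);
    assert_eq!(h, 0x0012_3456);
    assert_eq!(h % 4, 2);
}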
Assume, in the network given in the example in Section 1.2.1, that 5A floods some modified LSP towards 4A-4F and that only a single node is used to re-flood. To determine whether 4A should flood this LSP to 3A-3F:¶