<?xml version="1.0" encoding="US-ASCII"?>
<?xml-model href="rfc7991bis.rnc"?>  <!-- Required for schema validation and schema-aware editing -->


<rfc
  xmlns:xi="http://www.w3.org/2001/XInclude"
  category="info"
  docName="draft-filsfils-srv6ops-srv6-ai-backend-03"
  ipr="trust200902"
  obsoletes=""
  updates=""
  submissionType="IETF"
  xml:lang="en"
  version="3">

  <front>
    <title abbrev="SRv6 for AI Backends">SRv6 for Deterministic Path Placement in AI Backends</title>

    <author fullname="Clarence Filsfils" initials="C" surname="Filsfils">
      <organization>Cisco Systems</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>Belgium</country>
        </postal>

        <email>cf@cisco.com</email>
      </address>
    </author>

    <author fullname="Chris Martin" initials="C" surname="Martin">
      <organization>Oracle Cloud</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>United States of America</country>
        </postal>

        <email>christian.j.martin@oracle.com</email>
      </address>
    </author>

    <author fullname="Kiran Pillai" initials="K" surname="Pillai">
      <organization>IBM</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>United States of America</country>
        </postal>

        <email>Kiran.Pillai@ibm.com</email>
      </address>
    </author>

    <author fullname="Pablo Camarillo Garvia" initials="P" role="editor"
            surname="Camarillo">
      <organization>Cisco Systems</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>Spain</country>
        </postal>

        <email>pcamaril@cisco.com</email>
      </address>
    </author>

    <author fullname="Ahmed Abdelsalam" initials="A" surname="Abdelsalam">
      <organization>Cisco Systems</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>Italy</country>
        </postal>

        <email>ahabdels@cisco.com</email>
      </address>
    </author>

    <author fullname="Jeff Tantsura" initials="J" surname="Tantsura">
      <organization>NVIDIA</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>United States of America</country>
        </postal>

        <email>jefftant.ietf@gmail.com</email>
      </address>
    </author>

    <author fullname="Keyur Patel" initials="K" surname="Patel">
      <organization>Arrcus, Inc.</organization>

      <address>
        <postal>
          <street/>

          <city/>

          <region/>

          <code/>

          <country>United States of America</country>
        </postal>

        <email>keyur@arrcus.com</email>
      </address>
    </author>

    <date year="2026"/>

    <area>Routing</area>

    <workgroup>SRv6 Operations</workgroup>

    <keyword>SRv6</keyword>
    <keyword>AI Backend</keyword>

    <abstract>
      <t>This document describes the use of SRv6 to enable deterministic path placement in AI backends, optimizing load balancing and congestion control for predictable GPU workloads.</t>
    </abstract>
  </front>

  <middle>
    <section anchor="Intro" title="Introduction">
      <t>Hyperscale AI training clusters rely on massive GPU-to-GPU data exchanges, where synchronization delays caused by congestion and packet loss directly impact model convergence time and operational costs.</t>

      <t>These workloads generate <strong>large, predictable flows</strong> that require ultra-low latency, high bandwidth, and precise congestion control to maintain efficiency. Traditional networking approaches, such as ECMP-based per-flow load balancing, suffer from poor entropy due to the limited number of RoCEv2 flows, leading to fabric hotspots, congestion, and slow reconvergence after failures.</t>

      <t>SRv6 uSID (NEXT-CSID) provides the ability to steer traffic in the fabric, allowing the NIC (i.e., SmartNIC, DPU) to perform <strong>deterministic path placement of RoCEv2 traffic through the fabric</strong>. This ensures predictable performance, fine-grained traffic control, and real-time adaptation to congestion in a stateless manner.</t>

      <t>Future revisions of this draft will cover additional use cases (multi-path transport, virtual rail topologies, stateless interaction between an AI/LLM tenant leasing the cluster infrastructure and the operator managing the cluster, etc.).</t>

      <t>The document draft-filsfils-srv6ops-srv6-end-to-end-dc-frontend-wan-00 explains how SRv6 uSID (NEXT-CSID) is applied to an end-to-end DC Frontend and WAN fabric.</t>

      <section title="Requirements Language">
        <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
        "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
        "OPTIONAL" in this document are to be interpreted as described in BCP
        14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only
        when, they appear in all capitals, as shown here.</t>
      </section>
    </section>

    <section anchor="Term" title="Terminology">
      <dl>

        <dt>SRv6</dt><dd>Segment Routing over IPv6 <xref target="RFC8986"/>. </dd>
        <dt>uSID</dt>
        <dd>
          <t>Micro-segment. Formally defined as NEXT-CSID in <xref target="RFC9800"/>. </t>
          <t>The term <em>uSID (micro SID)</em> predates the formal naming and has been widely adopted across the industry, including by operators with large-scale deployments, vendors, and open-source implementations, and is used consistently in multi-vendor interoperability reports.</t>
          <t>To maintain alignment with the formal specification while also acknowledging the widespread and practical use of the term, this document uses uSID and NEXT-CSID interchangeably.</t>
        </dd>
        <dt>ECMP</dt><dd>Equal-Cost Multi-Path</dd>
        <dt>uN</dt><dd>The uN is a short notation for the End behavior with NEXT-CSID, PSP, and USD flavors as defined in <xref target="RFC9800"/>.</dd>
        <dt>uA</dt><dd>The uA local behavior is a short notation for the End.X behavior with NEXT-CSID, PSP, and USD flavors <xref target="RFC9800"/>.</dd>
        <dt>RoCEv2</dt><dd>RDMA over Converged Ethernet version 2 <xref target="IBTA-ROCEv2"/>.</dd>
        <dt>NIC</dt><dd>Network Interface Card, a hardware component that connects a computer to a network.</dd>
        <dt>SmartNIC</dt><dd>A Network Interface Card with embedded processing capabilities, designed to offload network and storage tasks from the host CPU.</dd>
        <dt>DPU</dt> <dd>Data Processing Unit, a specialized processor designed to offload and accelerate data-centric tasks, often used in network and storage functions.</dd>
        <dt>GPU</dt><dd>Graphics Processing Unit, a processor designed for rendering graphics and performing parallel computation tasks, commonly used for AI and machine learning workloads.</dd>
      </dl>
    </section>

    <section anchor="AItraffic" title="AI Traffic Characteristics and Challenges">
      <t>AI workloads exhibit highly structured traffic patterns:</t>
        <ul>
          <li><strong>Predictable Elephant Flows</strong>: Collective communications require multiple GPUs to exchange data in a structured manner that is known in advance. Flows between GPUs are large, long-lived, high-throughput, and predictable.</li>
          <li><strong>Synchronized Bursts</strong>: Model synchronization causes periodic, coordinated traffic spikes.</li>
          <li><strong>Low ECMP Entropy</strong>: Data exchange between GPUs relies on a small number of flows (RoCEv2 Queue Pairs), leading to poor performance of traditional load-balancing solutions. 5-tuple-based ECMP load balancing results in non-homogeneous utilization across the fabric, leading to congestion.</li>
          <li><strong>Resilience</strong>: Network failures prolong training time and significantly increase operational costs. Therefore, it is imperative for the network to provide high resiliency and fast reaction to congestion.</li>
      </ul>
    </section>

    <section anchor="SRv6Det" title="SRv6 for Deterministic Path Placement">
      <t>SRv6 enables the NIC to directly control the AI workload traffic journey through the fabric by encoding an ordered list of segments in the packet header.</t>
        <ul>
          <li><t><strong>AI Scheduler</strong>: Upon AI job orchestration, the collective communications are defined (i.e., the GPU topology). The AI Scheduler determines the optimal fabric routed paths based on all the jobs running in the fabric and the GPU topology of each one.</t>
            <ul>
              <li>The encoding of a path as an SRv6 Network Program <strong>does not require any per-path communication between the AI Scheduler and the fabric</strong>.</li>
              <li>At fabric bring-up, the controller managing the fabric communicates the overall topology together with the SRv6 uSID (NEXT-CSID) explicit instructions for each link (uA). These instructions are statically configured and thus independent of any routing protocol dynamic state. The AI Scheduler builds any path through the fabric without any further control-plane interaction with the routers.</li>
            </ul></li>
          <li><t><strong>NIC</strong>: The NIC, before sending the RoCEv2 traffic, encapsulates it with an outer IPv6 header and encodes in that header the sequence of instructions that enforces the precomputed path through the fabric.</t>
            <ul>
              <li>Note that an outer IPv6 header allows the encoding of 6 uSIDs in the Destination Address. This implies that even in the presence of a super-spine in a 3-tier Clos fabric, the entire path can be encoded without any additional Segment Routing Header (SRH).</li>
            </ul></li>
          <li><strong>Highly Scalable Stateless Fabric</strong>: The routers in the fabric enforce the path by following the sequence of SRv6 instructions in the packet header. There is <strong>no per-flow state in the network</strong> (unlike MPLS RSVP-TE which would require the instantiation of states in the fabric on a per GPU-to-GPU deterministic path basis).</li>
          <li><strong>Congestion Feedback Loop</strong>: The NICs react in real time to congestion notifications (ECN, in-band latency measurement, Packet Trimming, in-band packet loss). These mechanisms are preserved and leveraged by the solution to optimize traffic steering and prevent congestion hotspots. At any time (without any fabric signaling or dependency), within a few nanoseconds, the NIC can change the deterministic path through the fabric simply by changing the outer IPv6 Destination Address. The change is made only at the source NIC; no change is required at any of the intermediate devices in the fabric.</li>
      </ul>
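      <t>The NIC-side encoding described above can be illustrated with a short, non-normative Python sketch; the helper name and the 16-bit per-node identifiers are assumptions made for this illustration only.</t>
      <sourcecode type="python"><![CDATA[
```python
import ipaddress

# Non-normative sketch: pack up to 6 16-bit uSIDs into a single IPv6
# Destination Address under a 32-bit uSID block (here 5f00:0::/32).
# The helper name and node identifiers are illustrative assumptions.
USID_BLOCK = 0x5f000000  # high 32 bits of 5f00:0::/32

def usid_program(usids):
    """Encode up to 6 16-bit uSIDs immediately after the 32-bit block.

    Unused positions remain 0 (the End-of-Container marker).
    """
    if len(usids) > 6:
        raise ValueError("a plain IPv6 DA carries at most 6 uSIDs")
    value = USID_BLOCK << 96
    shift = 80  # first uSID sits right after the 32-bit block
    for u in usids:
        value |= u << shift
        shift -= 16
    return ipaddress.IPv6Address(value)

# Path via Leaf1 (0x0100), Spine5 (0x0500), Leaf3 (0x0300):
print(usid_program([0x0100, 0x0500, 0x0300]))  # 5f00:0:100:500:300::
```
]]></sourcecode>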
    </section>

    <section anchor="Illust" title="Illustration">
      <t>The following figure depicts a typical 2-tier Clos topology.</t>

      <figure align="left" anchor="topo" title="Reference Topology">
        <artwork align="left"><![CDATA[
          Spine5                      Spine6          
            |                           |             
   +--------+----+--------------+-------|-----+       
   |             |              |       |     |       
   |   +---------|---+----------|---+---+-----|----+   
   |   |         |   |          |   |         |    |   
+--+------+   +--+------+    +--+------+   +--+------+
|  Leaf1  |   |  Leaf2  |    |  Leaf3  |   |  Leaf4  |
+----+----+\ /+----+----+    +----+----+\ /+----+----+
     |      X      |              |      X      |     
     |     / \     |              |     / \     |     
     |    /   \    |              |    /   \    |     
+----+----+   +----+----+    +----+----+   +----+----+
|  DPU1   |   |  DPU2   |    |  DPU3   |   |  DPU4   |
|    |    |   |    |    |    |    |    |   |    |    |
|  GPU1   |   |  GPU2   |    |  GPU3   |   |  GPU4   |
+---------+   +---------+    +---------+   +---------+
]]></artwork>
      </figure>

      <t>The topology consists of two Spine devices (Spine5 and Spine6). Each of the Spines is connected to four Leaf devices.</t>
      <t>There are four NICs, each connected through the host interface (e.g., PCIe) to a GPU. In this example, each NIC is dual-homed to two Leaf devices.</t>

      <section anchor="provisioning" title="SRv6 Fabric Provisioning">
        <t>At day-0 cluster build-up (fabric bring-up), the topology is provisioned with SRv6 SIDs on the Spine and Leaf devices. These SIDs are statically configured and thus independent of any routing protocol dynamic state. The following is provisioned:</t>

        <ul>
          <li>SRv6 SID space in the fabric: 5f00:0::/32</li>
          <li>Leaf<strong>1</strong> instantiates the SID 5f00:0:0<strong>1</strong>00::/48 associated with the uN instruction (End with NEXT-CSID, PSP &amp; USD)</li>
          <li>Leaf<strong>2</strong> instantiates the SID 5f00:0:0<strong>2</strong>00::/48 associated with the uN instruction (End with NEXT-CSID, PSP &amp; USD)</li>
          <li>Leaf<strong>3</strong> instantiates the SID 5f00:0:0<strong>3</strong>00::/48 associated with the uN instruction (End with NEXT-CSID, PSP &amp; USD)</li>
          <li>Leaf<strong>4</strong> instantiates the SID 5f00:0:0<strong>4</strong>00::/48 associated with the uN instruction (End with NEXT-CSID, PSP &amp; USD)</li>
          <li>Spine<strong>5</strong> instantiates the SID 5f00:0:0<strong>5</strong>00::/48 associated with the uN instruction (End with NEXT-CSID, PSP &amp; USD)</li>
          <li>Spine<strong>6</strong> instantiates the SID 5f00:0:0<strong>6</strong>00::/48 associated with the uN instruction (End with NEXT-CSID, PSP &amp; USD)</li>
        </ul>
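        <t>The provisioning above follows a simple pattern: each node's uN SID is the 32-bit fabric block followed by a 16-bit node identifier. The following non-normative Python sketch derives the SIDs (function name and identifiers are assumptions of this illustration):</t>
        <sourcecode type="python"><![CDATA[
```python
import ipaddress

# Non-normative sketch: derive each node's uN /48 SID from the
# fabric-wide uSID block 5f00:0::/32 and a 16-bit node identifier.
BLOCK = int(ipaddress.IPv6Address("5f00::"))

def node_un_sid(node_id):
    # The 16-bit node ID sits immediately after the 32-bit block.
    return ipaddress.IPv6Network((BLOCK | (node_id << 80), 48))

NODES = {"Leaf1": 0x0100, "Leaf2": 0x0200, "Leaf3": 0x0300,
         "Leaf4": 0x0400, "Spine5": 0x0500, "Spine6": 0x0600}

for name, node_id in NODES.items():
    print(name, node_un_sid(node_id))  # e.g., Leaf1 5f00:0:100::/48
```
]]></sourcecode>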
      </section>

      <section anchor="srv6pathsel" title="SRv6-Based Deterministic Path Selection">
        <t>An AI job is being orchestrated in the fabric. As a result of the AI orchestration and the collective communication pattern, GPU1 and GPU2 must periodically send traffic to GPU3.</t>
        <t>The AI orchestration, based on the network topology, computes the paths that achieve homogeneous utilization in the fabric to avoid congestion:</t>
        <ul>
          <li>GPU1->GPU3: via Leaf1, Spine5, Leaf3</li>
          <li>GPU2->GPU3: via Leaf2, Spine6, Leaf4</li>
        </ul>
        <t>Upon AI job computation (at GPU synchronization time):</t>
        <ul>
          <li><t>NIC1: creates a RoCEv2 packet that must be sent to NIC3. NIC1 encapsulates the RoCEv2 packet with an outer IPv6 header (H.Encaps.Red behavior).</t>
          <ul>
            <li>IPv6 DA: 5f00:0:0100:0500:0300::</li>
            <li>The packet has no SRH.</li>
          </ul></li>
          <li><t>Leaf1:</t>
          <ul>
            <li>Packet in: (IPv6. DA=5f00:0:0100:0500:0300::)(RoCEv2)</li>
            <li>Leaf1 has the SID 5f00:0:0100::/48 instantiated with the End with NEXT-CSID, PSP &amp; USD behavior. As a result, it shifts the DA, looks up the new DA, and forwards the packet.</li>
            <li>Packet out: (IPv6. DA=5f00:0:0500:0300::)(RoCEv2)</li>
          </ul></li>
          <li><t>Spine5:</t>
          <ul>
            <li>Packet in: (IPv6. DA=5f00:0:0500:0300::)(RoCEv2)</li>
            <li>Spine5 has the SID 5f00:0:0500::/48 instantiated with the End with NEXT-CSID, PSP &amp; USD behavior. As a result, it shifts the DA, looks up the new DA, and forwards the packet.</li>
            <li>Packet out: (IPv6. DA=5f00:0:0300::)(RoCEv2)</li>
          </ul></li>
          <li><t>Leaf3:</t>
          <ul>
            <li>Packet in: (IPv6. DA=5f00:0:0300::)(RoCEv2)</li>
            <li>Leaf3 has the SID 5f00:0:0300::/48 instantiated with the End with NEXT-CSID, PSP &amp; USD behavior. As a result, it removes the outer IPv6 header (USD flavor) and forwards the inner packet.</li>
            <li>Packet out: (RoCEv2)</li>
          </ul></li>
          <li>NIC3: receives the RoCEv2 packet, processes it, and passes the data to GPU3.</li>
        </ul>
        <t><strong>Note that Leaf1, Spine5, and Leaf3 do not hold any state for this specific flow</strong>. It is a single uSID instruction per node instantiated upon cluster build-up and reused by all flows.</t>
        <t>The traffic from GPU2 to GPU3 leverages the path via Leaf2, Spine6, and Leaf4. It does so by using the uSID network program 5f00:0:0200:0600:0400::.</t>
        <t>While this example uses uN instructions, the path can also be encoded using uA instructions that specify the sequence of links.</t>
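        <t>The per-hop shift behavior walked through above can be modeled with a non-normative Python sketch (the match of the incoming DA against the node's own SID is omitted for brevity, and the function name is an assumption):</t>
        <sourcecode type="python"><![CDATA[
```python
import ipaddress

BLOCK_BITS = 32  # uSID block 5f00:0::/32
USID_BITS = 16
REST_BITS = 128 - BLOCK_BITS  # bits carrying the uSID container

def un_shift(da):
    """Model the uN shift-and-forward on the IPv6 DA.

    Consumes the active 16-bit uSID and shifts the remaining ones
    toward the block. Returns the new DA, or None when the next uSID
    is the End-of-Container, i.e., the node decapsulates (USD).
    """
    value = int(ipaddress.IPv6Address(da))
    block = (value >> REST_BITS) << REST_BITS
    rest = value & ((1 << REST_BITS) - 1)
    shifted = (rest << USID_BITS) & ((1 << REST_BITS) - 1)
    if shifted >> (REST_BITS - USID_BITS) == 0:  # next uSID == 0
        return None
    return ipaddress.IPv6Address(block | shifted)

da = "5f00:0:100:500:300::"   # built by NIC1
da = str(un_shift(da))        # Leaf1  -> 5f00:0:500:300::
da = str(un_shift(da))        # Spine5 -> 5f00:0:300::
print(un_shift(da))           # Leaf3  -> None (decapsulate)
```
]]></sourcecode>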
      </section>

      <section anchor="adaptiverouting" title="Adaptive Routing with congestion feedback">
        <t>Assume that, during the execution of the AI job, Spine5 experiences congestion. NIC1 learns about the congestion on Spine5.</t>
        <t>Within microseconds, without any fabric signaling or new state at intermediate devices, NIC1 steers the traffic onto a different path through the fabric. NIC1 switches the path from &lt;Leaf1, Spine5, Leaf3&gt; to &lt;Leaf1, Spine6, Leaf3&gt;. This is done simply by encapsulating any new traffic of the flow GPU1->GPU3 with the IPv6 DA 5f00:0:0100:0600:0300::.</t>
        <t><strong>Note that the change of path is instantaneous. There is no routing protocol or control plane notification to the network devices to change the path.</strong> The fabric is entirely stateless, and the packet path is encoded into the IPv6 header built by the source NIC. This is essential as AI workloads cannot be exposed to slow reconvergence.</t>
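        <t>The reroute is purely a source-side change of the outer Destination Address; a non-normative Python sketch (helper name and node identifiers are illustrative assumptions):</t>
        <sourcecode type="python"><![CDATA[
```python
import ipaddress

# Non-normative sketch: rerouting is a source-side choice of outer DA.
# The 16-bit node identifiers follow this document's illustration.
def build_da(path):
    value = 0x5f000000 << 96  # uSID block 5f00:0::/32
    shift = 80
    for node_id in path:
        value |= node_id << shift
        shift -= 16
    return str(ipaddress.IPv6Address(value))

LEAF1, LEAF3 = 0x0100, 0x0300
SPINE5, SPINE6 = 0x0500, 0x0600

primary = build_da([LEAF1, SPINE5, LEAF3])  # 5f00:0:100:500:300::
# Upon congestion feedback about Spine5, swap only the spine waypoint:
reroute = build_da([LEAF1, SPINE6, LEAF3])  # 5f00:0:100:600:300::
```
]]></sourcecode>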
      </section>
    </section>

    <section anchor="benefits" title="Benefits">
      <ul>
        <li><strong>Deterministic Path Placement</strong>: SRv6 allows the NIC to control the path of each flow through the fabric.</li>
        <li><strong>Minimal MTU Overhead</strong>: A plain outer IPv6 encapsulation allows the encoding of 6 uSIDs in the outer DA. This implies that, without any additional extension headers and with only 40 bytes of IPv6 encapsulation, up to 6 intermediate waypoints can be encoded, enough to enforce a path in a 3-tier Clos network. This is sufficient to control a path hop by hop (link by link) through a leaf, spine, super-spine, spine, and leaf.</li>
        <li><strong>Congestion Feedback Loop</strong>: Instant rerouting at the source based on ECN, in-band measured one-way and two-way latency, Packet Trimming feedback, and in-band packet loss, without any dependency on routing protocols. There is no control-plane signaling between the GPU and the fabric, nor between the AI orchestrator and the fabric devices.</li>
        <li><strong>Standardization</strong>: Open, vendor-agnostic implementation.</li>
        <li><strong>Ease of Operation</strong>: As opposed to black-box proprietary solutions that pack opaque Layer 2 optimizations, the SRv6 solution is minimalistic, IP-based, fully standardized, and backed by a rich ecosystem (vendor, merchant silicon, and open source). The deterministic and open nature of the solution simplifies troubleshooting.</li>
      </ul>
    </section>

    <section anchor="hyperscale" title="Hyperscale">
      <t>AI workloads are deployed across thousands of GPUs in multi-tier Clos networks, requiring a networking architecture that scales efficiently. SRv6 uSID (NEXT-CSID) ensures deterministic path placement while maintaining scalability through the following mechanisms:</t>
      <ul>
        <li><strong>Stateless Fabric</strong>: Unlike RSVP-TE or MPLS-TE, which require per-flow state on network devices, SRv6 enforces paths by including all the instructions in the packet header. This eliminates state explosion as the number of GPUs increases.</li>
        <li><strong>uSID Encapsulation</strong>: The SRv6 uSID (NEXT-CSID) encoding allows paths to be efficiently encoded even in multi-tier topologies, reducing encapsulation overhead while supporting large deployments. If more than 6 instructions are required, a Segment Routing Header (SRH) can be used to encode additional instructions.</li>
        <li><strong>Cross-Datacenter Extension</strong>: The same SRv6-based mechanism can extend beyond a single cluster to multi-datacenter AI fabrics (i.e., inter-DC AI training), where deterministic path placement ensures efficient inter-cluster data transfers.</li>
        <li><strong>Overlay Tenant Separation</strong>: SRv6 can provide per-tenant network segmentation, ensuring that AI workloads from different tenants or jobs are isolated while sharing the same physical infrastructure. By adding VPN service SIDs to the network program, traffic steering and resource allocation can be enforced at the network level without requiring additional overlay encapsulations.</li>
      </ul>
    </section>

    <section anchor="Security" title="Security Considerations">
      <t>The deployment model described in this document is secured by leveraging the mechanisms defined in <xref target="RFC8986"/>.</t>
    </section>

    <section anchor="ACK" title="Acknowledgements">
      <t>The authors would like to recognize the work of Lihua Yuan, Guohan Lu, Rita Hui, and Riff Jiang at Microsoft.</t>
      <t>Rita Hui presented this use-case at MPLS &amp; SRv6 World Congress in March 2025. A recording is available here: <eref target="https://www.segment-routing.net/conferences/Paris25-Microsoft-Rita-Hui/"/></t>
      <t>The authors would like to acknowledge the work of the developers who have enabled this use-case in the open-source <xref target="SONiC" /> implementation. In particular: Carmine Scarpitta, Abhishek Dosi, Changrong Wu, Kumaresh Perumal, Eddie Ruan, Yuqing Zhao, Rajasekar Raja, and Vivek Venkatraman.</t>
    </section>
  </middle>

  <back>
    <references title="Normative References">
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8986.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9800.xml"/>
    </references>

    <references title="Informative References">
      <reference anchor="IBTA-ROCEv2"
                 target="https://web.archive.org/web/20200917012109/https://cw.infinibandta.org/document/dl/7781">
        <front>
          <title>InfiniBand Architecture Specification Volume 1, Release 1.2.1, Annex A17: ROCEv2</title>

          <author>
            <organization>InfiniBand Trade Association</organization>
          </author>

          <date month="September" day="2" year="2014"/>
        </front>
      </reference>
      <reference anchor="SONiC" target="https://sonicfoundation.dev/">
        <front>
          <title>SONiC</title>
          <author>
            <organization>Linux Foundation</organization>
          </author>
        </front>
      </reference>
    </references>
  </back>
</rfc>
