<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-kompella-rtgwg-mlnwsched-02" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="ML NW sched">Scheduling Network Resources for Machine Learning Clusters</title>

    <author initials="K." surname="Kompella" fullname="Kireeti Kompella">
      <organization>HPE</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94089</code>
          <country>United States of America</country>
        </postal>
        <email>kireeti.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="P." surname="Beeram" fullname="Vishnu Pavan Beeram">
      <organization>HPE</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94089</code>
          <country>United States of America</country>
        </postal>
        <email>vishnu-pavan-kumar.beeram@hpe.com</email>
      </address>
    </author>
    <author initials="A." surname="Mahale" fullname="Aditya Mahale">
      <organization>Meta</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94085</code>
          <country>United States of America</country>
        </postal>
        <email>aditya.ietf@gmail.com</email>
      </address>
    </author>
    <author initials="R." surname="Bhargava" fullname="Raghav Bhargava">
      <organization>Crusoe</organization>
      <address>
        <postal>
          <city>Sunnyvale</city>
          <region>California</region>
          <code>94085</code>
          <country>United States of America</country>
        </postal>
        <email>rbhargava@crusoe.ai</email>
      </address>
    </author>
    <author initials="N." surname="Geyer" fullname="Nikolas Geyer">
      <organization>CoreWeave</organization>
      <address>
        <postal>
          <city>Canberra</city>
          <region>ACT</region>
          <country>Australia</country>
        </postal>
        <email>ngeyer@coreweave.com</email>
      </address>
    </author>

    <date year="2026"/>

    <area>Routing</area>
    <workgroup>RTG WG</workgroup>
    <keyword>multipath, bandwidth, scheduling, ML clusters</keyword>

    <abstract>


<?line 91?>

<t>Large Language Models (LLMs) are pushing the boundaries of technology. The scale they have reached vastly exceeds the capacity of any single compute unit (XPU); this requires a distributed approach where multiple XPUs are connected via a "backend" network, sometimes in a single data center. We are approaching the point where the scale exceeds even that of a single data center, thus requiring multiple such data centers connected via a "data center interconnect" (DCI) network. Training and inferencing are expensive and critical operations; thus they are typically scheduled, i.e., the (compute) resources they need are carefully estimated, allocated and deployed so that these resources are used efficiently. However, while the compute investment in these LLM processing clusters dwarfs that in networks, it is becoming increasingly clear that the latter can greatly impact the former. This has been the focus of recent conferences, including the fantel Birds of a Feather meeting at IETF 123, @Scale: Networking 2025 and Open Compute Project 2025.</t>

<t>This memo proposes that the same care taken with the allocation of compute resources to jobs be taken with networking resources: that they be estimated, allocated and deployed alongside compute resources; that contingency plans be in place in case of network glitches; and that a holistic view be taken in order to optimize the completion times of training and inferencing jobs.</t>



    </abstract>



  </front>

  <middle>


<?line 97?>

<section anchor="intro"><name>Introduction</name>

<t>Large Language Models (LLMs) are pushing the industry to ever greater scale, both in training and in inference. This leads to more critical use of backend networks and higher stakes in producing timely results. A major lesson from recent work is that the network cannot be taken for granted: a dropped or delayed packet can delay, stall or even abort a Machine Learning (ML) job, requiring more effort in checkpointing and managing job restarts, dealing with network congestion, and dealing with network failures. These problems are exacerbated in multi-tenant clusters, where multiple jobs are run and job isolation becomes a key requirement. The fantel Birds of a Feather (BoF) meeting illustrated well the role the network plays in ML jobs, the potential for network events to disrupt jobs, and some early thoughts on how to handle these events. While the BoF was very successful in exposing these issues, we believe that adding a proactive approach would be beneficial; this can go hand in hand with the reactive approach of dealing effectively with network events.</t>

<t>This memo proposes that network resources be reserved/scheduled in coordination with the ML job scheduler, which is responsible for reserving compute resources (Central Processing Units [CPUs], Graphics Processing Units [GPUs], XPUs, memory, storage, ...). This is especially useful when multiple jobs are run in each cluster; examples include GPUaaS (GPU as a Service), running several inference jobs simultaneously, and multi-tenancy. Reserving network resources reduces the probability of some disruptive network events and improves job isolation. This is the network analog of reserving compute resources and ideally can be done at the same time. Essentially, when an ML job is scheduled, the "size" of the job (type of model, complexity of model, number of parameters, etc.) determines how many CPU/GPU/XPU cores are needed and how much memory and storage is needed; typically, the same parameters determine the amount of network resources needed during the various collective (i.e., inter-XPU) communication stages (Broadcast, AllReduce, Reduce, etc.). Job placement (i.e., which XPUs to allocate for this job) also determines the source(s) and destination(s) of the communication. If, at the time the job is scheduled, network resources are also reserved (and potentially, backup resources are put in place), the probability that network events will disrupt the job is reduced (although not eliminated). One can also set up the communication pathway and reserve resources when a collective communications API call (<xref target="MPI"/> or <xref target="NCCL"/> or the like) is made; this is especially relevant for long-running jobs where the time between communication phases can be long, and the phases vary from (say) Broadcast to AllReduce to quiescent.
Finally, if backup pathways for a given communication are set up, traffic can quickly be protected when a failure happens; in parallel, the sources can be notified of the failure so that they can reduce the traffic they send, build new end-to-end pathways or otherwise handle the failure.</t>

<t>The previous paragraph suggests a proactive methodology. Fast congestion notification and signaling constitutes a reactive methodology. These fit well together. One can couple network resource scheduling with fast event detection, signaling and mitigation for an overall much-reduced impact of network events on job progress.</t>

<section anchor="terminology"><name>Terminology</name>

<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.
<?line -6?></t>

<section anchor="definition-of-commonly-used-terms"><name>Definition of Commonly Used Terms</name>

<t>This section provides definitions for terms and abbreviations that are used in this memo.</t>

<dl>
  <dt>XPU:</dt>
  <dd>
    <t>one of several types of processing units: central processing unit (CPU), graphics processing unit (GPU), language processing unit (LPU), tensor processing unit (TPU) and the like. They fall under the category of "compute resources".</t>
  </dd>
  <dt>TE:</dt>
  <dd>
    <t>traffic engineering, a technology that allows the specification of constraints (such as "admin groups" or colors) to guide the layout of traffic paths (tunnels) in a network.</t>
  </dd>
  <dt>phop:</dt>
  <dd>
    <t>previous hop (of N): a node and link that feeds into junction N.</t>
  </dd>
  <dt>nhop:</dt>
  <dd>
    <t>next hop (of N): a node that is fed by N over a specified link.</t>
  </dd>
  <dt>MPTE:</dt>
  <dd>
    <t>multipath TE, a technology that combines all the features of TE while offering multipathing with weighted load balancing for unicast traffic</t>
  </dd>
  <dt>MCTE:</dt>
  <dd>
    <t>multicast TE, a technology that combines all the features of TE with load balancing for multicast traffic</t>
  </dd>
  <dt>ML:</dt>
  <dd>
    <t>machine learning, a powerful technique to learn from data without explicit programming, used to solve problems in AI.</t>
  </dd>
  <dt>junction:</dt>
  <dd>
    <t>a node in a DAG, with 0 or more phops, and 0 or more nhops. A junction with 0 phops is an ingress; a junction with 0 nhops is an egress. Other junctions are transit. A junction may be a unicast or a multicast junction. A DAG must have 1 or more ingresses, 1 or more egresses, and 0 or more transit junctions.</t>
  </dd>
  <dt>DSF:</dt>
  <dd>
    <t>disaggregated scheduled fabric, a methodology for packet spraying in networks with multipathing.</t>
  </dd>
  <dt>DCI:</dt>
  <dd>
    <t>data center interconnect</t>
  </dd>
  <dt>DAG:</dt>
  <dd>
    <t>directed acyclic graph</t>
  </dd>
</dl>

</section>
</section>
</section>
<section anchor="problem-statement"><name>Problem Statement</name>

<t>Consider the ML cluster <xref target="mlc-1"/>:</t>

<figure title="ML Cluster 1" anchor="mlc-1"><artwork><![CDATA[
        S1         .... S2 
      / ...\.......   /    \      Note: L1 & L2 are connected to S2;
    L1..    L2      L3      L4          L3 & L4 are connected to S1.
   /  \    /  \    /  \    /  \   All links are 400G links.
  X1  X2  X3  X4  X5  X6  X7  X8
]]></artwork></figure>

<t>The bottom layer consists of XPUs X1 through X8. The next layer up consists of "leaf" switches L1 through L4. The top layer consists of "spine" switches S1 and S2. All links between layers are 400Gbps; thus there is no oversubscription in the network, provided:</t>

<t><list style="numbers" type="1">
  <t>All XPUs are well-behaved.</t>
  <t>All switches load balance fairly and perfectly.</t>
</list></t>

<t>However, "fair" load balancing is insufficient unless the load balancing is done on a per-packet (or better, per-cell) basis ("packet spraying") <xref target="DSF"/>. If load balancing is done on a per-flow basis ("flow level multipathing"), it is highly unlikely to be perfectly balanced across the next hops, in which case one next hop may see too much traffic, leading to congestion, packet delays or even packet drops. Disaggregated Scheduled Fabric (DSF) uses per-packet or per-cell load balancing, but it comes at a cost, and may not scale (and scale is a big consideration in these networks).</t>

<t>With flow level multipathing, say X1 and X2 are both sending 400G of traffic to L1. L1 tries to load balance X1's traffic to S1 and S2 (in principle, 200G each). In practice, that may turn out to be 220G to S1 and 180G to S2. L1 does the same with X2's traffic; let's say this goes 190G to S1 and 210G to S2. The L1-S1 link will be congested, with 410G of traffic.</t>
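<t>For illustration only (the flow counts and the random hash below are invented, not part of any protocol), the following Python sketch simulates hash-based flow-level multipathing of 400G from each of X1 and X2 across uplinks to S1 and S2. The totals are conserved, but the per-uplink split is rarely exactly 200G/200G, which is how the L1-S1 link can end up carrying more than 400G.</t>
<figure><artwork><![CDATA[
```python
# Illustrative sketch: flow-level multipathing pins each flow to one
# next hop; with a finite number of flows the split is almost never even.
import random

def split_flows(num_flows, total_gbps, seed):
    """Hash each flow to one of two uplinks; return Gbps per uplink."""
    rng = random.Random(seed)
    per_flow = total_gbps / num_flows
    load = [0.0, 0.0]
    for _ in range(num_flows):
        load[rng.randrange(2)] += per_flow  # flow pinned to one next hop
    return load

x1 = split_flows(32, 400, seed=1)   # X1's 32 flows (count is hypothetical)
x2 = split_flows(32, 400, seed=2)   # X2's 32 flows
s1_total = x1[0] + x2[0]            # traffic offered to the L1-S1 link
s2_total = x1[1] + x2[1]
print(round(s1_total, 1), round(s2_total, 1))  # rarely exactly 400 / 400
```
]]></artwork></figure>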

<t>On the "downward" side (traffic going to the XPUs), there can be an "in-cast" problem: say both X1 and X3 are sending traffic to X6. In the worst case, each sends 400G for a total of 800G to X6, but the L3-X6 link can only transmit 400G. Thus, half the traffic will be dropped.</t>

<t>If the entire cluster (here, XPUs X1 through X8) is working on a single ML job, things are a bit simpler (but the issues remain). However, if this cluster is used for inferencing, or multi-tenant workloads, additional considerations arise. Tenant 1 (or inferencing job 1) (T1) may be using XPU X1 and part of X6; tenant 2 (or job 2) (T2) may be using XPU X3 and another part of X6.</t>

<t>If T1 and T2 simultaneously require communication to X6, there could be contention for the L3-X6 link. Again, this could lead to congestion, and hence delayed or dropped packets. But now, the issue is inter-tenant.</t>

<t>As stated in the Introduction <xref target="intro"/>, such delayed or dropped packets can have big consequences for the jobs that are running. Issues such as these are the motivation for DSF, packet spraying and fast congestion notification.</t>

<section anchor="collective-operation"><name>Collective Operation</name>

<t>Collective operations <xref target="CO"/> are used in distributed computing for the participating compute entities to exchange information. One example is the Message Passing Interface <xref target="MPI"/>; others are the NVIDIA Collective Communications Library <xref target="NCCL"/> and the ROCm Communication Collectives Library <xref target="RCCL"/>. These are used by the compute entities in a deep learning cluster to send information to each other, or as a group.</t>

<t>Collective operations include both unicast and multicast communications. Thus, in scheduling network resources, both patterns should be covered.</t>

</section>
<section anchor="compsched"><name>Compute Scheduling</name>

<t>In shared compute environments, such as a compute cluster or a cloud, a scheduler is commonly used to orchestrate access to compute resources. SLURM <xref target="SLURM"/> is a commonly used scheduler in Linux clusters; its documentation says "First, [SLURM] allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work." Another is KAI <xref target="KAI"/> which says "KAI Scheduler is a robust, efficient, and scalable Kubernetes scheduler that optimizes GPU resource allocation for AI and machine learning workloads." There are several other schedulers in common use.</t>

<t>A scheduler offers several features. The following are taken from SLURM:</t>

<t><list style="numbers" type="1">
  <t>Accounting</t>
  <t>Advanced reservation</t>
  <t>Gang scheduling (time sharing for parallel jobs)</t>
  <t>Backfill scheduling</t>
  <t>Topology optimized resource selection</t>
  <t>Resource limits by user or bank account</t>
  <t>Sophisticated multifactor job prioritization algorithms</t>
</list></t>

<t>KAI offers the following:</t>

<t><list style="numbers" type="1">
  <t>Batch Scheduling</t>
  <t>Bin Packing &amp; Spread Scheduling</t>
  <t>Workload Priority</t>
  <t>Hierarchical Queues</t>
  <t>Resource distribution</t>
  <t>Fairness Policies</t>
  <t>Workload Consolidation</t>
  <t>Elastic Workloads</t>
  <t>Dynamic Resource Allocation (DRA)</t>
  <t>GPU Sharing</t>
</list></t>

<t>To summarize, a compute scheduler allows effective and optimal sharing of compute resources among multiple tenants and multiple jobs, while ensuring fairness, enforcing limits and enabling accounting. Without a scheduler, multitenancy and multiple jobs would be impractical and chaotic.</t>

<t>Note that multi-tenancy is implicit. There may be ways to reserve resources for a particular tenant or group of tenants without allocating them, but the documentation doesn't say how.</t>

</section>
<section anchor="nwsched"><name>Network Scheduling</name>

<t>In shared network environments (which almost all networks are), a scheduler can be used to orchestrate access to network resources -- primarily bandwidth, but also highly prized links, QoS, etc.</t>

<t>The primary task of network resource scheduling is to reserve resources along a pathway (tunnel) from one or more XPUs (ingresses) to another set of XPUs (egresses). Note that the paradigm here is of uni-directional reservations; this is more general than bidirectional reservations, as the traffic requirements may not be symmetric.</t>

<t>Given that X1 wants to send 20Gbps to {X2, X3, X4}, one would create a tunnel from X1 to {X2, X3, X4} with 20Gbps capacity. Note that this traffic might be unicast (distributing different parts of a matrix to the recipients) or broadcast (distributing the same information to all). If further, one wanted to use certain links exclusively, one can color links in the network and state that this tunnel must/must not use links of a certain color. Thus, link coloring is a tool that network administrators can use to hold back links for a subset of job types. The compute analogy would be to hold back some XPUs, mark them "blue" and allow only a subset of jobs to use those XPUs.</t>
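<t>As an illustrative sketch (the link names, colors and capacities below are invented), constraint-based path selection can be thought of as pruning the topology before computing a path: a link is usable by a tunnel only if it has sufficient unreserved bandwidth and an acceptable color.</t>
<figure><artwork><![CDATA[
```python
# Hypothetical sketch of TE-style link pruning by bandwidth and color.
links = {
    ("L1", "S1"): {"avail_gbps": 400, "color": "blue"},
    ("L1", "S2"): {"avail_gbps": 150, "color": "blue"},
    ("L2", "S1"): {"avail_gbps": 400, "color": "red"},
}

def prune(links, need_gbps, include_colors=None, exclude_colors=()):
    """Return the subset of links usable under the tunnel's constraints."""
    out = {}
    for ends, attrs in links.items():
        if attrs["avail_gbps"] < need_gbps:
            continue                      # not enough unreserved bandwidth
        if include_colors and attrs["color"] not in include_colors:
            continue                      # tunnel must use these colors
        if attrs["color"] in exclude_colors:
            continue                      # tunnel must avoid these colors
        out[ends] = attrs
    return out

usable = prune(links, need_gbps=200, include_colors={"blue"})
print(sorted(usable))  # only the L1-S1 link satisfies both constraints
```
]]></artwork></figure>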

<t>Link coloring allows a provider to partition their network to optimally serve their customers. While links in a Clos network (as most ML clusters are) are perfectly symmetrical, once one gets into "distributed clusters" that are connected via DCI links, link coloring and other link attributes will find greater use.</t>

<t>Reserving bandwidth means that a particular job J1 (probably) won't step on another job J2's traffic. Say J1 is using a tunnel T1 with a reservation of 20G, and J2 is using a tunnel T2 with a reservation of 50G. The reservation procedure ensures any links T1 and T2 traverse in common have sufficient bandwidth for both T1 and T2 (and any other tunnels with reservations). Of course, J1 may use more than its allocated bandwidth; this can negatively impact J2. To reduce/prevent this, one can apply a policer at the ingress of J1's tunnels to ensure that J1 sends no more than its allocated share over each tunnel. This policer can drop traffic over the limit, or simply mark it as such, so that if the other jobs on a common link are not using their full quota, J1's traffic can go through.</t>
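<t>The admission and policing logic above can be sketched as follows; this is a simplified illustration (the numbers match the J1/J2 example, but the functions and policy are invented for exposition, not taken from any implementation).</t>
<figure><artwork><![CDATA[
```python
# Sketch: admission control keeps the sum of reservations on a link within
# its capacity; an ingress policer splits a tunnel's offered traffic into
# an in-profile part and a marked (over-limit) part.

def admit(link_capacity_gbps, reservations, new_gbps):
    """Admit a new reservation only if the link can still carry everyone."""
    return sum(reservations) + new_gbps <= link_capacity_gbps

def police(offered_gbps, reserved_gbps):
    """Return (in_profile, marked) Gbps; marked traffic is dropped only
    when the link is actually short of capacity."""
    in_profile = min(offered_gbps, reserved_gbps)
    marked = offered_gbps - in_profile
    return in_profile, marked

# J1 reserves 20G and J2 reserves 50G on a common 400G link: both admitted.
res = []
for want in (20, 50):
    if admit(400, res, want):
        res.append(want)

print(res)             # [20, 50]
print(police(35, 20))  # (20, 15): J1's 15G excess is marked, not reserved
```
]]></artwork></figure>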

<t>This last point is crucial for multi-tenancy. A provider who cannot provide hard (or at least soft) guarantees to their customers that they will in fact get the resources they asked (and paid) for will soon be out of business.</t>

<t>Elastic bandwidth is a very useful feature that goes along with elastic compute. If a job's requirements are: start me off with 5 XPUs, but expand that to 8 as the need arises, and shrink it back down to 5 when no longer needed, then the job's bandwidth requirements are likely to grow and shrink in tandem. Thus, in addition to making binding reservations, one must be able to adjust those reservations as needs change.</t>
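<t>A minimal sketch of such elastic adjustment follows; the thresholds and bounds are invented for illustration (real "auto-bandwidth" implementations differ in their sampling and resignaling policies).</t>
<figure><artwork><![CDATA[
```python
# Sketch: periodically resize a tunnel's reservation toward measured
# demand, bounded by configured min/max, resignaling only when the change
# is significant (to avoid churn).

def adjust(reserved, measured, lo=5, hi=100, threshold=0.1):
    """Return the new reservation given a measured demand sample."""
    target = max(lo, min(hi, measured))          # clamp to configured bounds
    if reserved and abs(target - reserved) / reserved <= threshold:
        return reserved                          # change too small; keep
    return target                                # resignal at the new size

r = 20
for sample in (22, 30, 80, 60, 3):               # demand grows, then shrinks
    r = adjust(r, sample)
print(r)  # ends at the configured floor of 5
```
]]></artwork></figure>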

<t>Finally, not all jobs (and all customers) are created equal. Priority and preemption are powerful tools in schedulers to give preference to certain jobs over others. Without these tools, a provider would be helpless if their cluster were overrun with low priority jobs. In addition, it would be nice to have a graceful way of managing preemption.</t>
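<t>One possible preemption policy can be sketched as follows (the policy and numbers are invented for illustration; real implementations use setup/holding priorities and may preempt "softly"): to admit a high-priority reservation on a full link, evict the lowest-priority tunnels first until enough bandwidth is freed.</t>
<figure><artwork><![CDATA[
```python
# Sketch: priority-based preemption on a single link.

def admit_with_preemption(capacity, tunnels, new_prio, new_bw):
    """tunnels: list of (priority, gbps); lower number = higher priority.
    Returns (admitted, preempted, resulting_tunnels)."""
    used = sum(bw for _, bw in tunnels)
    preempted = []
    # consider lowest-priority (largest number) tunnels for eviction first
    for t in sorted(tunnels, key=lambda t: -t[0]):
        if used + new_bw <= capacity:
            break                      # enough room has been freed
        if t[0] > new_prio:
            preempted.append(t)        # evict a strictly lower-priority tunnel
            used -= t[1]
    if used + new_bw > capacity:
        return False, [], tunnels      # cannot make room; reject
    remaining = [t for t in tunnels if t not in preempted]
    return True, preempted, remaining + [(new_prio, new_bw)]

# A 400G link is full; a new priority-2, 100G tunnel preempts the
# priority-7 and priority-5 tunnels but leaves priority 1 alone.
ok, evicted, now = admit_with_preemption(
    400, [(1, 200), (5, 150), (7, 50)], new_prio=2, new_bw=100)
print(ok, evicted)
```
]]></artwork></figure>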

<section anchor="traffic-engineering"><name>Traffic Engineering</name>

<t>All the features mentioned in the last section are available today, in bandwidth-aware traffic engineering (TE).</t>

<t>TE constraints allow a user to specify constraints on the path a tunnel will take. These can include administrative groups (colors), shared risk link groups (SRLGs), TE metric, other metrics such as delay, bandwidth reservations, and many others.</t>

<t>Bandwidth reservation allows the allocation of bandwidth resources to a tunnel. Policers are a useful adjunct to enforce limits.</t>

<t>Elastic bandwidth (aka "auto-bandwidth") allows a tunnel to dynamically adjust its reservations (within limits).</t>

<t>Priority and preemption are implemented by all vendors. Graceful preemption is possible using "soft preemption".</t>

<t>New traffic engineering parameters such as available buffer space, available queue-pairs for communication, etc. will be introduced and discussed in a future version of this memo, as well as in companion documents.</t>

</section>
<section anchor="multipathing"><name>Multipathing</name>

<t>There is one missing piece with "regular" TE: ML clusters (and Clos networks or fat trees in general) make heavy use of multipathing, and often have multiple ingresses and egresses for their communications. Current traffic engineering techniques focus on a single path from one ingress to one egress. However, a new technique for multipath TE that allows for multiple ingresses and egresses and multiple paths between them is being developed that has relevance here <xref target="I-D.kompella-teas-mpte"/>.</t>

</section>
</section>
<section anchor="comparing-compute-and-network-scheduling-features"><name>Comparing Compute and Network Scheduling Features</name>

<t>In this section, we look at compute scheduling features, and ask whether the corresponding feature exists in network scheduling.</t>

<texttable title="Comparing SLURM and Network Scheduling">
      <ttcol align='left'>SLURM - Compute Scheduling Features</ttcol>
      <ttcol align='left'>Network Scheduling (Feature Availability)</ttcol>
      <c>Accounting</c>
      <c>Yes</c>
      <c>Advanced reservation</c>
      <c>Yes (bandwidth calendaring)</c>
      <c>Gang scheduling</c>
      <c>Yes (primary effort is on compute)</c>
      <c>Backfill scheduling</c>
      <c>N/A</c>
      <c>Topology optimized resource selection</c>
      <c>Yes</c>
      <c>Resource limits by user or bank account</c>
      <c>Yes (via controller policy) (enforcement via policers)</c>
      <c>Sophisticated multifactor job prioritization algorithms</c>
      <c>No (maybe N/A)</c>
</texttable>

<texttable title="Comparing KAI and Network Scheduling">
      <ttcol align='left'>KAI features</ttcol>
      <ttcol align='left'>Network Scheduling (Feature Availability)</ttcol>
      <c>Batch Scheduling</c>
      <c>Yes (via multi-ingress/multi-egress tunnels)</c>
      <c>Bin Packing &amp; Spread Scheduling</c>
      <c>Yes ("least-fill", "max-fill")</c>
      <c>Workload Priority</c>
      <c>Yes</c>
      <c>Hierarchical Queues</c>
      <c>Yes (via QoS in the data plane)</c>
      <c>Resource distribution</c>
      <c>Yes (via tunnel priority)</c>
      <c>Fairness Policies</c>
      <c>Yes</c>
      <c>Workload Consolidation</c>
      <c>N/A</c>
      <c>Elastic Workloads</c>
      <c>Yes ("auto-bandwidth")</c>
      <c>Dynamic Resource Allocation (DRA)</c>
      <c>N/A (multivendor is a given)</c>
      <c>GPU Sharing</c>
      <c>Yes (link sharing)</c>
</texttable>

<t>As can be seen, almost all features are supported; network scheduling also supports some features that may not have analogs in compute scheduling.</t>

</section>
<section anchor="back-to-the-problem"><name>Back to the Problem</name>

<t>Back to <xref target="mlc-1"/>.</t>

<t>With flow level multipathing, say X1 and X2 both send 400G of traffic to L1. L1 tries to load balance X1's traffic to S1 and S2 (in principle, 200G each). In practice, that may turn out to be 220G to S1 and 180G to S2. However, L1 knows that it's only supposed to send 200G to S1 from X1. L1 adjusts its load balancing weights ("adaptive load balancing") until the traffic sent to each of S1 and S2 is 200G. L1 does the same with X2's traffic; if all works well, L1 will send a total of 400G to each of S1 and S2.</t>
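<t>The weight-adjustment loop can be sketched as follows; the gain, round count and convergence behavior are invented for illustration (real adaptive load balancing reacts to measured counters rather than a closed-form model).</t>
<figure><artwork><![CDATA[
```python
# Sketch: nudge per-uplink weights until measured traffic matches the
# reserved target (200G toward each of S1 and S2).  Simplification: the
# measured samples are used only for the total offered load.

def rebalance(weights, measured, target, gain=0.5, rounds=50):
    """Iteratively shift weight away from over-target next hops."""
    w = list(weights)
    total = sum(measured)                         # total offered load
    for _ in range(rounds):
        sent = [wi * total for wi in w]           # traffic per next hop
        for i in range(len(w)):
            w[i] *= 1 + gain * (target[i] - sent[i]) / sum(target)
        s = sum(w)
        w = [wi / s for wi in w]                  # renormalize
    return w

# start skewed 55/45 (220G/180G of 400G), aiming for 200G/200G
w = rebalance([0.55, 0.45], measured=[220, 180], target=[200, 200])
print([round(x, 3) for x in w])  # approaches [0.5, 0.5]
```
]]></artwork></figure>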

<t>On the "downward" side (traffic going to the XPUs), there can be an "in-cast" problem: say both X1 and X3 are sending traffic to X6. Now, X1 has a TE tunnel to X6 with only 200G; similarly for X3. So, in principle, the L3-X6 link should only carry 400G.</t>

<t>Reservations can be temporarily exceeded; that is equally true of compute reservations. Depending on the enforcement policies, an oversubscription situation should be temporary and is clearly visible (since accounting is easy), allowing more severe enforcement should it be persistent.</t>

</section>
</section>
<section anchor="proposal"><name>Proposal</name>

<t>Multipath TE (MPTE) <xref target="I-D.kompella-teas-mpte"/> has all the features of Traffic Engineering, including the above-mentioned TE constraints. However, whereas "regular" TE <xref target="RFC2702"/> considers a TE path with one ingress, one egress and a single path between them, MPTE allows multiple ingresses and egresses, and considers all paths between ingresses and egresses that meet the TE constraints. Thus, MPTE builds a directed acyclic graph (DAG) between ingresses and egresses. This allows traffic flowing over the MPTE DAG to be load balanced across these paths. Moreover, MPTE computes near optimal load balancing factors at each node; it does not simply use an equally weighted scheme.</t>
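<t>One simple way to picture unequal load-balancing factors on a DAG (this is an invented approximation for exposition, not the MPTE algorithm itself; the topology and capacities are hypothetical) is to weight each next hop of a junction by the bottleneck capacity of the subgraph behind it, rather than splitting equally.</t>
<figure><artwork><![CDATA[
```python
# Sketch: per-junction load-balancing factors proportional to downstream
# capacity on a small DAG (junction -> list of (nhop, link_capacity_gbps)).
dag = {
    "X1": [("L1", 400)],
    "L1": [("S1", 400), ("S2", 100)],
    "S1": [("L3", 400)],
    "S2": [("L3", 400)],
    "L3": [],      # egress junction: 0 next hops
}

def downstream(node):
    """Simplified capability of a junction toward the egress: sum over
    next hops of min(link capacity, the next hop's own capability)."""
    hops = dag[node]
    if not hops:
        return float("inf")            # egress absorbs everything
    return sum(min(cap, downstream(nh)) for nh, cap in hops)

def weights(node):
    """Per-next-hop load-balancing factors at `node`."""
    parts = [min(cap, downstream(nh)) for nh, cap in dag[node]]
    total = sum(parts)
    return [p / total for p in parts]

print(weights("L1"))  # [0.8, 0.2]: 400G can go via S1, only 100G via S2
```
]]></artwork></figure>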

<t>This memo proposes the use of MPTE to compute, set up and allocate bandwidth for unicast collective communication among compute nodes in a deep learning cluster.</t>

<t>Multicast TE (MCTE) uses similar constructs as MPTE (namely, DAGs and junctions) to set up point-to-multipoint and multipoint-to-multipoint tunnels among compute nodes. MCTE also obeys TE constraints and allocates bandwidth resources. Thus, whatever type of communication is required at the various phases of a deep learning job, there is a TE construct to allocate network resources and instantiate the communication pattern.</t>

<t>Both MPTE and MCTE can preprogram "backup" paths in case of a link or node failure.</t>

<t>We believe the use of MPTE and MCTE will reduce the incidence of congestion in a deep learning cluster. Of course, congestion can happen for a number of reasons, including network failures. Thus congestion notification will be needed; however, with the state installed in the network for the TE tunnels, a node X that detects a (link or node) failure knows exactly what tunnels are affected by a given failure and which ingress nodes to notify. Furthermore, X can quickly put in place a backup path to protect against that failure until the ingresses can either reduce the traffic they send, or compute alternate end-to-end tunnels.</t>

</section>
<section anchor="conclusion"><name>Conclusion</name>

<t>As mentioned in the Introduction, to make optimal use of deep learning clusters, especially when multiple jobs are run (e.g., multiple inference jobs) or multi-tenancy is in play, network scheduling takes on increasing importance as a proactive measure to prevent network events such as congestion. (This works orthogonally to packet spraying.) One can add fast network event notification as a reactive measure. Together, these techniques present a more holistic approach and should allow much better utilization of ML resources.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>None, for now.</t>

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>TBD</t>

</section>


  </middle>

  <back>


<references title='References' anchor="sec-combined-references">

    <references title='Normative References' anchor="sec-normative-references">



<reference anchor="RFC2119">
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname="S. Bradner" initials="S." surname="Bradner"/>
    <date month="March" year="1997"/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="2119"/>
  <seriesInfo name="DOI" value="10.17487/RFC2119"/>
</reference>
<reference anchor="RFC8174">
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname="B. Leiba" initials="B." surname="Leiba"/>
    <date month="May" year="2017"/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="8174"/>
  <seriesInfo name="DOI" value="10.17487/RFC8174"/>
</reference>

<reference anchor="I-D.kompella-teas-mpte">
   <front>
      <title>Multipath Traffic Engineering</title>
      <author fullname="Kireeti Kompella" initials="K." surname="Kompella">
         <organization>Juniper Networks</organization>
      </author>
      <author fullname="Luay Jalil" initials="L." surname="Jalil">
         <organization>Verizon</organization>
      </author>
      <author fullname="Mazen Khaddam" initials="M." surname="Khaddam">
         <organization>Cox Communications</organization>
      </author>
      <author fullname="Andy Smith" initials="A." surname="Smith">
         <organization>Oracle Cloud Infrastructure</organization>
      </author>
      <date day="7" month="July" year="2025"/>
      <abstract>
	 <t>   Shortest path routing offers an easy-to-understand, easy-to-implement
   method of establishing loop-free connectivity in a network, but
   offers few other features.  Equal-cost multipath (ECMP), a simple
   extension, uses multiple equal-cost paths between any two points in a
   network: at any node in a path (really, Directed Acyclic Graph),
   traffic can be (typically equally) load-balanced among the next hops.
   ECMP is easy to add on to shortest path routing, and offers a few
   more features, such as resiliency and load distribution, but the
   feature set is still quite limited.

   Traffic Engineering (TE), on the other hand, offers a very rich
   toolkit for managing traffic flows and the paths they take in a
   network.  A TE network can have link attributes such as bandwidth,
   colors, risk groups and alternate metrics.  A TE path can use these
   attributes to include or avoid certain links, increase path
   diversity, manage bandwidth reservations, improve service experience,
   and offer protection paths.  However, TE typically doesn&#x27;t offer
   multipathing as the tunnels used to implement TE usually take a
   single path.

   This memo proposes multipath traffic-engineering (MPTE), combining
   the best of ECMP and TE.  The multipathing proposed here need not be
   strictly equal-cost, nor the load balancing equally weighted to each
   next hop.  Moreover, the desired destination may be reachable via
   multiple egresses.  The proposal includes a protocol for signaling
   MPTE paths using various types of tunnels, some of which are better
   suited to multipathing.

	 </t>
      </abstract>
   </front>
   <seriesInfo name="Internet-Draft" value="draft-kompella-teas-mpte-01"/>
   
</reference>



    </references>

    <references title='Informative References' anchor="sec-informative-references">

<reference anchor="CO" target="https://en.wikipedia.org/wiki/Collective_operation">
  <front>
    <title>Collective operation</title>
    <author>
      <organization/>
    </author>
    <date year="2025" month="November"/>
  </front>
</reference>
<reference anchor="DSF" target="https://engineering.fb.com/2024/10/15/data-infrastructure/open-future-networking-hardware-ai-ocp-2024-meta">
  <front>
    <title>Disaggregated Scheduled Fabric</title>
    <author>
      <organization/>
    </author>
    <date year="2024" month="October"/>
  </front>
</reference>
<reference anchor="KAI" target="https://github.com/NVIDIA/KAI-Scheduler">
  <front>
    <title>KAI Scheduler</title>
    <author>
      <organization/>
    </author>
    <date year="n.d."/>
  </front>
</reference>
<reference anchor="MPI" target="https://www.mpi-forum.org/docs/mpi-5.0/mpi50-report.pdf">
  <front>
    <title>MPI: A Message-Passing Interface Standard, version 5.0</title>
    <author>
      <organization/>
    </author>
    <date year="2025" month="June" day="05"/>
  </front>
</reference>
<reference anchor="NCCL" target="https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/collectives.html">
  <front>
    <title>Collective Operations</title>
    <author>
      <organization/>
    </author>
    <date year="2020"/>
  </front>
</reference>
<reference anchor="RCCL" target="https://rocm.docs.amd.com/projects/rccl/en/latest/">
  <front>
    <title>ROCm Communication Collectives Library</title>
    <author>
      <organization/>
    </author>
    <date year="2025" month="October" day="31"/>
  </front>
</reference>
<reference anchor="SLURM" target="https://slurm.schedmd.com/overview.html">
  <front>
    <title>SLURM Workload Manager</title>
    <author>
      <organization/>
    </author>
    <date year="n.d."/>
  </front>
</reference>


<reference anchor="RFC2702">
  <front>
    <title>Requirements for Traffic Engineering Over MPLS</title>
    <author fullname="D. Awduche" initials="D." surname="Awduche"/>
    <author fullname="J. Malcolm" initials="J." surname="Malcolm"/>
    <author fullname="J. Agogbua" initials="J." surname="Agogbua"/>
    <author fullname="M. O'Dell" initials="M." surname="O'Dell"/>
    <author fullname="J. McManus" initials="J." surname="McManus"/>
    <date month="September" year="1999"/>
    <abstract>
      <t>This document presents a set of requirements for Traffic Engineering over Multiprotocol Label Switching (MPLS). It identifies the functional capabilities required to implement policies that facilitate efficient and reliable network operations in an MPLS domain. This memo provides information for the Internet community.</t>
    </abstract>
  </front>
  <seriesInfo name="RFC" value="2702"/>
  <seriesInfo name="DOI" value="10.17487/RFC2702"/>
</reference>



    </references>

</references>



  </back>

</rfc>

