<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.31 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-yl-bmwg-cats-03" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="BM for CATS ">Benchmarking Methodology for Computing-aware Traffic Steering</title>
    <seriesInfo name="Internet-Draft" value="draft-yl-bmwg-cats-03"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="P." surname="Liu" fullname="Peng Liu">
      <organization>China Mobile</organization>
      <address>
        <email>liupengyjy@chinamobile.com</email>
      </address>
    </author>
    <author initials="G." surname="Zeng" fullname="Guanming Zeng">
      <organization>Huawei</organization>
      <address>
        <email>zengguanming@huawei.com</email>
      </address>
    </author>
    <author initials="X." surname="Yi" fullname="Xinxin Yi">
      <organization>China Unicom</organization>
      <address>
        <email>yixx3@chinaunicom.cn</email>
      </address>
    </author>
    <author initials="Q." surname="Xiong" fullname="Quan Xiong">
      <organization>ZTE</organization>
      <address>
        <email>xiong.quan@zte.com.cn</email>
      </address>
    </author>
    <author initials="M.-N." surname="Tran" fullname="Minh-Ngoc Tran">
      <organization>ETRI</organization>
      <address>
        <email>mipearlska@etri.re.kr</email>
      </address>
    </author>
    <date year="2026" month="March" day="02"/>
    <workgroup>bmwg</workgroup>
    <abstract>

<t>Computing-aware Traffic Steering (CATS) is a traffic engineering approach based on the awareness of both computing and network information. This document proposes benchmarking methodologies for CATS.</t>
    </abstract>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
<t>Computing-aware Traffic Steering (CATS) is a traffic engineering approach that considers both computing and network metrics in order to select appropriate service instances. Latency-sensitive, throughput-sensitive, and compute-intensive applications, as described in <xref target="I-D.ietf-cats-usecases-requirements"/>, need CATS to guarantee effective instance selection. A general CATS framework <xref target="I-D.ietf-cats-framework"/> provides implementation guidance. However, since many computing and network metrics can be selected for traffic steering, as proposed in <xref target="I-D.ietf-cats-metric-definition"/>, benchmarking test methods are required to validate the effectiveness of different CATS metrics. In addition, there are different deployment approaches, i.e., the distributed approach, the centralized approach, and the hybrid approach, as well as multiple objectives for instance selection, for example, the instance with the lowest end-to-end latency or the highest system utilization. The benchmarking methodology proposed in this document is essential for guiding CATS implementation.</t>
    </section>
    <section anchor="definition-of-terms">
      <name>Definition of Terms</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>CATS: Computing-aware Traffic Steering</t>
        </li>
        <li>
          <t>C-PS: CATS Path Selector</t>
        </li>
      </ul>
      <t>This document further defines:</t>
      <ul spacing="normal">
        <li>
          <t>CATS Router: Router that supports CATS mechanisms for traffic engineering.</t>
        </li>
        <li>
          <t>ECMP: Equal cost multi-path routing</t>
        </li>
      </ul>
    </section>
    <section anchor="test-methodology">
      <name>Test Methodology</name>
      <section anchor="test-setup">
        <name>Test Setup</name>
        <t>The test setup in general is compliant with <xref target="RFC2544"/>. As mentioned in the introduction, there are basically three approaches for CATS deployment: the centralized approach, the distributed approach, and the hybrid approach.
The difference primarily lies in how CATS metrics are collected and distributed into the network and, accordingly, where the CATS Path Selector (C-PS) is placed to make decisions, as defined in <xref target="I-D.ietf-cats-framework"/>.</t>
        <section anchor="test-setup-centralized-approach">
          <name>Test Setup - Centralized Approach</name>
          <t><xref target="centralized-test-setup"/> shows the test setup of the centralized approach to implement CATS. The centralized test setup is similar to the Software Defined Networking(SDN) standalone mode test setup defined in <xref target="RFC8456"/>. The DUT locates at the same place with the SDN controller. In the centralized approach, SDN controller takes both the roles of CATS metrics collection and the decision making for instance selection as well as traffic steering. The application plane test emulator is connected with the forwarding plane test emulator via interface 2(I2). The SND controller is connected to Edge server manager via interface 4(I4). The interface(I1) of the SDN controller is connected with the forwarding plane. Service request is sent from application to the CATS ingress router through I2. CATS metrics are collected from Edge server manager via I4. The traffic steering polocies are configured through I1. 
In the forwarding plane, CATS router 1 serves as the ingress node and is connected with the host which is an application plane emulator. CATS router 2 and CATS router 3 serve as the egress nodes and are connected with two edge servers respectively. Both of the edge servers are connected with edge server manager via I3. I3 is an internal interface for CATS metrics collection within edge sites.</t>
          <figure anchor="centralized-test-setup">
            <name>Centralized Test Setup</name>
            <artwork><![CDATA[
      +-----------------------------------------------+
      |       Application-Plane Test Emulator         |
      |                                               |
      |   +-----------------+      +-------------+    |
      |   |   Application   |      |   Service   |    |
      |   +-----------------+      +-------------+    |
      |                                               |
      +-+(I2)-----------------------------------------+
        |
        | 
        |   +-------------------------------+    +-------------+
        |   |       +----------------+      |    |             |
        |   |       | SDN Controller |      |    |     Edge    | 
        |   |       +----------------+      |----|    Server   |
        |   |                               | I4 |    Manager  |
        |   |    Device Under Test (DUT)    |    |             | 
        |   +-------------------------------+    +--------+----+
        |             |                                   |
        |             |                                   |
      +-+------------+(I1)--------------------------+     |
      |                                             |     |
      |         +------------+                      |     |
      |         |    CATS    |                      |     |
      |         |   Router  1|                      |     | I3
      |         +------------+                      |     |
      |         /            \                      |     |
      |        /              \                     |     |
      |    l0 /                \ ln                 |     |
      |      /                  \                   |     |
      |    +------------+  +------------+           |     |
      |    |    CATS    |  |    CATS    |           |     |
      |    |  Router 2  |..|   Router 3 |           |     |
      |    +------------+  +------------+           |     |
      |          |                |                 |     |
      |    +------------+  +------------+           |     |
      |    |   Edge     |  |   Edge     |           |     |
      |    |  Server 1  |  |  Server 2  |           |     |
      |    |   (ES1)    |  |   (ES2)    |           |     |
      |    +------------+  +------------+           |     |
      |          |               |                  |     |
      |          +---------------+------------------------+    
      |     Forwarding-Plane Test Emulator          |
      +---------------------------------------------+
]]></artwork>
          </figure>
        </section>
        <section anchor="test-setup-distributed-approach">
          <name>Test Setup - Distributed Approach</name>
          <t><xref target="distributed-test-setup"/> shows the test setup of the distributed approach to implement CATS. In the distributed test setup, The DUT is the group of CATS routers, since the decision maker is the CATS ingress node, namely CATS router 1. CATS egress nodes, CATS router 2 and 3, take the role of collecting CATS metrics from edge servers and distribute these metrics towards other CATS routers. Service emulators from application plane is connected with the control-plane and forwarding-plane test emulator through the interface 1.</t>
          <figure anchor="distributed-test-setup">
            <name>Distributed Test Setup</name>
            <artwork><![CDATA[
      +---------------------------------------------+
      |       Application-Plane Test Emulator       |
      |                                             |
      |   +-----------------+      +-------------+  |
      |   |   Application   |      |   Service   |  |
      |   +-----------------+      +-------------+  |
      |                                             |
      +---------------+-----------------------------+
                      |  
                      |                                   
      +---------------+(I1)-------------------------+     
      |                                             |
      |   +--------------------------------+        |
      |   |      +------------+            |        |
      |   |      |    CATS    |            |        |
      |   |      |   Router  1|            |        |
      |   |      +------------+            |        | 
      |   |      /            \            |        |
      |   |     /              \           |        |
      |   | l0 /                \ ln       |        |
      |   |   /                  \         |        |
      |   | +------------+  +------------+ |        |
      |   | |    CATS    |  |    CATS    | |        |
      |   | |  Router 2  |..|   Router 3 | |        |
      |   | +------------+  +------------+ |        |
      |   |      Device Under Test (DUT)   |        |
      |   +--------------------------------+        |
      |        |                |                   |
      |    +------------+  +------------+           |
      |    |   Edge     |  |   Edge     |           |
      |    |  Server 1  |  |  Server 2  |           |
      |    |   (ES1)    |  |   (ES2)    |           |
      |    +------------+  +------------+           |       
      |           Control-Plane and                 |
      |      Forwarding-Plane Test Emulator         |
      +---------------------------------------------+
]]></artwork>
          </figure>
        </section>
        <section anchor="test-setup-hybrid-approach">
          <name>Test Setup - Hybrid Approach</name>
          <t>As explained in <xref target="I-D.ietf-cats-framework"/>, the hybrid model is a combination of the distributed and centralized models. In the hybrid model, some stable CATS metrics are distributed among the involved network devices, while other frequently changing CATS metrics may be collected by a centralized SDN controller. Meanwhile, the service scheduling function can be performed by an SDN controller and/or CATS router(s). The entire or partial C-PS function may be implemented in the centralized control plane, depending on the specific implementation and deployment. The test setup of the hybrid model also follows <xref target="centralized-test-setup"/> as defined in the previous section.</t>
        </section>
      </section>
      <section anchor="control-plane-and-forwarding-plane-support">
        <name>Control Plane and Forwarding Plane Support</name>
        <t>In the centralized approach, both the control plane and the forwarding plane follow the Segment Routing pattern, i.e., SRv6 <xref target="RFC8986"/>. The SDN controller configures SRv6 policies based on the awareness of CATS metrics, and traffic is steered through SRv6 tunnels built between CATS ingress nodes and CATS egress nodes. The collection of CATS metrics in the control plane is done through a RESTful API or similar signalling protocols between the SDN controller and the edge server manager.</t>
        <t>In the distributed approach, in terms of the control plane, eBGP <xref target="RFC4271"/> is established between CATS egress nodes and edge servers, and iBGP <xref target="RFC4271"/> is established between CATS egress nodes and CATS ingress nodes. BGP is chosen to distribute CATS metrics within the network domain, from the edge servers to the CATS ingress node. Carrying CATS metrics is implemented through extensions of BGP, following the definitions of <xref target="I-D.ietf-idr-5g-edge-service-metadata"/>. Some examples of sub-TLV definitions are:</t>
        <ul spacing="normal">
          <li>
            <t>Delay sub-TLV: The processing delay within edge sites and the transmission delay in the network.</t>
          </li>
          <li>
            <t>Site Preference sub-TLV: The priority of edge sites.</t>
          </li>
          <li>
            <t>Load sub-TLV: The available compute capability of each edge site.</t>
          </li>
        </ul>
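        <t>As a purely hypothetical illustration of the sub-TLV pattern (the actual type codes, lengths, and field layouts are defined in <xref target="I-D.ietf-idr-5g-edge-service-metadata"/> and are not reproduced here), a generic type-length-value encoding can be sketched as follows:</t>
        <sourcecode type="python"><![CDATA[
import struct

def encode_sub_tlv(tlv_type, value_bytes):
    """Generic TLV: 1-byte type, 1-byte length, then the value.
    The type codes used below are placeholders, not assigned values."""
    return struct.pack("!BB", tlv_type, len(value_bytes)) + value_bytes

# Hypothetical Delay sub-TLV carrying a 4-byte delay in microseconds.
delay_tlv = encode_sub_tlv(0x01, struct.pack("!I", 1500))
print(delay_tlv.hex())  # 0104000005dc
]]></sourcecode>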
        <t>Other sub-TLVs can be gradually defined according to the CATS metrics defined in <xref target="I-D.ietf-cats-metric-definition"/>.</t>
        <t>In the hybrid approach, the metric distribution follows the control plane settings of both the centralized and the distributed approaches, according to which metrics are required to be distributed centrally and which in a distributed manner.</t>
        <t>In terms of the forwarding plane, SRv6 tunnels are enabled between CATS ingress nodes and CATS egress nodes.</t>
        <t>In all of the approaches, service flows are routed towards service instances using anycast IP addresses.</t>
      </section>
      <section anchor="topology">
        <name>Topology</name>
        <t>For all of the approaches to test CATS performance in laboratory environments, implementors consider only a single-domain realization, that is, all CATS routers are within the same AS. There are no further special requirements for specific topologies.</t>
      </section>
      <section anchor="device-configuration">
        <name>Device Configuration</name>
        <t>Before the tests, some pre-configurations need to be completed.
First, in all of the approaches, the application plane functionality must be set up: CATS services must be deployed on edge servers before the tests, and hosts that send service requests must also be set up.</t>
        <t>Second, the CATS metrics collector must be set up.
In the centralized approach and the hybrid approach, the CATS metrics collector first needs to be set up in the edge server manager. A typical example of the collector is the monitoring components of Kubernetes, which can periodically collect different levels of CATS metrics. Then the connection between the edge server manager and the SDN controller must be established; one example is to use a RESTful API or the ALTO protocol for CATS metrics publication and subscription.</t>
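        <t>A minimal sketch of the metrics report such a collector might publish to the SDN controller (the field names and values below are assumptions for illustration, not an agreed format):</t>
        <sourcecode type="python"><![CDATA[
import json
import time

def build_metrics_report(site_id, cpu_load, mem_load):
    """Assemble a CATS metrics report for publication,
    e.g., over a RESTful publish-subscribe channel."""
    return json.dumps({
        "site": site_id,
        "timestamp": int(time.time()),
        "metrics": {"cpu_load_pct": cpu_load, "mem_load_pct": mem_load},
    })

report = build_metrics_report("ES1", 42.5, 61.0)
print(report)
]]></sourcecode>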
        <t>In the distributed approach and the hybrid approach, the CATS metrics collector needs to be set up in each edge site. In this benchmark test, the collector is set up in each edge server that is directly connected with a CATS egress node. Implementors can use plugin software to collect CATS metrics. Then each edge server must establish a BGP peering with the directly connected CATS egress node; in each edge server, a BGP speaker is set up.</t>
        <t>Third, the control plane and forwarding plane functions must be pre-configured. In the centralized approach and the hybrid approach, the SDN controller needs to be pre-configured, and the interface between the SDN controller and the CATS routers must be tested to validate that control plane policies can be correctly downloaded and that metrics from the network side can be correctly uploaded. In the distributed approach and the hybrid approach, the control plane setup consists of the iBGP connections between CATS routers. For all approaches, the forwarding plane functions, i.e., the SRv6 tunnels, must be pre-established and tested.</t>
      </section>
    </section>
    <section anchor="reporting-format">
      <name>Reporting Format</name>
      <t>CATS benchmarking tests focus on data that can be measured and controlled.</t>
      <ul spacing="normal">
        <li>
          <t>Control plane configurations:
          </t>
          <ul spacing="normal">
            <li>
              <t>SDN controller types and versions;</t>
            </li>
            <li>
              <t>northbound and southbound protocols.</t>
            </li>
          </ul>
        </li>
        <li>
          <t>Forwarding plane configurations:
          </t>
          <ul spacing="normal">
            <li>
              <t>forwarding plane protocols (e.g., SRv6);</t>
            </li>
            <li>
              <t>the number of routers;</t>
            </li>
            <li>
              <t>the number of edge servers;</t>
            </li>
            <li>
              <t>the number of links;</t>
            </li>
            <li>
              <t>edge server types, versions.</t>
            </li>
          </ul>
        </li>
        <li>
          <t>Application plane configurations:
          </t>
          <ul spacing="normal">
            <li>
              <t>Traffic types and configurations.</t>
            </li>
          </ul>
        </li>
        <li>
          <t>CATS Metrics:
Each test MUST clearly state what CATS metrics it uses for traffic steering, according to the CATS metrics definition in <xref target="I-D.ietf-cats-metric-definition"/>.</t>
          <ul spacing="normal">
            <li>
              <t>For L0 metrics, benchmarking tests MUST declare metric types, units, statistics (e.g., mean, max, min), and formats. Benchmarking tests SHOULD also declare metric sources (e.g., nominal, estimation, aggregation).</t>
            </li>
            <li>
              <t>For L1 metrics, benchmarking tests MUST declare metric types, statistics, normalization functions, and aggregation functions. Benchmarking tests SHOULD also declare metric sources (e.g., nominal, estimation, aggregation).</t>
            </li>
            <li>
              <t>For L2 metrics, benchmarking tests MUST declare metric types and normalization functions.</t>
            </li>
          </ul>
          <t>
<strong>Detailed normalization functions and aggregation functions will be listed in Appendix A (TBD).</strong></t>
        </li>
      </ul>
    </section>
    <section anchor="benchmarking-tests">
      <name>Benchmarking Tests</name>
      <section anchor="cats-metrics-collection-and-distribution">
        <name>CATS Metrics Collection and Distribution</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS metrics can be correctly collected and distributed to the DUT (the SDN controller in the centralized approach and the CATS ingress node in the distributed approach) as anticipated, within a pre-defined time interval for CATS metrics update.</t>
          </li>
          <li>
            <t>Procedure:</t>
          </li>
        </ul>
        <t>In the centralized approach and the hybrid approach, the edge server manager periodically collects CATS metrics from every edge server that can provide the CATS service. It then passes the information to the SDN controller through a publish-subscribe method. Implementors should then log into the SDN controller to check whether it receives the CATS metrics from the edge server manager.</t>
        <t>In the distributed approach and the hybrid approach, the collector within each edge server periodically collects the CATS metrics of that edge server and distributes them to the directly connected CATS egress node. Each CATS egress node then further distributes the metrics to the CATS ingress node. Implementors then log into the CATS ingress node to check whether metrics from all edge servers have been received.</t>
        <t>For all of these approaches, to test whether metrics can be received within the pre-defined time interval, implementors can compare, from the logs, the timestamp at which the current metric is received with the timestamp at which the previous metric arrived. If the time difference equals the pre-defined metric update interval, then CATS metrics collection works correctly.</t>
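        <t>The interval check described above can be sketched as follows (a minimal Python illustration; the timestamp values and the tolerance parameter are assumptions, since the text requires the difference to exactly equal the update interval):</t>
        <sourcecode type="python"><![CDATA[
def check_update_interval(timestamps, expected_interval, tolerance=0.0):
    """Verify that consecutive metric-arrival timestamps (in seconds)
    are spaced by the pre-defined metric update interval."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(abs(g - expected_interval) <= tolerance for g in gaps)

# Example: metrics expected every 5 seconds.
arrivals = [0.0, 5.0, 10.0, 15.0]
print(check_update_interval(arrivals, 5.0))  # True
]]></sourcecode>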
      </section>
      <section anchor="session-continuity">
        <name>Session continuity</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that traffic can be correctly steered to the selected service instances and that TCP sessions are maintained for specific service flows.</t>
          </li>
          <li>
            <t>Procedure:
Enable several hosts to send service requests. In the distributed approach, log into the CATS ingress node and check the forwarding table to confirm that route entries have been created for service instances. Implementors can observe that a specific packet that hits the session table is matched to a target service instance. Then manually increase the load of the target edge server. From the host side, one can see that the service proceeds normally, while on the CATS router one can see the previous session table entry age out, which means CATS has steered the service traffic to another service instance.</t>
          </li>
        </ul>
        <t>In the centralized approach and the hybrid approach, implementors log into the management interface of the SDN controller and can check routes and sessions.</t>
      </section>
      <section anchor="end-to-end-service-latency">
        <name>End-to-end Service Latency</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS works properly under the pre-defined test condition and to prove its effectiveness in guaranteeing service end-to-end latency.</t>
          </li>
          <li>
            <t>Procedure:
Pre-define the CATS metrics distribution interval to be T_1 seconds. Enable a host to send service requests. In the distributed approach, log into the CATS ingress node to check whether route entries have been successfully created. Suppose the currently selected edge server is ES1. Then manually increase the load of ES1 and check the CATS ingress node again: if the selected instance has changed to ES2, CATS works properly. Then print the logs of the CATS ingress router to check when it updates the route entries. The time difference delta_T between when the new route entry first appears and when the previous route entry last appears should equal T_1. Finally, check whether the service SLA can be satisfied.</t>
          </li>
        </ul>
        <t>In the centralized approach and the hybrid approach, implementors log into the management interface of the SDN controller and can check routes and sessions.</t>
      </section>
      <section anchor="system-utilization">
        <name>System Utilization</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To determine that CATS achieves a better load balancing effect at the server side than a simple network load balancing mechanism such as ECMP.</t>
          </li>
          <li>
            <t>Procedure:
Enable several hosts to send service requests and enable ECMP at the network side. Then measure the bias of the CPU utilization among the different edge servers over a time duration delta_T_2. Stop the services. Then enable the same number of service requests and enable CATS at the network side (the distributed approach, the centralized approach, and the hybrid approach are tested separately). Measure the bias of the CPU utilization among the same edge servers over the same time duration delta_T_2. Compare the bias values from the two test setups.</t>
          </li>
        </ul>
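        <t>The bias comparison can be sketched as follows; here the bias is taken to be the population standard deviation of per-server CPU utilization, which is one possible interpretation (the document does not fix a formula), and the utilization samples are illustrative:</t>
        <sourcecode type="python"><![CDATA[
import statistics

def cpu_utilization_bias(utilizations):
    """Population standard deviation of per-server CPU utilization (%)."""
    return statistics.pstdev(utilizations)

# Illustrative measurements over the same duration delta_T_2.
ecmp_bias = cpu_utilization_bias([80.0, 45.0, 55.0])  # ECMP steering
cats_bias = cpu_utilization_bias([62.0, 58.0, 60.0])  # CATS steering
print(ecmp_bias > cats_bias)  # True: lower bias, better balancing
]]></sourcecode>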
      </section>
      <section anchor="load-balancing-variance">
        <name>Load Balancing Variance</name>
        <ul spacing="normal">
          <li>
            <t>Objective:
To test the load balancing variance under different path selection algorithms, which evaluates the traffic steering effectiveness of these algorithms. A low variance value means the algorithm performs better for traffic steering. The algorithms compared include ECMP, global-min, and Proportional-Integral-Derivative (PID).</t>
          </li>
          <li>
            <t>Procedure:
There are three test rounds, one for each of the three path selection algorithms. In the distributed approach, pre-configure the control plane function C-PS in the CATS ingress router, while in the centralized and hybrid approaches, the path selection function is configured in the SDN controller. For each test round, implementors initiate the same number of service flows to multiple service edge sites. For example, the number of service flows is set to 100, while the number of service edge sites is 3. Implementors need to record the number of service flows at each site and calculate the load balancing variance according to the following equations:</t>
          </li>
        </ul>
        <t>n_avg = (n_s1 + n_s2 + n_s3) / 3</t>
        <t>var_alg = (abs(n_s1 - n_avg) + abs(n_s2 - n_avg) + abs(n_s3 - n_avg)) / (n_avg * 3)</t>
        <t>Where 'n_s1', 'n_s2', and 'n_s3' refer to the number of service flows steered to the corresponding edge site, and 'n_avg' refers to the average number of service flows directed to each site. 'var_alg' refers to the average variance of service flows among all three edge sites, which is used to evaluate the load balancing effectiveness of each algorithm. It is calculated by adding the three absolute deviations and then dividing the sum by three times the average number of service flows. A lower variance value means a better load balancing effect.</t>
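        <t>The two equations can be sketched in Python as follows, generalized to any number of sites (the flow counts are example values, not measured results):</t>
        <sourcecode type="python"><![CDATA[
def load_balancing_variance(flow_counts):
    """Mean absolute deviation of per-site flow counts,
    normalized by n_avg times the number of sites."""
    n_avg = sum(flow_counts) / len(flow_counts)
    deviations = (abs(n - n_avg) for n in flow_counts)
    return sum(deviations) / (n_avg * len(flow_counts))

# Example: 100 flows steered to 3 edge sites.
print(load_balancing_variance([40, 35, 25]))  # imbalanced steering
print(load_balancing_variance([34, 33, 33]))  # near-even steering
]]></sourcecode>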
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The benchmarking characterization described in this document is constrained to a controlled environment (such as a laboratory) and includes controlled stimuli. The network under benchmarking MUST NOT be connected to production networks.
Beyond these, there are no specific security considerations within the scope of this document.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
    <section anchor="acknowledgements">
      <name>Acknowledgements</name>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2544">
          <front>
            <title>Benchmarking Methodology for Network Interconnect Devices</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
            <date month="March" year="1999"/>
            <abstract>
              <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2544"/>
          <seriesInfo name="DOI" value="10.17487/RFC2544"/>
        </reference>
        <reference anchor="RFC4271">
          <front>
            <title>A Border Gateway Protocol 4 (BGP-4)</title>
            <author fullname="Y. Rekhter" initials="Y." role="editor" surname="Rekhter"/>
            <author fullname="T. Li" initials="T." role="editor" surname="Li"/>
            <author fullname="S. Hares" initials="S." role="editor" surname="Hares"/>
            <date month="January" year="2006"/>
            <abstract>
              <t>This document discusses the Border Gateway Protocol (BGP), which is an inter-Autonomous System routing protocol.</t>
              <t>The primary function of a BGP speaking system is to exchange network reachability information with other BGP systems. This network reachability information includes information on the list of Autonomous Systems (ASes) that reachability information traverses. This information is sufficient for constructing a graph of AS connectivity for this reachability from which routing loops may be pruned, and, at the AS level, some policy decisions may be enforced.</t>
              <t>BGP-4 provides a set of mechanisms for supporting Classless Inter-Domain Routing (CIDR). These mechanisms include support for advertising a set of destinations as an IP prefix, and eliminating the concept of network "class" within BGP. BGP-4 also introduces mechanisms that allow aggregation of routes, including aggregation of AS paths.</t>
              <t>This document obsoletes RFC 1771. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4271"/>
          <seriesInfo name="DOI" value="10.17487/RFC4271"/>
        </reference>
        <reference anchor="RFC8456">
          <front>
            <title>Benchmarking Methodology for Software-Defined Networking (SDN) Controller Performance</title>
            <author fullname="V. Bhuvaneswaran" initials="V." surname="Bhuvaneswaran"/>
            <author fullname="A. Basil" initials="A." surname="Basil"/>
            <author fullname="M. Tassinari" initials="M." surname="Tassinari"/>
            <author fullname="V. Manral" initials="V." surname="Manral"/>
            <author fullname="S. Banks" initials="S." surname="Banks"/>
            <date month="October" year="2018"/>
            <abstract>
              <t>This document defines methodologies for benchmarking the control-plane performance of Software-Defined Networking (SDN) Controllers. The SDN Controller is a core component in the SDN architecture that controls the behavior of the network. SDN Controllers have been implemented with many varying designs in order to achieve their intended network functionality. Hence, the authors of this document have taken the approach of considering an SDN Controller to be a black box, defining the methodology in a manner that is agnostic to protocols and network services supported by controllers. This document provides a method for measuring the performance of all controller implementations.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8456"/>
          <seriesInfo name="DOI" value="10.17487/RFC8456"/>
        </reference>
        <reference anchor="RFC8986">
          <front>
            <title>Segment Routing over IPv6 (SRv6) Network Programming</title>
            <author fullname="C. Filsfils" initials="C." role="editor" surname="Filsfils"/>
            <author fullname="P. Camarillo" initials="P." role="editor" surname="Camarillo"/>
            <author fullname="J. Leddy" initials="J." surname="Leddy"/>
            <author fullname="D. Voyer" initials="D." surname="Voyer"/>
            <author fullname="S. Matsushima" initials="S." surname="Matsushima"/>
            <author fullname="Z. Li" initials="Z." surname="Li"/>
            <date month="February" year="2021"/>
            <abstract>
              <t>The Segment Routing over IPv6 (SRv6) Network Programming framework enables a network operator or an application to specify a packet processing program by encoding a sequence of instructions in the IPv6 packet header.</t>
              <t>Each instruction is implemented on one or several nodes in the network and identified by an SRv6 Segment Identifier in the packet.</t>
              <t>This document defines the SRv6 Network Programming concept and specifies the base set of SRv6 behaviors that enables the creation of interoperable overlays with underlay optimization.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8986"/>
          <seriesInfo name="DOI" value="10.17487/RFC8986"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="2" month="February" year="2026"/>
            <abstract>
              <t>Distributed computing enhances service response time and energy efficiency by utilizing diverse computing facilities for compute-intensive and delay-sensitive services. To optimize throughput and response time, "Computing-Aware Traffic Steering" (CATS) selects servers and directs traffic based on compute capabilities and resources, rather than static dispatch or connectivity metrics alone. This document outlines the problem statement and scenarios for CATS within a single domain, and derives requirements for the CATS framework.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-14"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="26" month="February" year="2026"/>
            <abstract>
              <t>This document describes a framework for Computing-Aware Traffic Steering (CATS). Specifically, the document identifies a set of CATS functional components, describes their interactions, and provides illustrative workflows of the control and data planes.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-20"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Guanming Zeng" initials="G." surname="Zeng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="2" month="February" year="2026"/>
            <abstract>
              <t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. In order to consider the computing and network resources, a system needs to share information (metrics) that describes the state of the resources. Metrics from the network domain have been in use in network systems for a long time. This document defines a set of metrics from the computing domain used for CATS.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-05"/>
        </reference>
        <reference anchor="I-D.ietf-idr-5g-edge-service-metadata">
          <front>
            <title>BGP Extension for 5G Edge Service Metadata</title>
            <author fullname="Linda Dunbar" initials="L." surname="Dunbar">
              <organization>Futurewei</organization>
            </author>
            <author fullname="Kausik Majumdar" initials="K." surname="Majumdar">
              <organization>Oracle</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Gyan Mishra" initials="G. S." surname="Mishra">
              <organization>Verizon</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <date day="18" month="September" year="2025"/>
            <abstract>
              <t>This draft describes a new Edge Metadata Path Attribute and some Sub-TLVs for egress routers to advertise the Edge Metadata about the attached edge services (ES). The edge service Metadata can be used by the ingress routers in the 5G Local Data Network to make path selections not only based on the routing cost but also the running environment of the edge services. The goal is to improve latency and performance for 5G edge services.</t>
              <t>The extension enables an edge service at one specific location to be more preferred than the others with the same IP address (ANYCAST) to receive data flow from a specific source, like a specific User Equipment (UE).</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-idr-5g-edge-service-metadata-30"/>
        </reference>
      </references>
    </references>
  </back>
</rfc>
