<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.7.0) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-cats-metric-definition-06" category="std" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.25.0 -->
  <front>
    <title abbrev="CATS Metrics">CATS Metrics Definition</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-06"/>
    <author initials="Y." surname="Kehan" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="C." surname="Li" fullname="Cheng Li">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>c.l@huawei.com</email>
      </address>
    </author>
    <author initials="L. M." surname="Contreras" fullname="L. M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author initials="J." surname="Ros-Giralt" fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author initials="G." surname="Zeng" fullname="Guanming Zeng">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>zengguanming@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="March" day="01"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>CATS, metrics</keyword>
    <abstract>
      <?line 90?>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. In order to consider the computing and network resources, a system needs to share information (metrics) that describes the state of the resources. Metrics from the network domain have been in use in network systems for a long time. This document defines a set of metrics from the computing domain used for CATS.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Computing-Aware Traffic Steering Working Group mailing list (cats@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/VMatrix1900/draft-cats-metric-definition"/>.</t>
    </note>
  </front>
  <middle>
    <?line 94?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Service providers are deploying computing capabilities across the network for hosting applications such as distributed AI workloads, AR/VR and driverless vehicles, among others. In these deployments, multiple service instances are replicated across various sites to ensure sufficient capacity for maintaining the required Quality of Experience (QoE) expected by the application. To support the selection of these instances, a framework called Computing-Aware Traffic Steering (CATS) is introduced in <xref target="I-D.ietf-cats-framework"/>.</t>
      <t>CATS is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To achieve this, CATS components require performance metrics for both communication and compute resources. Since these resources are deployed by multiple providers, standardized metrics are essential to ensure interoperability and enable precise traffic steering decisions, thereby optimizing resource utilization and enhancing overall system performance.</t>
      <t>Metrics from the network domain have already been defined in previous documents, e.g., <xref target="RFC9439"/>, <xref target="RFC8912"/>, and <xref target="RFC8911"/>, and have been in use in network systems for a long time. This document focuses on categorizing the relevant metrics from the computing domain for CATS into three levels based on their complexity and granularity.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>Computing-Aware Traffic Steering (CATS)</t>
        </li>
        <li>
          <t>Service</t>
        </li>
        <li>
          <t>Service site</t>
        </li>
        <li>
          <t>Service contact instance</t>
        </li>
        <li>
          <t>CATS Service Contact Instance ID (CSCI-ID)</t>
        </li>
        <li>
          <t>CATS Service Metric Agent (C-SMA)</t>
        </li>
        <li>
          <t>CATS Network Metric Agent (C-NMA)</t>
        </li>
      </ul>
    </section>
    <section anchor="design-principles">
      <name>Design Principles</name>
      <section anchor="three-level-metrics">
        <name>Three-Level Metrics</name>
        <t>As outlined in <xref target="I-D.ietf-cats-usecases-requirements"/>, the resource model that defines CATS metrics MUST be scalable, ensuring that its implementation remains within a reasonable and sustainable cost. Additionally, it MUST be useful in practice. To that end, a CATS system should select the most appropriate metric(s) for instance selection, recognizing that different metrics may influence outcomes in distinct ways depending on the specific use case.</t>
        <t>Introducing a definition of metrics requires balancing the following trade-off: if the metrics are too fine-grained, they become unscalable due to the excessive number of metrics that must be communicated through the metrics distribution protocol. (See <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> for a discussion of metrics distribution protocols.) Conversely, if the metrics are too coarse-grained, they may not have sufficient information to enable proper operational decisions.</t>
        <t>Conceptually, it is necessary to define at least two fundamental levels of metrics: one comprising all raw metrics, and the other representing a simplified form---consisting of a single value that encapsulates the overall capability of a service instance.</t>
        <t>However, such a definition may, to some extent, constrain implementation flexibility across diverse CATS use cases. Implementers often seek balanced approaches that consider trade-offs among encoding complexity, accuracy, scalability, and extensibility.</t>
        <t>To ensure scalability while providing sufficient detail for effective decision-making, this document provides a definition of metrics that incorporates three levels of abstraction:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Level 0 (L0): Raw metrics.</strong> These metrics are presented without abstraction, with each metric using its own unit and format as defined by the underlying resource.</t>
          </li>
          <li>
            <t><strong>Level 1 (L1): Metrics normalized within categories.</strong> These metrics are derived by aggregating L0 metrics into multiple categories, such as network and computing. Each category is summarized with a single L1 metric by normalizing it into a value within a defined range of scores.</t>
          </li>
          <li>
            <t><strong>Level 2 (L2): Single normalized metric.</strong> These metrics are derived by aggregating lower level metrics (L0 or L1) into a single L2 metric, which is then normalized into a value within a defined range of scores.</t>
          </li>
        </ul>
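        <t>To make the relationship between the levels concrete, the following non-normative Python sketch shows how L0 utilization values might be rolled up into L1 category scores and a single L2 score. The category names, the aggregation by plain mean, and the 0-10 score range are assumptions for illustration only; they are not mandated by this document.</t>
        <sourcecode type="python"><![CDATA[
# Non-normative sketch: rolling L0 metrics up into L1 and L2 values.
# Category names, score range (0-10), and mean aggregation are illustrative.

l0 = {
    "computing": {"cpu_util": 0.35, "gpu_util": 0.60},  # utilizations in [0, 1]
    "communication": {"link_util": 0.20},
}

def normalize(values, scale=10):
    """Map the mean of the raw values onto a 0..scale score."""
    mean = sum(values) / len(values)
    return round(mean * scale)

# L1: one normalized score per category.
l1 = {category: normalize(raw.values()) for category, raw in l0.items()}

# L2: a single score aggregating the L1 values (plain mean here).
l2 = round(sum(l1.values()) / len(l1))
]]></sourcecode>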
      </section>
      <section anchor="level-0-raw-metrics">
        <name>Level 0: Raw Metrics</name>
        <t>Level 0 metrics encompass detailed, raw metrics, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>CPU: Base Frequency, boosted frequency, number of cores, core utilization, memory bandwidth, memory size, memory utilization, power consumption.</t>
          </li>
          <li>
            <t>GPU: Frequency, number of render units, memory bandwidth, memory size, memory utilization, core utilization, power consumption.</t>
          </li>
          <li>
            <t>NPU: Computing power, utilization, power consumption.</t>
          </li>
          <li>
            <t>Network: Bandwidth, capacity, throughput, bytes transmitted, bytes received, host bus utilization.</t>
          </li>
          <li>
            <t>Storage: Available space, read speed, write speed.</t>
          </li>
          <li>
            <t>Delay: Time taken to process a request.</t>
          </li>
        </ul>
        <t>L0 metrics serve as foundational data and do not require classification. They provide basic information to support higher-level metrics, as detailed in the following sections.</t>
        <t>L0 metrics can be encoded and exposed using an Application Programming Interface (API), such as a RESTful API, and can be solution-specific. Different resources can have their own metrics, each conveying unique information about their status. These metrics can generally have units, such as bits per second (bps) or floating-point operations per second (FLOPS).</t>
        <t>Regarding network-related information, <xref target="RFC8911"/> and <xref target="RFC8912"/> define various performance metrics and their registries. Additionally, in <xref target="RFC9439"/>, the ALTO WG introduced an extended set of metrics related to network performance, such as throughput and delay. For compute metrics, <xref target="I-D.rcr-opsawg-operational-compute-metrics"/> lists a set of cloud resource metrics.</t>
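        <t>As a non-normative illustration of the API-based exposure described above, the following Python snippet builds a JSON document that a service site might return for a set of L0 metrics. The endpoint path and all field names are hypothetical and solution-specific.</t>
        <sourcecode type="python"><![CDATA[
# Hypothetical JSON body for L0 metrics exposed via a RESTful API,
# e.g. in response to GET /cats/v1/metrics/l0 (path is illustrative).
import json

l0_metrics = {
    "cpu": {"base_frequency_mhz": 2400, "cores": 32, "core_utilization": 0.42},
    "gpu": {"memory_size_gbyte": 80, "memory_utilization": 0.75},
    "network": {"bandwidth_gbps": 100, "throughput_gbps": 37.5},
    "storage": {"available_space_gbyte": 512, "read_speed_mbps": 3200},
}

body = json.dumps(l0_metrics, indent=2)
]]></sourcecode>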
      </section>
      <section anchor="level-1-normalized-metrics-in-categories">
        <name>Level 1: Normalized Metrics in Categories</name>
        <t>L1 metrics are organized into distinct categories, such as computing, communication, service, and composed metrics. Each L0 metric is classified into one of these categories. Within each category, a single L1 metric is computed using an <em>aggregation function</em> and normalized to a unitless score that represents the performance of the underlying resources according to that category. Potential categories include:</t>
<ul spacing="normal">
          <li>
            <t><strong>Computing:</strong> A normalized value derived from computing-related L0 metrics, such as CPU, GPU, and NPU utilization.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> A normalized value derived from communication-related L0 metrics, such as communication throughput.</t>
          </li>
          <li>
            <t><strong>Service:</strong> A normalized value derived from service-related L0 metrics, such as tokens per second and service availability.</t>
          </li>
          <li>
            <t><strong>Composed:</strong> A normalized value derived from an aggregation function that takes as input a combination of computing, communication and service metrics. For example, end-to-end delay computed as the sum of all delays along a path.</t>
          </li>
        </ul>
        <t>Editor note: detailed categories can be updated according to the CATS WG discussion.</t>
        <t>L0 metrics, such as those defined in <xref target="RFC8911"/>, <xref target="RFC8912"/>, <xref target="RFC9439"/>, and <xref target="I-D.rcr-opsawg-operational-compute-metrics"/>, can be categorized into the aforementioned categories. Each category will employ its own aggregation function (e.g., weighted summary) to generate the normalized value. This approach allows the protocol to focus solely on the metric categories and their normalized values, thereby avoiding the need to process solution-specific detailed metrics.</t>
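        <t>As a non-normative sketch of the per-category aggregation described above, the following Python function computes an L1 score for the computing category using a weighted sum. The specific weights and the 0-10 score range are implementation choices, not defined by this document.</t>
        <sourcecode type="python"><![CDATA[
# Weighted-sum aggregation of normalized L0 metrics (each in [0, 1])
# into a single L1 category score. Weights and score range are illustrative.

def l1_score(metrics, weights, scale=10):
    total = sum(weights[name] * value for name, value in metrics.items())
    return round(total / sum(weights.values()) * scale)

computing_l0 = {"cpu_util": 0.50, "gpu_util": 0.90, "npu_util": 0.10}
weights = {"cpu_util": 2.0, "gpu_util": 3.0, "npu_util": 1.0}

score = l1_score(computing_l0, weights)  # one L1 value for the category
]]></sourcecode>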
      </section>
      <section anchor="level-2-single-normalized-metric">
        <name>Level 2: Single Normalized Metric</name>
        <t>The L2 metric is a single score value derived from the lower level metrics (L0 or L1) using an aggregation function. Different implementations may employ different aggregation functions to characterize the overall performance of the underlying compute and communication resources. The definition of the L2 metric simplifies the complexity of collecting and distributing numerous lower-level metrics by consolidating them into a single, unified score.</t>
        <t>TODO: Some implementations may support the configuration of Ingress CATS-Forwarders with the metric normalizing method so that it can decode the information from the L1 or L0 metrics.</t>
        <t><xref target="fig-metric-levels"/> provides a summary of the logical relationships between metrics across the three levels of abstraction.</t>
        <figure anchor="fig-metric-levels">
          <name>Logic of CATS Metrics in levels</name>
          <artwork><![CDATA[
                                    +--------+
                         L2 Metric: |   M2   |
                                    +---^----+
                                        |
                    +-------------+-----+-----+------------+
                    |             |           |            |
                +---+----+        |       +---+----+   +---+----+
    L1 Metrics: |  M1-1  |        |       |  M1-2  |   |  M1-3  | (...)
                +---^----+        |       +---^----+   +----^---+
                    |             |           |             |
               +----+---+         |       +---+----+        |
               |        |         |       |        |        |
            +--+---+ +--+---+ +---+--+ +--+---+ +--+---+ +--+---+
 L0 Metrics:| M0-1 | | M0-2 | | M0-3 | | M0-4 | | M0-5 | | M0-6 | (...)
            +------+ +------+ +------+ +------+ +------+ +------+

]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="cats-metrics-framework-and-specification">
      <name>CATS Metrics Framework and Specification</name>
      <t>The CATS metrics framework is a key component of the CATS architecture. It defines how metrics are encoded and transmitted over the network. The representation should be flexible enough to accommodate various types of metrics along with their respective units and precision levels, yet simple enough to enable easy implementation and deployment across heterogeneous edge environments.</t>
      <section anchor="cats-metric-fields">
        <name>CATS Metric Fields</name>
        <t>This section defines the detailed structure used to represent CATS metrics. The design follows principles established in related IETF specifications, such as the network performance metrics outlined in <xref target="RFC9439"/>.</t>
        <t>Each CATS metric is expressed as a structured set of fields, with each field describing a specific property of the metric. The following definition introduces the fields used in the CATS metric representations.</t>
        <!-- JRG Note and TODO: Define each of the types, formats, etc.. Do we need to standardize them? -->
<figure anchor="fig-metric-def">
          <name>CATS Metric Fields</name>
          <artwork><![CDATA[
- Cats_metric:
      - Metric_type:
            The type of the CATS metric.
            Examples: compute_cpu, storage_disk_size, network_bw,
            compute_delay, network_delay, compute_norm,
            storage_norm, network_norm, delay_norm.
      - Format:
            The encoding format of the metric.
            Examples: int, float.
      - Format_std (optional):
            The standard used to encode and decode the value
            field according to the format field.
            Example: ieee_754, ascii.
      - Length:
            The size of the value field measured in octets.
            Examples: 2, 4, 8, 16, 32, 64.
      - Unit:
            The unit of this metric.
            Examples: mhz, ghz, byte, kbyte, mbyte,
            gbyte, bps, kbps, mbps, gbps, tbps, tflops, none.
      - Source (optional):
            The source of information used to obtain the value field.
            Examples: nominal, estimation, directly_measured,
            normalization, aggregation.
      - Statistics (optional):
            The statistical function used to obtain the value field.
            Examples: max, min, mean, cur.
      - Level:
            The level this metric belongs to.
            Examples: L0, L1, L2.
      - Value:
            The value of this metric.
            Examples: 12, 3.2.
]]></artwork>
        </figure>
        <t>Next, we describe each field in more detail:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Metric_Type (type)</strong>: This field specifies the category or kind of CATS metric being measured, such as computational resources, storage capacity, or network bandwidth. It acts as a label that enables network devices to identify the purpose of the metric.</t>
          </li>
          <li>
            <t><strong>Format (format)</strong>: This field indicates the data encoding format of the metric, such as whether the value is represented as an integer, a floating-point number, or has no specific format.</t>
          </li>
          <li>
            <t><strong>Format standard (format_std, optional)</strong>: This optional field indicates the standard used to encode and decode the value field according to the format field. It is only required if the value field is encoded using a specific standard, and knowing this standard is necessary to decode the value field. Examples of format standards include ieee_754 and ascii. This field ensures that the value can be accurately interpreted by specifying the encoding method used.</t>
          </li>
          <li>
            <t><strong>Length (length)</strong>: This field indicates the size of the value field measured in octets (bytes). It specifies how many bytes are used to store the value of the metric. Examples include 4, 8, 16, 32, and 64. The length field is important for memory allocation and data handling, ensuring that the value is stored and retrieved correctly.</t>
          </li>
          <li>
            <t><strong>Unit (unit)</strong>: This field defines the measurement units for the metric, such as frequency, data size, or data transfer rate. It is usually associated with the metric to provide context for the value.</t>
          </li>
          <li>
            <t><strong>Source (source, optional)</strong>: This field describes the origin of the information used to obtain the metric. It may include one or more of the following non-mutually exclusive values:  </t>
            <ul spacing="normal">
              <li>
                <t>'nominal'. Similar to <xref target="RFC9439"/>, a 'nominal' metric indicates that the metric value is statically configured by the underlying devices. For example, bandwidth can indicate the maximum transmission rate of the involved device.</t>
              </li>
              <li>
                <t>'estimation'. The 'estimation' source indicates that the metric value is computed through an estimation process.</t>
              </li>
              <li>
                <t>'directly measured'. This source indicates that the metric can be obtained directly from the underlying device and it does not need to be estimated.</t>
              </li>
              <li>
                <t>'normalization'. The 'normalization' source indicates that the metric value was normalized. For instance, a metric could be normalized to take a value from 0 to 1, from 0 to 10, or to take a percentage value. Metrics of this type do not have units.</t>
              </li>
              <li>
                <t>'aggregation'. This source indicates that the metric value was obtained by using an aggregation function (see <xref target="aggregation-and-normalization-functions"/>).</t>
              </li>
            </ul>
            <t>
Nominal metrics have inherent physical meanings and specific units without any additional processing. Aggregated metrics may or may not have physical meanings, but they retain their significance relative to the directly measured metrics. Normalized metrics, on the other hand, might have physical meanings but lack units.</t>
          </li>
          <li>
            <t><strong>Statistics (statistics, optional)</strong>: This field provides additional details about the metrics, particularly if there is any pre-computation performed on the metrics before they are collected. It is useful for services that require specific statistics for service instance selection.  </t>
            <ul spacing="normal">
              <li>
                <t>'max'. The maximum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'min'. The minimum value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'mean'. The average value of the data collected over intervals.</t>
              </li>
              <li>
                <t>'cur'. The current value of the data collected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Level (level)</strong>: This field specifies the level at which the metric is measured. It is used to categorize the metric based on its granularity and scope. Examples include L0, L1, and L2. The level field helps in understanding the level of detail and specificity of the metric being measured.</t>
          </li>
          <li>
            <t><strong>Value (value)</strong>: This field represents the actual numerical value of the metric being measured. It provides the specific data point for the metric in question.</t>
          </li>
        </ul>
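        <t>The field structure above can be modeled directly in code. The following non-normative Python dataclass mirrors the field names and their optionality; it performs no validation of the allowed values.</t>
        <sourcecode type="python"><![CDATA[
# Non-normative model of the CATS metric fields defined above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatsMetric:
    metric_type: str                  # e.g. "compute_cpu", "network_bw"
    format: str                       # e.g. "int", "float"
    length: int                       # size of the value field, in octets
    unit: str                         # e.g. "mhz", "gbps", "none"
    level: str                        # "L0", "L1", or "L2"
    value: float                      # the metric value itself
    format_std: Optional[str] = None  # e.g. "ieee_754", "ascii"
    source: Optional[str] = None      # e.g. "nominal", "normalization"
    statistics: Optional[str] = None  # e.g. "max", "min", "mean", "cur"

m = CatsMetric(metric_type="network_bw", format="float", length=4,
               unit="gbps", level="L0", value=100.0, source="nominal")
]]></sourcecode>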
      </section>
      <section anchor="aggregation-and-normalization-functions">
        <name>Aggregation and Normalization Functions</name>
        <t>In the context of CATS metric processing, aggregation and normalization are two fundamental operations that transform raw and derived metrics into forms suitable for decision-making and comparison across heterogeneous systems.</t>
        <section anchor="aggregation">
          <name>Aggregation</name>
          <t>Aggregation functions combine multiple metric values into a single representative value. This is particularly useful when metrics are collected from multiple sources or over time intervals. For example, CPU usage metrics from multiple service instances may be aggregated to produce a single load indicator for a service. Common aggregation functions include:</t>
          <ul spacing="normal">
            <li>
              <t>Mean average: Computes the arithmetic average of a set of values.</t>
            </li>
            <li>
              <t>Minimum/maximum: Selects the lowest or highest value from a set.</t>
            </li>
            <li>
              <t>Weighted average: Applies weights to values based on relevance or priority.</t>
            </li>
          </ul>
          <t>The output of an aggregation function is typically a Level 1 metric derived from multiple Level 0 metrics, or a Level 2 metric derived from multiple Level 0 or Level 1 metrics.</t>
          <figure anchor="fig-agg-funct">
            <name>Aggregation function</name>
            <artwork><![CDATA[
      +------------+     +-------------------+
      | Metric 1.1 |---->|                   |
      +------------+     |    Aggregation    |     +----------+
           ...           |     Function      |---->| Metric 2 |
      +------------+     |                   |     +----------+
      | Metric 1.n |---->|                   |
      +------------+     +-------------------+

      Input: Multiple values                   Output: Single value

]]></artwork>
          </figure>
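          <t>The aggregation functions listed above can be sketched in a few lines of non-normative Python; the sample values and weights are arbitrary examples.</t>
          <sourcecode type="python"><![CDATA[
# Non-normative sketches of common aggregation functions.

def mean_agg(values):
    """Arithmetic average of a set of values."""
    return sum(values) / len(values)

def weighted_agg(values, weights):
    """Weighted average; weights reflect relevance or priority."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

samples = [10.0, 20.0, 30.0]
average = mean_agg(samples)
lowest, highest = min(samples), max(samples)
weighted = weighted_agg(samples, [1.0, 1.0, 2.0])
]]></sourcecode>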
        </section>
        <section anchor="normalization">
          <name>Normalization</name>
          <t>Normalization functions convert metric values with or without units into unitless scores, enabling comparison across different types of metrics and systems. This is essential when combining metrics from a heterogeneous set of resources (e.g., latency measured in milliseconds and CPU usage measured as a percentage) into a unified decision model.</t>
          <t>Normalization functions often map values into a bounded range, such as integers from 0 to 5 or real numbers from 0 to 1, using techniques like:</t>
          <ul spacing="normal">
            <li>
              <t>Sigmoid function: Smoothly maps input values to a bounded range.</t>
            </li>
            <li>
              <t>Min-max scaling: Rescales values based on known minimum and maximum bounds.</t>
            </li>
            <li>
              <t>Z-score normalization: Standardizes values based on statistical distribution.</t>
            </li>
          </ul>
          <t>Normalized metrics facilitate composite scoring and ranking, and can be used to produce Level 1 and Level 2 metrics.</t>
          <figure anchor="fig-norm-funct">
            <name>Normalization function</name>
            <artwork><![CDATA[
      +----------+     +------------------------+     +----------+
      | Metric 1 |---->| Normalization Function |---->| Metric 2 |
      +----------+     +------------------------+     +----------+

      Input:  Value with or without units         Output: Unitless value

]]></artwork>
          </figure>
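          <t>The normalization techniques listed above can be sketched as follows in non-normative Python; the bounds, mean, and standard deviation are example inputs, not values defined by this document.</t>
          <sourcecode type="python"><![CDATA[
# Non-normative sketches of common normalization functions.
import math

def sigmoid(x):
    """Smoothly map any real value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def min_max(x, lo, hi):
    """Rescale x into [0, 1] given known bounds, clamping outliers."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def z_score(x, mean, std):
    """Standardize x relative to its statistical distribution."""
    return (x - mean) / std

# e.g. a 25 ms latency on a known 0-100 ms scale maps to 0.25
normalized_latency = min_max(25.0, 0.0, 100.0)
]]></sourcecode>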
        </section>
      </section>
      <section anchor="on-the-meaning-of-scores-in-heterogeneous-metrics-systems">
        <name>On the Meaning of Scores in Heterogeneous Metrics Systems</name>
        <t>In a system like CATS, where metrics originate from heterogeneous resources---such as compute, communication, and storage---the interpretation of scores requires careful consideration. While normalization functions can convert raw metrics into unitless scores to enable comparison, these scores may not be directly comparable across different implementations. For example, a score of 4 on a scale from 1 to 5 may represent a high-quality resource in one implementation, but only an average one in another.</t>
        <t>This ambiguity arises because different implementations may apply distinct normalization strategies, scaling methods, or semantic interpretations. As a result, relying solely on unitless scores for decision-making can lead to inconsistent or suboptimal outcomes, especially when metrics are aggregated from multiple sources.</t>
        <t>To mitigate this, implementers of CATS metrics SHOULD provide clear and precise definitions of their metrics---particularly for unitless scores---and explain how these scores should be interpreted. This documentation should be designed to support operators in making informed decisions, even when comparing metrics from different implementations.</t>
        <t>Similarly, operators SHOULD exercise caution when making potentially impactful decisions based on unitless metrics whose definitions are unclear or underspecified. In such cases, especially when decisions are critical or sensitive, operators MAY choose to rely on Level 0 (L0) metrics with units, which typically offer a more direct and unambiguous understanding of resource conditions.</t>
      </section>
      <section anchor="level-metric-representations">
        <name>Level Metric Representations</name>
        <section anchor="level-0-metrics">
          <name>Level 0 Metrics</name>
          <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts, such as those by the <xref target="DMTF"/>, that can serve as L0 metrics. Given the diversity of raw metrics and the extent of existing work, this document does not standardize L0 metrics.</t>
          <t>See Appendix A for examples of L0 metrics.</t>
        </section>
        <section anchor="level-1-metrics">
          <name>Level 1 Metrics</name>
          <t>L1 metrics are normalized from L0 metrics. Although they do not have units, they can still be classified into types such as compute, communication, service, and composed metrics. This classification is useful because it makes L1 metrics semantically meaningful.</t>
          <t>The source of L1 metrics is normalization. Based on L0 metrics, service providers design their own algorithms to normalize metrics, for example, assigning different weights to each raw metric and computing a weighted sum. L1 metrics do not need further statistical values.</t>
          <section anchor="normalized-compute-metrics">
            <name>Normalized Compute Metrics</name>
            <t>The metric type of normalized compute metrics is "compute_norm", and its format is unsigned integer. It has no unit. It will occupy an octet. Example:</t>
            <figure anchor="fig-normalized-compute-metric">
              <name>Example of a normalized L1 compute metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: compute_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 5
Source:
      normalization


|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
            </figure>
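            <t>The field layout shown above (8 + 2 + 1 + 3 + 8 + 3 = 25 bits) can be packed into an integer as sketched below in non-normative Python. The numeric code points chosen for the Metric Type, Level, Format, and Source fields are hypothetical; this document does not assign them.</t>
            <sourcecode type="python"><![CDATA[
# Pack the field layout above into a single integer, most significant
# field first. All code points (compute_norm=1, L1=1, unsigned int=0,
# normalization=2) are hypothetical.

FIELD_WIDTHS = [8, 2, 1, 3, 8, 3]  # Metric Type, Level, Format, Length, Value, Source

def pack(fields):
    word = 0
    for value, width in zip(fields, FIELD_WIDTHS):
        assert 0 <= value < (1 << width), "value exceeds field width"
        word = (word << width) | value
    return word

word = pack([1, 1, 0, 1, 5, 2])  # the compute_norm example, value 5
]]></sourcecode>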
          </section>
          <section anchor="normalized-communication-metrics">
            <name>Normalized Communication Metrics</name>
            <t>The metric type of normalized communication metrics is "communication_norm", and its format is unsigned integer. It has no unit. It will occupy an octet. Example:</t>
            <figure anchor="fig-normalized-communication-metric">
              <name>Example of a normalized L1 communication metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: communication_norm
      Level: L1
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits

]]></artwork>
            </figure>
          </section>
          <section anchor="normalized-composed-metrics">
            <name>Normalized Composed Metrics</name>
            <t>The metric type of normalized composed metrics is "composed_norm", and its format is unsigned integer.  It has no unit.  It will occupy an octet. Example:</t>
            <figure anchor="fig-normalized-metric">
              <name>Example of a normalized L1 composed metric</name>
              <artwork><![CDATA[
Basic fields:
      Metric type: composed_norm
      Level: L1
      Format: unsigned integer
      Length: an octet
      Value: 8
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="level-2-metrics">
          <name>Level 2 Metrics</name>
          <t>A Level 2 metric is a single-value, normalized metric that does not carry any inherent physical unit or meaning. While each provider may employ its own internal methods to compute this value, all providers must adhere to the representation guidelines defined in this section to ensure consistency and interoperability of the normalized output.</t>
          <t>The metric type is "norm_fi", and the format of the value is unsigned integer. It has no unit and occupies one octet. Example:</t>
          <figure anchor="fig-level-2-metric">
            <name>Example of a normalized L2 metric</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: norm_fi
      Level: L2
      Format: unsigned integer
      Length: one octet
      Value: 1
Source:
      normalization

|Metric Type|Level|Format|Length|Value|Source|
    8bits    2bits  1bit   3bits 8bits  3bits
]]></artwork>
          </figure>
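          <t>The fixed-width layout shown in the figure can be illustrated with a small bit-packing sketch. This is a minimal illustration assuming MSB-first field order; the numeric codepoints carried in each field are hypothetical, and only the field widths (8/2/1/3/8/3 bits) come from the figure:</t>
          <sourcecode type="python"><![CDATA[
def pack_metric(metric_type, level, fmt, length, value, source):
    """Pack the six metric fields into one 25-bit integer, MSB-first.

    Field widths follow the figure above; the codepoint values placed
    in each field are illustrative assumptions, not defined here.
    """
    word = 0
    for field, width in ((metric_type, 8), (level, 2), (fmt, 1),
                         (length, 3), (value, 8), (source, 3)):
        if not 0 <= field < (1 << width):
            raise ValueError("field value out of range")
        word = (word << width) | field
    return word

# A word with every field at its maximum fills all 25 bits.
]]></sourcecode>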
          <t>The single normalized value also facilitates aggregation across multiple service instances. When each instance provides its own normalized value, no additional statistical processing is required at the instance level. Instead, aggregation can be performed externally using standardized methods, enabling scalable and consistent interpretation of metrics across distributed environments.</t>
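          <t>As a sketch of such external aggregation, the following function (an illustrative policy, not one mandated by this document) filters per-instance normalized scores against a threshold and orders the surviving instances, best first:</t>
          <sourcecode type="python"><![CDATA[
def best_instances(scores, threshold=8):
    """Return the IDs of service instances whose normalized score
    (0-10) meets the threshold, ordered from highest to lowest.

    The default threshold mirrors the 'high capability' band used
    elsewhere in this document; the selection policy itself is an
    assumption for illustration only."""
    return sorted((i for i, s in scores.items() if s >= threshold),
                  key=lambda i: scores[i], reverse=True)
]]></sourcecode>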
        </section>
      </section>
    </section>
    <section anchor="comparison-among-metric-levels">
      <name>Comparison among Metric Levels</name>
      <t>Metrics are progressively consolidated from L0 to L1 to L2, with each level offering a different degree of abstraction to address the diverse requirements of various services. Table 1 provides a comparative overview of these metric levels.</t>
      <table anchor="comparison">
        <name>Comparison among Metrics Levels</name>
        <thead>
          <tr>
            <th align="center">Level</th>
            <th align="left">Encoding Complexity</th>
            <th align="left">Extensibility</th>
            <th align="left">Stability</th>
            <th align="left">Accuracy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center">Level 0</td>
            <td align="left">High</td>
            <td align="left">Low</td>
            <td align="left">Low</td>
            <td align="left">High</td>
          </tr>
          <tr>
            <td align="center">Level 1</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
          </tr>
          <tr>
            <td align="center">Level 2</td>
            <td align="left">Low</td>
            <td align="left">High</td>
            <td align="left">High</td>
            <td align="left">Medium</td>
          </tr>
        </tbody>
      </table>
      <t>Since Level 0 metrics are raw and service-specific, different services may define their own sets, potentially resulting in hundreds or even thousands of unique metrics. This diversity introduces significant complexity in protocol encoding and standardization. Consequently, L0 metrics are generally confined to bespoke implementations tailored to specific service needs, rather than being standardized for broad protocol use. In contrast, Level 1 metrics organize raw data into standardized categories, each normalized into a single value. This structure makes them more suitable for protocol encoding and standardization. Level 2 metrics take simplification a step further by consolidating all relevant information into a single normalized value, making them the easiest to encode, transmit, and standardize.</t>
      <t>Therefore, from the perspective of encoding complexity, Level 1 and Level 2 metrics are recommended.</t>
      <t>When considering extensibility, Level 0 metrics allow new services to define their own custom metrics. However, this flexibility requires corresponding protocol extensions, and the proliferation of metric types can introduce significant overhead, ultimately reducing the protocol's extensibility. In contrast, Level 1 metrics introduce only a limited set of standardized categories, making protocol extensions more manageable. Level 2 metrics go even further by consolidating all information into a single normalized value, placing the least burden on the protocol.</t>
      <t>Therefore, from an extensibility standpoint, Level 1 and Level 2 metrics are recommended.</t>
      <t>Regarding stability, Level 0 raw metrics may require frequent protocol extensions as new metrics are introduced, leading to an unstable and evolving protocol format. For this reason, standardizing L0 metrics within the protocol is not recommended. In contrast, Level 1 metrics involve only a limited set of predefined categories, and Level 2 metrics rely on a single consolidated value, both of which contribute to a more stable and maintainable protocol design.</t>
      <t>Therefore, from a stability standpoint, Level 1 and Level 2 metrics are preferred.</t>
      <t>In conclusion, for CATS, Level 2 metrics are recommended due to their simplicity and minimal protocol overhead. If more advanced scheduling capabilities are required, Level 1 metrics offer a balanced approach with manageable complexity. While Level 0 metrics are the most detailed and dynamic, their high overhead makes them unsuitable for direct transmission to network devices and thus not recommended for standard protocol integration.</t>
    </section>
    <section anchor="cats-metric-registry-entries">
      <name>CATS Metric Registry Entries</name>
      <section anchor="cats-l2-metric-registry-entry">
        <name>CATS L2 Metric Registry Entry</name>
        <t>This section gives an initial Registry Entry for the CATS L2 metric.</t>
        <section anchor="summary">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name">
            <name>Name</name>
            <t>Norm_Passive_CATS-L2_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-L2: Metric level (CATS Metric Framework Level 2)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value</t>
              </li>
            </ul>
          </section>
          <section anchor="uri">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description">
            <name>Description</name>
            <t>This metric represents a single normalized score used within CATS (L2). It is derived by aggregating one or more CATS L0 and/or L1 metrics, followed by a normalization process that produces a unitless value. The resulting score provides a concise assessment of the overall capability of a service instance, enabling rapid comparison across instances and supporting efficient traffic steering decisions.</t>
          </section>
          <section anchor="change-controller">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition">
          <name>Metric Definition</name>
          <section anchor="reference-definition">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/>
Core referenced sections: Section 3.4 (L2 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions)</t>
          </section>
          <section anchor="fixed-parameters">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest capability, 10 indicates the optimal capability)</t>
              </li>
              <li>
                <t>Data precision: decimal number (unsigned integer)</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect service- and compute-related L0 raw metrics using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes). Collect network performance L0 raw metrics using existing standardized protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>).</t>
            <t>Aggregation logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation).</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization).</t>
            <t>The reference method aggregates and normalizes L0 metrics to generate L1 metrics in different categories, and then computes an L2 singleton score as the fully normalized result.</t>
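            <t>As one concrete, purely illustrative instance of the normalization step, a sigmoid can map an aggregated raw value onto the 0-10 score range; the midpoint and steepness parameters below are provider-chosen assumptions:</t>
            <sourcecode type="python"><![CDATA[
import math

def sigmoid_normalize(x, midpoint, steepness=1.0):
    """Map an aggregated raw value x onto the 0-10 score range with a
    sigmoid (one example of the normalization functions referenced
    above). Larger x yields a higher score; for load-type inputs,
    where larger is worse, the sign of 'steepness' can be flipped."""
    return round(10.0 / (1.0 + math.exp(-steepness * (x - midpoint))))
]]></sourcecode>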
          </section>
          <section anchor="packet-stream-generation">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect L0 metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of the CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address can be used as such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <!-- KY: C-SMA can see service instance IP when it is co-located with Service contact instance, right? -->

<t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles">
            <name>Roles</name>
            <t>C-SMA: Collects service- and compute-related L0 raw metrics, and optionally calculates L1 and L2 metrics according to service-specific strategies.</t>
            <t>C-NMA: Collects network performance L0 raw metrics, and optionally calculates L1 and L2 metrics according to service-specific strategies.</t>
          </section>
        </section>
        <section anchor="output">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-1">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.3</t>
            <t>Score semantics: 0-3 (Low capability, not recommended for steering), 4-7 (Medium capability, optional for steering), 8-10 (High capability, priority for steering)</t>
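            <t>The band boundaries above can be captured in a small helper; this sketch simply restates the score semantics and adds no policy of its own:</t>
            <sourcecode type="python"><![CDATA[
def steering_band(score):
    """Map a normalized 0-10 score to the steering guidance bands
    defined above."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 3:
        return "low"     # not recommended for steering
    if score <= 7:
        return "medium"  # optional for steering
    return "high"        # priority for steering
]]></sourcecode>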
          </section>
          <section anchor="metric-units">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on standard test sets (fixed workload) to ensure that the output score deviation between C-SMA and C-NMA stays below 0.1 (i.e., at most one abnormal score in every ten test rounds).</t>
            <!-- KY: Do we need more details in calibration discussions? -->

</section>
        </section>
        <section anchor="administrative-items">
          <name>Administrative Items</name>
          <section anchor="status">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-l1-metric-registry-entry-compute">
        <name>CATS L1 Metric Registry Entry: Compute</name>
        <t>This section gives an initial Registry Entry for the CATS L1 metric in the <em>compute</em> category.</t>
        <section anchor="summary-1">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier-1">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-1">
            <name>Name</name>
            <t>Norm_Passive_CATS-L1_Compute_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-L1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Compute: Metric category (Compute)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the compute category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-1">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-1">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>compute</em> category within CATS (L1). It is derived from one or more compute-related L0 metrics (e.g., CPU/GPU/NPU utilization, CPU frequency, memory utilization, or other compute resource indicators) by applying an implementation-specific aggregation function over the selected L0 compute metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative compute capability (or headroom) of a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better compute capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-1">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-1">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-1">
          <name>Metric Definition</name>
          <section anchor="reference-definition-2">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (L1 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-1">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest compute capability, 10 indicates the optimal compute capability)</t>
              </li>
              <li>
                <t>Data precision: decimal number (unsigned integer)</t>
              </li>
              <li>
                <t>Metric type: "compute_norm"</t>
              </li>
              <li>
                <t>Level: L1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-1">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-1">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect compute-related L0 raw metrics (e.g., CPU/GPU/NPU, memory, and relevant platform counters) using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes or equivalent telemetry systems).</t>
            <t>Aggregation logic (within compute category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected L0 compute metrics into a single intermediate value prior to normalization. The selection of L0 compute metrics and any weights used are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) compute value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes L0 compute metrics to generate a single L1 compute score ("compute_norm"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate communication or service metrics).</t>
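            <t>A minimal sketch of this method, assuming utilization-style L0 inputs in [0, 1] and implementation-chosen weights (both assumptions), weighted-averages the inputs and maps the remaining headroom onto the 0-10 range:</t>
            <sourcecode type="python"><![CDATA[
def compute_norm(metrics, weights):
    """Sketch of an L1 'compute_norm' score: weighted-average the
    selected L0 utilization metrics (fractions in [0, 1]) and score
    the remaining headroom on the 0-10 range. The metric selection
    and weights are implementation-specific assumptions."""
    total_weight = sum(weights[name] for name in metrics)
    utilization = sum(metrics[name] * weights[name]
                      for name in metrics) / total_weight
    return round(10 * (1.0 - utilization))  # more headroom, higher score
]]></sourcecode>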
          </section>
          <section anchor="packet-stream-generation-1">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-1">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-1">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying L0 compute metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-1">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of the CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address can be used as such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-1">
            <name>Roles</name>
            <t>C-SMA: Collects L0 compute raw metrics and calculates the L1 compute normalized score ("compute_norm") according to service/provider-specific aggregation and normalization strategies.</t>
            <t>C-NMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-1">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-1">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-3">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low compute capability, not recommended for steering), 4-7 (Medium compute capability, optional for steering), 8-10 (High compute capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-1">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-1">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative compute workloads (fixed test workload profiles) to align the mapping from L0 compute metrics to the L1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds).</t>
          </section>
        </section>
        <section anchor="administrative-items-1">
          <name>Administrative Items</name>
          <section anchor="status-1">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-1">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-1">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-1">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-1">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-l1-metric-registry-entry-communication">
        <name>CATS L1 Metric Registry Entry: Communication</name>
        <t>This section gives an initial Registry Entry for the CATS L1 metric in the <em>communication</em> category.</t>
        <section anchor="summary-2">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier-2">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-2">
            <name>Name</name>
            <t>Norm_Passive_CATS-L1_Communication_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-L1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Communication: Metric category (Communication)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the communication category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-2">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-2">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>communication</em> category within CATS (L1). It is derived from one or more communication-related L0 metrics (e.g., throughput, bandwidth, link utilization, loss, delay, jitter, bytes/packets counters, and other network performance indicators) by applying an implementation-specific aggregation function over the selected L0 communication metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative communication capability (or headroom) associated with reaching a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better communication capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-2">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-2">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-2">
          <name>Metric Definition</name>
          <section anchor="reference-definition-4">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (L1 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-2">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest communication capability, 10 indicates the optimal communication capability)</t>
              </li>
              <li>
                <t>Data precision: decimal number (unsigned integer)</t>
              </li>
              <li>
                <t>Metric type: "communication_norm"</t>
              </li>
              <li>
                <t>Level: L1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-2">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-2">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect communication-related L0 raw metrics using existing standardized protocols and telemetry systems (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>), and/or using network performance metric definitions and registries such as <xref target="RFC8911"/>, <xref target="RFC8912"/>, and <xref target="RFC9439"/> where applicable.</t>
            <t>Aggregation logic (within communication category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected L0 communication metrics into a single intermediate value prior to normalization. The selection of L0 communication metrics and any weights used are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) communication value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes L0 communication metrics to generate a single L1 communication score ("communication_norm"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate compute or service metrics).</t>
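            <t>A minimal sketch using min-max scaling, assuming a single L0 delay input and provider-chosen clipping bounds (both assumptions); lower delay maps to a higher score:</t>
            <sourcecode type="python"><![CDATA[
def communication_norm(delay_ms, min_ms=10.0, max_ms=110.0):
    """Sketch of an L1 'communication_norm' score: clip a measured
    delay to provider-chosen bounds, min-max scale it, and invert so
    that lower delay yields a higher 0-10 score. The bounds here are
    illustrative assumptions."""
    clipped = min(max(delay_ms, min_ms), max_ms)
    scaled = (clipped - min_ms) / (max_ms - min_ms)  # 0 best .. 1 worst
    return round(10 * (1.0 - scaled))
]]></sourcecode>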
          </section>
          <section anchor="packet-stream-generation-2">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-2">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-2">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying L0 communication metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-2">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of the CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address can be used as such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-2">
            <name>Roles</name>
            <t>C-NMA: Collects L0 communication raw metrics and calculates the L1 communication normalized score ("communication_norm") according to provider-specific aggregation and normalization strategies.</t>
            <t>C-SMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-2">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-2">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-5">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low communication capability, not recommended for steering), 4-7 (Medium communication capability, optional for steering), 8-10 (High communication capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-2">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-2">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative network test profiles (e.g., fixed traffic mixes and path conditions) to align the mapping from L0 communication metrics to the L1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds).</t>
          </section>
        </section>
        <section anchor="administrative-items-2">
          <name>Administrative Items</name>
          <section anchor="status-2">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-2">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-2">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-2">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-2">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-l1-metric-registry-entry-service">
        <name>CATS L1 Metric Registry Entry: Service</name>
        <t>This section gives an initial Registry Entry for the CATS L1 metric in the <em>service</em> category.</t>
        <section anchor="summary-3">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier-3">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-3">
            <name>Name</name>
            <t>Norm_Passive_CATS-L1_Service_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-L1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Service: Metric category (Service)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the service category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-3">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-3">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>service</em> category within CATS (L1). It is derived from one or more service-related L0 metrics that characterize the health and performance of the service instance itself (e.g., service availability, request success rate, admission/overload indicators, tokens per second and/or requests per second, application-level queue depth, and other service KPIs). An implementation-specific aggregation function is applied over the selected L0 service metrics, followed by a normalization function that produces a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative service capability (or headroom) of a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better service capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-3">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-3">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-3">
          <name>Metric Definition</name>
          <section anchor="reference-definition-6">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (L1 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-3">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest service capability, 10 indicates the optimal service capability)</t>
              </li>
              <li>
                <t>Data precision: unsigned integer (decimal notation)</t>
              </li>
              <li>
                <t>Metric type: "service_norm"</t>
              </li>
              <li>
                <t>Level: L1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-3">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-3">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect service-related L0 raw metrics from the service runtime and service management plane using platform-specific telemetry systems (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes or equivalent monitoring/observability tools). These metrics are service-dependent and may include availability/health status, success/error rates, overload or admission control signals, and throughput indicators (e.g., tokens per second for AI inference services), among others.</t>
            <t>Aggregation logic (within service category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected L0 service metrics into a single intermediate value prior to normalization. The selection of L0 service metrics, any weights used, and any gating logic (e.g., forcing the score to a low value when the instance is unhealthy) are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) service value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes L0 service metrics to generate a single L1 service score ("service_norm"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate compute or communication metrics).</t>
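            <t>As a non-normative sketch of the aggregation and normalization steps referenced above, the following combines Weighted Average Aggregation with Min-max scaling onto the fixed 0-10 range. The L0 metric names, weights, and value bounds are hypothetical; a real implementation would apply its own provider-specific metric selection, weights, and gating logic:</t>

```python
def weighted_average(metrics: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted Average Aggregation over selected L0 service metrics
    (metric selection and weights are provider-specific)."""
    total_w = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in weights) / total_w

def min_max_to_score(value: float, lo: float, hi: float) -> int:
    """Min-max scaling onto the fixed 0-10 range, rounded to the
    unsigned-integer precision of this registry entry."""
    clamped = min(max(value, lo), hi)
    return round(10 * (clamped - lo) / (hi - lo))

# Hypothetical L0 inputs, already oriented so that higher is better:
l0 = {"success_rate": 0.995, "availability": 1.0, "headroom": 0.6}
w  = {"success_rate": 0.5,   "availability": 0.3, "headroom": 0.2}
service_norm = min_max_to_score(weighted_average(l0, w), lo=0.0, hi=1.0)
```

            <t>A gating rule (e.g., forcing the score to 0 when the instance is reported unhealthy) would be applied alongside, as noted in the aggregation logic above.</t>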
          </section>
          <section anchor="packet-stream-generation-3">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-3">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-3">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying L0 service metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-3">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds or milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-3">
            <name>Roles</name>
            <t>C-SMA: Collects L0 service raw metrics and calculates the L1 service normalized score ("service_norm") according to service/provider-specific aggregation and normalization strategies.</t>
            <t>C-NMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-3">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-3">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-7">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low service capability, not recommended for steering), 4-7 (Medium service capability, optional for steering), 8-10 (High service capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-3">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-3">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative service workload profiles (fixed request mixes and known-good baselines) to align the mapping from L0 service metrics to the L1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds). Calibration MAY include failure/overload scenarios (e.g., simulated dependency failures or saturation) to ensure score behavior is consistent with operational intent.</t>
          </section>
        </section>
        <section anchor="administrative-items-3">
          <name>Administrative Items</name>
          <section anchor="status-3">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-3">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-3">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-3">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-3">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-l1-metric-registry-entry-composed">
        <name>CATS L1 Metric Registry Entry: Composed</name>
        <t>This section gives an initial Registry Entry for the CATS L1 metric in the <em>composed</em> category.</t>
        <section anchor="summary-4">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier-4">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-4">
            <name>Name</name>
            <t>Norm_Passive_CATS-L1_Composed_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-L1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Composed: Metric category (Composed)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the composed category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-4">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-4">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>composed</em> category within CATS (L1). A composed metric is derived by combining multiple lower-level metrics that may span different categories (e.g., compute, communication, and service) and/or multiple components along the request path.</t>
            <t>Typical examples of composed metrics include (but are not limited to) end-to-end delay, application-level response time, or other synthesized indicators that are computed as a function of multiple contributing factors (e.g., the sum of compute processing delay and network transmission delay along the selected path).</t>
            <t>The composed L1 score is obtained by applying an implementation-specific aggregation function over the selected contributing L0 metrics (and/or previously computed L1 category metrics), followed by a normalization function that yields a unitless score. Higher values indicate better composed capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-4">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-4">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-4">
          <name>Metric Definition</name>
          <section anchor="reference-definition-8">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (L1 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-4">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest composed capability, 10 indicates the optimal composed capability)</t>
              </li>
              <li>
                <t>Data precision: unsigned integer (decimal notation)</t>
              </li>
              <li>
                <t>Metric type: "composed_norm"</t>
              </li>
              <li>
                <t>Level: L1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-4">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-4">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect contributing L0 raw metrics from the relevant sources across categories. For example, compute- and service-related L0 metrics may be collected by a C-SMA using platform-specific telemetry systems (e.g., Prometheus <xref target="Prometheus"/>), while communication-related L0 metrics may be collected by a C-NMA using network telemetry and protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>), and/or using network performance metric definitions and registries such as <xref target="RFC8911"/>, <xref target="RFC8912"/>, and <xref target="RFC9439"/> where applicable.</t>
            <t>Aggregation logic (within composed category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected contributing metrics into a single intermediate value prior to normalization. The aggregation function MAY combine L0 metrics directly, and/or MAY take as input one or more L1 category metrics (e.g., "compute_norm" and "communication_norm"). The selection of contributing metrics, any weights used, and the composition model (e.g., sum of delays, bottleneck/maximum, or weighted utility) are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated composed value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes the selected contributing metrics to generate a single L1 composed score ("composed_norm").</t>
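            <t>As a non-normative sketch of one composition model named above (sum of compute and network delays) followed by Sigmoid Normalization onto the fixed 0-10 range, where the tuning parameters mid_ms and steepness are illustrative assumptions and not defined by this registry entry:</t>

```python
import math

def composed_norm_from_delay(compute_delay_ms: float,
                             network_delay_ms: float,
                             mid_ms: float = 50.0,
                             steepness: float = 0.1) -> int:
    """Sum-of-delays composition followed by a falling sigmoid:
    delay well below mid_ms yields a score near 10, delay well above
    mid_ms yields a score near 0, and the score crosses 5 when the
    end-to-end delay equals mid_ms. Illustrative only."""
    e2e = compute_delay_ms + network_delay_ms
    return round(10 / (1 + math.exp(steepness * (e2e - mid_ms))))
```

            <t>Other composition models named above (e.g., bottleneck/maximum or weighted utility) would replace the sum with the corresponding function of the contributing metrics.</t>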
          </section>
          <section anchor="packet-stream-generation-4">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-4">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-4">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying contributing metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-4">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC6991"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds or milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-4">
            <name>Roles</name>
            <t>C-SMA: Collects L0 service and compute raw metrics that may contribute to the composed score, and MAY calculate the L1 composed score ("composed_norm") when it has access to the required inputs.</t>
            <t>C-NMA: Collects L0 communication raw metrics that may contribute to the composed score, and MAY calculate the L1 composed score ("composed_norm") when it has access to the required inputs.</t>
            <t>CATS Controller (or other CATS component): MAY compute the L1 composed score when the contributing metrics originate from multiple agents and are combined at a common computation point.</t>
          </section>
        </section>
        <section anchor="output-4">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-4">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-9">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low composed capability, not recommended for steering), 4-7 (Medium composed capability, optional for steering), 8-10 (High composed capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-4">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-4">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative end-to-end test profiles (fixed request mixes and controlled network/compute conditions) to align the mapping from contributing metrics to the L1 composed score. The calibration goal is to minimize score deviation across measurement agents and computation points within the same administrative domain (e.g., less than 0.1 over repeated test rounds). Calibration MAY include failure and saturation scenarios (e.g., compute overload, network congestion, and dependency failures) to ensure the composed score behavior is consistent with operational intent.</t>
          </section>
        </section>
        <section anchor="administrative-items-4">
          <name>Administrative Items</name>
          <section anchor="status-4">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-4">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-4">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-4">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-4">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
    </section>
    <section anchor="implementation-guidance-on-using-cats-metrics">
      <name>Implementation Guidance on Using CATS Metrics</name>
      <t>&lt;Authors’ Note: This section has been moved to <xref target="I-D.ietf-cats-framework"/> at the suggestion of the chairs, since this document focuses primarily on metric definitions rather than implementation details.&gt;</t>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>TBD</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>TBD</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC6241">
          <front>
            <title>Network Configuration Protocol (NETCONF)</title>
            <author fullname="R. Enns" initials="R." role="editor" surname="Enns"/>
            <author fullname="M. Bjorklund" initials="M." role="editor" surname="Bjorklund"/>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <author fullname="A. Bierman" initials="A." role="editor" surname="Bierman"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). This document obsoletes RFC 4741. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6241"/>
          <seriesInfo name="DOI" value="10.17487/RFC6241"/>
        </reference>
        <reference anchor="RFC6991">
          <front>
            <title>Common YANG Data Types</title>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <date month="July" year="2013"/>
            <abstract>
              <t>This document introduces a collection of common data types to be used with the YANG data modeling language. This document obsoletes RFC 6021.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6991"/>
          <seriesInfo name="DOI" value="10.17487/RFC6991"/>
        </reference>
        <reference anchor="RFC7011">
          <front>
            <title>Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of Flow Information</title>
            <author fullname="B. Claise" initials="B." role="editor" surname="Claise"/>
            <author fullname="B. Trammell" initials="B." role="editor" surname="Trammell"/>
            <author fullname="P. Aitken" initials="P." surname="Aitken"/>
            <date month="September" year="2013"/>
            <abstract>
              <t>This document specifies the IP Flow Information Export (IPFIX) protocol, which serves as a means for transmitting Traffic Flow information over the network. In order to transmit Traffic Flow information from an Exporting Process to a Collecting Process, a common representation of flow data and a standard means of communicating them are required. This document describes how the IPFIX Data and Template Records are carried over a number of transport protocols from an IPFIX Exporting Process to an IPFIX Collecting Process. This document obsoletes RFC 5101.</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="77"/>
          <seriesInfo name="RFC" value="7011"/>
          <seriesInfo name="DOI" value="10.17487/RFC7011"/>
        </reference>
        <reference anchor="RFC8911">
          <front>
            <title>Registry for Performance Metrics</title>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="B. Claise" initials="B." surname="Claise"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="A. Akhter" initials="A." surname="Akhter"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This document defines the format for the IANA Registry of Performance
Metrics. This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8911"/>
          <seriesInfo name="DOI" value="10.17487/RFC8911"/>
        </reference>
        <reference anchor="RFC8912">
          <front>
            <title>Initial Performance Metrics Registry Entries</title>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="K. D'Souza" initials="K." surname="D'Souza"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This memo defines the set of initial entries for the IANA Registry of
Performance Metrics. The set includes UDP Round-Trip Latency and
Loss, Packet Delay Variation, DNS Response Latency and Loss, UDP
Poisson One-Way Delay and Loss, UDP Periodic One-Way Delay and Loss,
ICMP Round-Trip Latency and Loss, and TCP Round-Trip Delay and Loss.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8912"/>
          <seriesInfo name="DOI" value="10.17487/RFC8912"/>
        </reference>
        <reference anchor="RFC9439">
          <front>
            <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
            <author fullname="Q. Wu" initials="Q." surname="Wu"/>
            <author fullname="Y. Yang" initials="Y." surname="Yang"/>
            <author fullname="Y. Lee" initials="Y." surname="Lee"/>
            <author fullname="D. Dhody" initials="D." surname="Dhody"/>
            <author fullname="S. Randriamasy" initials="S." surname="Randriamasy"/>
            <author fullname="L. Contreras" initials="L." surname="Contreras"/>
            <date month="August" year="2023"/>
            <abstract>
              <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.</t>
              <t>This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.</t>
              <t>There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9439"/>
          <seriesInfo name="DOI" value="10.17487/RFC9439"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="26" month="February" year="2026"/>
            <abstract>
              <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   functional components, describes their interactions, and provides
   illustrative workflows of the control and data planes.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-20"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Guanming Zeng" initials="G." surname="Zeng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="2" month="February" year="2026"/>
            <abstract>
              <t>   Computing-Aware Traffic Steering (CATS) is a traffic engineering
   approach that optimizes the steering of traffic to a given service
   instance by considering the dynamic nature of computing and network
   resources.  In order to consider the computing and network resources,
   a system needs to share information (metrics) that describes the
   state of the resources.  Metrics from network domain have been in use
   in network systems for a long time.  This document defines a set of
   metrics from the computing domain used for CATS.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-05"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="2" month="February" year="2026"/>
            <abstract>
              <t>   Distributed computing enhances service response time and energy
   efficiency by utilizing diverse computing facilities for compute-
   intensive and delay-sensitive services.  To optimize throughput and
   response time, "Computing-Aware Traffic Steering" (CATS) selects
   servers and directs traffic based on compute capabilities and
   resources, rather than static dispatch or connectivity metrics alone.
   This document outlines the problem statement and scenarios for CATS
   within a single domain, and drives requirements for the CATS
   framework.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-14"/>
        </reference>
        <reference anchor="I-D.rcr-opsawg-operational-compute-metrics">
          <front>
            <title>Joint Exposure of Network and Compute Information for Infrastructure-Aware Service Deployment</title>
            <author fullname="Sabine Randriamasy" initials="S." surname="Randriamasy">
              <organization>Nokia Bell Labs</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Roland Schott" initials="R." surname="Schott">
              <organization>Deutsche Telekom</organization>
            </author>
            <date day="21" month="October" year="2024"/>
            <abstract>
              <t>   Service providers are starting to deploy computing capabilities
   across the network for hosting applications such as distributed AI
   workloads, AR/VR, vehicle networks, and IoT, among others.  In this
   network-compute environment, knowing information about the
   availability and state of the underlying communication and compute
   resources is necessary to determine both the proper deployment
   location of the applications and the most suitable servers on which
   to run them.  Further, this information is used by numerous use cases
   with different interpretations.  This document proposes an initial
   approach towards a common exposure scheme for metrics reflecting
   compute and communication capabilities.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-rcr-opsawg-operational-compute-metrics-08"/>
        </reference>
        <reference anchor="performance-metrics" target="https://www.iana.org/assignments/performance-metrics/performance-metrics.xhtml">
          <front>
            <title>IANA Performance Metrics Registry</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="DMTF" target="https://www.dmtf.org/">
          <front>
            <title>Distributed Management Task Force (DMTF)</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
        <reference anchor="Prometheus" target="https://prometheus.io/">
          <front>
            <title>Prometheus</title>
            <author>
              <organization/>
            </author>
            <date>n.d.</date>
          </front>
        </reference>
      </references>
    </references>

<section anchor="appendix-a">
      <name>Appendix A</name>
      <section anchor="level-0-metric-representation-examples">
        <name>Level 0 Metric Representation Examples</name>
        <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts, such as those by the <xref target="DMTF"/>, that can serve as L0 metrics. This section provides illustrative examples.</t>

<section anchor="compute-raw-metrics">
          <name>Compute Raw Metrics</name>
          <t>This section uses CPU frequency as an example to illustrate the representation of raw compute metrics. The metric type is labeled as compute_CPU_frequency, with the unit specified in GHz. The format should support both unsigned integers and floating-point values. The corresponding metric fields are defined as follows:</t>
          <figure anchor="fig-compute-raw-metric">
            <name>An Example for Compute Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric Type: compute_CPU_frequency
      Level: L0
      Format: unsigned integer, floating point
      Unit: GHz
      Length: four octets
      Value: 2.2
Source:
      nominal

|Metric Type|Level|Format| Unit|Length| Value|Source|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits
]]></artwork>
          </figure>
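          <t>To make the layout above concrete, the following Python sketch packs the fields of this example into the 53-bit wire format shown in the figure. The numeric codepoints for Metric Type, Unit, and Source are hypothetical placeholders, since this document does not assign them; the sketch only illustrates the bit-level packing.</t>
          <sourcecode type="python"><![CDATA[
import struct

def encode_l0_metric(metric_type, level, fmt, unit, length, value, source):
    # Field widths (bits): type=8, level=2, format=1, unit=4,
    # length=3, value=32, source=3; 53 bits total, emitted
    # left-aligned in 7 octets.
    if fmt == 1:
        # Floating point: carry the IEEE-754 single-precision bits.
        value_bits = struct.unpack(">I", struct.pack(">f", value))[0]
    else:
        value_bits = value & 0xFFFFFFFF
    word = metric_type
    word = (word << 2) | level
    word = (word << 1) | fmt
    word = (word << 4) | unit
    word = (word << 3) | length
    word = (word << 32) | value_bits
    word = (word << 3) | source
    return (word << 3).to_bytes(7, "big")  # pad 53 bits to 56

# Hypothetical codepoints: type 1 = compute_CPU_frequency,
# unit 2 = GHz, source 0 = nominal; format 1 = floating point.
wire = encode_l0_metric(metric_type=1, level=0, fmt=1, unit=2,
                        length=4, value=2.2, source=0)
]]></sourcecode>
          <t>Decoding reverses the shifts in the same order; the Format bit tells the receiver whether to reinterpret the 32-bit Value field as an IEEE-754 float or as an unsigned integer.</t>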
        </section>
        <section anchor="communication-raw-metrics">
          <name>Communication Raw Metrics</name>
          <t>This section uses the total transmitted bytes (TxBytes) as an example to illustrate the representation of raw communication metrics. The metric is named "communication type_TxBytes", with the unit specified in megabytes (MB). The format should support both unsigned integers and floating-point values, and the value occupies four octets. The source of the metric is "Directly measured" and the statistic is "mean". The corresponding metric fields are defined as follows:</t>
          <figure anchor="fig-network-raw-metric">
            <name>An Example for Communication Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "communication type_TxBytes"
      Level: L0
      Format: unsigned integer, floating point
      Unit: MB
      Length: four octets
      Value: 100
Source:
      Directly measured
Statistics:
      mean

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
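          <t>The "mean" statistic in this example can be computed from periodic counter readings before the value is advertised. The following Python sketch, using made-up sample readings, shows how a directly measured TxBytes series is reduced to the mean carried in the Value field:</t>
          <sourcecode type="python"><![CDATA[
# Hypothetical per-interval TxBytes readings (in MB) taken from a
# service instance's interface counters over one reporting window.
samples_mb = [92.0, 101.5, 106.5]

# Source = "Directly measured", Statistics = "mean": the advertised
# Value field carries the mean of the window's samples.
mean_txbytes = sum(samples_mb) / len(samples_mb)
]]></sourcecode>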
        </section>
        <section anchor="delay-raw-metrics">
          <name>Delay Raw Metrics</name>
          <t>Delay is a synthesized metric influenced by computing, storage access, and network transmission. It usually refers to the overall processing duration between the arrival time of a specific service request and the departure time of the corresponding service response. The metric is named "delay_raw". The format should support both unsigned integers and floating-point values; the unit is microseconds, and the value occupies four octets. For example:</t>
          <figure anchor="fig-delay-raw-metric">
            <name>An Example for Delay Raw Metrics</name>
            <artwork><![CDATA[
Basic fields:
      Metric type: "delay_raw"
      Level: L0
      Format: unsigned integer, floating point
      Unit: Microsecond(us)
      Length: four octets
      Value: 231.5
Source:
      aggregation
Statistics:
      max

|Metric Type|Level|Format| Unit|Length| Value|Source|Statistics|
    8bits    2bits  1bit  4bits  3bits 32bits  3bits   2bits
]]></artwork>
          </figure>
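          <t>Because the Source in this example is "aggregation" and the statistic is "max", the advertised value can be derived by combining per-instance measurements. The following Python sketch, using made-up per-request delays, illustrates one way this worst-case value might be computed:</t>
          <sourcecode type="python"><![CDATA[
# Hypothetical request/response delays (microseconds) observed at
# two service instances during one reporting window.
instance_a = [180.0, 231.5, 210.2]
instance_b = [150.3, 199.9]

# Source = "aggregation", Statistics = "max": the advertised Value
# is the worst-case delay across all instances.
aggregated_max_us = max(max(instance_a), max(instance_b))
]]></sourcecode>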
        </section>
      </section>
    </section>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact initials="M." surname="Boucadair" fullname="Mohamed Boucadair">
        <organization>Orange</organization>
        <address>
          <email>mohamed.boucadair@orange.com</email>
        </address>
      </contact>
      <contact initials="Z." surname="Du" fullname="Zongpeng Du">
        <organization>China Mobile</organization>
        <address>
          <email>duzongpeng@chinamobile.com</email>
        </address>
      </contact>
      <contact initials="H." surname="Shi" fullname="Hang Shi">
        <organization>Huawei</organization>
        <address>
          <email>shihang9@huawei.com</email>
        </address>
      </contact>
    </section>
  </back>

</rfc>
