<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.31 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-lg-bmwg-benchmarking-methodology-for-rov-00" category="info" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="ROVBench">Benchmarking Methodology for Route Origin Validation (ROV)</title>
    <seriesInfo name="Internet-Draft" value="draft-lg-bmwg-benchmarking-methodology-for-rov-00"/>
    <author initials="L." surname="Liu" fullname="Libin Liu">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>liulb@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="N." surname="Geng" fullname="Nan Geng">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>gengnan@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="February" day="28"/>
    <area>Operations and Management</area>
    <workgroup>BMWG</workgroup>
    <abstract>

<t>This document defines a benchmarking methodology for routers that implement Route Origin Validation (ROV). The methodology focuses on device-level behavior, including processing of Validated ROA Payload (VRP) updates, the interaction between ROV and BGP, control-plane resource utilization, and the scalability of ROV under varying operational conditions. The procedures described here follow the principles and constraints of the Benchmarking Methodology Working Group (BMWG) and are intended to produce repeatable and comparable results across implementations.</t>
    </abstract>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Route Origin Validation (ROV), as specified in <xref target="RFC6811"/>, allows routers to use validated Route Origin Authorization (ROA) information, which is distributed via the RPKI-to-Router (RTR) protocol defined in <xref target="RFC8210"/>, to classify BGP routes as Valid, Invalid, or NotFound. Deployments of ROV continue to increase across networks, and router vendors have implemented ROV processing as part of their control-plane functions.</t>
      <t>While operational experience is growing, there is currently no standardized methodology for measuring the performance impact and behavioral characteristics of ROV on routing devices. As with other protocol features evaluated by the Benchmarking Methodology Working Group (BMWG), a consistent and repeatable test framework is essential for:</t>
      <ul spacing="normal">
        <li>
          <t>Comparing router implementations,</t>
        </li>
        <li>
          <t>Evaluating scalability under controlled conditions,</t>
        </li>
        <li>
          <t>Characterizing the control-plane costs of ROV processing, and</t>
        </li>
        <li>
          <t>Understanding how ROV influences BGP convergence and routing stability.</t>
        </li>
      </ul>
      <t>This document defines a benchmarking methodology for routers that implement ROV, which builds upon the foundational benchmarking principles defined in <xref target="RFC1242"/>, <xref target="RFC2285"/>, <xref target="RFC2544"/>, <xref target="RFC2889"/>, and <xref target="RFC3918"/>. The methodology focuses on the Device Under Test (DUT) and uses controlled, reproducible inputs to isolate the effects of ROV from external dependencies. In particular, the benchmarking framework assumes the presence of an RPKI-to-Router (RTR) update source, which may be an RPKI Cache Server or an RTR traffic generator capable of delivering synthetic Validated ROA Payloads (VRPs).</t>
      <t>The objective of this document is to define a set of metrics and procedures to quantify:</t>
      <ul spacing="normal">
        <li>
          <t>The latency of ROV state updates within the router,</t>
        </li>
        <li>
          <t>The impact of ROV on BGP control-plane performance,</t>
        </li>
        <li>
          <t>The scalability of ROV processing under varying VRP and BGP table sizes, and</t>
        </li>
        <li>
          <t>The control-plane resource utilization associated with enabling ROV.</t>
        </li>
      </ul>
      <t>By providing a consistent framework, this document enables vendors, operators, and researchers to evaluate ROV functionality under controlled and repeatable conditions, improving understanding of implementation performance and supporting informed deployment decisions.</t>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
    </section>
    <section anchor="scope-and-goals">
      <name>Scope and Goals</name>
      <t>This document specifies a laboratory-based benchmarking methodology for evaluating the performance of router implementations of ROV as defined in <xref target="RFC6811"/>. The scope of this benchmarking methodology includes:</t>
      <ul spacing="normal">
        <li>
          <t><strong>ROV processing performance</strong>: Measurement of the time and resources required for a router to process VRP updates received via the RTR protocol.</t>
        </li>
        <li>
          <t><strong>Impact on BGP control-plane performance</strong>: Quantification of how enabling ROV affects BGP convergence times and routing table stability.</t>
        </li>
        <li>
          <t><strong>Scalability under controlled conditions</strong>: Evaluation of the router's ability to handle large VRP sets, rapid VRP churn, and BGP updates influenced by ROV.</t>
        </li>
        <li>
          <t><strong>Resource utilization</strong>: Measurement of CPU, memory, and internal control-plane load associated with ROV processing.</t>
        </li>
      </ul>
      <t>The goals of this document are:</t>
      <ul spacing="normal">
        <li>
          <t>To define a repeatable, controlled methodology for benchmarking ROV-enabled routers.</t>
        </li>
        <li>
          <t>To provide standardized metrics that allow for comparison across implementations.</t>
        </li>
      </ul>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>The terminology used in this document follows the conventions of <xref target="RFC1242"/>, <xref target="RFC2285"/>, and subsequent BMWG publications. The following terms are used with specific meanings in the context of ROV benchmarking.</t>
      <t>Route Origin Validation (ROV): A procedure defined in <xref target="RFC6811"/> that compares the origin AS of a BGP announcement with the set of authorized origins derived from validated ROA objects. ROV results in one of three states: Valid, Invalid, or NotFound.</t>
      <t>Validated ROA Payload (VRP): The processed output from a relying party containing prefix-origin pairs that routers use for ROV decisions. VRPs are transported via the RPKI-to-Router (RTR) protocol.</t>
      <t>RPKI-to-Router (RTR) Session: A protocol session between a router and an RPKI Cache Server. In benchmarking, RTR sessions may be emulated or generated using traffic/test tools to deliver synthetic VRP updates.</t>
      <t>ROV Update Processing Latency: The time from when a router receives new VRP data (via RTR) until the updated ROV state is reflected in the router's local Routing Information Base (RIB) or implemented in routing decisions.</t>
      <t>VRP-Triggered Revalidation Latency:
The time interval between completion of VRP installation and the moment all affected prefixes have updated validation states.</t>
      <t>BGP-Triggered ROV Validation Latency:
The time interval between receipt of a BGP UPDATE message and completion of the ROV validation procedure for that route.</t>
      <t>BGP Convergence Time: The time required for the router's control plane to process BGP updates and reach a stable routing state, while ROV validation is active.</t>
      <t>Resource Utilization: CPU utilization and memory consumption of the router when performing ROV-related tasks, including processing of VRP updates and applying ROV policy.</t>
      <t>ROV Churn: A burst of VRP changes (e.g., many ROA additions or withdrawals) that may trigger significant re-validation and BGP recalculation; such bursts are used in stress tests.</t>
      <t>ROV Scalability Limit: The maximum number of VRPs, RTR sessions, or ROV-triggered BGP changes that the router can process while maintaining normal operational performance.</t>
    </section>
    <section anchor="test-setup-and-laboratory-environment">
      <name>Test Setup and Laboratory Environment</name>
      <t>This section describes the required test topology, equipment, DUT configuration, RPKI data emulation, and traffic generation conditions. The goal of the test environment is to isolate the DUT and subject it to clearly defined RPKI-RTR and BGP test stimuli, while providing accurate timing and state measurements.</t>
      <section anchor="test-topology">
        <name>Test Topology</name>
        <figure anchor="test-topo">
          <name>The test topology for ROV benchmarking.</name>
          <artwork><![CDATA[
+-------------------+    RTR    +----------------------+
|    RTR Emulator   |---------->|          DUT         |
|(RTR Update Source)|           |     (ROV Enabled)    |
+-------------------+           +----------------------+
                                 /\          /\
                             BGP |           | Data-plane
                                 |           | Traffic
+---------------------+          |   +-----------------+
|BGP Traffic Generator|----------+   |      Tester     |
+---------------------+              |(Data-plane Load)|
                                     +-----------------+
]]></artwork>
        </figure>
        <t>The test topology consists of four primary components: the DUT, an RPKI-RTR update source, a BGP traffic generator, and a tester for generating data-plane load. The DUT is a router equipped with ROV capabilities, supporting the RPKI-RTR protocol and applying ROV policies to received BGP routes. The RPKI-RTR update source may be either a real RPKI cache implementation running in isolated mode or a dedicated emulator capable of producing arbitrary VRP sets and update patterns. This RTR source connects directly to the DUT using the RPKI-RTR protocol and provides precisely controlled VRP updates, including serial increments, cache resets, and bursty or delayed update sequences.</t>
        <t>The BGP traffic generator establishes one or more BGP peering sessions with the DUT and is responsible for delivering a full global routing table, on the order of 800,000 to 1,000,000 prefixes, along with controlled withdrawal or re-announcement events. The generator should be capable of presenting both stable baseline routing conditions and timed ROV-affected prefixes whose validation status will change in response to VRP updates.
A tester is connected to the DUT to introduce controlled data-plane load during benchmarking. When present, the tester <bcp14>SHOULD</bcp14> generate stable and deterministic traffic loads so that the impact of forwarding load on ROV processing can be evaluated. When data-plane load is applied, its rate, frame size, and traffic profile <bcp14>MUST</bcp14> be documented in the test report.</t>
      </section>
      <section anchor="dut-configuration-requirements">
        <name>DUT Configuration Requirements</name>
        <t>The DUT must be configured with ROV enabled on all BGP sessions receiving test routes. The router must establish a stable and fully functional RPKI-RTR session with the RTR emulator. To ensure that performance results are attributable solely to ROV behavior, all non-essential features on the DUT, such as additional routing protocols, unnecessary telemetry mechanisms, and unused services, should be disabled. Logging related to ROV may remain enabled for debugging purposes but must be rate-limited to avoid skewing CPU measurements or affecting test repeatability. All system parameters relevant to routing performance, such as multipath behavior or maximum-prefix limits, must be documented prior to testing.</t>
      </section>
      <section anchor="rtr-data-source-emulation">
        <name>RTR Data Source Emulation</name>
        <t>The RTR emulator must be capable of generating synthetic VRP data sets with user-defined characteristics. This includes the ability to create arbitrary combinations of prefixes and ASNs, overlapping VRPs, conflicting VRPs, and other edge cases relevant to validation logic. The VRP datasets should mimic realistic global distributions where appropriate, but must also support scaling tests where VRP volumes are substantially higher than today's norm. The data source must further support generating controlled bursts of VRP updates, ranging from 100 to 10,000 VRP changes per second, and must allow for both additive updates and withdrawals. These capabilities are essential for evaluating the DUT's scalability and robustness under high churn.</t>
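        <t>The dataset and burst requirements above can be illustrated with a short, non-normative Python sketch. The (prefix, max-length, origin-ASN) tuple representation, the 10.0.0.0/8 prefix pool, and all function names are illustrative assumptions for this document, not part of any RTR implementation:</t>
        <sourcecode type="python"><![CDATA[
import ipaddress
import random

def generate_vrp_set(count, seed=0):
    """Generate a synthetic VRP set as (prefix, max_length, origin_asn)
    tuples.  Prefixes are drawn from 10.0.0.0/8 so the sketch is
    self-contained; real tests would mimic global prefix distributions."""
    rng = random.Random(seed)
    vrps = set()
    while len(vrps) < count:
        plen = rng.randint(16, 24)
        # Random prefix inside 10.0.0.0/8 with host bits cleared.
        base = 0x0A000000 | (rng.getrandbits(24) & ~((1 << (32 - plen)) - 1))
        prefix = ipaddress.ip_network((base, plen))
        max_len = rng.randint(plen, 24)
        origin = rng.randint(64512, 65534)  # private-use ASNs
        vrps.add((str(prefix), max_len, origin))
    return vrps

def burst_schedule(vrps, rate_per_second, duration_s):
    """Split a VRP set into per-second batches, modelling a controlled
    burst of rate_per_second VRP changes sustained for duration_s seconds
    (100 to 10,000 changes per second in the tests defined here)."""
    ordered = sorted(vrps)
    return [ordered[s * rate_per_second:(s + 1) * rate_per_second]
            for s in range(duration_s)]
]]></sourcecode>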
      </section>
      <section anchor="bgp-traffic-generation-requirements">
        <name>BGP Traffic Generation Requirements</name>
        <t>The BGP traffic generator must present the DUT with a stable baseline routing table prior to initiating any benchmark. This ensures that the DUT begins each test run in a known, converged state with predictable CPU and memory utilization. The generator must also provide a set of ROV-affected prefixes whose origin AS can be manipulated in concert with VRP updates from the RTR emulator. These prefixes should span a range of prefix lengths and originate from diverse ASes to reflect realistic routing conditions. The traffic generator must support deterministic convergence triggers, such as the precise injection of BGP updates following a VRP change or the simultaneous application of both BGP and VRP events.</t>
      </section>
      <section anchor="traffic-profile-parameters">
        <name>Traffic Profile Parameters</name>
        <t>When data-plane traffic is used, the following parameters <bcp14>SHOULD</bcp14> be specified:</t>
        <ul spacing="normal">
          <li>
            <t>Frame size(s) used (e.g., 64, 512, 1518 bytes).</t>
          </li>
          <li>
            <t>Traffic rate (percentage of line rate or packets per second).</t>
          </li>
          <li>
            <t>Traffic pattern (constant rate, burst, IMIX).</t>
          </li>
          <li>
            <t>Source and destination IP address ranges.</t>
          </li>
          <li>
            <t>Whether traffic matches ROV-affected prefixes.</t>
          </li>
        </ul>
        <t>Each frame size and traffic rate combination <bcp14>SHOULD</bcp14> be reported separately.</t>
      </section>
    </section>
    <section anchor="benchmarking-methodology">
      <name>Benchmarking Methodology</name>
      <t>This section describes the general methodology for benchmarking ROV behavior on a DUT. The goal is to ensure that all tests are repeatable, comparable across different environments, and representative of realistic deployment conditions. The methodology defines how to establish a controlled and stable test environment, how to specify and vary input conditions, and how to measure key performance metrics associated with ROV processing.</t>
      <section anchor="general-considerations">
        <name>General Considerations</name>
        <t>Before any measurements are taken, the DUT must reach a well-defined steady state in which the RPKI-RTR session is fully established, the VRP set has been completely synchronized, and the BGP control plane has converged. A warm-up period is recommended to eliminate any cold-start effects that could bias measurement results.</t>
        <t>All sources of measurement noise should be avoided. Features such as logging, real-time telemetry export, or periodic background tasks can interfere with timing-sensitive measurements; therefore, such features should be disabled or rate-limited during benchmarking. CPU clock scaling, thermal throttling, or other variable-performance modes should be minimized if the test setup allows it.</t>
      </section>
      <section anchor="test-control-and-input-conditions">
        <name>Test Control and Input Conditions</name>
        <t>Accurate benchmarking depends on precise control of the input conditions applied to the DUT. All tests should begin from a consistent baseline consisting of:</t>
        <ul spacing="normal">
          <li>
            <t>A predefined VRP set size (e.g., tens of thousands to millions of entries).</t>
          </li>
          <li>
            <t>A stable and realistic baseline BGP RIB-in (e.g., ~1M global routes).</t>
          </li>
        </ul>
        <t>From this baseline, input variables may be modified to stress different aspects of ROV behavior. These variables include the VRP churn rate, ranging from steady incremental updates to high-intensity bursts, and the type of RPKI-RTR updates provided to the DUT, such as incremental updates versus full-table refreshes. Each of these conditions may trigger different processing strategies within the DUT, and therefore must be explicitly controlled and documented.</t>
      </section>
      <section anchor="metrics-and-measurements">
        <name>Metrics and Measurements</name>
        <t>Benchmarking ROV behavior requires collecting quantitative performance metrics that reflect how the DUT processes validation information and incorporates it into the BGP decision process. To that end, this document defines key performance metrics including ROV update processing latency, ROV validation latency, BGP convergence time, VRP storage size, CPU and memory utilization, and ROV state rebuild time.</t>
        <t><strong>ROV update processing latency</strong> measures the time from receipt of an RTR update (incremental or full) until the DUT has fully updated its internal validation state. This metric captures the efficiency of ROV data structures and algorithms.</t>
        <t><strong>ROV validation latency</strong> measures the time interval between a router's receipt of a BGP UPDATE message that contains a new or changed route, and the completion of the ROV procedure for that route, producing a validation state of Valid, Invalid, or NotFound. This metric isolates the internal validation step, excluding the larger BGP convergence process, and provides insight into the responsiveness of the DUT's validation engine.</t>
        <t><strong>BGP convergence time</strong> with ROV enabled measures how long the DUT takes to converge on BGP prefixes whose validation states change due to VRP updates. This reflects the real operational behavior of ROV as it interacts with the control plane.</t>
        <t>The <strong>VRP storage size</strong> inside the DUT should also be recorded to evaluate the scalability of the implementation when operating with large VRP datasets. Alongside this, <strong>CPU and memory utilization</strong> should be monitored to identify performance limits or resource-intensive operations triggered by ROV.</t>
        <t>A recovery-related measurement, <strong>ROV state rebuild time</strong> after RTR session reset, quantifies the time needed for the DUT to re-establish a complete and correct ROV validation state after an RTR session reset or cache outage. This metric reflects robustness and recovery behavior under fault or restart scenarios.</t>
        <t>Finally, the DUT should be evaluated under high-pressure scenarios by measuring its behavior when processing VRP bursts, such as surges of 100-10,000 VRPs per second. This measurement reveals whether the implementation can sustain abrupt workload increases without dropping updates, stalling, or entering unstable states.</t>
      </section>
    </section>
    <section anchor="benchmark-tests">
      <name>Benchmark Tests</name>
      <t>This section defines the individual benchmark tests used to evaluate the performance and behavior of a DUT implementing ROV. Each test focuses on a specific aspect of the ROV processing pipeline, including VRP ingestion, validation, interaction with BGP, scalability limits, and robustness under stress and failure conditions. All tests assume the laboratory setup and input conditions described previously.</t>
      <section anchor="test-rov-update">
        <name>ROV Update Processing Latency</name>
        <t><strong>Objective</strong>: Measure the latency from the arrival of an RTR PDU until the new VRP information is installed in the DUT's internal ROV tables.</t>
        <t>The <strong>test procedures</strong> for ROV update processing latency are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Prepare baseline state  </t>
            <ul spacing="normal">
              <li>
                <t>Establish RTR session between DUT and the RTR emulator.</t>
              </li>
              <li>
                <t>Preload DUT with a selected baseline VRP size (e.g., 100k VRPs).</t>
              </li>
              <li>
                <t>Ensure BGP is fully converged.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Inject controlled RTR update  </t>
            <ul spacing="normal">
              <li>
                <t>From the emulator, send a single incremental update modifying a known VRP.</t>
              </li>
              <li>
                <t>Alternatively, for full-refresh tests, send a full VRP set replacement PDU sequence.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Timestamp PDU transmission  </t>
            <ul spacing="normal">
              <li>
                <t>Record the exact moment the first update PDU is sent.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Monitor DUT internal state  </t>
            <ul spacing="normal">
              <li>
                <t>Use device instrumentation (API, CLI, or telemetry) to detect the exact moment the VRP table reflects the update.</t>
              </li>
              <li>
                <t>Confirm the VRP entry has been added, removed, or modified as expected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Calculate latency  </t>
            <ul spacing="normal">
              <li>
                <t>Latency = (VRP applied timestamp) − (RTR PDU sent timestamp).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat for multiple VRP table sizes  </t>
            <ul spacing="normal">
              <li>
                <t>E.g., 50k, 100k, 500k, and 1M VRPs.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat at least 10 times per condition  </t>
            <ul spacing="normal">
              <li>
                <t>Compute mean and standard deviation.</t>
              </li>
            </ul>
          </li>
        </ol>
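        <t>Steps 5 through 7 above can be summarized in a non-normative Python sketch; the function names are illustrative, and the timestamps are assumed to come from the emulator (Step 3) and from DUT instrumentation (Step 4):</t>
        <sourcecode type="python"><![CDATA[
import statistics

def update_processing_latency(pdu_sent_ts, vrp_applied_ts):
    """One trial (Step 5): time from RTR PDU transmission until the DUT's
    VRP table reflects the update.  Timestamps are in seconds on a
    common clock."""
    if vrp_applied_ts < pdu_sent_ts:
        raise ValueError("applied timestamp precedes PDU transmission")
    return vrp_applied_ts - pdu_sent_ts

def summarize_trials(latencies):
    """Per Step 7, report mean and standard deviation over at least 10
    repetitions of the same test condition."""
    if len(latencies) < 10:
        raise ValueError("at least 10 repetitions per condition required")
    return {
        "mean": statistics.mean(latencies),
        "stdev": statistics.stdev(latencies),
        "min": min(latencies),
        "max": max(latencies),
    }
]]></sourcecode>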
      </section>
      <section anchor="test-rov-validation">
        <name>ROV Validation Latency</name>
        <t><strong>Objective</strong>: Measure how long the DUT takes to apply updated VRPs to the validation states of affected BGP prefixes.</t>
        <t>The <strong>test procedures</strong> for ROV validation latency are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Establish baseline  </t>
            <ul spacing="normal">
              <li>
                <t>Load BGP full table (e.g., 1M routes).</t>
              </li>
              <li>
                <t>Ensure all prefixes have a known baseline validation state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Select a controlled prefix set  </t>
            <ul spacing="normal">
              <li>
                <t>Pick a set of prefixes (e.g., 1,000) whose origin AS is tied to specific VRPs.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Trigger validation update  </t>
            <ul spacing="normal">
              <li>
                <t>Modify VRPs so that these prefixes change validation state (Valid-&gt;Invalid or Invalid-&gt;Valid).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Timestamp VRP installation completion  </t>
            <ul spacing="normal">
              <li>
                <t>As measured in <xref target="test-rov-update"/>.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Monitor DUT validation table  </t>
            <ul spacing="normal">
              <li>
                <t>Continuously query validation state for selected prefixes.</t>
              </li>
              <li>
                <t>Note the timestamp when all prefixes reflect the updated state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Compute latency  </t>
            <ul spacing="normal">
              <li>
                <t>Validation Latency = (all validation updated) − (VRP installed).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat with varying set sizes  </t>
            <ul spacing="normal">
              <li>
                <t>E.g., 10 prefixes, 100 prefixes, 1,000 prefixes.</t>
              </li>
            </ul>
          </li>
        </ol>
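        <t>The polling logic of Steps 5 and 6 can be sketched as follows (non-normative; the snapshot format and function name are assumptions for illustration, with validation states queried from the DUT at regular intervals):</t>
        <sourcecode type="python"><![CDATA[
def validation_latency(vrp_installed_ts, state_samples,
                       expected_state, prefixes):
    """Compute ROV validation latency from polled state snapshots.

    state_samples is a time-ordered list of (timestamp, {prefix: state})
    snapshots obtained by querying the DUT (Step 5).  The latency is the
    interval from VRP installation completion (Step 4) until the first
    snapshot in which every selected prefix reports expected_state."""
    for ts, states in state_samples:
        if all(states.get(p) == expected_state for p in prefixes):
            return ts - vrp_installed_ts
    # Validation never completed within the observation window.
    return None
]]></sourcecode>
        <t>Note that the polling interval bounds the measurement resolution; it should be reported together with the results.</t>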
      </section>
      <section anchor="bgp-convergence-with-rov-enabled">
        <name>BGP Convergence with ROV Enabled</name>
        <t><strong>Objective</strong>: Measure BGP convergence time for routes impacted by ROV state changes, and compare to BGP-only convergence without ROV.</t>
        <t>The <strong>test procedures</strong> for BGP convergence with ROV enabled are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Prepare baseline  </t>
            <ul spacing="normal">
              <li>
                <t>Establish full-table BGP adjacency.</t>
              </li>
              <li>
                <t>Enable ROV on DUT.</t>
              </li>
              <li>
                <t>Ensure stable initial convergence.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Select test prefixes  </t>
            <ul spacing="normal">
              <li>
                <t>Choose prefixes that will transition from Valid to Invalid once VRP updates are applied.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Trigger VRP state change  </t>
            <ul spacing="normal">
              <li>
                <t>Send VRP modifications via RTR.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Monitor BGP behavior  </t>
            <ul spacing="normal">
              <li>
                <t>Observe best-path selection changes.</t>
              </li>
              <li>
                <t>Timestamp withdrawal or replacement of Invalid prefixes.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Measure convergence  </t>
            <ul spacing="normal">
              <li>
                <t>Convergence Timer Starts: The convergence timer <bcp14>SHOULD</bcp14> start at the timestamp when the DUT completes installation of the relevant VRP update.</t>
              </li>
              <li>
                <t>Convergence Timer Ends: The convergence timer <bcp14>SHOULD</bcp14> end when both the BGP RIB and FIB reach stable state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat test with ROV disabled  </t>
            <ul spacing="normal">
              <li>
                <t>Use identical routing changes for baseline comparison.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record:  </t>
            <ul spacing="normal">
              <li>
                <t>Time to withdraw Invalid prefixes.</t>
              </li>
              <li>
                <t>Time until new best paths stabilize.</t>
              </li>
              <li>
                <t>Differences relative to ROV-disabled baseline.</t>
              </li>
            </ul>
          </li>
        </ol>
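        <t>The timer semantics of Steps 5 and 7 can be expressed in a non-normative Python sketch (function names are illustrative; the RIB and FIB stabilization timestamps are assumed to come from DUT instrumentation):</t>
        <sourcecode type="python"><![CDATA[
def convergence_time(vrp_installed_ts, rib_stable_ts, fib_stable_ts):
    """Step 5: the convergence timer starts when the DUT completes
    installation of the relevant VRP update and ends when both the BGP
    RIB and the FIB are stable, i.e. at the later of the two
    stabilization timestamps."""
    return max(rib_stable_ts, fib_stable_ts) - vrp_installed_ts

def rov_overhead(with_rov, without_rov):
    """Step 7: convergence-time difference relative to the
    ROV-disabled baseline run with identical routing changes."""
    return with_rov - without_rov
]]></sourcecode>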
      </section>
      <section anchor="vrp-scalability-tests">
        <name>VRP Scalability Tests</name>
        <t><strong>Objective</strong>: Evaluate DUT performance with varying VRP table sizes.</t>
        <t>The <strong>test procedures</strong> for VRP scalability tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Generate VRP datasets at sizes:  </t>
            <ul spacing="normal">
              <li>
                <t>E.g., 50k, 100k, 500k, 1M.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Load each dataset into the RTR emulator.</t>
          </li>
          <li>
            <t>For each dataset, measure:  </t>
            <ul spacing="normal">
              <li>
                <t>Full-table synchronization time.</t>
              </li>
              <li>
                <t>VRP update processing latency (from <xref target="test-rov-update"/>).</t>
              </li>
              <li>
                <t>ROV validation latency (from <xref target="test-rov-validation"/>).</t>
              </li>
              <li>
                <t>Memory consumption.</t>
              </li>
              <li>
                <t>CPU utilization during sync and steady state.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record failures  </t>
            <ul spacing="normal">
              <li>
                <t>Session drops</t>
              </li>
              <li>
                <t>Timeouts</t>
              </li>
              <li>
                <t>Missing VRPs</t>
              </li>
              <li>
                <t>ROV process crashes</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Repeat 10 times per size for statistical stability.</t>
          </li>
        </ol>
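        <t>The sweep structure of Steps 3 through 5 can be sketched as below (non-normative; <tt>run_trial</tt> is an assumed caller-supplied function that drives the emulator and DUT for one trial and returns a dictionary of measurements and failure counts):</t>
        <sourcecode type="python"><![CDATA[
# Failure events to tally per Step 4 of the scalability procedure.
FAILURE_KEYS = ("session_drops", "timeouts", "missing_vrps", "crashes")

def scalability_sweep(sizes, run_trial, repetitions=10):
    """For each VRP table size, run run_trial(size) 'repetitions' times
    (Step 5) and tally the failure events observed across trials."""
    results = {}
    for size in sizes:
        trials = [run_trial(size) for _ in range(repetitions)]
        failures = {k: sum(t.get(k, 0) for t in trials)
                    for k in FAILURE_KEYS}
        results[size] = {"trials": trials, "failures": failures}
    return results
]]></sourcecode>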
      </section>
      <section anchor="vrp-churn-and-stress-tests">
        <name>VRP Churn and Stress Tests</name>
        <t><strong>Objective</strong>: Stress-test the DUT under rapid VRP changes to measure stability, performance, and correctness.</t>
        <t>The <strong>test procedures</strong> for VRP churn and stress tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Baseline setup  </t>
            <ul spacing="normal">
              <li>
                <t>Load a stable VRP table (e.g., 500k).</t>
              </li>
              <li>
                <t>Establish full BGP table (e.g., 1M prefixes).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Generate controlled churn patterns  </t>
            <ul spacing="normal">
              <li>
                <t>Rapid add or remove spikes: 100-10,000 VRPs per second.</t>
              </li>
              <li>
                <t>Sustained churn: continuous modifications for 5-10 minutes.</t>
              </li>
              <li>
                <t>Mixed churn: adds, removes, and changes simultaneously.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Measure DUT behavior  </t>
            <ul spacing="normal">
              <li>
                <t>VRP update backlog or queueing.</t>
              </li>
              <li>
                <t>ROV validation delays.</t>
              </li>
              <li>
                <t>CPU spikes.</t>
              </li>
              <li>
                <t>BGP convergence degradation.</t>
              </li>
              <li>
                <t>Missed or dropped VRP updates.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Check correctness  </t>
            <ul spacing="normal">
              <li>
                <t>Verify that no stale or inconsistent ROV states remain.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record crash, stall, or throttling events.</t>
          </li>
        </ol>
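        <t>The controlled churn patterns in step 2 can be driven from a deterministic event schedule so that runs are repeatable. The following sketch is a hypothetical generator: the event tuple layout, operation names, and VRP string format are illustrative choices, not requirements of this document.</t>
        <sourcecode type="python"><![CDATA[
```python
import itertools

def churn_schedule(rate_pps, duration_s, vrp_pool):
    """Build a mixed-churn event schedule as (timestamp, operation, vrp)
    tuples at a fixed rate, cycling announce/withdraw/change so that
    adds, removes, and changes are interleaved."""
    ops = itertools.cycle(["announce", "withdraw", "change"])
    interval = 1.0 / rate_pps
    return [(i * interval, next(ops), vrp_pool[i % len(vrp_pool)])
            for i in range(int(rate_pps * duration_s))]
```
]]></sourcecode>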
      </section>
      <section anchor="resource-utilization">
        <name>Resource Utilization</name>
        <t><strong>Objective</strong>: Measure resource consumption under various ROV workloads.</t>
        <t>The <strong>test procedures</strong> for resource utilization are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Establish monitoring tools  </t>
            <ul spacing="normal">
              <li>
                <t>CPU sampling (100-500 ms interval).</t>
              </li>
              <li>
                <t>Memory usage tracking.</t>
              </li>
              <li>
                <t>Hardware counters if available.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Measure under conditions  </t>
            <ul spacing="normal">
              <li>
                <t>Idle ROV.</t>
              </li>
              <li>
                <t>Full VRP sync.</t>
              </li>
              <li>
                <t>VRP churn.</t>
              </li>
              <li>
                <t>BGP convergence triggered by ROV events.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Record  </t>
            <ul spacing="normal">
              <li>
                <t>CPU load curves.</t>
              </li>
              <li>
                <t>Peak memory consumption.</t>
              </li>
              <li>
                <t>Any evidence of saturation (e.g., 100% CPU, memory exhaustion).</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Identify thresholds  </t>
            <ul spacing="normal">
              <li>
                <t>Points where performance degrades or ROV processing becomes unstable.</t>
              </li>
            </ul>
          </li>
        </ol>
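        <t>Evidence of saturation (step 3) can be flagged automatically from the CPU samples collected in step 1, distinguishing sustained saturation from a single transient spike. The sketch below is illustrative only; the helper name, the 95% threshold, and the minimum run length are assumed parameters that a tester would tune.</t>
        <sourcecode type="python"><![CDATA[
```python
def detect_saturation(cpu_samples, threshold=95.0, min_consecutive=5):
    """Return True if the series contains at least min_consecutive
    consecutive samples at or above threshold percent CPU, i.e.,
    sustained saturation rather than a momentary spike.  Samples are
    assumed to be taken at a fixed 100-500 ms interval."""
    run = 0
    for sample in cpu_samples:
        run = run + 1 if sample >= threshold else 0
        if run >= min_consecutive:
            return True
    return False
```
]]></sourcecode>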
      </section>
      <section anchor="rtr-session-behavior-tests">
        <name>RTR Session Behavior Tests</name>
        <t><strong>Objective</strong>: Evaluate robustness and recovery of DUT under RTR failure and failover scenarios.</t>
        <t>The <strong>test procedures</strong> for RTR session behavior tests are listed below:</t>
        <ol spacing="normal" type="1"><li>
            <t>Session reset test  </t>
            <ul spacing="normal">
              <li>
                <t>Establish normal RTR session.</t>
              </li>
              <li>
                <t>Trigger forced session reset from emulator.</t>
              </li>
              <li>
                <t>Measure the time to reestablish the RTR session, the ROV state rebuild time, and the time until the validation state is consistent again.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Cache failover test  </t>
            <ul spacing="normal">
              <li>
                <t>Configure DUT with two RTR servers (primary + secondary).</t>
              </li>
              <li>
                <t>Terminate primary RTR connection.</t>
              </li>
              <li>
                <t>Measure failover time and data consistency after switch.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Full resynchronization timing  </t>
            <ul spacing="normal">
              <li>
                <t>From emulator, force a full Reset Query sequence.</t>
              </li>
              <li>
                <t>Measure full VRP reload time.</t>
              </li>
              <li>
                <t>Compare across different VRP scales.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Incremental update performance  </t>
            <ul spacing="normal">
              <li>
                <t>Send controlled incremental PDUs.</t>
              </li>
              <li>
                <t>Measure processing latency and correctness.</t>
              </li>
              <li>
                <t>Introduce occasional malformed or unexpected PDUs to test robustness.</t>
              </li>
            </ul>
          </li>
        </ol>
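        <t>For the cache failover test, the failover time can be computed from a timestamped DUT event log: the interval between loss of the primary RTR session and the secondary session reaching a consistent state (End of Data). The sketch below is illustrative; the event names and log format are assumptions made for the example.</t>
        <sourcecode type="python"><![CDATA[
```python
def failover_time(events):
    """Seconds between loss of the primary RTR session and the moment
    the secondary session reaches a consistent state.

    events: list of (timestamp_seconds, event_name) pairs; the names
    "primary_down" and "secondary_end_of_data" are illustrative.
    """
    down = next(t for t, e in events if e == "primary_down")
    up = next(t for t, e in events
              if e == "secondary_end_of_data" and t >= down)
    return up - down
```
]]></sourcecode>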
      </section>
    </section>
    <section anchor="reporting-requirements">
      <name>Reporting Requirements</name>
      <t>An ROV benchmarking report <bcp14>MUST</bcp14> provide enough detail to allow reproducibility and meaningful comparison across different DUTs. Each report <bcp14>MUST</bcp14> include the following elements:</t>
      <ul spacing="normal">
        <li>
          <t><strong>Test environment description</strong>: The report <bcp14>MUST</bcp14> specify the DUT hardware and software versions, the testbed topology, and all ROV-related configuration parameters required to replicate the setup.</t>
        </li>
        <li>
          <t><strong>Input conditions</strong>: The report <bcp14>MUST</bcp14> document the VRP set size, RIB-in size, the presence and rate of VRP churn, and whether RTR updates were incremental or full.</t>
        </li>
        <li>
          <t><strong>Metrics and results</strong>: Each measured metric <bcp14>MUST</bcp14> include its definition, a brief description of the measurement procedure, and results presented in tabular numerical form (including minimum, average, maximum, and at least P95 values). Graphs <bcp14>MAY</bcp14> be included for clarification.</t>
        </li>
        <li>
          <t><strong>Deviations and anomalies</strong>: Any deviation from the expected behavior <bcp14>MUST</bcp14> be described, including the conditions under which it occurred and whether the test was repeated.</t>
        </li>
        <li>
          <t><strong>Summary of observations</strong>: The report <bcp14>MUST</bcp14> include a concise summary of overall DUT performance, scalability limits observed, and any significant effects of enabling ROV on BGP behavior.</t>
        </li>
      </ul>
      <t>In addition, the report <bcp14>MUST</bcp14> include, at minimum, the following parameters:</t>
      <ul spacing="normal">
        <li>
          <t>DUT hardware model, CPU architecture, memory size, and software version.</t>
        </li>
        <li>
          <t>Complete DUT configuration relevant to ROV and BGP.</t>
        </li>
        <li>
          <t>Testbed topology description.</t>
        </li>
        <li>
          <t>VRP table size.</t>
        </li>
        <li>
          <t>VRP churn rate.</t>
        </li>
        <li>
          <t>RIB-in size.</t>
        </li>
        <li>
          <t>Number of RTR sessions.</t>
        </li>
        <li>
          <t>RTR timer configuration.</t>
        </li>
        <li>
          <t>Presence and parameters of data-plane traffic (if used).</t>
        </li>
        <li>
          <t>ROV policy mode (e.g., reject Invalid).</t>
        </li>
        <li>
          <t>CPU sampling interval.</t>
        </li>
        <li>
          <t>Measurement repetition count.</t>
        </li>
      </ul>
      <t>For each metric, the report <bcp14>MUST</bcp14> provide:</t>
      <ul spacing="normal">
        <li>
          <t>Metric definition.</t>
        </li>
        <li>
          <t>Measurement method.</t>
        </li>
        <li>
          <t>Minimum, average, maximum, and at least P95 values.</t>
        </li>
        <li>
          <t>Number of samples collected.</t>
        </li>
      </ul>
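      <t>As an informal illustration, the per-metric statistics required above (minimum, average, maximum, and P95) can be reduced from the collected samples as shown below. The nearest-rank P95 definition used here is one common choice among several; a report should state which percentile definition was applied.</t>
      <sourcecode type="python"><![CDATA[
```python
def summarize(samples):
    """Reduce a list of measurements to (min, average, max, P95).
    P95 uses the nearest-rank method: the smallest sample whose rank
    covers at least 95% of the sorted series."""
    s = sorted(samples)
    n = len(s)
    p95 = s[max(0, -(-95 * n // 100) - 1)]  # ceil(0.95 * n) as rank
    return min(s), sum(s) / n, max(s), p95
```
]]></sourcecode>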
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>This document defines a benchmarking methodology for evaluating ROV on routing devices. As such, it does not introduce new protocols, modify existing security mechanisms, or create new vulnerabilities within the RPKI system or BGP itself. All benchmarking activities are intended to take place in isolated laboratory environments. Nevertheless, a number of security considerations apply to the execution and interpretation of the tests described in this document.</t>
      <t>Benchmarking ROV necessarily involves the generation, manipulation, and replay of RPKI objects. These test artifacts <bcp14>MUST NOT</bcp14> be injected into production RPKI repositories, production RPKI caches, or live BGP routing systems. Test-generated RPKI data sets <bcp14>SHOULD</bcp14> be clearly separated from real-world trust anchors, and laboratory RPKI caches <bcp14>SHOULD</bcp14> use isolated test Trust Anchors to prevent accidental propagation.</t>
      <t>Similarly, BGP routing information used in the tests, including simulated full tables, invalid prefixes, or artificially crafted origin-AS combinations, <bcp14>MUST NOT</bcp14> leak into production routing domains. All BGP sessions used for testing <bcp14>MUST</bcp14> be confined to a closed environment without external connectivity.</t>
      <t>Tests involving stress conditions, such as high churn rates or large-scale VRP updates, may cause elevated CPU or memory consumption on the DUT. Operators performing such tests <bcp14>SHOULD</bcp14> ensure that the DUT is not simultaneously connected to any production network to avoid unintended service degradation.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no actions for IANA.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-normative-references">
      <name>Normative References</name>
      <reference anchor="RFC2119">
        <front>
          <title>Key words for use in RFCs to Indicate Requirement Levels</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <date month="March" year="1997"/>
          <abstract>
            <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="2119"/>
        <seriesInfo name="DOI" value="10.17487/RFC2119"/>
      </reference>
      <reference anchor="RFC8174">
        <front>
          <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
          <author fullname="B. Leiba" initials="B." surname="Leiba"/>
          <date month="May" year="2017"/>
          <abstract>
            <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="8174"/>
        <seriesInfo name="DOI" value="10.17487/RFC8174"/>
      </reference>
      <reference anchor="RFC6811">
        <front>
          <title>BGP Prefix Origin Validation</title>
          <author fullname="P. Mohapatra" initials="P." surname="Mohapatra"/>
          <author fullname="J. Scudder" initials="J." surname="Scudder"/>
          <author fullname="D. Ward" initials="D." surname="Ward"/>
          <author fullname="R. Bush" initials="R." surname="Bush"/>
          <author fullname="R. Austein" initials="R." surname="Austein"/>
          <date month="January" year="2013"/>
          <abstract>
            <t>To help reduce well-known threats against BGP including prefix mis-announcing and monkey-in-the-middle attacks, one of the security requirements is the ability to validate the origination Autonomous System (AS) of BGP routes. More specifically, one needs to validate that the AS number claiming to originate an address prefix (as derived from the AS_PATH attribute of the BGP route) is in fact authorized by the prefix holder to do so. This document describes a simple validation mechanism to partially satisfy this requirement. [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="6811"/>
        <seriesInfo name="DOI" value="10.17487/RFC6811"/>
      </reference>
      <reference anchor="RFC8210">
        <front>
          <title>The Resource Public Key Infrastructure (RPKI) to Router Protocol, Version 1</title>
          <author fullname="R. Bush" initials="R." surname="Bush"/>
          <author fullname="R. Austein" initials="R." surname="Austein"/>
          <date month="September" year="2017"/>
          <abstract>
            <t>In order to verifiably validate the origin Autonomous Systems and Autonomous System Paths of BGP announcements, routers need a simple but reliable mechanism to receive Resource Public Key Infrastructure (RFC 6480) prefix origin data and router keys from a trusted cache. This document describes a protocol to deliver them.</t>
            <t>This document describes version 1 of the RPKI-Router protocol. RFC 6810 describes version 0. This document updates RFC 6810.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="8210"/>
        <seriesInfo name="DOI" value="10.17487/RFC8210"/>
      </reference>
      <reference anchor="RFC1242">
        <front>
          <title>Benchmarking Terminology for Network Interconnection Devices</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <date month="July" year="1991"/>
          <abstract>
            <t>This memo discusses and defines a number of terms that are used in describing performance benchmarking tests and the results of such tests. This memo provides information for the Internet community. It does not specify an Internet standard.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="1242"/>
        <seriesInfo name="DOI" value="10.17487/RFC1242"/>
      </reference>
      <reference anchor="RFC2285">
        <front>
          <title>Benchmarking Terminology for LAN Switching Devices</title>
          <author fullname="R. Mandeville" initials="R." surname="Mandeville"/>
          <date month="February" year="1998"/>
          <abstract>
            <t>This document is intended to provide terminology for the benchmarking of local area network (LAN) switching devices. It extends the terminology already defined for benchmarking network interconnect devices in RFCs 1242 and 1944 to switching devices. This memo provides information for the Internet community. It does not specify an Internet standard of any kind.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2285"/>
        <seriesInfo name="DOI" value="10.17487/RFC2285"/>
      </reference>
      <reference anchor="RFC2544">
        <front>
          <title>Benchmarking Methodology for Network Interconnect Devices</title>
          <author fullname="S. Bradner" initials="S." surname="Bradner"/>
          <author fullname="J. McQuaid" initials="J." surname="McQuaid"/>
          <date month="March" year="1999"/>
          <abstract>
            <t>This document is a republication of RFC 1944 correcting the values for the IP addresses which were assigned to be used as the default addresses for networking test equipment. This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2544"/>
        <seriesInfo name="DOI" value="10.17487/RFC2544"/>
      </reference>
      <reference anchor="RFC2889">
        <front>
          <title>Benchmarking Methodology for LAN Switching Devices</title>
          <author fullname="R. Mandeville" initials="R." surname="Mandeville"/>
          <author fullname="J. Perser" initials="J." surname="Perser"/>
          <date month="August" year="2000"/>
          <abstract>
            <t>This document is intended to provide methodology for the benchmarking of local area network (LAN) switching devices. This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2889"/>
        <seriesInfo name="DOI" value="10.17487/RFC2889"/>
      </reference>
      <reference anchor="RFC3918">
        <front>
          <title>Methodology for IP Multicast Benchmarking</title>
          <author fullname="D. Stopp" initials="D." surname="Stopp"/>
          <author fullname="B. Hickman" initials="B." surname="Hickman"/>
          <date month="October" year="2004"/>
          <abstract>
            <t>The purpose of this document is to describe methodology specific to the benchmarking of multicast IP forwarding devices. It builds upon the tenets set forth in RFC 2544, RFC 2432 and other IETF Benchmarking Methodology Working Group (BMWG) efforts. This document seeks to extend these efforts to the multicast paradigm.</t>
            <t>The BMWG produces two major classes of documents: Benchmarking Terminology documents and Benchmarking Methodology documents. The Terminology documents present the benchmarks and other related terms. The Methodology documents define the procedures required to collect the benchmarks cited in the corresponding Terminology documents. This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="3918"/>
        <seriesInfo name="DOI" value="10.17487/RFC3918"/>
      </reference>
    </references>
  </back>

</rfc>
