<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-song-rtgwg-din-usecases-requirements-01" category="info" consensus="true" submissionType="IETF" xml:lang="en" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="DIN: Problem, Use Cases, Requirements">Distributed Inference Network (DIN) Problem Statement, Use Cases, and Requirements</title>

    <author initials="S." surname="Jian" fullname="Song Jian">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>songjianyjy@chinamobile.com</email>
      </address>
    </author>
    <author initials="W." surname="Cheng" fullname="Weiqiang Cheng">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>chengweiqiang@chinamobile.com</email>
      </address>
    </author>

    <date year="2026"/>

    
    <workgroup>rtgwg</workgroup>
    <keyword>DIN</keyword> <keyword>AI Inference</keyword>

    <abstract>



<t>This document describes the problem statement, use cases, and requirements for a "Distributed Inference Network" (DIN) in the era of pervasive AI. As AI inference services become widely deployed and accessed by billions of users, applications, and devices, traditional centralized cloud-based inference architectures face challenges in scalability, latency, security, and efficiency. DIN aims to address these challenges by leveraging distributed edge-cloud collaboration, intelligent scheduling, and enhanced network security to support low-latency, high-concurrency, and secure AI inference services.</t>



    </abstract>



  </front>

  <middle>



<section anchor="introduction"><name>Introduction</name>

<t>AI inference is rapidly evolving into a fundamental service accessed by billions of users, applications, IoT devices, and AI agents.</t>

<t>The rapid advancement and widespread adoption of large AI models are introducing significant changes to internet usage patterns and service requirements. These changes present new challenges that existing networks need to address to effectively support the growing demands of AI inference services.</t>

<t>First, internet usage patterns are shifting from primarily content access to increasingly include AI model access.</t>

<t>Users and applications are interacting more frequently with AI models, generating distinct traffic patterns that differ from traditional web browsing or streaming. This shift requires networks to better support model inference as an important service type alongside conventional content delivery.</t>

<t>Second, the interaction modalities are diversifying from simple human-to-model conversations to include complex multi-modal interactions.</t>

<t>As AI inference costs decrease dramatically, applications, IoT devices, and autonomous systems are increasingly integrating AI capabilities through API calls and embedded model access. This expansion creates unprecedented demands for high-concurrency processing and predictable low-latency responses, as these systems often require real-time inference for critical functions including autonomous operations, industrial control, and interactive services.</t>

<t>Third, AI inference workloads introduce distinct traffic characteristics that impact network design.</t>

<t>Both north-south traffic between users and AI services, and east-west traffic among distributed AI components, are growing significantly. Moreover, the nature of AI inference communication, often organized around token generation and processing, introduces new considerations for traffic management, quality of service measurement, and resource optimization that complement traditional bit-oriented network metrics.</t>

<t>In addition, AI agents are autonomous, goal-driven entities that can perceive their environment, make decisions, and execute actions. AI agents communicate with each other and with models/tools, requiring not only inference but also coordination, state management, and dynamic discovery. This further stresses the network infrastructure and is considered within the scope of this document.</t>

<t>These developments collectively challenge current network infrastructures to adapt to the unique characteristics of AI inference workloads. Centralized approaches face limitations in supporting the distributed, latency-sensitive, and concurrent nature of modern AI services, particularly in scenarios requiring real-time performance, data privacy, and reliable service delivery.</t>

<t>This document outlines the problem statement, use cases, and functional requirements for a Distributed Inference Network (DIN) to enable scalable, efficient, and secure AI inference services that can address these emerging challenges.</t>

</section>
<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>
<dl>
  <dt>DIN:</dt>
  <dd>
    <t>Distributed Inference Network</t>
  </dd>
</dl>

<t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>


</section>
<section anchor="problem-statement"><name>Problem Statement</name>

<t>The proliferation of AI inference services has exposed fundamental limitations in traditional centralized AI inference architectures.</t>

<t>Centralized inference deployments face severe scalability challenges when handling concurrent requests from the rapidly expanding ecosystem of users, applications, IoT devices, and AI agents. Service providers have experienced recurrent outages and performance degradation during peak loads, with concurrent inference requests projected to grow from millions to billions. The fundamental constraint of concentrating computational resources in limited geographical locations creates inherent bottlenecks that lead to service disruptions and degraded user experience under massive concurrent access.</t>

<t>While human-to-model conversations may tolerate moderate network latency, the emergence of diverse interaction patterns including application-to-model, device-to-model, and machine-to-model communications imposes stringent low-latency requirements that centralized architectures cannot meet.</t>

<t>Applications including industrial robots, autonomous systems, and real-time control platforms require low-latency responses that are fundamentally constrained by the unavoidable geographical separation between end devices and centralized inference facilities. This architectural limitation creates critical barriers for delay-sensitive operations across manufacturing, healthcare, transportation, and other domains where millisecond-level or even sub-millisecond response times are essential.</t>

<t>Enterprise and industrial AI inference scenarios present unique security and compliance requirements. Centralized architectures introduce risks such as single points of failure (e.g., vulnerable to DDoS/APT attacks) and raise data sovereignty concerns when sensitive data must traverse long distances to centralized inference pools. Sectors including finance, healthcare, and public services often mandate localized processing to comply with regulations. However, distributed architectures also introduce their own security challenges: a larger attack surface, potential for model extraction or data leakage from intermediate nodes, and the need for trust among distributed components. The Distributed Inference Network (DIN) aims to address these by providing mechanisms for secure, verifiable inference within a trusted perimeter while enabling the benefits of distribution.</t>

<t>The rise of AI agents adds new dimensions: agents may be created, migrated, or terminated dynamically, requiring the network to support agent identity, discovery, and stateful communication across distributed nodes. Current networks lack the necessary abstractions to handle agent-scale dynamics.</t>

</section>
<section anchor="use-cases"><name>Use Cases</name>

<section anchor="enterprise-secure-inference-services"><name>Enterprise Secure Inference Services</name>
<t>Enterprises in regulated sectors such as finance, healthcare, industrial and public services require strict data governance while leveraging advanced AI capabilities. In this use case, inference servers are deployed at enterprise headquarters or private cloud environments, with branch offices and field devices accessing these services through heterogeneous network paths including dedicated lines, VPNs, and public internet connections.</t>

<t>The scenario encompasses various enterprise applications such as AIoT equipment inspection, intelligent manufacturing, and real-time monitoring systems that demand low-latency, high-reliability, and high-security inference services. Different network paths should provide appropriate levels of cryptographic assurance and quality of service while accommodating varying bandwidth and latency characteristics across the enterprise network topology.</t>

<t>The primary challenge involves maintaining data sovereignty and security across diverse network access scenarios while ensuring consistent low-latency performance for delay-sensitive industrial applications.</t>

</section>
<section anchor="edge-cloud-collaborative-model-training"><name>Edge-Cloud Collaborative Inference</name>
<t>Small and medium enterprises often face capital constraints that preclude full-scale inference infrastructure deployment, yet still experience demand peaks that exceed local capacity. This use case enables flexible resource allocation in which businesses maintain core computational resources on-premises while dynamically procuring additional inference capacity from AI inference providers during demand peaks.</t>

<t>The hybrid deployment model allows sensitive data to remain within enterprise boundaries while leveraging elastic cloud resources for computationally intensive operations. As enterprise business requirements fluctuate, the ability to seamlessly integrate local and cloud-based inference resources becomes crucial for maintaining service quality while controlling operational costs.</t>

<t>The network should support efficient coordination between distributed computational nodes, ensuring stable performance during resource scaling operations and maintaining inference pipeline continuity despite variations in network conditions across different service providers.</t>

</section>
<section anchor="dynamic-model-selection-and-coordination"><name>Dynamic Model Selection and Coordination</name>
<t>The transition from content access to model inference access necessitates intelligent model selection mechanisms that dynamically route requests to optimal computational resources. This use case addresses scenarios where applications should automatically select between different model sizes, specialized accelerators, and geographic locations based on real-time factors including network conditions, computational requirements, accuracy needs, and cost considerations.</t>

<t>The inference infrastructure should support real-time assessment of available resources, intelligent traffic steering based on application characteristics, and graceful degradation during resource constraints.</t>

<t>Key requirements include maintaining service continuity during model switching, optimizing the balance between response time and inference quality, and ensuring consistent user experience across varying operational conditions. This capability is particularly important for applications serving diverse user bases with fluctuating demand patterns and heterogeneous device capabilities.</t>
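<t>As a minimal sketch of such selection logic, the following toy scoring function weighs accuracy, latency, and cost across candidate deployments. The candidate catalog, metric names, and weight values are illustrative assumptions, not definitions made by this document.</t>
<sourcecode type="python"><![CDATA[
```python
# Toy illustration of dynamic model selection: score each candidate
# deployment by accuracy (higher is better) and by latency and cost
# (lower is better), then steer the request to the best-scoring one.
# All names and numbers below are hypothetical.

def score(candidate, weights):
    return (weights["accuracy"] * candidate["accuracy"]
            - weights["latency"] * candidate["latency_ms"]
            - weights["cost"] * candidate["cost_per_1k_tokens"])

def select_model(candidates, weights):
    return max(candidates, key=lambda c: score(c, weights))

candidates = [
    {"name": "large-model/cloud", "accuracy": 0.95,
     "latency_ms": 180, "cost_per_1k_tokens": 0.60},
    {"name": "small-model/edge", "accuracy": 0.88,
     "latency_ms": 25, "cost_per_1k_tokens": 0.05},
]

# A latency-sensitive application weights latency heavily, so the
# nearby small model wins despite its lower accuracy.
weights = {"accuracy": 10.0, "latency": 0.05, "cost": 1.0}
best = select_model(candidates, weights)
```
]]></sourcecode>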

</section>
<section anchor="adaptive-inference-resource-scheduling-and-coordination"><name>Adaptive Inference Resource Scheduling and Coordination</name>
<t>The evolution from content access to model inference necessitates intelligent resource coordination across different computational paradigms. This use case addresses scenarios where inference workloads require adaptive resource allocation strategies to balance performance, cost, and efficiency across distributed environments.</t>

<t>Large-small model collaboration represents a key approach for balancing inference accuracy and response latency. In this pattern, large models handle complex reasoning tasks while small models provide efficient specialized processing, requiring the network to deliver low-latency connectivity and dynamic traffic steering between distributed model instances. The network should ensure efficient synchronization and coherent data exchange to maintain service quality across the collaborative ecosystem.</t>
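<t>The large-small collaboration pattern can be sketched as a simple confidence cascade, in which a small model answers first and the request is steered to a large model only when confidence is low. Both models below are placeholder functions and the threshold is an assumed value.</t>
<sourcecode type="python"><![CDATA[
```python
# Toy sketch of large-small model collaboration as a confidence
# cascade. The small model handles a request locally; the network
# steers the request to a large model instance only on low confidence.
# Both "models" are hypothetical placeholders.

CONFIDENCE_THRESHOLD = 0.8

def small_model(prompt):
    # Placeholder for a fast, specialized model: (answer, confidence).
    if "simple" in prompt:
        return ("small-answer", 0.95)
    return ("small-guess", 0.40)

def large_model(prompt):
    # Placeholder for a slower, more capable model.
    return ("large-answer", 0.99)

def cascade_infer(prompt):
    answer, confidence = small_model(prompt)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer, "small"
    # Escalation path: this is where low-latency steering between
    # distributed model instances matters.
    answer, _ = large_model(prompt)
    return answer, "large"
```
]]></sourcecode>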

<t>Prefill-decode separation architecture provides an optimized framework for streaming inference tasks. This pattern distributes computational stages across specialized nodes, with prefilling and decoding phases executing on optimized resources. The network should provide high-bandwidth connections for intermediate data transfer and reliable transport mechanisms to maintain processing pipeline continuity, enabling scalable handling of concurrent sessions while meeting real-time latency requirements.</t>
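<t>The prefill-decode split can be illustrated with a toy session in which a prefill node builds a cache from the prompt and a decode node generates tokens from the transferred cache. The "cache" below is a trivial stand-in for the large intermediate tensors that real systems move between nodes.</t>
<sourcecode type="python"><![CDATA[
```python
# Toy simulation of prefill-decode separation. In a real deployment the
# cache is a large KV tensor shipped over a high-bandwidth link from the
# prefill node to the decode node; here it is just a token list.

def prefill(prompt_tokens):
    # Prefill phase (prefill node): process the whole prompt at once.
    return {"cached_tokens": list(prompt_tokens)}

def decode_step(cache, step):
    # Decode phase (decode node): emit one token per step, using and
    # extending the transferred cache.
    token = "tok%d" % step
    cache["cached_tokens"].append(token)
    return token

def run_session(prompt_tokens, max_new_tokens):
    cache = prefill(prompt_tokens)
    # ...cache transfer over the inter-node network happens here...
    return [decode_step(cache, step) for step in range(max_new_tokens)]
```
]]></sourcecode>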

<t>The network infrastructure should support dynamic workload distribution, intelligent traffic steering, and efficient synchronization across distributed nodes. This comprehensive approach ensures optimal user experience while maximizing resource utilization efficiency across the inference ecosystem.</t>

</section>
<section anchor="privacy-preserving-split-inference"><name>Privacy-Preserving Split Inference</name>
<t>For applications requiring strict data privacy compliance, model partitioning techniques enable sensitive computational layers to execute on-premises while utilizing cloud resources for non-sensitive operations. This approach is particularly relevant for applications processing personal identifiable information, healthcare records, financial data, or proprietary business information subject to regulatory constraints.</t>

<t>The network should support efficient transmission of intermediate computational results between edge and cloud with predictable performance characteristics to maintain inference pipeline continuity. Challenges include maintaining inference quality despite network variations, managing computational dependencies across distributed nodes, and ensuring end-to-end security while maximizing resource utilization efficiency across the partitioned model architecture.</t>
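<t>A minimal sketch of such partitioning follows: the first, sensitive stage runs on-premises and only an intermediate representation crosses the network. The layer functions are trivial stand-ins for the partitioned halves of a real model.</t>
<sourcecode type="python"><![CDATA[
```python
# Toy sketch of privacy-preserving split inference. Raw records never
# leave the enterprise; only a derived intermediate representation is
# transmitted to the cloud-resident layers.

def edge_layers(raw_record):
    # On-premises stage: crude featurization standing in for the first
    # layers of a partitioned model.
    return [len(field) for field in raw_record]

def cloud_layers(intermediate):
    # Cloud stage: operates only on the intermediate representation.
    return sum(intermediate)

def split_infer(raw_record):
    intermediate = edge_layers(raw_record)
    # ...only `intermediate` transits the network, not raw_record...
    return cloud_layers(intermediate)
```
]]></sourcecode>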

</section>
<section anchor="ai-agent-inference-services"><name>AI Agent Inference Services</name>
<t>AI agents are autonomous software entities that combine AI inference with decision-making and inter-agent communication. They can be deployed in various forms, including software agents running on user devices (e.g., smartphones, PCs), or embedded agents in IoT devices and robots. These agents interact not only with each other in multi-agent systems, but also with various tools, APIs, existing applications, web services, and software through function calling or other mechanisms. Additionally, human-agent interaction remains essential, particularly when agents require human confirmation for critical decisions. This diverse interaction landscape spans a wide range of applications, including personal scheduling, smart home automation, industrial process control, and real-time monitoring.</t>

<t>Unlike human-AI conversations, which may tolerate moderate latency, the communication between agents and tools or between agents themselves often demands extremely low latency to ensure timely execution of tasks. The network should therefore provide deterministic, low-latency connectivity for these machine-to-machine interactions, while also supporting dynamic agent discovery, seamless migration of agents across locations (e.g., from a mobile device to an edge node), and efficient coordination among distributed agents with reliable and secure interactions. Furthermore, as agents may be ephemeral or long-lived, the network needs to handle rapid creation and termination of agent sessions without impacting ongoing services.</t>

</section>
</section>
<section anchor="requirements"><name>Requirements</name>

<section anchor="scalability-and-elasticity-requirements"><name>Scalability and Elasticity Requirements</name>

<t>The Distributed Inference Network should support seamless scaling to accommodate billions of concurrent inference sessions while maintaining consistent performance levels. The network should provide mechanisms for dynamic discovery and integration of new inference nodes, with automatic load distribution across available resources. Elastic scaling should respond to diurnal traffic patterns and sudden demand spikes without service disruption.</t>

</section>
<section anchor="performance-and-determinism-requirements"><name>Performance and Determinism Requirements</name>

<t>AI inference workloads require consistent and predictable network performance to ensure reliable service delivery. The network should provide strict Service Level Agreement (SLA) guarantees for latency, jitter, and packet loss to support various distributed inference scenarios. Bandwidth provisioning should accommodate bursty traffic patterns characteristic of model parameter exchanges and intermediate data synchronization, with performance isolation between different inference workloads.</t>

</section>
<section anchor="security-and-privacy-requirements"><name>Security and Privacy Requirements</name>

<t>Comprehensive security mechanisms should protect AI models, parameters, and data throughout their transmission across network links. Cryptographic protection should be applied at appropriate layers (e.g., network, transport, or application) depending on the deployment scenario, with key management and authentication integrated. Privacy-preserving techniques should prevent leakage of sensitive information through intermediate representations while supporting efficient distributed inference.</t>

</section>
<section anchor="identification-and-scheduling-requirements"><name>Identification and Scheduling Requirements</name>

<t>The network should support fine-grained identification of inference workloads to enable appropriate resource allocation and path selection. Application-aware networking capabilities should allow inference requests to be steered to optimal endpoints based on current load, network conditions, and computational requirements. These identifiers, such as agent ID, workflow ID, or job ID, are typically defined by the application. Similar to the way DNS maps domain names to IP addresses, the network may need to map application-layer names or identifiers to network-layer addresses or labels to enable efficient resource routing and orchestration. The network should also support agent registration, discovery, and stateful handover across distributed nodes. Both centralized and distributed scheduling approaches should be supported to accommodate different deployment scenarios and organizational preferences.</t>
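<t>The mapping from application-layer identifiers to network endpoints can be sketched as a small registry lookup with least-loaded steering. The registry contents, identifier syntax, and addresses below (drawn from the RFC 5737 documentation ranges) are illustrative assumptions.</t>
<sourcecode type="python"><![CDATA[
```python
# Toy sketch of resolving an application-layer identifier (agent ID,
# workflow ID, job ID) to a network endpoint, analogous to DNS name
# resolution, then steering to the least-loaded candidate.

REGISTRY = {
    "agent:assistant-42": [
        {"addr": "203.0.113.10", "load": 0.7},
        {"addr": "203.0.113.22", "load": 0.3},
    ],
    "job:batch-7": [
        {"addr": "198.51.100.5", "load": 0.5},
    ],
}

def resolve(identifier):
    # Map the application-layer name to candidate network endpoints.
    candidates = REGISTRY.get(identifier, [])
    if not candidates:
        raise LookupError("unknown identifier: " + identifier)
    # Scheduling decision: steer to the least-loaded endpoint.
    return min(candidates, key=lambda c: c["load"])["addr"]
```
]]></sourcecode>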

</section>
<section anchor="management-and-observability-requirements"><name>Management and Observability Requirements</name>

<t>The network should provide comprehensive telemetry for performance monitoring, fault detection, and capacity planning. Metrics should include inference-specific measurements such as token latency, throughput, and computational efficiency in addition to traditional network performance indicators. Management interfaces should support automated optimization and troubleshooting across the combined compute-network infrastructure.</t>
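<t>Token-oriented metrics such as time-to-first-token and inter-token latency can be derived from per-token timestamps, as in the following minimal sketch; the timestamp values are illustrative.</t>
<sourcecode type="python"><![CDATA[
```python
# Toy computation of inference-specific metrics from per-token receive
# timestamps (in seconds): time-to-first-token (TTFT), mean inter-token
# latency, and token throughput.

def token_metrics(request_time, token_times):
    ttft = token_times[0] - request_time
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    mean_itl = sum(gaps) / len(gaps) if gaps else 0.0
    tokens_per_s = len(token_times) / (token_times[-1] - request_time)
    return {"ttft_s": ttft, "mean_itl_s": mean_itl,
            "tokens_per_s": tokens_per_s}

# Example: first token arrives after 250 ms, then one every 50 ms.
metrics = token_metrics(0.0, [0.25, 0.30, 0.35, 0.40])
```
]]></sourcecode>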

</section>
</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>This document highlights security as a fundamental requirement for DIN. The distributed nature of inference workloads creates new attack vectors including model extraction, data reconstruction from intermediate outputs, and adversarial manipulation of inference results. Security mechanisms should operate at multiple layers while maintaining the performance characteristics necessary for efficient inference.</t>

<t>Compared to centralized architectures, distributed inference increases the attack surface but also enables localized processing that can reduce data exposure. The DIN should provide mechanisms to establish trust among nodes, such as attestation and secure key distribution.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>



    <references title='Normative References' anchor="sec-normative-references">



<reference anchor="RFC2119">
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname="S. Bradner" initials="S." surname="Bradner"/>
    <date month="March" year="1997"/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="2119"/>
  <seriesInfo name="DOI" value="10.17487/RFC2119"/>
</reference>

<reference anchor="RFC8174">
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname="B. Leiba" initials="B." surname="Leiba"/>
    <date month="May" year="2017"/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="8174"/>
  <seriesInfo name="DOI" value="10.17487/RFC8174"/>
</reference>




    </references>





<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>The authors would like to thank the contributors from China Mobile Research Institute for their valuable inputs and discussions.</t>

</section>


  </back>


</rfc>

