<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.30 (Ruby 3.4.8) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-wang-cats-innetwork-infer-01" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="ODSI">An Open, Decentralized, and Scalable Framework for Large Language Model Inference</title>
    <seriesInfo name="Internet-Draft" value="draft-wang-cats-innetwork-infer-01"/>
    <author fullname="Hanling Wang">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>wanghl03@pcl.ac.cn</email>
      </address>
    </author>
    <author fullname="Qing Li">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>liq@pcl.ac.cn</email>
      </address>
    </author>
    <author fullname="Yong Jiang">
      <organization>Tsinghua Shenzhen International Graduate School &amp; Pengcheng Laboratory</organization>
      <address>
        <email>jiangy@sz.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Mingwei Xu">
      <organization>Tsinghua University</organization>
      <address>
        <email>xumw@tsinghua.edu.cn</email>
      </address>
    </author>
    <date year="2026" month="March" day="02"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>deep learning inference</keyword>
    <keyword>distributed system</keyword>
    <keyword>decentralized network</keyword>
    <abstract>

<t>Large Language Model (LLM) inference is increasingly deployed as a networked service, yet existing deployments rely primarily on centralized infrastructure and trusted operators. Such designs limit openness, concentrate resource ownership, and constrain scalability to the capacity of individual providers. At the same time, LLM inference introduces execution characteristics (e.g., strict sequential dependencies, large intermediate activations, and tight latency requirements) that are not well supported by existing network, transport, or coordination mechanisms in open environments.</t>
      <t>This document specifies an open, decentralized, and scalable framework for executing LLM inference across independently operated and mutually untrusted participants. The framework treats inference as a distributed, layer-wise execution process subject to explicit deadlines, rather than as a monolithic computation or best-effort service. It combines layer-aware activation transport and routing, decentralized coordination among heterogeneous compute resources, and security mechanisms that provide accountability and correctness without assuming trusted execution.</t>
      <t>This document focuses on the architectural framework, design rationale, problem definition, challenges, and solution space of the Open, Decentralized, and Scalable Inference framework (ODSI). It does not specify concrete wire protocols, message formats, or protocol state machines. Such protocol-level specifications are to be defined in separate documents that build upon the framework described herein.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://kongyanye.github.io/draft-wang-cats-innetwork-infer/draft-wang-cats-innetwork-infer.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-wang-cats-innetwork-infer/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Computing-Aware Traffic Steering Working Group mailing list (<eref target="mailto:cats@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/cats/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/kongyanye/draft-wang-cats-innetwork-infer"/>.</t>
    </note>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Large Language Models (LLMs) have become a foundational component of modern networked applications, supporting tasks such as natural language understanding, code generation, and interactive assistants. Inference for these models is typically delivered as an online service, where user requests are transmitted to remote servers and processed incrementally to generate output tokens. Unlike traditional network services or offline model training workloads, LLM inference exhibits strict sequential dependencies, maintains per-request execution state, and imposes tight latency constraints on model response.</t>
      <t>Today, most large-scale LLM inference deployments rely on centralized infrastructure operated by a small number of providers. Centralization simplifies coordination, scheduling, and state management, but it also concentrates control, limits participation, and couples scalability to the capacity, geography, and policies of individual operators. As model sizes and inference demand continue to grow, these structural limitations motivate the exploration of alternative execution paradigms that can support broader participation and more elastic scaling.</t>
      <t>This document introduces the Open, Decentralized, and Scalable Inference framework (ODSI), an architectural framework for executing LLM inference across independently operated and heterogeneous compute resources. ODSI considers environments in which participants are mutually untrusted and connected only through the public Internet. The framework focuses on how inference execution can be coordinated, secured, and scaled under such conditions, without assuming centralized control or trusted execution environments.</t>
      <section anchor="motivation-for-open-and-decentralized-inference">
        <name>Motivation for Open and Decentralized Inference</name>
        <t>Centralized inference architectures assume that compute resources, network paths, and execution environments are under common administrative control. While these assumptions enable tight optimization and simplified coordination, they impose structural constraints that become increasingly pronounced as model sizes continue to grow. Modern LLMs require substantial compute throughput and memory capacity, and deploying a single model instance often exceeds the capabilities of an individual server. As a result, inference deployments increasingly rely on multiple servers to host and execute a single model, raising operational complexity and cost.</t>
        <t>At the same time, a large volume of distributed compute resources remains underutilized. Many devices and servers possess limited memory capacity or modest compute performance, preventing them from hosting complete model instances despite having available compute resources. These constraints lead to fragmented and wasted capacity, particularly at the network edge or within smaller organizations, where individual nodes cannot independently support large models even though aggregate resources may be sufficient.</t>
        <t>Centralized cloud-based inference also introduces data movement and privacy concerns. User inputs must be transmitted to remote data centers for processing, increasing exposure to data leakage and raising privacy risks in sensitive applications. In addition, aggregating large volumes of inference traffic at centralized endpoints places sustained pressure on network bandwidth, increases transmission and queuing delays, and creates service bottlenecks that limit horizontal scalability. As demand grows, scaling inference capacity requires proportional expansion of centralized infrastructure and network provisioning, which may not be economically or operationally sustainable.</t>
        <t>An open and decentralized inference model seeks to address these limitations by allowing independently operated participants to contribute partial compute resources without requiring prior trust relationships or centralized admission. By distributing inference execution across many nodes, including resource-constrained and edge devices, this paradigm enables inference capacity to scale elastically with participation. Placing computation closer to data sources can also reduce data movement, mitigate bandwidth bottlenecks, and improve responsiveness in certain deployment scenarios.</t>
        <t>However, decentralization fundamentally changes the inference execution environment. Participants may vary widely in compute performance, memory capacity, network connectivity, and availability, and some participants may behave maliciously or rationally rather than altruistically. Moreover, LLM inference is inherently stateful, i.e., the generation of each output token depends on all previous tokens, commonly represented through cached intermediate values such as key–value (KV) caches. These characteristics make inference sensitive to delays, failures, and inconsistencies, and prevent it from being treated as a stateless or best-effort distributed task.</t>
        <t>ODSI is motivated by the need to support open participation and elastic scaling while preserving the correctness, timeliness, and reliability required for practical LLM inference. The framework addresses how inference can be executed across decentralized and heterogeneous resources while remaining usable as an interactive network service.</t>
      </section>
      <section anchor="scope-and-non-goals">
        <name>Scope and Non-Goals</name>
        <t>This document defines the architectural framework, problem formulation, and design considerations for decentralized LLM inference under open participation. It identifies the key challenges introduced by decentralization, including state management, latency constraints, heterogeneity, and adversarial behavior, and describes the high-level mechanisms used to address these challenges within the ODSI framework.</t>
        <t>This document does not specify concrete network protocols, wire formats, message encodings, or protocol state machines. It also does not mandate specific model architectures, execution platforms, hardware accelerators, or economic systems. Where cryptographic, incentive, or coordination mechanisms are discussed, they are described at an abstract level to illustrate design intent rather than to prescribe particular implementations.</t>
        <t>Protocol-level specifications, interoperability requirements, and implementation details are to be defined in other documents that build upon the framework presented here.</t>
      </section>
      <section anchor="design-principles">
        <name>Design Principles</name>
        <t>The ODSI framework is guided by the following design principles:</t>
        <ul spacing="normal">
          <li>
            <t>Open Participation: Any independently operated participant may contribute compute resources without requiring centralized admission or prior trust, subject to mechanisms that provide accountability and abuse resistance.</t>
          </li>
          <li>
            <t>Decentralized Coordination: Inference execution is coordinated without assuming a single trusted controller, relying instead on distributed mechanisms that tolerate heterogeneity, failures, and adversarial behavior.</t>
          </li>
          <li>
            <t>State-Aware Execution: The framework explicitly accounts for the stateful and sequential nature of LLM inference, including the management of intermediate execution state across tokens and layers.</t>
          </li>
          <li>
            <t>Deadline Sensitivity: Inference execution is treated as a latency-sensitive process, where intermediate steps are subject to explicit timing constraints rather than best-effort delivery.</t>
          </li>
          <li>
            <t>Scalability Through Composition: The framework is designed to scale by composing many independent contributors, allowing overall inference capacity to grow with participation rather than centralized provisioning.</t>
          </li>
        </ul>
        <t>These principles inform the architectural choices and mechanisms described in the remainder of this document.</t>
      </section>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>This section defines the terminology used throughout this document. Phrases in upper-case refer to other defined terms.</t>
      <dl spacing="normal" newline="true">
        <dt>ACTIVATION</dt>
        <dd>Intermediate numerical data produced by executing a model layer during inference. ACTIVATIONS are consumed by subsequent layers and may be transmitted across the network between participants.</dd>
        <dt>CONTROL PLANE</dt>
        <dd>The non-latency-critical path responsible for coordination, verification, and enforcement functions, including cryptographic IDENTITY registration, STAKE management, SLASHING, REWARD settlement, and REPUTATION updates.</dd>
        <dt>DEADLINE</dt>
        <dd>A time constraint by which a specific inference step, such as a layer execution or ACTIVATION delivery, must complete to preserve end-to-end responsiveness.</dd>
        <dt>EXECUTION COMMITMENT</dt>
        <dd>A cryptographically signed declaration by a PARTICIPANT indicating intent to execute a specific inference step under defined inputs and DEADLINES, enabling later verification.</dd>
        <dt>IDENTITY</dt>
        <dd>A persistent, self-generated identifier bound to a PARTICIPANT, typically realized as a public–private key pair.</dd>
        <dt>INFERENCE PLANE</dt>
        <dd>The latency-critical path responsible for performing inference-related work, including LAYER execution, ACTIVATION transport, ROUTING, RE-ROUTING, and failure signaling.</dd>
        <dt>LAYER</dt>
        <dd>A discrete computation stage within an inference pipeline. LAYERS are executed sequentially for each token step and may be assigned to different execution participants.</dd>
        <dt>PARTICIPANT</dt>
        <dd>An independently operated entity that contributes compute, memory, or network resources to ODSI and participates using a cryptographic IDENTITY.</dd>
        <dt>REPUTATION</dt>
        <dd>A persistent performance signal associated with a PARTICIPANT, derived from historical correctness, DEADLINE adherence, availability, and throughput. REPUTATION influences task assignment and reward rates.</dd>
        <dt>RE-ROUTING</dt>
        <dd>The process of dynamically changing the ROUTING or LAYER assignment of an ongoing inference request in response to failures, performance degradation, or DEADLINE pressure.</dd>
        <dt>REWARD</dt>
        <dd>An economic payment issued to a PARTICIPANT for successfully completing an assigned inference task.</dd>
        <dt>ROUTING</dt>
        <dd>The selection of execution participants and network paths for ACTIVATION transport and LAYER execution, informed by DEADLINES and observed performance.</dd>
        <dt>SLACK</dt>
        <dd>The difference between an allocated DEADLINE budget and the expected execution time for an inference step. SLACK represents tolerance to variability and delay.</dd>
        <dt>SLASHING</dt>
        <dd>An enforced economic penalty applied to a PARTICIPANT when verifiable misbehavior is detected, such as incorrect output, missed DEADLINES, or commitment violations.</dd>
        <dt>STAKE</dt>
        <dd>A quantity of economic value locked by a PARTICIPANT as collateral for participation. STAKE enables accountability, economic deterrence, and Sybil resistance.</dd>
      </dl>
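      <t>As a non-normative illustration of an EXECUTION COMMITMENT, the following sketch builds a signed declaration binding a PARTICIPANT to a specific (request, token step, layer) task under a DEADLINE. The field names are illustrative only, and a symmetric HMAC stands in for the asymmetric signature scheme (e.g., one bound to the PARTICIPANT's IDENTITY key pair) that a real deployment would use.</t>
      <sourcecode type="python"><![CDATA[
import hashlib
import hmac
import json

def make_commitment(secret_key: bytes, request_id: str, token_step: int,
                    layer: int, input_digest: str, deadline_ms: int) -> dict:
    """Build a signed EXECUTION COMMITMENT (illustrative field names).

    A real deployment would use an asymmetric signature bound to the
    PARTICIPANT's IDENTITY; HMAC is a stdlib stand-in for the sketch.
    """
    body = {
        "request_id": request_id,
        "token_step": token_step,
        "layer": layer,
        "input_digest": input_digest,
        "deadline_ms": deadline_ms,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return body

def verify_commitment(secret_key: bytes, commitment: dict) -> bool:
    """Recompute the tag over the declared fields and compare."""
    body = {k: v for k, v in commitment.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, commitment["signature"])
]]></sourcecode>
      <t>Because every declared field is covered by the tag, altering any element of the committed task (for example, the layer index) after the fact invalidates the commitment, which is what enables later verification and dispute resolution.</t>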
    </section>
    <section anchor="problem-definition">
      <name>Problem Definition</name>
      <t>This section defines the core problem addressed by the ODSI framework. The goal is to formalize the execution constraints and failure modes that arise when LLM inference is performed across open, distributed, and independently operated compute resources. These constraints collectively define what it means for decentralized inference to be correct, timely, and usable.</t>
      <section anchor="layer-dependent-execution-pattern">
        <name>Layer-Dependent Execution Pattern</name>
        <t>LLM inference executes as a fixed sequence of model layers applied repeatedly for each generated token. The output of each layer constitutes an intermediate activation that is required as input to the next layer in the sequence. Execution therefore forms a strict dependency chain at layer granularity.</t>
        <t>In a decentralized setting, different layers may be executed on different nodes. This introduces an explicit requirement that intermediate activations be transferred between nodes in the correct order and without duplication or omission. Any execution framework must preserve layer ordering and ensure that each layer operates on the correct input corresponding to a specific inference request and token position.</t>
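      <t>The ordering requirement above can be sketched as a small receiver-side guard that admits each activation exactly once and in strict layer order per (request, token-position) pair. This is a sketch of the invariant, not a wire protocol; the class and method names are hypothetical.</t>
      <sourcecode type="python"><![CDATA[
class LayerOrderGuard:
    """Accept each incoming activation exactly once, in strict layer
    order, independently for each (request_id, token_step) pair."""

    def __init__(self):
        # (request_id, token_step) -> next expected layer index
        self._next_layer = {}

    def accept(self, request_id: str, token_step: int, layer: int) -> bool:
        key = (request_id, token_step)
        expected = self._next_layer.get(key, 0)
        if layer != expected:
            return False  # duplicate, omitted, or out-of-order activation
        self._next_layer[key] = expected + 1
        return True
]]></sourcecode>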
      </section>
      <section anchor="stateful-token-by-token-progression">
        <name>Stateful Token-by-Token Progression</name>
        <t>Inference proceeds incrementally, generating tokens one at a time. Each token depends on an execution state accumulated from all previous tokens, commonly including cached intermediate values such as KV caches. This state is logically persistent over the lifetime of the inference request.</t>
        <t>As a result, inference requests exhibit execution affinity, i.e., successive tokens must either be processed by nodes that already possess the relevant state or incur the cost of state transfer or reconstruction. Failures, delays, or inconsistencies in state handling directly affect correctness and latency. The problem therefore includes maintaining coherent per-request state across a sequence of distributed execution steps.</t>
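      <t>The notion of execution affinity can be sketched as a per-request state store: a node that already holds a request's accumulated cache can extend it directly, while any other node must first obtain an explicit state transfer. The interface below is illustrative, assuming an opaque per-token cache entry standing in for KV-cache contents.</t>
      <sourcecode type="python"><![CDATA[
class RequestStateStore:
    """Per-request execution state (e.g., a KV cache), keyed by request.

    Illustrates execution affinity: only nodes holding the state can
    process the next token without paying a state-transfer cost.
    """

    def __init__(self):
        self._kv = {}  # request_id -> list of per-token cache entries

    def has_state(self, request_id: str) -> bool:
        return request_id in self._kv

    def append_token(self, request_id: str, entry) -> None:
        # Extend the accumulated state after generating one more token.
        self._kv.setdefault(request_id, []).append(entry)

    def export_state(self, request_id: str) -> list:
        # State transfer: hand the accumulated cache to another node.
        return list(self._kv.get(request_id, []))

    def import_state(self, request_id: str, entries) -> None:
        # State reconstruction on the receiving node.
        self._kv[request_id] = list(entries)
]]></sourcecode>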
      </section>
      <section anchor="activation-delivery-with-timing-constraints">
        <name>Activation Delivery with Timing Constraints</name>
        <t>For interactive applications, inference must progress under tight latency constraints. Each layer execution contributes both computation delay and communication delay, and delays in earlier layers propagate to all subsequent layers within the same token generation step.</t>
        <t>This creates implicit per-layer timing constraints, i.e., intermediate activations must be delivered within bounded time to sustain acceptable end-to-end response latency. The problem is not merely reliable delivery, but timely delivery under variable network conditions and heterogeneous execution speeds.</t>
        <t>The activation delivery problem is defined as follows. Given a sequence of layer-dependent computations distributed across multiple nodes, how can intermediate activations be delivered and processed in order, within time bounds, despite variability in network latency, compute throughput, and node availability?</t>
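      <t>One way to make the implicit per-layer timing constraints explicit is to split an end-to-end per-token budget into cumulative per-layer DEADLINEs, distributing the available SLACK across layers. The proportional policy below is only one illustrative choice; the budget values and function name are hypothetical.</t>
      <sourcecode type="python"><![CDATA[
def per_layer_deadlines(token_budget_ms: float, expected_ms: list) -> list:
    """Split an end-to-end per-token budget into cumulative per-layer
    DEADLINEs, spreading the total SLACK proportionally to each layer's
    expected compute-plus-transfer time (illustrative policy only)."""
    total = sum(expected_ms)
    if total > token_budget_ms:
        raise ValueError("no slack: expected time exceeds the budget")
    slack = token_budget_ms - total
    deadlines, elapsed = [], 0.0
    for exp in expected_ms:
        # Each layer gets its expected time plus a proportional slack share.
        elapsed += exp + slack * (exp / total)
        deadlines.append(elapsed)
    return deadlines
]]></sourcecode>
      <t>A missed intermediate deadline under such a policy signals that downstream layers have lost slack, which is the condition that would trigger RE-ROUTING rather than silent best-effort delivery.</t>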
      </section>
      <section anchor="open-participation-and-adversarial-behavior">
        <name>Open Participation and Adversarial Behavior</name>
        <t>ODSI assumes an open execution environment in which nodes may join or leave without centralized admission and are operated by independent parties. Participants may differ significantly in performance, reliability, and incentives, and some may behave maliciously or rationally rather than cooperatively.</t>
        <t>The problems to be solved therefore include:</t>
        <ul spacing="normal">
          <li>
            <t>Detecting incorrect or inconsistent execution results,</t>
          </li>
          <li>
            <t>Attributing actions to specific participants,</t>
          </li>
          <li>
            <t>Preventing abuse such as equivocation, free-riding, or denial of service,</t>
          </li>
          <li>
            <t>Enabling accountability without assuming trusted execution environments.</t>
          </li>
        </ul>
        <t>Any viable solution must address correctness and liveness under these conditions while remaining compatible with open participation.</t>
      </section>
      <section anchor="limitations-of-existing-execution-and-transport-methods">
        <name>Limitations of Existing Execution and Transport Methods</name>
        <t>Existing execution and transport methods do not directly address the problem defined above. Best-effort networking does not account for execution dependencies or timing constraints. Traditional distributed computation frameworks assume stable membership, trusted operators, or coarse-grained tasks. Centralized schedulers do not extend naturally to environments without common administrative control.</t>
        <t>The problem addressed by ODSI is therefore not solved by simply distributing computation or improving transport performance. It requires a framework that explicitly integrates execution dependencies, state management, timing constraints, and participant accountability into the design.</t>
      </section>
    </section>
    <section anchor="system-and-threat-assumptions">
      <name>System and Threat Assumptions</name>
      <t>This section specifies the assumptions under which the ODSI framework operates. It defines the participating entities, communication and execution conditions, and the classes of failures and adversarial behavior the framework is designed to tolerate. No centralized trust, privileged operators, or trusted execution environments are assumed.</t>
      <section anchor="participants-and-roles">
        <name>Participants and Roles</name>
        <t>The system consists of a set of independently operated nodes that participate in inference execution. Nodes may assume one or more of the following roles:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Clients</strong> initiate inference requests and receive generated outputs. A client may or may not also participate in execution.</t>
          </li>
          <li>
            <t><strong>Execution Nodes</strong> perform inference computation, typically executing one or more model layers for specific inference requests. Execution nodes may differ in compute capacity, memory availability, and supported hardware.</t>
          </li>
          <li>
            <t><strong>Coordination Nodes</strong> participate in control-plane functions such as identity management, stake accounting, verification, and settlement. These roles may be co-located with execution nodes or operated separately.</t>
          </li>
        </ul>
        <t>All participants are identified by persistent cryptographic identities. No global trust relationships are assumed among participants, and no role is restricted to a fixed or privileged set of operators.</t>
      </section>
      <section anchor="network-and-execution-assumptions">
        <name>Network and Execution Assumptions</name>
        <t>Nodes communicate over the public Internet, which is treated as an unreliable, asynchronous network. Message delivery may experience variable latency, reordering, duplication, or loss. No assumptions are made about bounded network delay or synchronized clocks, except where explicitly stated by higher-layer mechanisms.</t>
        <t>Execution nodes are heterogeneous. They may differ in:</t>
        <ul spacing="normal">
          <li>
            <t>Compute throughput and supported instruction sets,</t>
          </li>
          <li>
            <t>Available memory and storage,</t>
          </li>
          <li>
            <t>Network bandwidth and latency,</t>
          </li>
          <li>
            <t>Availability and uptime.</t>
          </li>
        </ul>
        <t>Nodes may join or leave the system at arbitrary times. Execution may be interrupted due to failures, preemption, or voluntary withdrawal. The system does not assume trusted execution environments or hardware-based attestation, and correctness cannot be inferred solely from successful message delivery.</t>
      </section>
      <section anchor="adversary-and-failure-model">
        <name>Adversary and Failure Model</name>
        <t>The framework assumes the presence of faulty, rational, and malicious participants. Nodes may deviate arbitrarily from prescribed behavior, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>Returning incorrect or fabricated inference outputs,</t>
          </li>
          <li>
            <t>Withholding results or responding after deadlines,</t>
          </li>
          <li>
            <t>Equivocating by providing inconsistent results to different peers,</t>
          </li>
          <li>
            <t>Attempting to free-ride without performing assigned computation,</t>
          </li>
          <li>
            <t>Launching denial-of-service or resource exhaustion attacks.</t>
          </li>
        </ul>
        <t>In addition to malicious behavior, the system must tolerate non-malicious failures such as crashes, network partitions, and transient performance degradation.</t>
        <t>The adversary is not assumed to control a majority of system resources globally, but may control multiple identities unless mitigated by Sybil-resistance mechanisms. The framework does not assume confidentiality of intermediate activations unless explicitly provided by higher-layer mechanisms.</t>
        <t>Under these assumptions, the framework aims to provide:</t>
        <ul spacing="normal">
          <li>
            <t>Safety: incorrect inference results can be detected and attributed,</t>
          </li>
          <li>
            <t>Liveness: inference can make progress despite failures and churn,</t>
          </li>
          <li>
            <t>Accountability: misbehavior can be penalized without relying on trusted authorities.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="odsi-framework-overview">
      <name>ODSI Framework Overview</name>
      <t>This section provides a high-level overview of the ODSI framework. The framework defines how large-scale inference execution can be coordinated across open, heterogeneous, and independently operated resources while preserving correctness, timeliness, and accountability.</t>
      <t>ODSI is structured as a layered framework that integrates execution, coordination, and incentive mechanisms. It does not mandate a specific implementation or protocol stack, but instead defines architectural components and their interactions.</t>
      <section anchor="high-level-architecture">
        <name>High-Level Architecture</name>
        <artwork><![CDATA[
                 +----------------------+
                 |        Client        |
                 |  (Inference Request) |
                 +----------+-----------+
                            |
                            v
=========================================================
|                    INFERENCE PLANE                    |
|   (Deadline-Critical, Peer-to-Peer Inference Path)    |
|                                                       |
|   +-----------+     +-----------+     +-----------+   |
|   |  Layer i  | --> | Layer i+1 | --> | Layer i+2 |   |
|   | Exec Node |     | Exec Node |     | Exec Node |   |
|   +-----------+     +-----------+     +-----------+   |
|       |                   |                   |       |
|       |        Intermediate Activations       |       |
|       +---------------------------------------+       |
|                                                       |
=========================================================
                            |
                            |   Execution Events
                            |   (commit, result, timing)
                            v
---------------------------------------------------------
|                     CONTROL PLANE                     |
|     (Asynchronous Coordination and Accountability)    |
|                                                       |
|   +----------------+   +--------------------------+   |
|   | Identity and   |   | Verification and Dispute |   |
|   | Stake Manage.  |   | Resolution               |   |
|   +----------------+   +--------------------------+   |
|            |                     |                    |
|            v                     v                    |
|   +----------------+   +--------------------------+   |
|   | Incentives and |   | Reputation and Routing   |   |
|   | Settlement     |   | Feedback                 |   |
|   +----------------+   +--------------------------+   |
|                                                       |
---------------------------------------------------------
]]></artwork>
        <t>At a high level, ODSI separates inference execution into two logically distinct planes:</t>
        <ul spacing="normal">
          <li>
            <t>Inference Plane: Responsible for performing inference computation and delivering intermediate results under latency constraints.</t>
          </li>
          <li>
            <t>Control Plane: Responsible for identity management, coordination, verification, accounting, and enforcement.</t>
          </li>
        </ul>
        <t>The inference plane operates in a peer-to-peer fashion and is optimized for low-latency, deadline-sensitive computation. The control plane operates asynchronously and does not block inference progress, allowing execution to proceed even in the presence of coordination delays.</t>
        <t>This separation enables inference to remain responsive while still supporting accountability and correctness in an open environment.</t>
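      <t>The non-blocking plane separation can be sketched as an event feed: the deadline-critical path emits execution events (commit, result, timing) without waiting, while a control-plane worker drains and processes them asynchronously. The class and event names are illustrative, not part of any protocol.</t>
      <sourcecode type="python"><![CDATA[
import queue
import threading

class ControlPlaneFeed:
    """Inference-plane side of the plane separation: execution events
    are emitted without blocking layer execution; a control-plane
    worker drains them asynchronously (event names illustrative)."""

    def __init__(self):
        self._events = queue.Queue()
        self.processed = []  # stands in for verification / settlement work
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def emit(self, event: dict) -> None:
        # Never blocks the deadline-critical inference path.
        self._events.put_nowait(event)

    def _drain(self):
        while True:
            ev = self._events.get()
            if ev is None:          # shutdown sentinel
                break
            self.processed.append(ev)
            self._events.task_done()

    def close(self):
        self._events.put(None)
        self._worker.join()
]]></sourcecode>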
      </section>
      <section anchor="layer-wise-distributed-execution-model">
        <name>Layer-Wise Distributed Execution Model</name>
        <t>ODSI adopts a layer-wise execution model in which inference computation is decomposed into sequential stages corresponding to the layers of the inference graph. Each stage may be executed by a different execution node, and intermediate results are transferred between nodes as needed.</t>
        <t>Execution proceeds incrementally for each inference request, with explicit association between:</t>
        <ul spacing="normal">
          <li>
            <t>A specific inference request,</t>
          </li>
          <li>
            <t>A specific token generation step,</t>
          </li>
          <li>
            <t>A specific computation layer.</t>
          </li>
        </ul>
        <t>This explicit structuring allows the framework to reason about execution dependencies, timing constraints, and correctness at layer granularity. It also enables flexible placement of computation across nodes with varying capabilities, without requiring any single node to host the entire inference workload.</t>
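      <t>Flexible placement across nodes with varying capabilities can be sketched as a memory-constrained partitioning of contiguous layer ranges, so that no single node must host the entire model. The greedy policy below is illustrative only; a deployed scheduler would also weigh compute throughput, network latency, and REPUTATION.</t>
      <sourcecode type="python"><![CDATA[
def place_layers(layer_mem_gb: list, nodes: dict) -> dict:
    """Greedy contiguous placement of layers onto heterogeneous nodes.

    `layer_mem_gb` lists per-layer memory requirements; `nodes` maps a
    node id to its available memory in GB (illustrative policy only).
    Returns a mapping layer index -> node id.
    """
    placement = {}
    order = sorted(nodes, key=nodes.get, reverse=True)  # largest first
    i = 0
    for node in order:
        budget = nodes[node]
        # Assign the next contiguous run of layers that fits this node.
        while i < len(layer_mem_gb) and layer_mem_gb[i] <= budget:
            placement[i] = node
            budget -= layer_mem_gb[i]
            i += 1
        if i == len(layer_mem_gb):
            break
    if i < len(layer_mem_gb):
        raise ValueError("insufficient aggregate capacity")
    return placement
]]></sourcecode>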
      </section>
      <section anchor="open-participation-and-resource-contribution">
        <name>Open Participation and Resource Contribution</name>
        <t>ODSI is designed to support open participation without centralized admission control. Any node may contribute resources to inference execution by assuming execution or coordination roles, subject to framework-defined requirements for identity, accountability, and correctness.</t>
        <t>Resource contribution is flexible and may include:</t>
        <ul spacing="normal">
          <li>
            <t>Compute capacity for executing specific computation stages,</t>
          </li>
          <li>
            <t>Memory for maintaining execution state,</t>
          </li>
          <li>
            <t>Network capacity for transporting intermediate results.</t>
          </li>
        </ul>
        <t>Nodes are not required to support complete inference execution. Instead, they may contribute partial resources aligned with their capabilities. This allows resource-constrained or edge nodes to participate meaningfully, while enabling aggregate inference capacity to scale with the number of participants.</t>
      </section>
      <section anchor="scalability-considerations">
        <name>Scalability Considerations</name>
        <t>Scalability in ODSI is achieved through decentralization, decomposition, and asynchronous coordination. By distributing execution across many independently operated nodes, the framework avoids reliance on centralized bottlenecks.</t>
        <t>Key scalability properties include:</t>
        <ul spacing="normal">
          <li>
            <t>Horizontal scaling: Inference capacity increases with the number of participating nodes.</t>
          </li>
          <li>
            <t>Elastic participation: Nodes may join or leave without global reconfiguration.</t>
          </li>
          <li>
            <t>Local decision-making: Execution placement and routing decisions can be made using local observations rather than global state.</t>
          </li>
        </ul>
        <t>The framework is designed to tolerate heterogeneity and churn while maintaining bounded coordination overhead. As inference demand grows, scalability is achieved by expanding participation rather than by increasing the capacity of centralized infrastructure.</t>
      </section>
    </section>
    <section anchor="execution-and-coordination-mechanism">
      <name>Execution and Coordination Mechanism</name>
      <t>This section describes how inference execution is coordinated across distributed participants in the ODSI framework. It focuses on how computation is assigned, how execution state is preserved, how heterogeneous resources are coordinated, and how failures are handled during inference execution.</t>
      <section anchor="layer-assignment-and-path-affinity">
        <name>Layer Assignment and Path Affinity</name>
        <t>Inference execution in ODSI is organized as a sequence of layer executions. For each inference request, layers are assigned to execution nodes based on their capabilities, availability, and observed performance characteristics.</t>
        <t>ODSI introduces the notion of execution path affinity, whereby successive layers and token steps of the same inference request preferentially follow a consistent sequence of nodes. Path affinity reduces the need to transfer execution state and intermediate data, thereby improving latency and reducing network overhead.</t>
        <t>Layer assignment decisions may be adapted dynamically in response to changing network conditions or node availability. However, reassignment is performed conservatively to avoid excessive state migration or disruption of ongoing inference execution. Under typical operating conditions, stable execution paths are expected to dominate, and recovery-related latency remains within acceptable bounds for interactive inference.</t>
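        <t>The affinity preference described above can be expressed as a small bias in node selection. The non-normative sketch below (names and the bonus value are illustrative assumptions) prefers the node that executed the previous layer unless an alternative is faster by more than the assumed cost of transferring state:</t>
        <sourcecode type="python"><![CDATA[
def choose_node(candidates, prev_node, est_latency, affinity_bonus=0.2):
    """Pick an execution node for the next layer, preferring the node
    that executed the previous layer (path affinity) unless another
    candidate is faster by more than the assumed affinity bonus."""
    def cost(node):
        c = est_latency[node]
        if node == prev_node:
            c -= affinity_bonus  # avoided state and activation transfer
        return c
    return min(candidates, key=cost)
]]></sourcecode>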
      </section>
      <section anchor="handling-stateful-inference-and-kv-cache">
        <name>Handling Stateful Inference and KV Cache</name>
        <t>Inference execution maintains per-request state across token generation steps. This state commonly includes cached intermediate values such as KV caches, which are required for efficient generation of subsequent tokens.</t>
        <t>ODSI treats execution state as logically associated with an execution path rather than with a single node. State may be:</t>
        <ul spacing="normal">
          <li>
            <t>Retained locally by execution nodes across successive steps,</t>
          </li>
          <li>
            <t>Transferred explicitly when execution is reassigned,</t>
          </li>
          <li>
            <t>Reconstructed when transfer is infeasible or too costly.</t>
          </li>
        </ul>
        <t>This document does not mandate a specific state representation or transfer mechanism. Instead, it defines coordination requirements that ensure state consistency and correctness across execution steps.</t>
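        <t>One non-normative way to realize the three options above is a simple cost comparison at reassignment time; the function name and cost model below are illustrative only:</t>
        <sourcecode type="python"><![CDATA[
def state_action(same_node, transfer_cost, recompute_cost):
    """Decide how to handle per-request state (e.g., a KV cache) when a
    layer is (re)assigned: keep it in place, move it, or rebuild it,
    choosing the cheaper of transfer and reconstruction."""
    if same_node:
        return "retain"
    return "transfer" if transfer_cost <= recompute_cost else "reconstruct"
]]></sourcecode>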
      </section>
      <section anchor="heterogeneous-compute-coordination">
        <name>Heterogeneous Compute Coordination</name>
        <t>Execution nodes in ODSI are heterogeneous in compute performance, memory capacity, and network connectivity. Coordination mechanisms must account for this heterogeneity when assigning computation and routing intermediate results.</t>
        <t>Nodes may advertise capabilities and performance metrics, such as execution throughput or observed latency. Coordination decisions may incorporate these metrics to balance load, avoid bottlenecks, and satisfy timing constraints.</t>
        <t>The framework supports partial participation, allowing nodes to execute only those computation stages that align with their capabilities. This enables broad participation without requiring uniform hardware or resource provisioning.</t>
      </section>
      <section anchor="failure-handling-and-recovery">
        <name>Failure Handling and Recovery</name>
        <t>ODSI is designed to tolerate failures during inference execution, including node crashes, network disruptions, and missed execution deadlines.</t>
        <t>Failure handling strategies include:</t>
        <ul spacing="normal">
          <li>
            <t>Detecting stalled or failed execution steps through timeout or absence of expected outputs,</t>
          </li>
          <li>
            <t>Reassigning computation to alternative nodes when failures occur,</t>
          </li>
          <li>
            <t>Reconstructing execution state when necessary to resume inference.</t>
          </li>
        </ul>
        <t>The framework prioritizes forward progress and bounded recovery cost. Failures may result in degraded performance or recomputation, but should not compromise correctness or global system stability.</t>
        <t>Recovery mechanisms are coordinated without assuming centralized control and do not require halting unrelated inference requests.</t>
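        <t>The timeout-based detection strategy listed above can be sketched as follows (non-normative; identifiers are illustrative):</t>
        <sourcecode type="python"><![CDATA[
def detect_stalls(pending, now, budget):
    """Return identifiers of execution steps whose expected output has
    not arrived within the per-step deadline budget. A coordinating
    node would reassign these steps to alternative execution nodes."""
    return [sid for sid, started in pending.items() if now - started > budget]
]]></sourcecode>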
      </section>
    </section>
    <section anchor="deadline-driven-execution-and-performance-considerations">
      <name>Deadline-Driven Execution and Performance Considerations</name>
      <t>Interactive inference services impose strict latency requirements that shape execution, coordination, and resource selection decisions. This section describes how ODSI reasons about deadlines, identifies sources of latency, and manages trade-offs between flexibility and timely execution.</t>
      <section anchor="deadline-semantics-and-slack">
        <name>Deadline Semantics and Slack</name>
        <t>In ODSI, inference execution is associated with explicit or implicit deadlines derived from application-level responsiveness requirements. Deadlines apply not only to end-to-end inference requests but also to intermediate execution steps, such as individual layer computations within a token generation cycle.</t>
        <t>Each execution step may be assigned a deadline budget, representing the maximum allowable time from input availability to output production. The difference between the allocated budget and the expected execution time is referred to as slack. Slack captures tolerance to variability in computation and communication and serves as a key signal for execution planning.</t>
        <t>Slack is consumed as inference progresses. Delays incurred at earlier steps reduce the slack available to downstream steps, making subsequent execution increasingly time-sensitive. The framework therefore prioritizes maintaining positive slack throughout execution to preserve responsiveness.</t>
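        <t>As a worked, non-normative example, slack can be computed as the allocated budget minus both the time already consumed and the expected remaining execution time; a delay at an earlier step shows up directly as reduced slack downstream:</t>
        <sourcecode type="python"><![CDATA[
def remaining_slack(deadline_budget, elapsed, expected_remaining):
    """Slack = time still allowed minus time still expected to be
    needed. Negative slack signals a likely deadline violation."""
    return (deadline_budget - elapsed) - expected_remaining
]]></sourcecode>
        <t>For instance, a step with a 100 ms budget, 30 ms elapsed, and 50 ms of expected remaining work has 20 ms of slack; if an upstream delay raises elapsed time to 80 ms, slack becomes negative and downstream execution is at risk.</t>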
      </section>
      <section anchor="latency-sources-and-bottlenecks">
        <name>Latency Sources and Bottlenecks</name>
        <t>End-to-end inference latency arises from multiple sources, including:</t>
        <ul spacing="normal">
          <li>
            <t>Computation time for executing individual layers,</t>
          </li>
          <li>
            <t>Data transfer time for delivering intermediate results,</t>
          </li>
          <li>
            <t>Queuing delays caused by contention for compute or network resources,</t>
          </li>
          <li>
            <t>Coordination overhead for assignment and verification.</t>
          </li>
        </ul>
        <t>In decentralized environments, these latency components are highly variable and may fluctuate over short time scales. Bottlenecks may shift dynamically due to changes in network conditions, node availability, or workload distribution.</t>
        <t>ODSI does not assume that any single latency source dominates. Instead, it treats latency as a composite effect and seeks to minimize the risk of deadline violations by accounting for both computation and communication delays when coordinating execution.</t>
      </section>
      <section anchor="routing-and-scheduling-considerations">
        <name>Routing and Scheduling Considerations</name>
        <t>Routing and scheduling decisions in ODSI are informed by deadline constraints and observed performance. Execution steps may be routed through nodes that offer favorable trade-offs between compute throughput, network latency, and reliability.</t>
        <t>Scheduling decisions may incorporate:</t>
        <ul spacing="normal">
          <li>
            <t>Estimated computation time based on historical performance,</t>
          </li>
          <li>
            <t>Network round-trip latency and variability,</t>
          </li>
          <li>
            <t>Current queue occupancy or load,</t>
          </li>
          <li>
            <t>Remaining slack for the inference request.</t>
          </li>
        </ul>
        <t>The framework favors execution paths that preserve slack and reduce the probability of deadline violations. However, routing decisions are made using incomplete and potentially stale information, and must therefore tolerate uncertainty.</t>
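        <t>The scheduling inputs listed above can be combined into a simple deadline-aware score. The sketch below is non-normative; in particular, the jitter term and the penalty weight are assumptions introduced here to model the uncertainty of stale measurements:</t>
        <sourcecode type="python"><![CDATA[
def route_score(est_compute, rtt, queue_delay, jitter, slack):
    """Lower is better: predicted step time, plus a penalty when the
    pessimistic estimate (prediction plus observed jitter) would
    exhaust the remaining slack and risk a deadline violation."""
    predicted = est_compute + rtt + queue_delay
    pessimistic = predicted + jitter
    penalty = 10.0 * max(0.0, pessimistic - slack)  # assumed weight
    return predicted + penalty

def pick_route(nodes, slack):
    """Select the candidate node that best preserves slack."""
    return min(nodes, key=lambda n: route_score(
        n["compute"], n["rtt"], n["queue"], n["jitter"], slack))
]]></sourcecode>
        <t>With ample slack, the fastest node wins; as slack shrinks, a slightly slower but more predictable node may be preferred.</t>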
      </section>
      <section anchor="trade-offs-between-flexibility-and-timeliness">
        <name>Trade-offs Between Flexibility and Timeliness</name>
        <t>ODSI balances execution flexibility against the need for timely completion. Allowing frequent reassignment or dynamic reconfiguration can improve robustness and load balancing but may introduce additional coordination overhead and state transfer costs.</t>
        <t>Conversely, favoring stable execution paths improves predictability and reduces overhead but may limit the system’s ability to respond to failures or performance degradation.</t>
        <t>The framework does not prescribe a single optimal balance. Instead, it defines a design space in which implementations may tune the degree of flexibility based on workload characteristics, deployment conditions, and performance objectives. The guiding principle is to favor timely execution in the common case, while retaining sufficient adaptability to preserve progress under adverse conditions.</t>
      </section>
    </section>
    <section anchor="security-and-accountability-framework">
      <name>Security and Accountability Framework</name>
      <t>This section describes the security and accountability mechanisms assumed by the ODSI framework. The framework operates in an open environment with mutually untrusted participants and therefore relies on cryptographic mechanisms and economic accountability rather than trusted operators or centralized enforcement.</t>
      <section anchor="cryptographic-identity">
        <name>Cryptographic Identity</name>
        <t>Each participant in the system is associated with a persistent cryptographic identity, typically represented by a public–private key pair <xref target="RFC6979"/>. This identity serves as the basis for authentication, attribution, and accountability across all interactions.</t>
        <t>All inference-related actions, including execution commitments, result submissions, and coordination messages, are bound to the participant’s identity through digital signatures. This binding ensures that actions can be reliably attributed to specific participants without relying on centralized identity providers.</t>
        <t>Identities are self-generated and do not imply trust. The framework assumes that a single entity may control multiple identities unless constrained by additional mechanisms such as economic bonding or resource-based admission.</t>
      </section>
      <section anchor="verifiable-execution-actions">
        <name>Verifiable Execution Actions</name>
        <t>ODSI requires that execution-related actions be verifiable, meaning that they can be independently checked for consistency, correctness, or policy compliance after the fact.</t>
        <t>Verifiable actions may include:</t>
        <ul spacing="normal">
          <li>
            <t>Commitments to execute specific computation stages,</t>
          </li>
          <li>
            <t>Submission of intermediate or final execution outputs,</t>
          </li>
          <li>
            <t>Timing assertions related to execution deadlines.</t>
          </li>
        </ul>
        <t>Verification does not require continuous oversight during execution. Instead, it relies on cryptographic commitments, hashes, and signed messages that allow third parties to reconstruct and evaluate execution behavior when disputes arise.</t>
        <t>This approach enables detection of incorrect execution, equivocation, or deadline violations without imposing synchronous verification on the critical execution path <xref target="Byzantine"/>.</t>
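        <t>A minimal, non-normative sketch of such a commitment scheme using a cryptographic hash is shown below; the field encodings are illustrative, and the digital-signature layer that binds the commitment to a participant identity is omitted:</t>
        <sourcecode type="python"><![CDATA[
import hashlib

def commit(inputs: bytes, layer_id: int, deadline: float, nonce: bytes) -> str:
    """Hash commitment over execution inputs, layer identifier, and
    deadline, published before the output is revealed."""
    h = hashlib.sha256()
    h.update(inputs)
    h.update(layer_id.to_bytes(4, "big"))
    h.update(str(deadline).encode())
    h.update(nonce)  # prevents brute-forcing low-entropy inputs
    return h.hexdigest()

def verify(commitment, inputs, layer_id, deadline, nonce) -> bool:
    """A third party recomputes the hash to check a revealed execution
    record against the earlier commitment when a dispute arises."""
    return commit(inputs, layer_id, deadline, nonce) == commitment
]]></sourcecode>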
      </section>
      <section anchor="stake-based-participation-and-accountability">
        <name>Stake-Based Participation and Accountability</name>
        <t>ODSI employs stake-based participation as a foundational mechanism for accountability and Sybil resistance. To serve inference requests, participants are required to lock a quantity of economic value as stake, which acts as collateral against misbehavior.</t>
        <t>Stake introduces a real economic cost to participation and creates persistent consequences for incorrect or unreliable behavior. When misbehavior is verified, such as incorrect execution, commitment violations, or repeated deadline failures, stake may be partially or fully forfeited according to predefined rules.</t>
        <t>Stake directly enables accountability by ensuring that identities are bound to economic risk. It also provides an economic basis for Sybil resistance: while identities are inexpensive to create, meaningful participation and influence require proportional stake. As a result, large-scale Sybil attacks require substantial capital commitment and expose the adversary to high risk of loss.</t>
        <t>The influence and opportunities afforded to a participant may be proportional to both locked stake and historical performance. This ensures that influence reflects sustained contribution rather than identity count alone.</t>
      </section>
      <section anchor="accountability-without-trusted-parties">
        <name>Accountability Without Trusted Parties</name>
        <t>The framework is designed to provide accountability without assuming trusted coordinators, validators, or execution environments. Accountability is achieved by combining identity-bound actions with verifiable evidence and enforceable consequences.</t>
        <t>When misbehavior is detected, responsibility can be attributed to specific identities based on signed execution records. Consequences, such as penalties or exclusion from future participation, can then be applied according to defined rules.</t>
        <t>This design ensures that participants are held accountable for their actions while preserving open participation and decentralization. Trust is replaced by verification and consequence, allowing the system to operate securely in adversarial environments.</t>
      </section>
    </section>
    <section anchor="incentives-and-economic-considerations">
      <name>Incentives and Economic Considerations</name>
      <t>ODSI operates in an open environment where participation is voluntary and participants are assumed to act in their own interest. As a result, correct and timely execution cannot be assumed to arise from cooperation alone. This section outlines the economic mechanisms that align participant incentives with system objectives, ensuring reliable execution under decentralized operation.</t>
      <section anchor="motivation-for-incentive-mechanisms">
        <name>Motivation for Incentive Mechanisms</name>
        <t>In a decentralized inference environment, participants contribute compute, memory, and network resources that incur real costs. Without explicit incentives, participants may decline execution assignments, deprioritize inference workloads, or abandon execution paths when conditions become unfavorable.</t>
        <t>ODSI therefore associates inference execution with explicit economic rewards <xref target="Bitcoin"/>. Each successfully executed layer earns a payment, creating a direct linkage between contributed work and compensation. Reward levels may vary based on execution conditions, including:</t>
        <ul spacing="normal">
          <li>
            <t>Tighter execution deadlines, which impose higher performance requirements,</t>
          </li>
          <li>
            <t>Placement on latency-critical or bottleneck segments of an execution path,</t>
          </li>
          <li>
            <t>Historical reliability and performance of the executing participant.</t>
          </li>
        </ul>
        <t>By rewarding each completed execution unit, ODSI enables fine-grained accounting and encourages participants to contribute resources proportionally to their capabilities.</t>
      </section>
      <section anchor="costly-misbehavior-and-deterrence">
        <name>Costly Misbehavior and Deterrence</name>
        <t>For incentives to be effective, misbehavior must be economically disadvantageous. ODSI assumes that participants may behave strategically, including submitting incorrect activation outputs, violating execution commitments, or missing assigned deadlines.</t>
        <t>Misbehavior triggers penalties through the control path. Slashing events may occur in response to:</t>
        <ul spacing="normal">
          <li>
            <t>Incorrect or invalid activation outputs,</t>
          </li>
          <li>
            <t>Mismatches between committed execution inputs and revealed results,</t>
          </li>
          <li>
            <t>Failure to meet agreed execution deadlines.</t>
          </li>
        </ul>
        <t>Penalties are calibrated to exceed the expected gains from cheating or shirking, ensuring that rational participants cannot profit from misbehavior even if detection is probabilistic. Slashing may involve forfeiture of locked stake, loss of accrued rewards, or other economically meaningful consequences.</t>
        <t>Penalties are applied only when sufficient cryptographic and execution evidence exists to attribute responsibility to a specific identity. This ensures that deterrence is precise and does not require centralized trust or continuous supervision.</t>
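        <t>The calibration requirement above reduces to a simple inequality: with detection probability p and penalty P, misbehavior yielding gain g is unprofitable for a risk-neutral participant whenever the expected penalty p*P exceeds g. A non-normative sketch:</t>
        <sourcecode type="python"><![CDATA[
def deters(gain, penalty, p_detect):
    """Rational deterrence condition: the expected penalty under
    probabilistic detection must exceed the gain from misbehaving."""
    return p_detect * penalty > gain

def min_penalty(gain, p_detect):
    """Penalty level at which the deterrence condition becomes binding;
    practical penalties must be set strictly above this value."""
    return gain / p_detect
]]></sourcecode>
        <t>For example, if misbehavior yields a gain of 10 units and is detected 10% of the time, the penalty must exceed 100 units to deter it.</t>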
      </section>
      <section anchor="reputation-and-long-term-participation">
        <name>Reputation and Long-Term Participation</name>
        <t>While per-layer payments and penalties shape short-term behavior, long-term reliability is reinforced through reputation mechanisms. Reputation reflects a participant’s historical performance across multiple dimensions, including:</t>
        <ul spacing="normal">
          <li>
            <t>Deadline adherence rate,</t>
          </li>
          <li>
            <t>Output correctness and consistency,</t>
          </li>
          <li>
            <t>Availability and responsiveness,</t>
          </li>
          <li>
            <t>Sustained execution throughput over time.</t>
          </li>
        </ul>
        <t>Reputation directly influences future participation opportunities. Participants with strong reputations may receive higher task volumes, preferential assignment to latency-critical execution paths, higher reward rates, or eligibility for tasks with tighter deadlines. Conversely, participants with poor or unstable reputations may receive fewer assignments, reduced compensation, or eventual exclusion.</t>
        <t>Reputation complements direct economic incentives by encouraging sustained, honest participation across many execution sessions. Together, per-layer rewards, slashing-based deterrence, and reputation-driven coordination create a self-reinforcing environment in which rational participants are motivated to behave reliably, enabling scalable and decentralized inference execution.</t>
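        <t>One non-normative way to maintain such a signal is an exponentially weighted moving average over per-step outcomes, so that recent behavior dominates while history still matters; the outcome dimensions and smoothing weight below are illustrative assumptions:</t>
        <sourcecode type="python"><![CDATA[
def update_reputation(rep, on_time, correct, alpha=0.1):
    """Exponentially weighted reputation over deadline adherence and
    output correctness; alpha controls how quickly recent behavior
    displaces historical performance."""
    outcome = 1.0 if (on_time and correct) else 0.0
    return (1 - alpha) * rep + alpha * outcome
]]></sourcecode>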
      </section>
    </section>
    <section anchor="inference-path-and-control-path-separation">
      <name>Inference Path and Control Path Separation</name>
      <t>ODSI separates inference execution into an inference path and a control path in order to reconcile strict latency requirements with the need for security, accountability, and open participation, a separation analogous to that employed by the Lightning Network <xref target="Lightning"/>. The inference path is responsible for latency-critical execution and data movement required to generate inference outputs. The control path is responsible for identity management, economic coordination, verification, and enforcement. Together, they form a coherent architecture that decouples performance-sensitive execution from governance and accountability functions.</t>
      <section anchor="design-rationale">
        <name>Design Rationale</name>
        <t>Interactive inference requires execution progress within tight and predictable time bounds. Operations such as cryptographic identity registration, stake management, global verification, or reward settlement cannot be placed directly on the critical execution path without violating these constraints.</t>
        <t>At the same time, an open and decentralized environment requires mechanisms to deter misbehavior, attribute responsibility, and enforce incentives. These mechanisms inherently involve coordination, state persistence, and potentially delayed resolution.</t>
        <t>The separation between the inference path and the control path addresses this tension by allowing inference execution to proceed optimistically and independently, while ensuring that execution remains observable, attributable, and enforceable. Lightweight checks on the inference path reduce the likelihood that incorrect results propagate to users, while the control path provides definitive validation, economic settlement, and long-term accountability.</t>
      </section>
      <section anchor="inference-path-execution-properties">
        <name>Inference Path Execution Properties</name>
        <t>The inference path carries all latency-critical activities required for inference execution. This includes:</t>
        <ul spacing="normal">
          <li>
            <t>Layer-wise computation across participating nodes,</t>
          </li>
          <li>
            <t>Transport of intermediate activations and execution state,</t>
          </li>
          <li>
            <t>Routing and re-routing decisions based on network and execution conditions,</t>
          </li>
          <li>
            <t>Failure detection and signaling to enable timely recovery.</t>
          </li>
        </ul>
        <t>The inference path is optimized for low latency, minimal coordination overhead, and predictable progress under Internet variability.</t>
        <t>Inference path execution is optimistic but constrained. Participants are not blindly trusted, and incorrect execution is mitigated through the following mechanisms:</t>
        <ul spacing="normal">
          <li>
            <t>Execution commitments: Participants commit to execution inputs, layer identifiers, and deadlines before revealing outputs, preventing adaptive or inconsistent behavior.</t>
          </li>
          <li>
            <t>Selective redundancy: For high-impact layers or execution steps, multiple participants may perform the same computation, allowing mismatches to be detected through lightweight comparison.</t>
          </li>
          <li>
            <t>State-local execution: Execution path affinity minimizes state transfer and confines errors to limited execution segments.</t>
          </li>
        </ul>
        <t>These mechanisms allow many incorrect executions to be detected within the same token step or shortly thereafter, enabling rapid fallback or localized re-execution before incorrect results propagate to the user.</t>
      </section>
      <section anchor="control-path-functions-and-enforcement">
        <name>Control Path Functions and Enforcement</name>
        <t>The control path operates asynchronously and is responsible for system-wide coordination and accountability. Its functions include, but are not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>Cryptographic identity registration and authentication,</t>
          </li>
          <li>
            <t>Stake locking, management, and release,</t>
          </li>
          <li>
            <t>Validation of execution commitments and revealed results,</t>
          </li>
          <li>
            <t>Slashing decisions for incorrect execution or missed deadlines,</t>
          </li>
          <li>
            <t>Reward calculation and settlement,</t>
          </li>
          <li>
            <t>Maintenance of long-term reputation or eligibility signals.</t>
          </li>
        </ul>
        <t>Because control path operations are not latency-critical, they can perform thorough verification and policy enforcement without delaying inference execution. Control path decisions are driven by signed execution records and verifiable evidence produced during inference path execution.</t>
        <t>While some violations may be detected only after inference has progressed or completed, economic deterrence and reputational consequences make sustained misbehavior irrational for participants seeking long-term participation.</t>
      </section>
      <section anchor="bounded-risk-and-practical-correctness">
        <name>Bounded Risk and Practical Correctness</name>
        <t>The separation between the inference path and the control path introduces bounded risk rather than absolute prevention of incorrect execution. However, the combination of early detection on the inference path, rapid fallback mechanisms, and strong economic deterrence on the control path ensures that incorrect results are rare, localized, and unlikely to persist undetected.</t>
        <t>This design follows a well-established distributed systems principle: optimistic execution combined with eventual verification. ODSI applies this principle to open and decentralized inference, enabling high-throughput, low-latency execution while preserving accountability and system integrity.</t>
      </section>
    </section>
    <section anchor="scalability-and-deployment-considerations">
      <name>Scalability and Deployment Considerations</name>
      <t>ODSI is designed to scale across large numbers of independently operated participants and to operate under dynamic network and resource conditions. This section discusses considerations related to open participation, participant churn, and long-term extensibility of the framework.</t>
      <section anchor="open-membership-and-sybil-resistance">
        <name>Open Membership and Sybil Resistance</name>
        <t>ODSI supports open membership, allowing any participant to contribute computational resources without requiring centralized admission or prior trust relationships. This openness is essential for elastic scaling and for harnessing widely distributed and heterogeneous compute capacity.</t>
        <t>However, open membership introduces the risk that a single entity may create many identities to gain disproportionate influence or rewards. To mitigate this risk, ODSI relies on Sybil-resistance mechanisms implemented in the control path; in an open environment without administrative admission control, Sybil resistance can be achieved only economically. While identity creation is unrestricted, meaningful participation requires stake-backed commitment. Stake ensures that influence, task volume, and rewards are proportional to economic risk and historical performance rather than to identity count.</t>
        <t>This approach preserves openness while preventing adversaries from cheaply amplifying influence through large numbers of identities. Large-scale Sybil attacks require correspondingly large capital commitments and carry a high risk of economic loss.</t>
      </section>
      <section anchor="growth-and-churn">
        <name>Growth and Churn</name>
        <t>Participants in ODSI may join or leave the system at any time, either intentionally or due to failures, mobility, or network conditions. The framework assumes continuous churn and does not require long-lived availability from individual participants.</t>
        <t>Scalability under churn is achieved by:</t>
        <ul spacing="normal">
          <li>
            <t>Decentralized participant discovery and coordination,</t>
          </li>
          <li>
            <t>Adaptive assignment of inference execution based on observed availability and performance,</t>
          </li>
          <li>
            <t>Conservative state migration and localized recovery when execution paths change.</t>
          </li>
        </ul>
        <t>The inference path prioritizes timely progress in the presence of transient failures, while the control path provides longer-term stability by discouraging unreliable behavior through economic and reputational consequences. Together, these mechanisms allow the system to scale with participation while remaining robust under dynamic conditions.</t>
      </section>
      <section anchor="interoperability-and-extensibility">
        <name>Interoperability and Extensibility</name>
        <t>ODSI is defined as an architectural framework rather than a single monolithic protocol. It is intended to accommodate multiple protocol instantiations, execution environments, and incentive mechanisms.</t>
        <t>Interoperability is supported by:</t>
        <ul spacing="normal">
          <li>
            <t>Clear separation between inference execution and verification functions,</t>
          </li>
          <li>
            <t>Well-defined interfaces between the inference path and the control path,</t>
          </li>
          <li>
            <t>Use of cryptographic primitives and message formats that can be standardized independently.</t>
          </li>
        </ul>
        <t>Extensibility is a key design goal. New execution strategies, verification techniques, or incentive schemes can be introduced without disrupting existing deployments, provided they respect the framework’s core principles. This modularity allows ODSI to evolve alongside advances in inference techniques, hardware capabilities, and decentralized coordination mechanisms.</t>
      </section>
    </section>
    <section anchor="privacy-considerations">
      <name>Privacy Considerations</name>
      <t>Inference execution in ODSI involves the transmission of user-provided inputs and intermediate activations across independently operated participants. By default, the framework does not provide confidentiality guarantees for such data beyond transport-level protection.</t>
      <t>Decentralized execution may reduce the need to transmit user data to centralized endpoints and can enable computation to occur closer to data sources. However, distributing execution across multiple participants may also increase the number of entities that observe portions of the execution state.</t>
      <t>Privacy risks include exposure of user inputs, partial activations, execution patterns, or metadata such as timing and routing information. These risks vary depending on deployment context, participant selection policies, and execution strategies.</t>
      <t>ODSI is designed to be compatible with additional privacy-enhancing mechanisms, including data minimization, execution on trusted hardware, encrypted computation, or differential privacy techniques. Such mechanisms are considered out of scope for this document but may be incorporated by specific protocol instantiations or deployment profiles.</t>
      <t>Operators and users should carefully evaluate privacy requirements and select appropriate configurations and extensions when deploying ODSI in sensitive environments.</t>
    </section>
    <section anchor="relationship-to-existing-work">
      <name>Relationship to Existing Work</name>
      <t>ODSI draws inspiration from multiple areas of distributed systems and decentralized computing, while addressing challenges that are specific to large-scale, interactive inference workloads.</t>
      <t>Centralized inference platforms provide tightly optimized execution but rely on trusted operators and centralized infrastructure. ODSI departs from this model by enabling open participation and decentralized execution while preserving interactivity.</t>
      <t>Prior work on distributed machine learning focuses primarily on training or batch-oriented computation, where execution is less sensitive to strict per-step latency and state continuity. In contrast, ODSI targets online inference with strong sequential dependencies and deadline constraints.</t>
      <t>Peer-to-peer and decentralized computation frameworks enable open resource contribution but typically assume best-effort execution or stateless tasks. ODSI extends these ideas by incorporating stateful execution, deadline awareness, and economic accountability.</t>
      <t>Blockchain and decentralized ledger systems provide mechanisms for identity, verification, and incentive alignment, but are generally unsuitable for latency-critical execution. ODSI adopts similar accountability principles while decoupling execution from verification to meet performance requirements.</t>
      <t>By integrating concepts from these domains, ODSI defines a distinct architectural approach for open, decentralized inference that complements rather than replaces existing systems.</t>
    </section>
    <section anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The authors would like to thank colleagues and reviewers in the community who provided feedback on earlier versions of this draft.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC6979">
          <front>
            <title>Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA)</title>
            <author fullname="T. Pornin" initials="T." surname="Pornin"/>
            <date month="August" year="2013"/>
            <abstract>
              <t>This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6979"/>
          <seriesInfo name="DOI" value="10.17487/RFC6979"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Bitcoin">
          <front>
            <title>Bitcoin: A Peer-to-Peer Electronic Cash System</title>
            <author initials="S." surname="Nakamoto" fullname="Satoshi Nakamoto">
              <organization/>
            </author>
            <date year="2008"/>
          </front>
        </reference>
        <reference anchor="Lightning">
          <front>
            <title>The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments</title>
            <author initials="J." surname="Poon" fullname="Joseph Poon">
              <organization/>
            </author>
            <author initials="T." surname="Dryja" fullname="Thaddeus Dryja">
              <organization/>
            </author>
            <date year="2016" month="January"/>
          </front>
        </reference>
        <reference anchor="Byzantine" target="https://www.usenix.org/legacy/publications/library/proceedings/osdi99/full_papers/castro/castro.ps">
          <front>
            <title>Practical Byzantine Fault Tolerance</title>
            <author initials="M." surname="Castro" fullname="Miguel Castro">
              <organization/>
            </author>
            <author initials="B." surname="Liskov" fullname="Barbara Liskov">
              <organization/>
            </author>
            <date year="1999" month="February"/>
          </front>
          <refcontent>OSDI</refcontent>
        </reference>
      </references>
    </references>
  </back>

</rfc>
