<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.31 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-wang-cats-odsi-00" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="ODSI">An Open, Decentralized, and Scalable Framework for Large Language Model Inference</title>
    <seriesInfo name="Internet-Draft" value="draft-wang-cats-odsi-00"/>
    <author fullname="Hanling Wang">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>wanghl03@pcl.ac.cn</email>
      </address>
    </author>
    <author fullname="Qing Li">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>liq@pcl.ac.cn</email>
      </address>
    </author>
    <author fullname="Yong Jiang">
      <organization>Tsinghua Shenzhen International Graduate School &amp; Pengcheng Laboratory</organization>
      <address>
        <email>jiangy@sz.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Mingwei Xu">
      <organization>Tsinghua University</organization>
      <address>
        <email>xumw@tsinghua.edu.cn</email>
      </address>
    </author>
    <date year="2026" month="March" day="02"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>deep learning inference</keyword>
    <keyword>distributed system</keyword>
    <keyword>decentralized network</keyword>
    <abstract>
      <?line 84?>

<t>Large Language Model (LLM) inference is increasingly deployed as a networked service, yet existing deployments rely primarily on centralized infrastructure and trusted operators. Such designs limit openness, concentrate resource ownership, and constrain scalability to the capacity of individual providers. At the same time, LLM inference introduces execution characteristics (e.g., strict sequential dependencies, large intermediate activations, and tight latency requirements) that are not well supported by existing network, transport, or coordination mechanisms in open environments.</t>
      <t>This document specifies an open, decentralized, and scalable framework for executing LLM inference across independently operated and mutually untrusted participants. The framework treats inference as a distributed, layer-wise execution process subject to explicit deadlines, rather than as a monolithic computation or best-effort service. It combines layer-aware activation transport and routing, decentralized coordination among heterogeneous compute resources, and security mechanisms that provide accountability and correctness without assuming trusted execution.</t>
      <t>This document focuses on the architectural framework, design rationale, problem definition, challenges, and solution space of the Open, Decentralized, and Scalable Inference framework (ODSI). It does not specify concrete wire protocols, message formats, or protocol state machines. Such protocol-level specifications are to be defined in separate documents that build upon the framework described herein.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://kongyanye.github.io/draft-wang-cats-odsi/draft-wang-cats-odsi.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-wang-cats-odsi/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Computing-Aware Traffic Steering Working Group mailing list (<eref target="mailto:cats@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/cats/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/kongyanye/draft-wang-cats-odsi"/>.</t>
    </note>
  </front>
  <middle>
    <?line 92?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Large Language Models (LLMs) have become a foundational component of modern networked applications, supporting tasks such as natural language understanding, code generation, and interactive assistants. Inference for these models is typically delivered as an online service, where user requests are transmitted to remote servers and processed incrementally to generate output tokens. Unlike traditional network services or offline model training workloads, LLM inference exhibits strict sequential dependencies, maintains per-request execution state, and imposes tight latency constraints on model response.</t>
      <t>Today, most large-scale LLM inference deployments rely on centralized infrastructure operated by a small number of providers. Centralization simplifies coordination, scheduling, and state management, but it also concentrates control, limits participation, and couples scalability to the capacity, geography, and policies of individual operators. As model sizes and inference demand continue to grow, these structural limitations motivate the exploration of alternative execution paradigms that can support broader participation and more elastic scaling.</t>
      <t>This document introduces the Open, Decentralized, and Scalable Inference framework (ODSI), an architectural framework for executing LLM inference across independently operated and heterogeneous compute resources. ODSI considers environments in which participants are mutually untrusted and connected only through the public Internet. The framework focuses on how inference execution can be coordinated, secured, and scaled under such conditions, without assuming centralized control or trusted execution environments.</t>
      <section anchor="motivation-for-open-and-decentralized-inference">
        <name>Motivation for Open and Decentralized Inference</name>
        <t>Centralized inference architectures assume that compute resources, network paths, and execution environments are under common administrative control. While these assumptions enable tight optimization and simplified coordination, they impose structural constraints that become increasingly pronounced as model sizes continue to grow. Modern LLMs require substantial compute throughput and memory capacity, and deploying a single model instance often exceeds the capabilities of an individual server. As a result, inference deployments increasingly rely on multiple servers to host and execute a single model, raising operational complexity and cost.</t>
        <t>At the same time, a large volume of distributed compute resources remains underutilized. Many devices and servers possess limited memory capacity or modest compute performance, preventing them from hosting complete model instances despite having available compute resources. These constraints lead to fragmented and wasted capacity, particularly at the network edge or within smaller organizations, where individual nodes cannot independently support large models even though aggregate resources may be sufficient.</t>
        <t>Centralized cloud-based inference also introduces data movement and privacy concerns. User inputs must be transmitted to remote data centers for processing, increasing exposure to data leakage and raising privacy risks in sensitive applications. In addition, aggregating large volumes of inference traffic at centralized endpoints places sustained pressure on network bandwidth, increases transmission and queuing delays, and creates service bottlenecks that limit horizontal scalability. As demand grows, scaling inference capacity requires proportional expansion of centralized infrastructure and network provisioning, which may not be economically or operationally sustainable.</t>
        <t>An open and decentralized inference model seeks to address these limitations by allowing independently operated participants to contribute partial compute resources without requiring prior trust relationships or centralized admission. By distributing inference execution across many nodes, including resource-constrained and edge devices, this paradigm enables inference capacity to scale elastically with participation. Placing computation closer to data sources can also reduce data movement, mitigate bandwidth bottlenecks, and improve responsiveness in certain deployment scenarios.</t>
        <t>However, decentralization fundamentally changes the inference execution environment. Participants may vary widely in compute performance, memory capacity, network connectivity, and availability, and some participants may behave maliciously or rationally rather than altruistically. Moreover, LLM inference is inherently stateful, i.e., the generation of each output token depends on all previous tokens, commonly represented through cached intermediate values such as key–value (KV) caches. These characteristics make inference sensitive to delays, failures, and inconsistencies, and prevent it from being treated as a stateless or best-effort distributed task.</t>
        <t>ODSI is motivated by the need to support open participation and elastic scaling while preserving the correctness, timeliness, and reliability required for practical LLM inference. The framework addresses how inference can be executed across decentralized and heterogeneous resources while remaining usable as an interactive network service.</t>
      </section>
      <section anchor="scope-and-non-goals">
        <name>Scope and Non-Goals</name>
        <t>This document defines the architectural framework, problem formulation, and design considerations for decentralized LLM inference under open participation. It identifies the key challenges introduced by decentralization, including state management, latency constraints, heterogeneity, and adversarial behavior, and describes the high-level mechanisms used to address these challenges within the ODSI framework.</t>
        <t>This document does not specify concrete network protocols, wire formats, message encodings, or protocol state machines. It also does not mandate specific model architectures, execution platforms, hardware accelerators, or economic systems. Where cryptographic, incentive, or coordination mechanisms are discussed, they are described at an abstract level to illustrate design intent rather than to prescribe particular implementations.</t>
        <t>Protocol-level specifications, interoperability requirements, and implementation details are to be defined in other documents that build upon the framework presented here.</t>
      </section>
      <section anchor="design-principles">
        <name>Design Principles</name>
        <t>The ODSI framework is guided by the following design principles:</t>
        <ul spacing="normal">
          <li>
            <t>Open Participation: Any independently operated participant may contribute compute resources without requiring centralized admission or prior trust, subject to mechanisms that provide accountability and abuse resistance.</t>
          </li>
          <li>
            <t>Decentralized Coordination: Inference execution is coordinated without assuming a single trusted controller, relying instead on distributed mechanisms that tolerate heterogeneity, failures, and adversarial behavior.</t>
          </li>
          <li>
            <t>State-Aware Execution: The framework explicitly accounts for the stateful and sequential nature of LLM inference, including the management of intermediate execution state across tokens and layers.</t>
          </li>
          <li>
            <t>Deadline Sensitivity: Inference execution is treated as a latency-sensitive process, where intermediate steps are subject to explicit timing constraints rather than best-effort delivery.</t>
          </li>
          <li>
            <t>Scalability Through Composition: The framework is designed to scale by composing many independent contributors, allowing overall inference capacity to grow with participation rather than centralized provisioning.</t>
          </li>
        </ul>
        <t>These principles inform the architectural choices and mechanisms described in the remainder of this document.</t>
      </section>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>This section defines the terminology used throughout this document. Phrases in upper case refer to other terms defined in this section.</t>
      <dl spacing="normal" newline="true">
        <dt>ACTIVATION</dt>
        <dd>Intermediate numerical data produced by executing a model layer during inference. ACTIVATIONS are consumed by subsequent layers and may be transmitted across the network between participants.</dd>
        <dt>CONTROL PLANE</dt>
        <dd>The non-latency-critical path responsible for coordination, verification, and enforcement functions, including cryptographic IDENTITY registration, STAKE management, SLASHING, REWARD settlement, and REPUTATION updates.</dd>
        <dt>DEADLINE</dt>
        <dd>A time constraint by which a specific inference step, such as a layer execution or ACTIVATION delivery, must complete to preserve end-to-end responsiveness.</dd>
        <dt>EXECUTION COMMITMENT</dt>
        <dd>A cryptographically signed declaration by a PARTICIPANT indicating intent to execute a specific inference step under defined inputs and DEADLINES, enabling later verification.</dd>
        <dt>IDENTITY</dt>
        <dd>A persistent, self-generated identifier bound to a PARTICIPANT, typically realized as a public–private key pair.</dd>
        <dt>INFERENCE PLANE</dt>
        <dd>The latency-critical path responsible for performing inference-related work, including LAYER execution, ACTIVATION transport, ROUTING, RE-ROUTING, and failure signaling.</dd>
        <dt>LAYER</dt>
        <dd>A discrete computation stage within an inference pipeline. LAYERS are executed sequentially for each token step and may be assigned to different execution participants.</dd>
        <dt>PARTICIPANT</dt>
        <dd>An independently operated entity that contributes compute, memory, or network resources to ODSI and participates using a cryptographic IDENTITY.</dd>
        <dt>REPUTATION</dt>
        <dd>A persistent performance signal associated with a PARTICIPANT, derived from historical correctness, DEADLINE adherence, availability, and throughput. REPUTATION influences task assignment and reward rates.</dd>
        <dt>RE-ROUTING</dt>
        <dd>The process of dynamically changing the ROUTING or LAYER assignment of an ongoing inference request in response to failures, performance degradation, or DEADLINE pressure.</dd>
        <dt>REWARD</dt>
        <dd>An economic payment issued to a PARTICIPANT for successfully completing an assigned inference task.</dd>
        <dt>ROUTING</dt>
        <dd>The selection of execution participants and network paths for ACTIVATION transport and LAYER execution, informed by DEADLINES and observed performance.</dd>
        <dt>SLACK</dt>
        <dd>The difference between an allocated DEADLINE budget and the expected execution time for an inference step. SLACK represents tolerance to variability and delay.</dd>
        <dt>SLASHING</dt>
        <dd>An enforced economic penalty applied to a PARTICIPANT when verifiable misbehavior is detected, such as incorrect output, missed DEADLINES, or commitment violations.</dd>
        <dt>STAKE</dt>
        <dd>A quantity of economic value locked by a PARTICIPANT as collateral for participation. STAKE enables accountability, economic deterrence, and Sybil resistance.</dd>
      </dl>
    </section>
    <section anchor="problem-definition">
      <name>Problem Definition</name>
      <t>This section defines the core problem addressed by the ODSI framework. The goal is to formalize the execution constraints and failure modes that arise when LLM inference is performed across open, distributed, and independently operated compute resources. These constraints collectively define what it means for decentralized inference to be correct, timely, and usable.</t>
      <section anchor="layer-dependent-execution-pattern">
        <name>Layer-Dependent Execution Pattern</name>
        <t>LLM inference executes as a fixed sequence of model layers applied repeatedly for each generated token. The output of each layer constitutes an intermediate activation that is required as input to the next layer in the sequence. Execution therefore forms a strict dependency chain at layer granularity.</t>
        <t>In a decentralized setting, different layers may be executed on different nodes. This introduces an explicit requirement that intermediate activations be transferred between nodes in the correct order and without duplication or omission. Any execution framework must preserve layer ordering and ensure that each layer operates on the correct input corresponding to a specific inference request and token position.</t>
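        <t>As a non-normative illustration only, the strict dependency chain can be sketched as follows. All names (Activation, run_layer) are hypothetical and do not prescribe any API or wire format; the actual layer computation is elided and replaced by a tag on the data.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative sketch (hypothetical types and names, not a normative API):
# each layer is a function over an activation, and execution forms a
# strict dependency chain ordered by layer index.

from dataclasses import dataclass

@dataclass(frozen=True)
class Activation:
    request_id: str   # which inference request this belongs to
    token_step: int   # which token generation step
    layer_index: int  # layer that PRODUCED this activation
    data: tuple       # stand-in for the numerical tensor

def run_layer(layer_index, activation):
    """Stand-in for executing one model layer, possibly on another node."""
    # Guard against out-of-order or mismatched input.
    assert activation.layer_index == layer_index - 1, "out-of-order input"
    # (Real layer computation omitted; here we just tag the data.)
    return Activation(activation.request_id, activation.token_step,
                      layer_index, activation.data + (layer_index,))

# Layers must be applied in order; each consumes the previous output.
act = Activation("req-1", token_step=0, layer_index=-1, data=())
for i in range(3):          # layers 0, 1, 2, possibly on different nodes
    act = run_layer(i, act)

print(act.layer_index)      # 2: the chain completed in order
]]></sourcecode>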
      </section>
      <section anchor="stateful-token-by-token-progression">
        <name>Stateful Token-by-Token Progression</name>
        <t>Inference proceeds incrementally, generating tokens one at a time. Each token depends on an execution state accumulated from all previous tokens, commonly including cached intermediate values such as KV caches. This state is logically persistent over the lifetime of the inference request.</t>
        <t>As a result, inference requests exhibit execution affinity, i.e., successive tokens must either be processed by nodes that already possess the relevant state or incur the cost of state transfer or reconstruction. Failures, delays, or inconsistencies in state handling directly affect correctness and latency. The problem therefore includes maintaining coherent per-request state across a sequence of distributed execution steps.</t>
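        <t>The cost of broken execution affinity can be illustrated with the following non-normative sketch, in which all structures are hypothetical stand-ins for per-request state such as a KV cache. A node can continue a request only if it holds, or first receives, the state accumulated by earlier token steps.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative sketch (hypothetical structures): per-request execution
# state accumulates across token steps; re-routing a request to a new
# node incurs an explicit state-transfer cost.

class ExecutionNode:
    def __init__(self):
        self.kv_cache = {}           # request_id -> list of per-token state

    def generate_token(self, request_id, token):
        cache = self.kv_cache.setdefault(request_id, [])
        context_len = len(cache)     # token t depends on tokens 0..t-1
        cache.append(("kv", token))  # stand-in for cached key/value tensors
        return f"tok{context_len}:{token}"

    def export_state(self, request_id):
        """State transfer: the cost paid when execution affinity breaks."""
        return self.kv_cache.pop(request_id)

    def import_state(self, request_id, state):
        self.kv_cache[request_id] = state

a, b = ExecutionNode(), ExecutionNode()
a.generate_token("req-1", "the")
a.generate_token("req-1", "cat")
# Re-route: node b can continue only after receiving the accumulated state.
b.import_state("req-1", a.export_state("req-1"))
print(b.generate_token("req-1", "sat"))   # tok2:sat (context length 2)
]]></sourcecode>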
      </section>
      <section anchor="activation-delivery-with-timing-constraints">
        <name>Activation Delivery with Timing Constraints</name>
        <t>For interactive applications, inference must progress under tight latency constraints. Each layer execution contributes both computation delay and communication delay, and delays in earlier layers propagate to all subsequent layers within the same token generation step.</t>
        <t>This creates implicit per-layer timing constraints, i.e., intermediate activations must be delivered within bounded time to sustain acceptable end-to-end response latency. The problem is not merely reliable delivery, but timely delivery under variable network conditions and heterogeneous execution speeds.</t>
        <t>The activation delivery problem is defined as follows. Given a sequence of layer-dependent computations distributed across multiple nodes, how can intermediate activations be delivered and processed in order, within time bounds, despite variability in network latency, compute throughput, and node availability?</t>
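        <t>One way to reason about these timing constraints, shown here purely for illustration (the budgeting policy and all numbers are hypothetical, not prescribed by this framework), is to split an end-to-end token DEADLINE into per-layer budgets and compute the SLACK each step has against variability:</t>
        <sourcecode type="python"><![CDATA[
# Illustrative sketch: divide an end-to-end token deadline into per-layer
# budgets (here, proportionally to expected layer cost) and compute slack.

def per_layer_budgets(token_deadline_ms, expected_ms_per_layer):
    """Split the end-to-end budget proportionally to expected layer cost."""
    total = sum(expected_ms_per_layer)
    return [token_deadline_ms * t / total for t in expected_ms_per_layer]

def slack(budget_ms, expected_ms):
    # Negative slack means the step is expected to miss its deadline and
    # is a candidate for re-routing to a faster node or path.
    return budget_ms - expected_ms

expected = [10.0, 30.0, 20.0]          # compute + transfer per layer (ms)
budgets = per_layer_budgets(120.0, expected)
print(budgets)                         # [20.0, 60.0, 40.0]
print([slack(b, e) for b, e in zip(budgets, expected)])
]]></sourcecode>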
      </section>
      <section anchor="open-participation-and-adversarial-behavior">
        <name>Open Participation and Adversarial Behavior</name>
        <t>ODSI assumes an open execution environment in which nodes may join or leave without centralized admission and are operated by independent parties. Participants may differ significantly in performance, reliability, and incentives, and some may behave maliciously or rationally rather than cooperatively.</t>
        <t>The problem to be solved therefore includes:</t>
        <ul spacing="normal">
          <li>
            <t>Detecting incorrect or inconsistent execution results,</t>
          </li>
          <li>
            <t>Attributing actions to specific participants,</t>
          </li>
          <li>
            <t>Preventing abuse such as equivocation, free-riding, or denial of service,</t>
          </li>
          <li>
            <t>Enabling accountability without assuming trusted execution environments.</t>
          </li>
        </ul>
        <t>Any viable solution must address correctness and liveness under these conditions while remaining compatible with open participation.</t>
      </section>
      <section anchor="limitations-of-existing-execution-and-transport-methods">
        <name>Limitations of Existing Execution and Transport Methods</name>
        <t>Existing execution and transport methods do not directly address the problem defined above. Best-effort networking does not account for execution dependencies or timing constraints. Traditional distributed computation frameworks assume stable membership, trusted operators, or coarse-grained tasks. Centralized schedulers do not extend naturally to environments without common administrative control.</t>
        <t>The problem addressed by ODSI is therefore not solved by simply distributing computation or improving transport performance. It requires a framework that explicitly integrates execution dependencies, state management, timing constraints, and participant accountability into the design.</t>
      </section>
    </section>
    <section anchor="system-and-threat-assumptions">
      <name>System and Threat Assumptions</name>
      <t>This section specifies the assumptions under which the ODSI framework operates. It defines the participating entities, communication and execution conditions, and the classes of failures and adversarial behavior the framework is designed to tolerate. No centralized trust, privileged operators, or trusted execution environments are assumed.</t>
      <section anchor="participants-and-roles">
        <name>Participants and Roles</name>
        <t>The system consists of a set of independently operated nodes that participate in inference execution. Nodes may assume one or more of the following roles:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Clients</strong> initiate inference requests and receive generated outputs. A client may or may not also participate in execution.</t>
          </li>
          <li>
            <t><strong>Execution Nodes</strong> perform inference computation, typically executing one or more model layers for specific inference requests. Execution nodes may differ in compute capacity, memory availability, and supported hardware.</t>
          </li>
          <li>
            <t><strong>Coordination Nodes</strong> participate in control-plane functions such as identity management, stake accounting, verification, and settlement. These roles may be co-located with execution nodes or operated separately.</t>
          </li>
        </ul>
        <t>All participants are identified by persistent cryptographic identities. No global trust relationships are assumed among participants, and no role is restricted to a fixed or privileged set of operators.</t>
      </section>
      <section anchor="network-and-execution-assumptions">
        <name>Network and Execution Assumptions</name>
        <t>Nodes communicate over the public Internet using unreliable, asynchronous networks. Message delivery may experience variable latency, reordering, duplication, or loss. No assumptions are made about bounded network delay or synchronized clocks, except where explicitly stated by higher-layer mechanisms.</t>
        <t>Execution nodes are heterogeneous. They may differ in:</t>
        <ul spacing="normal">
          <li>
            <t>Compute throughput and supported instruction sets,</t>
          </li>
          <li>
            <t>Available memory and storage,</t>
          </li>
          <li>
            <t>Network bandwidth and latency,</t>
          </li>
          <li>
            <t>Availability and uptime.</t>
          </li>
        </ul>
        <t>Nodes may join or leave the system at arbitrary times. Execution may be interrupted due to failures, preemption, or voluntary withdrawal. The system does not assume trusted execution environments or hardware-based attestation, and correctness cannot be inferred solely from successful message delivery.</t>
      </section>
      <section anchor="adversary-and-failure-model">
        <name>Adversary and Failure Model</name>
        <t>The framework assumes the presence of faulty, rational, and malicious participants. Nodes may deviate arbitrarily from prescribed behavior, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>Returning incorrect or fabricated inference outputs,</t>
          </li>
          <li>
            <t>Withholding results or responding after deadlines,</t>
          </li>
          <li>
            <t>Equivocating by providing inconsistent results to different peers,</t>
          </li>
          <li>
            <t>Attempting to free-ride without performing assigned computation,</t>
          </li>
          <li>
            <t>Launching denial-of-service or resource exhaustion attacks.</t>
          </li>
        </ul>
        <t>In addition to malicious behavior, the system must tolerate non-malicious failures such as crashes, network partitions, and transient performance degradation.</t>
        <t>The adversary is not assumed to control a majority of system resources globally, but may control multiple identities unless mitigated by Sybil-resistance mechanisms. The framework does not assume confidentiality of intermediate activations unless explicitly provided by higher-layer mechanisms.</t>
        <t>Under these assumptions, the framework aims to provide:</t>
        <ul spacing="normal">
          <li>
            <t>Safety: incorrect inference results can be detected and attributed,</t>
          </li>
          <li>
            <t>Liveness: inference can make progress despite failures and churn,</t>
          </li>
          <li>
            <t>Accountability: misbehavior can be penalized without relying on trusted authorities.</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="odsi-framework-overview">
      <name>ODSI Framework Overview</name>
      <t>This section provides a high-level overview of the ODSI framework. The framework defines how large-scale inference execution can be coordinated across open, heterogeneous, and independently operated resources while preserving correctness, timeliness, and accountability.</t>
      <t>ODSI is structured as a layered framework that integrates execution, coordination, and incentive mechanisms. It does not mandate a specific implementation or protocol stack, but instead defines architectural components and their interactions.</t>
      <section anchor="high-level-architecture">
        <name>High-Level Architecture</name>
        <artwork><![CDATA[
                 +----------------------+
                 |        Client        |
                 |  (Inference Request) |
                 +----------+-----------+
                            |
                            v
=========================================================
|                    INFERENCE PLANE                    |
|   (Deadline-Critical, Peer-to-Peer Inference Path)    |
|                                                       |
|   +-----------+     +-----------+     +-----------+   |
|   |  Layer i  | --> | Layer i+1 | --> | Layer i+2 |   |
|   | Exec Node |     | Exec Node |     | Exec Node |   |
|   +-----------+     +-----------+     +-----------+   |
|       |                   |                   |       |
|       |        Intermediate Activations       |       |
|       +---------------------------------------+       |
|                                                       |
=========================================================
                            |
                            |   Execution Events
                            |   (commit, result, timing)
                            v
---------------------------------------------------------
|                     CONTROL PLANE                     |
|     (Asynchronous Coordination and Accountability)    |
|                                                       |
|   +----------------+   +--------------------------+   |
|   | Identity and   |   | Verification and Dispute |   |
|   | Stake Manage.  |   | Resolution               |   |
|   +----------------+   +--------------------------+   |
|            |                     |                    |
|            v                     v                    |
|   +----------------+   +--------------------------+   |
|   | Incentives and |   | Reputation and Routing   |   |
|   | Settlement     |   | Feedback                 |   |
|   +----------------+   +--------------------------+   |
|                                                       |
---------------------------------------------------------
]]></artwork>
        <t>At a high level, ODSI separates inference execution into two logically distinct planes:</t>
        <ul spacing="normal">
          <li>
            <t>Inference Plane: Responsible for performing inference computation and delivering intermediate results under latency constraints.</t>
          </li>
          <li>
            <t>Control Plane: Responsible for identity management, coordination, verification, accounting, and enforcement.</t>
          </li>
        </ul>
        <t>The inference plane operates in a peer-to-peer fashion and is optimized for low-latency, deadline-sensitive computation. The control plane operates asynchronously and does not block inference progress, allowing execution to proceed even in the presence of coordination delays.</t>
        <t>This separation enables inference to remain responsive while still supporting accountability and correctness in an open environment.</t>
      </section>
      <section anchor="layer-wise-distributed-execution-model">
        <name>Layer-Wise Distributed Execution Model</name>
        <t>ODSI adopts a layer-wise execution model in which inference computation is decomposed into sequential stages corresponding to the layers of the inference graph. Each stage may be executed by a different execution node, and intermediate results are transferred between nodes as needed.</t>
        <t>Execution proceeds incrementally for each inference request, with explicit association between:</t>
        <ul spacing="normal">
          <li>
            <t>A specific inference request,</t>
          </li>
          <li>
            <t>A specific token generation step,</t>
          </li>
          <li>
            <t>A specific computation layer.</t>
          </li>
        </ul>
        <t>This explicit structuring allows the framework to reason about execution dependencies, timing constraints, and correctness at layer granularity. It also enables flexible placement of computation across nodes with varying capabilities, without requiring any single node to host the entire inference workload.</t>
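        <t>A non-normative sketch of this explicit structuring follows; the triple-based key and all names are hypothetical and do not define a message format. Keying every activation by (request, token step, layer) lets a receiving node accept exactly the input it expects and reject duplicates or misordered data.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative sketch: a receiving node filters incoming activations by
# the (request, token step, layer) triple it is expecting.

def make_key(request_id, token_step, layer_index):
    return (request_id, token_step, layer_index)

class LayerReceiver:
    """Executes one assigned layer; consumes activations keyed by triple."""
    def __init__(self, layer_index):
        self.layer_index = layer_index
        self.seen = set()            # duplicate suppression

    def accept(self, key, activation_bytes):
        request_id, token_step, layer_index = key
        if layer_index != self.layer_index - 1:
            return "reject: wrong layer"        # ordering guard
        if key in self.seen:
            return "reject: duplicate"
        self.seen.add(key)
        # (activation_bytes would be fed into the layer computation here)
        return f"execute layer {self.layer_index} for {request_id} step {token_step}"

rx = LayerReceiver(layer_index=5)
k = make_key("req-1", 0, 4)
print(rx.accept(k, b"..."))   # executes: input from layer 4 is expected
print(rx.accept(k, b"..."))   # reject: duplicate
]]></sourcecode>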
      </section>
      <section anchor="open-participation-and-resource-contribution">
        <name>Open Participation and Resource Contribution</name>
        <t>ODSI is designed to support open participation without centralized admission control. Any node may contribute resources to inference execution by assuming execution or coordination roles, subject to framework-defined requirements for identity, accountability, and correctness.</t>
        <t>Resource contribution is flexible and may include:</t>
        <ul spacing="normal">
          <li>
            <t>Compute capacity for executing specific computation stages,</t>
          </li>
          <li>
            <t>Memory for maintaining execution state,</t>
          </li>
          <li>
            <t>Network capacity for transporting intermediate results.</t>
          </li>
        </ul>
        <t>Nodes are not required to support complete inference execution. Instead, they may contribute partial resources aligned with their capabilities. This allows resource-constrained or edge nodes to participate meaningfully, while enabling aggregate inference capacity to scale with the number of participants.</t>
      </section>
      <section anchor="scalability-considerations">
        <name>Scalability Considerations</name>
        <t>Scalability in ODSI is achieved through decentralization, decomposition, and asynchronous coordination. By distributing execution across many independently operated nodes, the framework avoids reliance on centralized bottlenecks.</t>
        <t>Key scalability properties include:</t>
        <ul spacing="normal">
          <li>
            <t>Horizontal scaling: Inference capacity increases with the number of participating nodes.</t>
          </li>
          <li>
            <t>Elastic participation: Nodes may join or leave without global reconfiguration.</t>
          </li>
          <li>
            <t>Local decision-making: Execution placement and routing decisions can be made using local observations rather than global state.</t>
          </li>
        </ul>
        <t>The framework is designed to tolerate heterogeneity and churn while maintaining bounded coordination overhead. As inference demand grows, scalability is achieved by expanding participation rather than by increasing the capacity of centralized infrastructure.</t>
      </section>
    </section>
    <section anchor="execution-and-coordination-mechanism">
      <name>Execution and Coordination Mechanism</name>
      <t>This section describes how inference execution is coordinated across distributed participants in the ODSI framework. It focuses on how computation is assigned, how execution state is preserved, how heterogeneous resources are coordinated, and how failures are handled during inference execution.</t>
      <section anchor="layer-assignment-and-path-affinity">
        <name>Layer Assignment and Path Affinity</name>
        <t>Inference execution in ODSI is organized as a sequence of layer executions. For each inference request, layers are assigned to execution nodes based on their capabilities, availability, and observed performance characteristics.</t>
        <t>ODSI introduces the notion of execution path affinity, whereby successive layers and token steps of the same inference request preferentially follow a consistent sequence of nodes. Path affinity reduces the need to transfer execution state and intermediate data, thereby improving latency and reducing network overhead.</t>
        <t>Layer assignment decisions may be adapted dynamically in response to changing network conditions or node availability. However, reassignment is performed conservatively to avoid excessive state migration or disruption of ongoing inference execution. Under typical operating conditions, stable execution paths are expected to dominate, and recovery-related latency remains within acceptable bounds for interactive inference.</t>
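<t>One possible (non-normative) realization of path affinity is to discount the estimated cost of the incumbent node, so that reassignment occurs only when an alternative wins by a clear margin. The function names and the discount factor below are assumptions of this sketch:</t>
<sourcecode type="python"><![CDATA[
```python
AFFINITY_BONUS = 0.8  # assumed discount for the incumbent node

def assign_layer(layer, prev_assignment, candidates, est_latency):
    """Pick an execution node for `layer`.
    candidates: list of node ids; est_latency[(node, layer)]: seconds.
    prev_assignment: node that ran `layer` in the previous token step."""
    def effective_cost(node):
        cost = est_latency[(node, layer)]
        # Affinity: the incumbent avoids state transfer, so its cost
        # is discounted; switching must win by a clear margin.
        if node == prev_assignment:
            cost *= AFFINITY_BONUS
        return cost
    return min(candidates, key=effective_cost)
```
]]></sourcecode>
<t>With this policy, a marginally faster alternative does not trigger state migration, while a substantially faster one does, which is consistent with the conservative reassignment behavior described above.</t>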
      </section>
      <section anchor="handling-stateful-inference-and-kv-cache">
        <name>Handling Stateful Inference and KV Cache</name>
        <t>Inference execution maintains per-request state across token generation steps. This state commonly includes cached intermediate values such as KV caches, which are required for efficient generation of subsequent tokens.</t>
        <t>ODSI treats execution state as logically associated with an execution path rather than with a single node. State may be:</t>
        <ul spacing="normal">
          <li>
            <t>Retained locally by execution nodes across successive steps,</t>
          </li>
          <li>
            <t>Transferred explicitly when execution is reassigned,</t>
          </li>
          <li>
            <t>Reconstructed when transfer is infeasible or too costly.</t>
          </li>
        </ul>
        <t>This document does not mandate a specific state representation or transfer mechanism. Instead, it defines coordination requirements that ensure state consistency and correctness across execution steps.</t>
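<t>The choice among the three state-handling options can be illustrated by a simple cost comparison. The cost model below is an assumption of this sketch and is not mandated by this document:</t>
<sourcecode type="python"><![CDATA[
```python
def state_strategy(kv_bytes, bandwidth_bps, recompute_s, reassigned):
    """Pick retain / transfer / reconstruct by estimated cost.
    kv_bytes: size of the KV cache; bandwidth_bps: path bandwidth;
    recompute_s: estimated time to rebuild the state from scratch."""
    if not reassigned:
        return "retain"          # state stays with the incumbent node
    transfer_s = kv_bytes * 8 / bandwidth_bps
    # Transfer the KV cache only if moving it beats recomputing it.
    return "transfer" if transfer_s <= recompute_s else "reconstruct"
```
]]></sourcecode>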
      </section>
      <section anchor="heterogeneous-compute-coordination">
        <name>Heterogeneous Compute Coordination</name>
        <t>Execution nodes in ODSI are heterogeneous in compute performance, memory capacity, and network connectivity. Coordination mechanisms must account for this heterogeneity when assigning computation and routing intermediate results.</t>
        <t>Nodes may advertise capabilities and performance metrics, such as execution throughput or observed latency. Coordination decisions may incorporate these metrics to balance load, avoid bottlenecks, and satisfy timing constraints.</t>
        <t>The framework supports partial participation, allowing nodes to execute only those computation stages that align with their capabilities. This enables broad participation without requiring uniform hardware or resource provisioning.</t>
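<t>A capability advertisement and the matching eligibility filter might look as follows. Field names, units, and thresholds are illustrative assumptions, not message formats defined by ODSI:</t>
<sourcecode type="python"><![CDATA[
```python
import json
import time

def make_advertisement(node_id, layers, throughput_tps, mem_gb):
    """Hypothetical signed-later capability advertisement."""
    return json.dumps({
        "node": node_id,
        "layers": sorted(layers),          # stages this node can execute
        "throughput_tps": throughput_tps,  # observed tokens/second
        "mem_gb": mem_gb,
        "ts": int(time.time()),            # freshness for staleness checks
    }, sort_keys=True)

def eligible(adverts, layer, min_mem_gb):
    """Nodes able to execute `layer` with at least `min_mem_gb` memory."""
    return [a["node"] for a in adverts
            if layer in a["layers"] and a["mem_gb"] >= min_mem_gb]
```
]]></sourcecode>
<t>Partial participation follows directly: a node simply omits stages it cannot serve from its advertisement.</t>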
      </section>
      <section anchor="failure-handling-and-recovery">
        <name>Failure Handling and Recovery</name>
        <t>ODSI is designed to tolerate failures during inference execution, including node crashes, network disruptions, and missed execution deadlines.</t>
        <t>Failure handling strategies include:</t>
        <ul spacing="normal">
          <li>
            <t>Detecting stalled or failed execution steps through timeout or absence of expected outputs,</t>
          </li>
          <li>
            <t>Reassigning computation to alternative nodes when failures occur,</t>
          </li>
          <li>
            <t>Reconstructing execution state when necessary to resume inference.</t>
          </li>
        </ul>
        <t>The framework prioritizes forward progress and bounded recovery cost. Failures may result in degraded performance or recomputation, but should not compromise correctness or global system stability.</t>
        <t>Recovery mechanisms are coordinated without assuming centralized control and do not require halting unrelated inference requests.</t>
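<t>Timeout-based stall detection, the first strategy listed above, can be sketched as follows. The class name and timeout policy are assumptions of this sketch; this document does not mandate specific values:</t>
<sourcecode type="python"><![CDATA[
```python
import time

class StepWatchdog:
    """Detect execution steps whose expected outputs are overdue."""
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.started = {}   # step id -> start time

    def start(self, step, now=None):
        self.started[step] = now if now is not None else time.monotonic()

    def complete(self, step):
        # Expected output arrived; stop tracking the step.
        self.started.pop(step, None)

    def stalled(self, now=None):
        """Steps past their timeout, i.e. candidates for reassignment."""
        now = now if now is not None else time.monotonic()
        return [s for s, t0 in self.started.items()
                if now - t0 > self.timeout_s]
```
]]></sourcecode>
<t>Steps reported by <tt>stalled()</tt> would be reassigned to alternative nodes, with state reconstructed if necessary, without affecting unrelated requests.</t>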
      </section>
    </section>
    <section anchor="deadline-driven-execution-and-performance-considerations">
      <name>Deadline-Driven Execution and Performance Considerations</name>
      <t>Interactive inference services impose strict latency requirements that shape execution, coordination, and resource selection decisions. This section describes how ODSI reasons about deadlines, identifies sources of latency, and manages trade-offs between flexibility and timely execution.</t>
      <section anchor="deadline-semantics-and-slack">
        <name>Deadline Semantics and Slack</name>
        <t>In ODSI, inference execution is associated with explicit or implicit deadlines derived from application-level responsiveness requirements. Deadlines apply not only to end-to-end inference requests but also to intermediate execution steps, such as individual layer computations within a token generation cycle.</t>
        <t>Each execution step may be assigned a deadline budget, representing the maximum allowable time from input availability to output production. The difference between the allocated budget and the expected execution time is referred to as slack. Slack captures tolerance to variability in computation and communication and serves as a key signal for execution planning.</t>
        <t>Slack is consumed as inference progresses. Delays incurred at earlier steps reduce the slack available to downstream steps, making subsequent execution increasingly time-sensitive. The framework therefore prioritizes maintaining positive slack throughout execution to preserve responsiveness.</t>
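<t>The slack arithmetic described above can be made concrete with a short, non-normative sketch: each step's slack is its budget minus its expected duration, and per-step overruns carry forward to reduce downstream slack:</t>
<sourcecode type="python"><![CDATA[
```python
def step_slack(budget_s, expected_s):
    """Slack of one execution step: its tolerance to variability."""
    return budget_s - expected_s

def propagate(budgets, actuals):
    """Slack remaining after each completed step: overruns at earlier
    steps reduce the slack available to later ones."""
    carry, out = 0.0, []
    for budget, actual in zip(budgets, actuals):
        carry += budget - actual
        out.append(carry)
    return out
```
]]></sourcecode>
<t>For example, a step that finishes 0.2 s over budget leaves every subsequent step with 0.2 s less slack, which is why early delays make later execution increasingly time-sensitive.</t>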
      </section>
      <section anchor="latency-sources-and-bottlenecks">
        <name>Latency Sources and Bottlenecks</name>
        <t>End-to-end inference latency arises from multiple sources, including:</t>
        <ul spacing="normal">
          <li>
            <t>Computation time for executing individual layers,</t>
          </li>
          <li>
            <t>Data transfer time for delivering intermediate results,</t>
          </li>
          <li>
            <t>Queuing delays caused by contention for compute or network resources,</t>
          </li>
          <li>
            <t>Coordination overhead for assignment and verification.</t>
          </li>
        </ul>
        <t>In decentralized environments, these latency components are highly variable and may fluctuate over short time scales. Bottlenecks may shift dynamically due to changes in network conditions, node availability, or workload distribution.</t>
        <t>ODSI does not assume that any single latency source dominates. Instead, it treats latency as a composite effect and seeks to minimize the risk of deadline violations by accounting for both computation and communication delays when coordinating execution.</t>
      </section>
      <section anchor="routing-and-scheduling-considerations">
        <name>Routing and Scheduling Considerations</name>
        <t>Routing and scheduling decisions in ODSI are informed by deadline constraints and observed performance. Execution steps may be routed through nodes that offer favorable trade-offs between compute throughput, network latency, and reliability.</t>
        <t>Scheduling decisions may incorporate:</t>
        <ul spacing="normal">
          <li>
            <t>Estimated computation time based on historical performance,</t>
          </li>
          <li>
            <t>Network round-trip latency and variability,</t>
          </li>
          <li>
            <t>Current queue occupancy or load,</t>
          </li>
          <li>
            <t>Remaining slack for the inference request.</t>
          </li>
        </ul>
        <t>The framework favors execution paths that preserve slack and reduce the probability of deadline violations. However, routing decisions are made using incomplete and potentially stale information, and must therefore tolerate uncertainty.</t>
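<t>A minimal slack-aware routing decision combining the inputs listed above could look as follows. The field names, the additive cost model, and the rejection rule are assumptions of this sketch:</t>
<sourcecode type="python"><![CDATA[
```python
def pick_node(candidates, remaining_slack_s):
    """Pick the candidate minimizing estimated completion time,
    rejecting any whose estimate exceeds the remaining slack.
    candidates: dicts with est_compute_s, rtt_s, queue_s (estimates)."""
    def eta(c):
        return c["est_compute_s"] + c["rtt_s"] + c["queue_s"]
    feasible = [c for c in candidates if eta(c) <= remaining_slack_s]
    if not feasible:
        return None   # no path preserves the deadline; caller must degrade
    return min(feasible, key=eta)
```
]]></sourcecode>
<t>Because the estimates are noisy and possibly stale, a deployment would treat the returned node as a preference rather than a guarantee, consistent with the tolerance for uncertainty noted above.</t>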
      </section>
      <section anchor="trade-offs-between-flexibility-and-timeliness">
        <name>Trade-offs Between Flexibility and Timeliness</name>
        <t>ODSI balances execution flexibility against the need for timely completion. Allowing frequent reassignment or dynamic reconfiguration can improve robustness and load balancing but may introduce additional coordination overhead and state transfer costs.</t>
        <t>Conversely, favoring stable execution paths improves predictability and reduces overhead but may limit the system’s ability to respond to failures or performance degradation.</t>
        <t>The framework does not prescribe a single optimal balance. Instead, it defines a design space in which implementations may tune the degree of flexibility based on workload characteristics, deployment conditions, and performance objectives. The guiding principle is to favor timely execution in the common case, while retaining sufficient adaptability to preserve progress under adverse conditions.</t>
      </section>
    </section>
    <section anchor="security-and-accountability-framework">
      <name>Security and Accountability Framework</name>
      <t>This section describes the security and accountability mechanisms assumed by the ODSI framework. The framework operates in an open environment with mutually untrusted participants and therefore relies on cryptographic mechanisms and economic accountability rather than trusted operators or centralized enforcement.</t>
      <section anchor="cryptographic-identity">
        <name>Cryptographic Identity</name>
        <t>Each participant in the system is associated with a persistent cryptographic identity, typically represented by a public–private key pair <xref target="RFC6979"/>. This identity serves as the basis for authentication, attribution, and accountability across all interactions.</t>
        <t>All inference-related actions, including execution commitments, result submissions, and coordination messages, are bound to the participant’s identity through digital signatures. This binding ensures that actions can be reliably attributed to specific participants without relying on centralized identity providers.</t>
        <t>Identities are self-generated and do not imply trust. The framework assumes that a single entity may control multiple identities unless constrained by additional mechanisms such as economic bonding or resource-based admission.</t>
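<t>The binding of actions to identities can be sketched as follows. A real deployment would use public-key signatures as described above; a keyed hash (HMAC) stands in here only so that the example is self-contained, and the function names are assumptions of this sketch:</t>
<sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac

def sign_action(identity_key: bytes, action: bytes) -> bytes:
    """Bind an action (e.g. an execution commitment) to an identity.
    HMAC is a stand-in for a public-key signature in this sketch."""
    return hmac.new(identity_key, action, hashlib.sha256).digest()

def verify_action(identity_key: bytes, action: bytes, sig: bytes) -> bool:
    """Check that the action was produced by the holder of the key."""
    return hmac.compare_digest(sign_action(identity_key, action), sig)
```
]]></sourcecode>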
      </section>
      <section anchor="verifiable-execution-actions">
        <name>Verifiable Execution Actions</name>
        <t>ODSI requires that execution-related actions be verifiable, meaning that they can be independently checked for consistency, correctness, or policy compliance after the fact.</t>
        <t>Verifiable actions may include:</t>
        <ul spacing="normal">
          <li>
            <t>Commitments to execute specific computation stages,</t>
          </li>
          <li>
            <t>Submission of intermediate or final execution outputs,</t>
          </li>
          <li>
            <t>Timing assertions related to execution deadlines.</t>
          </li>
        </ul>
        <t>Verification does not require continuous oversight during execution. Instead, it relies on cryptographic commitments, hashes, and signed messages that allow third parties to reconstruct and evaluate execution behavior when disputes arise.</t>
        <t>This approach enables detection of incorrect execution, equivocation, or deadline violations without imposing synchronous verification on the critical execution path <xref target="Byzantine"/>.</t>
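<t>The commitment mechanism underlying after-the-fact verification can be illustrated with a standard hash-based commit-reveal scheme. This is a non-normative sketch; the framework does not mandate a specific construction:</t>
<sourcecode type="python"><![CDATA[
```python
import hashlib
import os

def commit(output: bytes) -> tuple[bytes, bytes]:
    """Publish H(output || nonce) before revealing the output itself,
    preventing a node from adapting its result after the fact."""
    nonce = os.urandom(16)
    return hashlib.sha256(output + nonce).digest(), nonce

def check_reveal(commitment: bytes, output: bytes, nonce: bytes) -> bool:
    """Any third party can verify the revealed output against the
    earlier commitment when a dispute arises."""
    return hashlib.sha256(output + nonce).digest() == commitment
```
]]></sourcecode>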
      </section>
      <section anchor="stake-based-participation-and-accountability">
        <name>Stake-Based Participation and Accountability</name>
        <t>ODSI employs stake-based participation as a foundational mechanism for accountability and Sybil resistance. To serve inference requests, participants are required to lock a quantity of economic value as stake, which acts as collateral against misbehavior.</t>
        <t>Stake introduces a real economic cost to participation and creates persistent consequences for incorrect or unreliable behavior. When misbehavior is verified—such as incorrect execution, commitment violations, or repeated deadline failures—stake may be partially or fully forfeited according to predefined rules.</t>
        <t>Stake directly enables accountability by ensuring that identities are bound to economic risk. It also provides an economic basis for Sybil resistance: while identities are inexpensive to create, meaningful participation and influence require proportional stake. As a result, large-scale Sybil attacks require substantial capital commitment and expose the adversary to high risk of loss.</t>
        <t>The influence and opportunities afforded to a participant may be proportional to both locked stake and historical performance. This ensures that influence reflects sustained contribution rather than identity count alone.</t>
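<t>The relationship between locked stake, influence, and slashing can be sketched as follows. The ledger interface and the proportional-influence rule are illustrative assumptions:</t>
<sourcecode type="python"><![CDATA[
```python
class StakeLedger:
    """Toy stake ledger: influence is proportional to locked stake,
    and verified misbehavior forfeits a fraction of it."""
    def __init__(self):
        self.stake = {}

    def lock(self, ident, amount):
        self.stake[ident] = self.stake.get(ident, 0.0) + amount

    def slash(self, ident, fraction):
        """Forfeit `fraction` of the identity's stake; returns the
        forfeited amount."""
        forfeited = self.stake.get(ident, 0.0) * fraction
        self.stake[ident] = self.stake.get(ident, 0.0) - forfeited
        return forfeited

    def influence(self, ident):
        total = sum(self.stake.values())
        return self.stake.get(ident, 0.0) / total if total else 0.0
```
]]></sourcecode>
<t>Note how splitting one identity's stake across many identities leaves total influence unchanged, which is the economic core of the Sybil-resistance argument above.</t>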
      </section>
      <section anchor="accountability-without-trusted-parties">
        <name>Accountability Without Trusted Parties</name>
        <t>The framework is designed to provide accountability without assuming trusted coordinators, validators, or execution environments. Accountability is achieved by combining identity-bound actions with verifiable evidence and enforceable consequences.</t>
        <t>When misbehavior is detected, responsibility can be attributed to specific identities based on signed execution records. Consequences, such as penalties or exclusion from future participation, can then be applied according to defined rules.</t>
        <t>This design ensures that participants are held accountable for their actions while preserving open participation and decentralization. Trust is replaced by verification and consequence, allowing the system to operate securely in adversarial environments.</t>
      </section>
    </section>
    <section anchor="incentives-and-economic-considerations">
      <name>Incentives and Economic Considerations</name>
      <t>ODSI operates in an open environment where participation is voluntary and participants are assumed to act in their own interest. As a result, correct and timely execution cannot be assumed to arise from cooperation alone. This section outlines the economic mechanisms that align participant incentives with system objectives, ensuring reliable execution under decentralized operation.</t>
      <section anchor="motivation-for-incentive-mechanisms">
        <name>Motivation for Incentive Mechanisms</name>
        <t>In a decentralized inference environment, participants contribute compute, memory, and network resources that incur real costs. Without explicit incentives, participants may decline execution assignments, deprioritize inference workloads, or abandon execution paths when conditions become unfavorable.</t>
        <t>ODSI therefore associates inference execution with explicit economic rewards <xref target="Bitcoin"/>. Each successfully executed layer earns a payment, creating a direct linkage between contributed work and compensation. Reward levels may vary based on execution conditions, including:</t>
        <ul spacing="normal">
          <li>
            <t>Tighter execution deadlines, which impose higher performance requirements,</t>
          </li>
          <li>
            <t>Placement on latency-critical or bottleneck segments of an execution path,</t>
          </li>
          <li>
            <t>Historical reliability and performance of the executing participant.</t>
          </li>
        </ul>
        <t>By rewarding each completed execution unit, ODSI enables fine-grained accounting and encourages participants to contribute resources proportionally to their capabilities.</t>
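<t>A per-layer reward incorporating the three conditions listed above might be computed as follows. The multipliers are purely illustrative assumptions of this sketch:</t>
<sourcecode type="python"><![CDATA[
```python
def layer_reward(base, slack_ratio, on_critical_path, reliability):
    """base: base payment per executed layer.
    slack_ratio: slack / budget in [0, 1]; tighter deadlines pay more.
    reliability: historical reliability score in [0, 1]."""
    r = base * (2.0 - slack_ratio)        # up to 2x for zero slack
    if on_critical_path:
        r *= 1.25                         # bottleneck-segment premium
    return r * (0.5 + 0.5 * reliability)  # reliability adjustment
```
]]></sourcecode>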
      </section>
      <section anchor="costly-misbehavior-and-deterrence">
        <name>Costly Misbehavior and Deterrence</name>
        <t>For incentives to be effective, misbehavior must be economically disadvantageous. ODSI assumes that participants may behave strategically, including submitting incorrect activation outputs, violating execution commitments, or missing assigned deadlines.</t>
        <t>Misbehavior triggers penalties through the control path. Slashing events may occur in response to:</t>
        <ul spacing="normal">
          <li>
            <t>Incorrect or invalid activation outputs,</t>
          </li>
          <li>
            <t>Mismatches between committed execution inputs and revealed results,</t>
          </li>
          <li>
            <t>Failure to meet agreed execution deadlines.</t>
          </li>
        </ul>
        <t>Penalties are calibrated to exceed the expected gains from cheating or shirking, ensuring that rational participants cannot profit from misbehavior even if detection is probabilistic. Slashing may involve forfeiture of locked stake, loss of accrued rewards, or other economically meaningful consequences.</t>
        <t>Penalties are applied only when sufficient cryptographic and execution evidence exists to attribute responsibility to a specific identity. This ensures that deterrence is precise and does not require centralized trust or continuous supervision.</t>
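<t>The calibration requirement that penalties exceed expected gains can be stated as a simple expected-value condition: if misbehavior yields <tt>gain</tt> with certainty and is detected with probability <tt>p</tt>, cheating is unprofitable whenever the penalty is at least <tt>gain / p</tt>. The following sketch illustrates this (function names are assumptions):</t>
<sourcecode type="python"><![CDATA[
```python
def min_penalty(gain, detection_prob):
    """Smallest penalty that makes cheating break even or worse:
    gain - p * penalty <= 0  iff  penalty >= gain / p."""
    if not 0 < detection_prob <= 1:
        raise ValueError("detection probability must be in (0, 1]")
    return gain / detection_prob

def cheating_profitable(gain, penalty, detection_prob):
    """Expected value of cheating for a risk-neutral participant."""
    return gain - detection_prob * penalty > 0
```
]]></sourcecode>
<t>This is why probabilistic detection suffices: lowering the detection probability merely raises the required penalty, not the feasibility of deterrence.</t>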
      </section>
      <section anchor="reputation-and-long-term-participation">
        <name>Reputation and Long-Term Participation</name>
        <t>While per-layer payments and penalties shape short-term behavior, long-term reliability is reinforced through reputation mechanisms. Reputation reflects a participant’s historical performance across multiple dimensions, including:</t>
        <ul spacing="normal">
          <li>
            <t>Deadline adherence rate,</t>
          </li>
          <li>
            <t>Output correctness and consistency,</t>
          </li>
          <li>
            <t>Availability and responsiveness,</t>
          </li>
          <li>
            <t>Sustained execution throughput over time.</t>
          </li>
        </ul>
        <t>Reputation directly influences future participation opportunities. Participants with strong reputations may receive higher task volumes, preferential assignment to latency-critical execution paths, higher reward rates, or access to tighter deadlines. Conversely, participants with poor or unstable reputations may receive fewer assignments, reduced compensation, or eventual exclusion.</t>
        <t>Reputation complements direct economic incentives by encouraging sustained, honest participation across many execution sessions. Together, per-layer rewards, slashing-based deterrence, and reputation-driven coordination create a self-reinforcing environment in which rational participants are motivated to behave reliably, enabling scalable and decentralized inference execution.</t>
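<t>Aggregating the dimensions listed above into a single score could be done with a weighted sum. The weights and normalization below are assumptions of this sketch, not normative parameters:</t>
<sourcecode type="python"><![CDATA[
```python
# Illustrative weights over the reputation dimensions; each metric is
# assumed to be normalized to [0, 1].
WEIGHTS = {"deadline": 0.4, "correct": 0.4, "avail": 0.1, "thruput": 0.1}

def reputation(metrics):
    """metrics: dict mapping each dimension to a value in [0, 1];
    missing dimensions count as zero."""
    return sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)
```
]]></sourcecode>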
      </section>
    </section>
    <section anchor="inference-path-and-control-path-separation">
      <name>Inference Path and Control Path Separation</name>
      <t>ODSI separates inference execution into an inference path and a control path in order to reconcile strict latency requirements with the need for security, accountability, and open participation, an approach analogous to the off-chain/on-chain separation used by the Lightning Network <xref target="Lightning"/>. The inference path is responsible for latency-critical execution and data movement required to generate inference outputs. The control path is responsible for identity management, economic coordination, verification, and enforcement. Together, they form a coherent architecture that decouples performance-sensitive execution from governance and accountability functions.</t>
      <section anchor="design-rationale">
        <name>Design Rationale</name>
        <t>Interactive inference requires execution progress within tight and predictable time bounds. Operations such as cryptographic identity registration, stake management, global verification, or reward settlement cannot be placed directly on the critical execution path without violating these constraints.</t>
        <t>At the same time, an open and decentralized environment requires mechanisms to deter misbehavior, attribute responsibility, and enforce incentives. These mechanisms inherently involve coordination, state persistence, and potentially delayed resolution.</t>
        <t>The separation between the inference path and the control path addresses this tension by allowing inference execution to proceed optimistically and independently, while ensuring that execution remains observable, attributable, and enforceable. Lightweight checks on the inference path reduce the likelihood that incorrect results propagate to users, while the control path provides definitive validation, economic settlement, and long-term accountability.</t>
      </section>
      <section anchor="inference-path-execution-properties">
        <name>Inference Path Execution Properties</name>
        <t>The inference path carries all latency-critical activities required for inference execution. This includes:</t>
        <ul spacing="normal">
          <li>
            <t>Layer-wise computation across participating nodes,</t>
          </li>
          <li>
            <t>Transport of intermediate activations and execution state,</t>
          </li>
          <li>
            <t>Routing and re-routing decisions based on network and execution conditions,</t>
          </li>
          <li>
            <t>Failure detection and signaling to enable timely recovery.</t>
          </li>
        </ul>
        <t>The inference path is optimized for low latency, minimal coordination overhead, and predictable progress under Internet variability.</t>
        <t>Inference path execution is optimistic but constrained. Participants are not blindly trusted, and incorrect execution is mitigated through the following mechanisms:</t>
        <ul spacing="normal">
          <li>
            <t>Execution commitments: Participants commit to execution inputs, layer identifiers, and deadlines before revealing outputs, preventing adaptive or inconsistent behavior.</t>
          </li>
          <li>
            <t>Selective redundancy: For high-impact layers or execution steps, multiple participants may perform the same computation, allowing mismatches to be detected through lightweight comparison.</t>
          </li>
          <li>
            <t>State-local execution: Execution path affinity minimizes state transfer and confines errors to limited execution segments.</t>
          </li>
        </ul>
        <t>These mechanisms allow many incorrect executions to be detected within the same token step or shortly thereafter, enabling rapid fallback or localized re-execution before incorrect results propagate to the user.</t>
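<t>The selective-redundancy mechanism reduces to a lightweight digest comparison, as in the following non-normative sketch:</t>
<sourcecode type="python"><![CDATA[
```python
import hashlib

def redundancy_check(outputs: list[bytes]) -> bool:
    """Compare digests of the same layer computed by several nodes.
    Returns True if all outputs agree; a False result flags the step
    for fallback or control-path resolution."""
    digests = {hashlib.sha256(o).hexdigest() for o in outputs}
    return len(digests) == 1
```
]]></sourcecode>
<t>Comparing fixed-size digests rather than full activations keeps the check off the latency-critical data path.</t>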
      </section>
      <section anchor="control-path-functions-and-enforcement">
        <name>Control Path Functions and Enforcement</name>
        <t>The control path operates asynchronously and is responsible for system-wide coordination and accountability. Its functions include, but are not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>Cryptographic identity registration and authentication,</t>
          </li>
          <li>
            <t>Stake locking, management, and release,</t>
          </li>
          <li>
            <t>Validation of execution commitments and revealed results,</t>
          </li>
          <li>
            <t>Slashing decisions for incorrect execution or missed deadlines,</t>
          </li>
          <li>
            <t>Reward calculation and settlement,</t>
          </li>
          <li>
            <t>Maintenance of long-term reputation or eligibility signals.</t>
          </li>
        </ul>
        <t>Because control path operations are not latency-critical, they can perform thorough verification and policy enforcement without delaying inference execution. Control path decisions are driven by signed execution records and verifiable evidence produced during inference path execution.</t>
        <t>While some violations may be detected only after inference has progressed or completed, economic deterrence and reputational consequences make sustained misbehavior irrational for participants seeking long-term participation.</t>
      </section>
      <section anchor="bounded-risk-and-practical-correctness">
        <name>Bounded Risk and Practical Correctness</name>
        <t>The separation between the inference path and the control path introduces bounded risk rather than absolute prevention of incorrect execution. However, the combination of early detection on the inference path, rapid fallback mechanisms, and strong economic deterrence on the control path ensures that incorrect results are rare, localized, and unlikely to persist undetected.</t>
        <t>This design follows a well-established distributed systems principle: optimistic execution combined with eventual verification. ODSI applies this principle to open and decentralized inference, enabling high-throughput, low-latency execution while preserving accountability and system integrity.</t>
      </section>
    </section>
    <section anchor="scalability-and-deployment-considerations">
      <name>Scalability and Deployment Considerations</name>
      <t>ODSI is designed to scale across large numbers of independently operated participants and to operate under dynamic network and resource conditions. This section discusses considerations related to open participation, participant churn, and long-term extensibility of the framework.</t>
      <section anchor="open-membership-and-sybil-resistance">
        <name>Open Membership and Sybil Resistance</name>
        <t>ODSI supports open membership, allowing any participant to contribute computational resources without requiring centralized admission or prior trust relationships. This openness is essential for elastic scaling and for harnessing widely distributed and heterogeneous compute capacity.</t>
        <t>However, open membership introduces the risk that a single entity may create many identities to gain disproportionate influence or rewards. To mitigate this risk, ODSI relies on Sybil-resistance mechanisms implemented within the control path: in an open system without administrative admission control, Sybil resistance can be achieved only economically. While identity creation is unrestricted, meaningful participation requires stake-backed commitment. Stake ensures that influence, task volume, and rewards are proportional to economic risk and historical performance rather than to identity count.</t>
        <t>This approach preserves openness while preventing adversaries from cheaply amplifying influence through large numbers of identities. Large-scale Sybil attacks require correspondingly large capital commitments and carry a high risk of economic loss.</t>
      </section>
      <section anchor="growth-and-churn">
        <name>Growth and Churn</name>
        <t>Participants in ODSI may join or leave the system at any time, either intentionally or due to failures, mobility, or network conditions. The framework assumes continuous churn and does not require long-lived availability from individual participants.</t>
        <t>Scalability under churn is achieved by:</t>
        <ul spacing="normal">
          <li>
            <t>Decentralized participant discovery and coordination,</t>
          </li>
          <li>
            <t>Adaptive assignment of inference execution based on observed availability and performance,</t>
          </li>
          <li>
            <t>Conservative state migration and localized recovery when execution paths change.</t>
          </li>
        </ul>
        <t>The inference path prioritizes timely progress in the presence of transient failures, while the control path provides longer-term stability by discouraging unreliable behavior through economic and reputational consequences. Together, these mechanisms allow the system to scale with participation while remaining robust under dynamic conditions.</t>
      </section>
      <section anchor="interoperability-and-extensibility">
        <name>Interoperability and Extensibility</name>
        <t>ODSI is defined as an architectural framework rather than a single monolithic protocol. It is intended to accommodate multiple protocol instantiations, execution environments, and incentive mechanisms.</t>
        <t>Interoperability is supported by:</t>
        <ul spacing="normal">
          <li>
            <t>Clear separation between inference execution and verification functions,</t>
          </li>
          <li>
            <t>Well-defined interfaces between the inference path and the control path,</t>
          </li>
          <li>
            <t>Use of cryptographic primitives and message formats that can be standardized independently.</t>
          </li>
        </ul>
        <t>Extensibility is a key design goal. New execution strategies, verification techniques, or incentive schemes can be introduced without disrupting existing deployments, provided they respect the framework’s core principles. This modularity allows ODSI to evolve alongside advances in inference techniques, hardware capabilities, and decentralized coordination mechanisms.</t>
      </section>
    </section>
    <section anchor="privacy-considerations">
      <name>Privacy Considerations</name>
      <t>Inference execution in ODSI involves the transmission of user-provided inputs and intermediate activations across independently operated participants. By default, the framework does not provide confidentiality guarantees for such data beyond transport-level protection.</t>
      <t>Decentralized execution may reduce the need to transmit user data to centralized endpoints and can enable computation to occur closer to data sources. However, distributing execution across multiple participants may also increase the number of entities that observe portions of the execution state.</t>
      <t>Privacy risks include exposure of user inputs, partial activations, execution patterns, or metadata such as timing and routing information. These risks vary depending on deployment context, participant selection policies, and execution strategies.</t>
      <t>ODSI is designed to be compatible with additional privacy-enhancing mechanisms, including data minimization, execution on trusted hardware, encrypted computation, or differential privacy techniques. Such mechanisms are considered out of scope for this document but may be incorporated by specific protocol instantiations or deployment profiles.</t>
      <t>Operators and users should carefully evaluate their privacy requirements and select appropriate configurations and extensions when deploying ODSI in privacy-sensitive environments.</t>
    </section>
    <section anchor="relationship-to-existing-work">
      <name>Relationship to Existing Work</name>
      <t>ODSI draws inspiration from multiple areas of distributed systems and decentralized computing, while addressing challenges that are specific to large-scale, interactive inference workloads.</t>
      <t>Centralized inference platforms provide tightly optimized execution but rely on trusted operators and centralized infrastructure. ODSI departs from this model by enabling open participation and decentralized execution while preserving interactivity.</t>
      <t>Prior work on distributed machine learning focuses primarily on training or batch-oriented computation, where execution is less sensitive to strict per-step latency and state continuity. In contrast, ODSI targets online inference with strong sequential dependencies and deadline constraints.</t>
      <t>Peer-to-peer and decentralized computation frameworks enable open resource contribution but typically assume best-effort execution or stateless tasks. ODSI extends these ideas by incorporating stateful execution, deadline awareness, and economic accountability.</t>
      <t>Blockchain and decentralized ledger systems provide mechanisms for identity, verification, and incentive alignment, but are generally unsuitable for latency-critical execution. ODSI adopts similar accountability principles while decoupling execution from verification to meet performance requirements.</t>
      <t>By integrating concepts from these domains, ODSI defines a distinct architectural approach for open, decentralized inference that complements rather than replaces existing systems.</t>
    </section>
    <section anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The authors would like to thank colleagues and reviewers in the community who provided feedback on early versions of this draft.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC6979">
          <front>
            <title>Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA)</title>
            <author fullname="T. Pornin" initials="T." surname="Pornin"/>
            <date month="August" year="2013"/>
            <abstract>
              <t>This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6979"/>
          <seriesInfo name="DOI" value="10.17487/RFC6979"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="Bitcoin">
          <front>
            <title>Bitcoin: A Peer-to-Peer Electronic Cash System</title>
            <author initials="S." surname="Nakamoto" fullname="Satoshi Nakamoto">
              <organization/>
            </author>
            <date year="2008"/>
          </front>
        </reference>
        <reference anchor="Lightning">
          <front>
            <title>The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments</title>
            <author initials="J." surname="Poon" fullname="Joseph Poon">
              <organization/>
            </author>
            <author initials="T." surname="Dryja" fullname="Thaddeus Dryja">
              <organization/>
            </author>
            <date year="2016" month="January"/>
          </front>
        </reference>
        <reference anchor="Byzantine" target="https://www.usenix.org/legacy/publications/library/proceedings/osdi99/full_papers/castro/castro.ps">
          <front>
            <title>Practical Byzantine Fault Tolerance</title>
            <author initials="M." surname="Castro" fullname="Miguel Castro">
              <organization/>
            </author>
            <author initials="B." surname="Liskov" fullname="Barbara Liskov">
              <organization/>
            </author>
            <date year="1999" month="February"/>
          </front>
          <refcontent>OSDI</refcontent>
        </reference>
      </references>
    </references>
  </back>

</rfc>
