<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc [
  <!ENTITY nbsp "&#160;">
  <!ENTITY zwsp "&#8203;">
  <!ENTITY nbhy "&#8209;">
  <!ENTITY wj "&#8288;">
]>
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     category="std"
     docName="draft-vandemeent-upip-process-integrity-00"
     ipr="trust200902"
     submissionType="IETF"
     consensus="true"
     version="3">

  <front>
    <title abbrev="UPIP">UPIP: Universal Process Integrity Protocol
    with Fork Tokens for Multi-Actor Continuation</title>

    <seriesInfo name="Internet-Draft"
                value="draft-vandemeent-upip-process-integrity-00"/>

    <author fullname="Jasper van de Meent" initials="J."
            surname="van de Meent">
      <organization>Humotica</organization>
      <address>
        <postal>
          <city>Den Dolder</city>
          <country>Netherlands</country>
        </postal>
        <email>jasper@humotica.com</email>
        <uri>https://humotica.com</uri>
      </address>
    </author>

    <author fullname="Root AI" surname="Root AI">
      <organization>Humotica</organization>
      <address>
        <email>root_ai@humotica.nl</email>
        <uri>https://humotica.com</uri>
      </address>
    </author>

    <date year="2026" month="March" day="18"/>

    <area>Security</area>
    <workgroup>Internet Engineering Task Force</workgroup>

    <keyword>process-integrity</keyword>
    <keyword>reproducibility</keyword>
    <keyword>fork-token</keyword>
    <keyword>multi-actor</keyword>
    <keyword>provenance</keyword>

    <abstract>
      <t>This document specifies UPIP (Universal Process Integrity Protocol),
      a five-layer protocol for capturing, verifying, and reproducing
      computational processes across machines, actors, and trust domains.
      UPIP defines a cryptographic hash chain over five layers: STATE,
      DEPS, PROCESS, RESULT, and VERIFY, enabling any party to prove
      that a process was executed faithfully and can be reproduced.</t>

      <t>This document also specifies Fork Tokens, a continuation protocol
      enabling multi-actor process handoff with cryptographic chain of
      custody. Fork tokens freeze the complete UPIP stack state and
      transfer it to another actor (human, AI, or machine) for
      continuation, maintaining provenance integrity across the handoff
      boundary.</t>

      <t>Together, UPIP and Fork Tokens address process integrity in
      multi-agent AI systems, distributed computing, scientific
      reproducibility, autonomous vehicle coordination, and regulatory
      compliance scenarios.</t>
    </abstract>
  </front>

  <middle>
    <!-- Section 1: Introduction -->
    <section anchor="introduction">
      <name>Introduction</name>

      <t>Distributed computing increasingly involves heterogeneous actors:
      human operators, AI agents, automated pipelines, edge devices, and
      cloud services. When a process moves between actors -- from one
      machine to another, from an AI to a human for review, from a
      drone to a command station -- the integrity of the process state
      must be verifiable at every handoff point.</t>

      <t>Existing solutions address parts of this problem:</t>

      <ul>
        <li>Version control (git) tracks code state but not execution state</li>
        <li>Container images (OCI) capture environment but not intent</li>
        <li>CI/CD pipelines (GitHub Actions, Jenkins) orchestrate execution
        but provide no cross-machine reproducibility proof</li>
        <li>Package managers (pip, npm) record dependencies but not the
        context of their use</li>
      </ul>

      <t>None of these provide a unified, self-verifying bundle that
      captures the complete execution context (state, dependencies,
      process, result) with cryptographic chain of custody across
      actor boundaries.</t>

      <t>UPIP fills this gap with two complementary protocols:</t>

      <ol>
        <li>The UPIP Stack: a five-layer bundle capturing everything needed
        to reproduce a process, with a single stack hash that
        invalidates if any layer is modified.</li>

        <li>Fork Tokens: a continuation protocol that freezes the stack
        state and transfers it to another actor with cryptographic
        proof of what was handed off, who handed it off, why, and
        what capabilities are required to continue.</li>
      </ol>

      <t>Key design principles:</t>

      <ul>
        <li>EVIDENCE OVER ENFORCEMENT: UPIP proves what happened; it does
        not prevent bad actors from acting. In adversarial environments,
        enforcement can be bypassed but evidence cannot be un-recorded.</li>

        <li>HASH CHAIN INTEGRITY: Every layer is independently hashed. The
        stack hash chains them. Fork hashes chain into the fork chain.
        Tampering with any component invalidates the chain.</li>

        <li>ACTOR AGNOSTICISM: Actors may be human operators, AI agents,
        automated scripts, IoT devices, or any computational entity.
        The protocol makes no assumption about actor type.</li>

        <li>TRANSPORT AGNOSTICISM: UPIP bundles are JSON documents that
        can be transferred via any mechanism: file copy, HTTP API,
        message queue, or physical media.</li>
      </ul>
    </section>

    <!-- Section 2: Terminology -->
    <section anchor="terminology">
      <name>Terminology</name>

      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL",
      "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED",
      "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to
      be interpreted as described in BCP 14 <xref target="RFC2119"/>
      <xref target="RFC8174"/> when, and only when, they appear in
      all capitals, as shown here.</t>

      <dl>
        <dt>Actor</dt>
        <dd>An entity that creates, modifies, or continues a UPIP process.
        Actors may be human operators, AI agents (IDDs as defined in
        <xref target="JIS"/>), automated scripts, IoT devices, or any
        computational entity.</dd>

        <dt>Airlock</dt>
        <dd>An isolated execution environment (sandbox) where processes run
        before their results are applied to production state. The
        airlock captures all side effects without committing them.</dd>

        <dt>Continuation Point</dt>
        <dd>A reference to the specific position in the UPIP stack where
        the fork occurs, expressed as "L{layer}:{position}". Example:
        "L4:post_result" indicates the fork occurs after L4 RESULT
        has been captured.</dd>

        <dt>Fork Token</dt>
        <dd>A JSON document that freezes the UPIP stack state at a specific
        point and authorizes another actor to continue the process.</dd>

        <dt>Fork Chain</dt>
        <dd>An ordered list of fork token references maintained in the
        UPIP stack, providing a complete history of all handoffs.</dd>

        <dt>Fork-Squared (Fork^2)</dt>
        <dd>Parallel forking, where a single process is split into N
        independent sub-tasks distributed to N actors, each receiving
        a fork token of type "fragment".</dd>

        <dt>IDD (Individual Device Derivative)</dt>
        <dd>An AI agent with unique identity. Defined in the companion
        TIBET specification <xref target="TIBET"/>.</dd>

        <dt>Shadow-Run</dt>
        <dd>Executing a process in the airlock to capture its effects
        without applying them. Used for fork validation.</dd>

        <dt>Stack Hash</dt>
        <dd>The SHA-256 hash computed over the pipe-separated
        concatenation of the L1 through L4 layer hashes, prefixed
        with "upip:sha256:" (see <xref target="stack-hash-computation"/>).
        This single hash represents the complete integrity of the
        UPIP bundle.</dd>

        <dt>UPIP Stack (Bundle)</dt>
        <dd>A JSON document containing all five UPIP layers plus metadata.
        Files use the ".upip.json" extension.</dd>
      </dl>
    </section>

    <!-- Section 3: Protocol Overview -->
    <section anchor="protocol-overview">
      <name>Protocol Overview</name>

      <t>UPIP operates in two modes:</t>

      <t>Single-Actor Mode (Capture-Run-Verify):</t>

      <ol>
        <li>CAPTURE: Record L1 (state) and L2 (deps)</li>
        <li>RUN: Execute the process (L3) in an airlock</li>
        <li>RESULT: Capture L4 (output, diff, hash)</li>
        <li>HASH: Compute stack_hash = SHA-256(L1 || L2 || L3 || L4)</li>
        <li>VERIFY: On another machine, reproduce and compare (L5)</li>
      </ol>

      <t>Multi-Actor Mode (Fork-Resume):</t>

      <ol>
        <li>Actor A completes steps 1-4 (single-actor mode)</li>
        <li>Actor A creates a Fork Token from the UPIP stack</li>
        <li>Actor A delivers the fork token to Actor B</li>
        <li>Actor B validates the fork token hash</li>
        <li>Actor B checks capability requirements</li>
        <li>Actor B executes continuation in an airlock (shadow-run)</li>
        <li>Actor B creates a new UPIP stack linked to Actor A's via
        the fork chain</li>
        <li>Actor B sends ACK with resume_hash to Actor A</li>
      </ol>

      <figure anchor="process-flow">
        <name>Process Flow Diagram</name>
        <artwork type="ascii-art"><![CDATA[
+----------+     +---------+     +---------+     +---------+
| L1 STATE |---->| L2 DEPS |---->| L3 PROC |---->| L4 RSLT |
+----------+     +---------+     +---------+     +---------+
     |                |               |               |
     v                v               v               v
  state_hash       deps_hash      (intent)       result_hash
     |                |               |               |
     +-------+--------+-------+-------+
             |
             v
      stack_hash = SHA-256(L1 || L2 || L3 || L4)
             |
             v
       +------------+
       | Fork Token |---> Actor B ---> New UPIP Stack
       +------------+
             |
             v
        fork_chain: [{fork_id, parent_hash, ...}]
        ]]></artwork>
      </figure>
    </section>

    <!-- Section 4: UPIP Stack Structure -->
    <section anchor="upip-stack-structure">
      <name>UPIP Stack Structure</name>

      <t>A UPIP stack MUST be a <xref target="RFC8259"/> JSON object
      with the following top-level fields:</t>

      <sourcecode type="json"><![CDATA[
{
  "protocol": "UPIP",
  "version": "1.0",
  "title": "<human-readable description>",
  "created_by": "<actor identity>",
  "created_at": "<ISO-8601 timestamp>",
  "stack_hash": "upip:sha256:<hex>",
  "state": { "<L1 object>" : "..." },
  "deps": { "<L2 object>" : "..." },
  "process": { "<L3 object>" : "..." },
  "result": { "<L4 object>" : "..." },
  "verify": [ "<L5 array>" ],
  "fork_chain": [ "<fork token references>" ],
  "source_files": { "<optional embedded files>" : "..." }
}
      ]]></sourcecode>

      <!-- Section 4.1: L1 STATE -->
      <section anchor="l1-state">
        <name>L1 STATE - Input State Capture</name>

        <t>L1 captures the complete input state before execution. The
        state_type field determines the capture method:</t>

        <sourcecode type="json"><![CDATA[
{
  "state_type": "git | files | image | empty",
  "state_hash": "<type>:<hash>",
  "captured_at": "<ISO-8601 timestamp>"
}
        ]]></sourcecode>

        <t>State Types:</t>

        <dl>
          <dt>git</dt>
          <dd>Hash is the git commit SHA. MUST include git_remote
          and git_branch. state_hash prefix: "git:"</dd>

          <dt>files</dt>
          <dd>Hash is SHA-256 of the sorted file manifest.
          state_hash prefix: "files:"</dd>

          <dt>image</dt>
          <dd>Hash is the container image digest.
          state_hash prefix: "image:"</dd>

          <dt>empty</dt>
          <dd>No input state. state_hash: "empty:0"</dd>
        </dl>

        <t>For "git" type, additional fields:</t>
        <ul>
          <li>git_remote: Repository URL</li>
          <li>git_branch: Branch name</li>
          <li>git_dirty: Boolean, true if uncommitted changes exist</li>
        </ul>

        <t>For "files" type, additional fields:</t>
        <ul>
          <li>file_count: Number of files captured</li>
          <li>total_size: Total size in bytes</li>
          <li>manifest: Optional array of {path, hash, size} objects</li>
        </ul>
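        <t>As a non-normative illustration, a "files" state_hash can be
        derived from the manifest. The line-based serialization shown
        here is an implementation choice; only the sorted ordering and
        SHA-256 are required by this specification:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

def files_state_hash(manifest):
    # Sort by path so the hash is independent of directory scan order.
    entries = sorted(manifest, key=lambda e: e["path"])
    lines = ["%s:%s:%d" % (e["path"], e["hash"], e["size"])
             for e in entries]
    digest = hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()
    return "files:" + digest
        ]]></sourcecode>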
      </section>

      <!-- Section 4.2: L2 DEPS -->
      <section anchor="l2-deps">
        <name>L2 DEPS - Dependency Snapshot</name>

        <t>L2 captures the exact dependency set at execution time.</t>

        <sourcecode type="json"><![CDATA[
{
  "python_version": "<major.minor.patch>",
  "packages": { "<name>": "<version>" },
  "system_packages": [ "<name>=<version>" ],
  "deps_hash": "deps:sha256:<hex>",
  "captured_at": "<ISO-8601 timestamp>"
}
        ]]></sourcecode>

        <t>The deps_hash MUST be computed as SHA-256 of the sorted,
        deterministic serialization of all package name:version pairs.</t>

        <t>While this specification uses Python as the reference
        implementation, L2 is language-agnostic. Other implementations
        MAY substitute appropriate dependency metadata for their
        runtime environment (e.g., Cargo.lock for Rust, go.sum for Go,
        package-lock.json for Node.js).</t>
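        <t>A minimal sketch of the deps_hash computation; the
        "name:version" line encoding is one possible deterministic
        serialization and is shown for illustration only:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

def compute_deps_hash(packages):
    # Sorting the name:version pairs makes the serialization
    # deterministic across machines and dict insertion orders.
    serialized = "\n".join("%s:%s" % (name, version)
                           for name, version in sorted(packages.items()))
    digest = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
    return "deps:sha256:" + digest
        ]]></sourcecode>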
      </section>

      <!-- Section 4.3: L3 PROCESS -->
      <section anchor="l3-process">
        <name>L3 PROCESS - Execution Definition</name>

        <t>L3 defines what was executed and why.</t>

        <sourcecode type="json"><![CDATA[
{
  "command": [ "<arg0>", "<arg1>" ],
  "intent": "<human-readable purpose>",
  "actor": "<actor identity>",
  "env_vars": { "<key>": "<value>" },
  "working_dir": "<path>"
}
        ]]></sourcecode>

        <t>The command field MUST be an array of strings, not a shell
        command string. This prevents injection attacks and ensures
        deterministic execution.</t>

        <t>The intent field MUST be a human-readable string describing
        WHY this process is being run. This serves as the ERACHTER
        (intent) component for TIBET integration.</t>

        <t>The actor field MUST identify the entity that initiated the
        process. This may be a human username, AI agent identifier
        (IDD), or system service name.</t>
      </section>

      <!-- Section 4.4: L4 RESULT -->
      <section anchor="l4-result">
        <name>L4 RESULT - Output Capture</name>

        <t>L4 captures the execution result.</t>

        <sourcecode type="json"><![CDATA[
{
  "success": true,
  "exit_code": 0,
  "stdout": "<captured stdout>",
  "stderr": "<captured stderr>",
  "result_hash": "sha256:<hex>",
  "files_changed": 3,
  "diff": "<unified diff of file changes>",
  "captured_at": "<ISO-8601 timestamp>"
}
        ]]></sourcecode>

        <t>The result_hash MUST be computed as SHA-256 of the
        concatenation of: exit_code (as string) + stdout + stderr.</t>

        <t>If execution occurs in an airlock, the diff field SHOULD
        contain the unified diff of all file changes detected.</t>
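        <t>The result_hash rule above can be sketched directly as a
        non-normative helper; handling of binary or non-UTF-8 output is
        left to implementations:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

def compute_result_hash(exit_code, stdout, stderr):
    # exit_code is serialized as a decimal string, then stdout and
    # stderr are appended in that order.
    material = str(exit_code) + stdout + stderr
    return "sha256:" + hashlib.sha256(material.encode("utf-8")).hexdigest()
        ]]></sourcecode>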
      </section>

      <!-- Section 4.5: L5 VERIFY -->
      <section anchor="l5-verify">
        <name>L5 VERIFY - Cross-Machine Proof</name>

        <t>L5 records verification attempts when the UPIP stack is
        reproduced on another machine.</t>

        <sourcecode type="json"><![CDATA[
{
  "machine": "<hostname or identifier>",
  "verified_at": "<ISO-8601 timestamp>",
  "match": true,
  "environment": { "os": "linux", "arch": "x86_64" },
  "original_hash": "upip:sha256:<hex>",
  "reproduced_hash": "upip:sha256:<hex>"
}
        ]]></sourcecode>

        <t>The match field MUST be true only if reproduced_hash equals
        original_hash.</t>

        <t>L5 is an array, allowing multiple verification records from
        different machines. Each verification is independent.</t>
      </section>

      <!-- Section 4.6: Stack Hash Computation -->
      <section anchor="stack-hash-computation">
        <name>Stack Hash Computation</name>

        <t>The stack hash MUST be computed as follows:</t>

        <ol>
          <li>Serialize each layer hash as a UTF-8 string:
          L1: state.state_hash,
          L2: deps.deps_hash,
          L3: SHA-256 of a deterministic JSON serialization of the
          process object (e.g., sorted keys, no insignificant
          whitespace),
          L4: result.result_hash</li>

          <li>Concatenate with pipe separator: L1 + "|" + L2 + "|" +
          L3 + "|" + L4</li>

          <li>Compute SHA-256 of the concatenated string</li>

          <li>Prefix with "upip:sha256:"</li>
        </ol>

        <t>Result: "upip:sha256:4f2e8a..."</t>

        <t>This ensures that modifying ANY layer invalidates the
        stack hash.</t>
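        <t>The four steps above can be sketched as follows; the
        sort_keys JSON serialization for L3 is one way to satisfy the
        determinism requirement:</t>

        <sourcecode type="python"><![CDATA[
import hashlib
import json

def compute_stack_hash(state_hash, deps_hash, process, result_hash):
    # L3 is hashed over a deterministic JSON serialization of the
    # process object.
    l3 = hashlib.sha256(
        json.dumps(process, sort_keys=True,
                   separators=(",", ":")).encode("utf-8")).hexdigest()
    # Pipe-separated concatenation of the four layer hashes.
    material = "|".join([state_hash, deps_hash, l3, result_hash])
    return "upip:sha256:" + hashlib.sha256(
        material.encode("utf-8")).hexdigest()
        ]]></sourcecode>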
      </section>
    </section>

    <!-- Section 5: Fork Tokens -->
    <section anchor="fork-tokens">
      <name>Fork Tokens</name>

      <!-- Section 5.1: Fork Token Structure -->
      <section anchor="fork-token-structure">
        <name>Fork Token Structure</name>

        <t>A fork token MUST be a <xref target="RFC8259"/> JSON object
        with the following fields. The fork_id SHOULD be a UUID
        as defined in <xref target="RFC4122"/>:</t>

        <sourcecode type="json"><![CDATA[
{
  "fork_id": "<unique identifier>",
  "parent_hash": "sha256:<hex>",
  "parent_stack_hash": "upip:sha256:<hex>",
  "continuation_point": "L<n>:<position>",
  "intent_snapshot": "<human-readable purpose>",
  "active_memory_hash": "sha256:<hex>",
  "memory_ref": "<path or URL to memory blob>",
  "fork_type": "script | ai_to_ai | human_to_ai | fragment",
  "actor_from": "<source actor>",
  "actor_to": "<target actor or empty>",
  "actor_handoff": "<from> -> <to>",
  "capability_required": { },
  "forked_at": "<ISO-8601 timestamp>",
  "expires_at": "<ISO-8601 timestamp or empty>",
  "fork_hash": "fork:sha256:<hex>",
  "partial_layers": { },
  "metadata": { }
}
        ]]></sourcecode>

        <t>The actor_to field MAY be empty, indicating the fork is
        available to any capable actor. In this case, actor_handoff
        MUST use "*" as the target: "ActorA -> *".</t>
      </section>

      <!-- Section 5.2: Fork Types -->
      <section anchor="fork-types">
        <name>Fork Types</name>

        <dl>
          <dt>script</dt>
          <dd>The UPIP bundle IS the complete state. No external memory
          blob is needed. Used for CLI pipelines, CI/CD, and batch
          processing. The active_memory_hash is computed from the
          L1+L2+L3+L4 layer hashes.</dd>

          <dt>ai_to_ai</dt>
          <dd>The AI actor's context window is serialized as a binary
          blob (.blob file). The active_memory_hash is the SHA-256
          of this blob. The memory_ref field SHOULD point to the
          blob's location.</dd>

          <dt>human_to_ai</dt>
          <dd>A human creates an intent document (natural language
          instructions) and delegates to an AI actor. The
          active_memory_hash is the SHA-256 of the intent document.</dd>

          <dt>fragment</dt>
          <dd>A parallel fork (Fork-Squared). The parent process is
          split into N sub-tasks, each receiving a fork token of
          type "fragment" with the specific portion assigned.</dd>
        </dl>
      </section>

      <!-- Section 5.3: Fork Hash Computation -->
      <section anchor="fork-hash-computation">
        <name>Fork Hash Computation</name>

        <t>The fork hash MUST be computed as follows:</t>

        <ol>
          <li>Concatenate with pipe separator:
          fork_id + "|" + parent_hash + "|" + parent_stack_hash +
          "|" + continuation_point + "|" + intent_snapshot + "|" +
          active_memory_hash + "|" + actor_handoff + "|" + fork_type</li>

          <li>Compute SHA-256 of the concatenated string</li>

          <li>Prefix with "fork:sha256:"</li>
        </ol>

        <t>Result: "fork:sha256:7d3f..."</t>

        <t>This ensures that modifying ANY field invalidates the fork.</t>
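        <t>A non-normative sketch of the fork hash, mirroring the
        field order above:</t>

        <sourcecode type="python"><![CDATA[
import hashlib

FORK_HASH_FIELDS = ("fork_id", "parent_hash", "parent_stack_hash",
                    "continuation_point", "intent_snapshot",
                    "active_memory_hash", "actor_handoff", "fork_type")

def compute_fork_hash(token):
    # Pipe-join the eight fields in specification order, then hash.
    material = "|".join(str(token[f]) for f in FORK_HASH_FIELDS)
    return "fork:sha256:" + hashlib.sha256(
        material.encode("utf-8")).hexdigest()
        ]]></sourcecode>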
      </section>

      <!-- Section 5.4: Active Memory Hash -->
      <section anchor="active-memory-hash">
        <name>Active Memory Hash</name>

        <t>The active_memory_hash field captures the cognitive or
        computational state at the moment of forking.</t>

        <dl>
          <dt>For fork_type "script":</dt>
          <dd>SHA-256(state_hash + "|" + deps_hash + "|" +
          process_intent + "|" + result_hash)</dd>

          <dt>For fork_type "ai_to_ai":</dt>
          <dd>SHA-256(contents of .blob file)</dd>

          <dt>For fork_type "human_to_ai":</dt>
          <dd>SHA-256(contents of intent document)</dd>

          <dt>For fork_type "fragment":</dt>
          <dd>SHA-256(fragment specification)</dd>
        </dl>

        <t>This field answers the question: "What was the actor
        thinking/processing at the moment of handoff?"</t>
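        <t>For the "script" case, the active memory hash reduces to a
        pipe-joined hash of the four layer summaries (illustrative
        helper):</t>

        <sourcecode type="python"><![CDATA[
import hashlib

def script_memory_hash(state_hash, deps_hash, process_intent,
                       result_hash):
    # The bundle itself is the memory: hash the four layer summaries.
    material = "|".join([state_hash, deps_hash, process_intent,
                         result_hash])
    return "sha256:" + hashlib.sha256(
        material.encode("utf-8")).hexdigest()
        ]]></sourcecode>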
      </section>

      <!-- Section 5.5: Capability Requirements -->
      <section anchor="capability-requirements">
        <name>Capability Requirements</name>

        <t>The capability_required field specifies what the resuming
        actor needs:</t>

        <sourcecode type="json"><![CDATA[
{
  "capability_required": {
    "deps": ["package>=version"],
    "gpu": true,
    "min_memory_gb": 16,
    "platform": "linux/amd64",
    "custom": { }
  }
}
        ]]></sourcecode>

        <t>On resume, the receiving actor SHOULD verify these
        requirements and record the result in the verification
        record. Missing capabilities MUST NOT prevent execution
        but MUST be recorded as evidence.</t>
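        <t>A sketch of a resume-side capability check, following the
        evidence-over-enforcement rule: the function reports unmet
        requirements but never raises. The shape of the "local" map is
        an assumption of this example, and dependency-requirement
        parsing is omitted for brevity:</t>

        <sourcecode type="python"><![CDATA[
def check_capabilities(required, local):
    # Returns the list of unmet requirement names; the caller records
    # this list in the L5 VERIFY record and proceeds regardless.
    missing = []
    if required.get("gpu") and not local.get("gpu"):
        missing.append("gpu")
    if required.get("min_memory_gb", 0) > local.get("memory_gb", 0):
        missing.append("min_memory_gb")
    if "platform" in required and required["platform"] != local.get("platform"):
        missing.append("platform")
    return missing
        ]]></sourcecode>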
      </section>

      <!-- Section 5.6: Fork Chain -->
      <section anchor="fork-chain">
        <name>Fork Chain</name>

        <t>The fork_chain field in the UPIP stack is an ordered array
        of fork token references:</t>

        <sourcecode type="json"><![CDATA[
{
  "fork_chain": [
    {
      "fork_id": "fork-abc123",
      "fork_hash": "fork:sha256:...",
      "actor_handoff": "A -> B",
      "forked_at": "2026-03-18T14:00:00Z"
    }
  ]
}
        ]]></sourcecode>

        <t>When a process is resumed, the new UPIP stack MUST include
        the fork token in its fork_chain. This creates a complete
        audit trail of all handoffs.</t>
      </section>
    </section>

    <!-- Section 6: Operations -->
    <section anchor="operations">
      <name>Operations</name>

      <!-- Section 6.1: Capture and Run -->
      <section anchor="capture-and-run">
        <name>Capture and Run</name>

        <t>Input: command, source_dir, intent, actor</t>
        <t>Output: UPIP stack with L1-L4 populated</t>

        <ol>
          <li>Capture L1 STATE from source_dir</li>
          <li>Capture L2 DEPS from current environment</li>
          <li>Define L3 PROCESS from command and intent</li>
          <li>Execute command in airlock</li>
          <li>Capture L4 RESULT</li>
          <li>Compute stack_hash</li>
          <li>Return UPIP stack</li>
        </ol>
      </section>

      <!-- Section 6.2: Reproduce -->
      <section anchor="reproduce">
        <name>Reproduce</name>

        <t>Input: UPIP stack (.upip.json), target machine</t>
        <t>Output: L5 VERIFY record</t>

        <ol>
          <li>Load UPIP stack from file</li>
          <li>Restore L1 STATE (checkout git, extract files)</li>
          <li>Verify L2 DEPS match (warn on mismatches)</li>
          <li>Execute L3 PROCESS in airlock</li>
          <li>Capture L4 RESULT on target machine</li>
          <li>Compare result_hash with original</li>
          <li>Create L5 VERIFY record</li>
          <li>Return verification result</li>
        </ol>
      </section>

      <!-- Section 6.3: Fork -->
      <section anchor="fork">
        <name>Fork</name>

        <t>Input: UPIP stack, actor_from, actor_to, intent</t>
        <t>Output: Fork Token</t>

        <ol>
          <li>Load UPIP stack</li>
          <li>Compute active_memory_hash from L1-L4</li>
          <li>Snapshot partial_layers (hash + key fields per layer)</li>
          <li>Generate fork_id</li>
          <li>Compute fork_hash</li>
          <li>Create Fork Token</li>
          <li>Append to fork_chain in parent stack</li>
          <li>Return Fork Token</li>
        </ol>
      </section>

      <!-- Section 6.4: Resume -->
      <section anchor="resume">
        <name>Resume</name>

        <t>Input: Fork Token (.fork.json), command, actor</t>
        <t>Output: New UPIP stack, verification record</t>

        <ol>
          <li>Load Fork Token</li>
          <li>Recompute fork_hash and compare (tamper check)</li>
          <li>Check capability_required against local environment</li>
          <li>Execute command in airlock (shadow-run)</li>
          <li>Capture new UPIP stack (L1-L4)</li>
          <li>Copy fork_chain from parent, append this fork</li>
          <li>Create L5 VERIFY with fork validation results</li>
          <li>Return new stack + verification</li>
        </ol>
      </section>

      <!-- Section 6.5: Fragment (Fork-Squared) -->
      <section anchor="fragment">
        <name>Fragment (Fork-Squared)</name>

        <t>Input: UPIP stack, N fragments, actor list</t>
        <t>Output: N Fork Tokens of type "fragment"</t>

        <ol>
          <li>Load UPIP stack</li>
          <li>Define fragment specification (how to split)</li>
          <li>For each fragment i in 1..N: Create Fork Token with
          fork_type="fragment", set fragment-specific metadata
          (index, total, range), and deliver to actor[i].</li>
          <li>Wait for N ACKs</li>
          <li>Verify all fragment hashes</li>
          <li>Reconstruct combined result</li>
        </ol>

        <t>Fragment tokens MUST include metadata fields:</t>
        <ul>
          <li>fragment_index: Position in sequence (0-based)</li>
          <li>fragment_total: Total number of fragments</li>
          <li>fragment_spec: Description of this fragment's portion</li>
        </ul>
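        <t>Fragment token creation can be sketched as follows; the
        fork_id format and the structure of spec_parts are
        illustrative, while the metadata fields are the REQUIRED ones
        listed above:</t>

        <sourcecode type="python"><![CDATA[
import uuid

def make_fragment_tokens(parent_stack_hash, actor_from, actors,
                         spec_parts):
    # One token per (actor, fragment) pair, each carrying the
    # REQUIRED fragment metadata fields.
    total = len(actors)
    tokens = []
    for index, (actor, part) in enumerate(zip(actors, spec_parts)):
        tokens.append({
            "fork_id": "fork-" + uuid.uuid4().hex[:12],
            "parent_stack_hash": parent_stack_hash,
            "fork_type": "fragment",
            "actor_handoff": "%s -> %s" % (actor_from, actor),
            "metadata": {
                "fragment_index": index,
                "fragment_total": total,
                "fragment_spec": part,
            },
        })
    return tokens
        ]]></sourcecode>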
      </section>
    </section>

    <!-- Section 7: Transport: I-Poll Delivery -->
    <section anchor="transport">
      <name>Transport: I-Poll Delivery</name>

      <t>While UPIP is transport-agnostic, this section defines the
      I-Poll binding for real-time fork delivery between AI agents.</t>

      <!-- Section 7.1: Fork Delivery Message Format -->
      <section anchor="fork-delivery">
        <name>Fork Delivery Message Format</name>

        <t>Fork tokens are delivered via I-Poll TASK messages:</t>

        <sourcecode type="json"><![CDATA[
{
  "from_agent": "<source agent>",
  "to_agent": "<target agent>",
  "content": "<human-readable fork summary>",
  "poll_type": "TASK",
  "metadata": {
    "upip_fork": true,
    "fork_id": "<fork_id>",
    "fork_hash": "fork:sha256:<hex>",
    "fork_type": "<type>",
    "continuation_point": "<point>",
    "actor_handoff": "<from> -> <to>",
    "fork_data": { "<complete fork token JSON>" : "..." }
  }
}
        ]]></sourcecode>

        <t>The "upip_fork" metadata flag MUST be true to identify this
        message as a fork delivery.</t>

        <t>The "fork_data" field MUST contain the complete fork token
        as defined in <xref target="fork-token-structure"/>. This allows
        the receiving agent to reconstruct the fork token without
        needing the .fork.json file.</t>
      </section>

      <!-- Section 7.2: Acknowledgment -->
      <section anchor="acknowledgment">
        <name>Acknowledgment</name>

        <t>After processing a fork token, the receiving actor SHOULD
        send an ACK message:</t>

        <sourcecode type="json"><![CDATA[
{
  "from_agent": "<resuming agent>",
  "to_agent": "<original agent>",
  "content": "FORK RESUMED_OK -- <fork_id>",
  "poll_type": "ACK",
  "metadata": {
    "upip_fork": true,
    "fork_id": "<fork_id>",
    "fork_status": "RESUMED_OK",
    "resume_hash": "upip:sha256:<hex>",
    "resumed_by": "<agent identity>"
  }
}
        ]]></sourcecode>

        <t>The resume_hash is the stack_hash of the new UPIP stack
        created during resume.</t>

        <t>The fork_status field MUST be one of "RESUMED_OK" or
        "RESUMED_FAIL".</t>
      </section>

      <!-- Section 7.3: Listener Pattern -->
      <section anchor="listener-pattern">
        <name>Listener Pattern</name>

        <t>Agents MAY implement a poll-based listener that:</t>

        <ol>
          <li>Periodically polls I-Poll inbox for TASK messages</li>
          <li>Filters for messages with "upip_fork": true in metadata</li>
          <li>Extracts the fork token from "fork_data"</li>
          <li>Validates the fork hash</li>
          <li>Executes the resume operation</li>
          <li>Sends ACK back to the original agent</li>
        </ol>

        <t>Poll interval SHOULD be configurable. Default: 5 seconds.
        Implementations SHOULD support exponential backoff when the
        inbox is empty.</t>
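        <t>The backoff schedule can be expressed as a simple
        generator. Only the 5-second base interval comes from this
        section; the doubling factor and 60-second cap are
        illustrative defaults:</t>

        <sourcecode type="python"><![CDATA[
def poll_intervals(base=5.0, factor=2.0, cap=60.0):
    # Yields the delay before each successive empty-inbox poll,
    # growing geometrically up to a cap; the listener resets the
    # generator whenever a fork delivery arrives.
    delay = base
    while True:
        yield delay
        delay = min(delay * factor, cap)
        ]]></sourcecode>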
      </section>
    </section>

    <!-- Section 8: Validation Rules -->
    <section anchor="validation-rules">
      <name>Validation Rules</name>

      <!-- Section 8.1: Stack Validation -->
      <section anchor="stack-validation">
        <name>Stack Validation</name>

        <t>A UPIP stack is valid if and only if:</t>

        <ol>
          <li>All required fields are present</li>
          <li>state_hash matches the state data according to its
          state_type (for "git", the commit SHA; for "files", the
          SHA-256 of the sorted manifest)</li>
          <li>deps_hash matches SHA-256 of the sorted dependency
          serialization</li>
          <li>result_hash matches SHA-256 of exit_code + stdout + stderr</li>
          <li>stack_hash matches the computation defined in
          <xref target="stack-hash-computation"/></li>
        </ol>

        <t>Validation MUST be performed when loading a .upip.json file
        and SHOULD be performed before reproduction.</t>
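        <t>A self-contained validation sketch covering the
        required-field and stack_hash checks; recomputing the L1, L2,
        and L4 hashes follows the same pattern and is omitted here.
        The sort_keys L3 serialization is an implementation choice:</t>

        <sourcecode type="python"><![CDATA[
import hashlib
import json

REQUIRED_FIELDS = ("protocol", "version", "stack_hash",
                   "state", "deps", "process", "result")

def validate_stack(stack):
    # Returns a list of validation errors; an empty list means valid.
    errors = ["missing field: " + f
              for f in REQUIRED_FIELDS if f not in stack]
    if errors:
        return errors
    l3 = hashlib.sha256(
        json.dumps(stack["process"], sort_keys=True,
                   separators=(",", ":")).encode("utf-8")).hexdigest()
    material = "|".join([stack["state"]["state_hash"],
                         stack["deps"]["deps_hash"], l3,
                         stack["result"]["result_hash"]])
    expected = "upip:sha256:" + hashlib.sha256(
        material.encode("utf-8")).hexdigest()
    if stack["stack_hash"] != expected:
        errors.append("stack_hash mismatch")
    return errors
        ]]></sourcecode>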
      </section>

      <!-- Section 8.2: Fork Validation on Resume -->
      <section anchor="fork-validation">
        <name>Fork Validation on Resume</name>

        <t>When resuming a fork token, the following checks MUST be
        performed:</t>

        <ol>
          <li>FORK HASH: Recompute fork_hash from token fields and
          compare with stored fork_hash. Records tamper evidence.</li>

          <li>STORED HASH: Compare fork_hash with the hash stored in
          the .fork.json file header.</li>

          <li>CAPABILITIES: Verify each entry in capability_required
          against the local environment. Record missing capabilities.</li>

          <li>EXPIRATION: Check expires_at if present. Expired forks
          SHOULD generate a warning but MUST NOT be blocked.</li>
        </ol>

        <t>All four checks MUST be recorded in the L5 VERIFY record.
        Failed checks MUST NOT prevent execution (evidence over
        enforcement principle).</t>
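        <t>The checks above can be sketched as a function that
        produces an evidence record and never blocks. The tamper
        fields mirror <xref target="tamper-evidence"/>; the expiry
        handling and the "now" parameter are illustrative:</t>

        <sourcecode type="python"><![CDATA[
import hashlib
from datetime import datetime, timezone

FORK_HASH_FIELDS = ("fork_id", "parent_hash", "parent_stack_hash",
                    "continuation_point", "intent_snapshot",
                    "active_memory_hash", "actor_handoff", "fork_type")

def resume_checks(token, now=None):
    # Recompute the fork hash and compare with the stored value;
    # failures are recorded as evidence, execution proceeds regardless.
    material = "|".join(str(token[f]) for f in FORK_HASH_FIELDS)
    computed = "fork:sha256:" + hashlib.sha256(
        material.encode("utf-8")).hexdigest()
    record = {
        "fork_hash_match": computed == token["fork_hash"],
        "expected_hash": token["fork_hash"],
        "computed_hash": computed,
    }
    record["tamper_evidence"] = not record["fork_hash_match"]
    if token.get("expires_at"):
        now = now or datetime.now(timezone.utc)
        expires = datetime.fromisoformat(
            token["expires_at"].replace("Z", "+00:00"))
        record["expired"] = expires < now
    return record
        ]]></sourcecode>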
      </section>

      <!-- Section 8.3: Tamper Evidence -->
      <section anchor="tamper-evidence">
        <name>Tamper Evidence</name>

        <t>If fork_hash validation fails, the verification record
        MUST include:</t>

        <sourcecode type="json"><![CDATA[
{
  "fork_hash_match": false,
  "expected_hash": "fork:sha256:<original>",
  "computed_hash": "fork:sha256:<recomputed>",
  "tamper_evidence": true
}
        ]]></sourcecode>

        <t>This creates an immutable record that tampering occurred,
        without preventing the process from continuing. The decision
        of whether to act on tamper evidence is left to the
        consuming application.</t>
      </section>
    </section>

    <!-- Section 9: Security Considerations -->
    <section anchor="security">
      <name>Security Considerations</name>

      <!-- Section 9.1: Hash Chain Integrity -->
      <section anchor="hash-chain-integrity">
        <name>Hash Chain Integrity</name>

        <t>UPIP uses SHA-256 for all hash computations. Implementations
        MUST use SHA-256 as defined in <xref target="FIPS180-4"/>. Future
        versions MAY support SHA-3 or other hash functions via an
        algorithm identifier prefix.</t>

        <t>The hash chain structure ensures that modifying any component
        at any layer propagates to the stack hash, providing tamper
        evidence for the entire bundle.</t>
      </section>

      <!-- Section 9.2: Evidence vs Enforcement -->
      <section anchor="evidence-vs-enforcement">
        <name>Evidence vs Enforcement</name>

        <t>UPIP is deliberately designed as an evidence protocol, not an
        enforcement protocol. Fork validation failures do not block
        execution; they are recorded as evidence. This design choice
        reflects the reality that:</t>

        <ul>
          <li>In adversarial environments, enforcement can be circumvented</li>
          <li>Evidence creates accountability that enforcement cannot</li>
          <li>Downstream consumers can make their own trust decisions
          based on the evidence chain</li>
        </ul>

        <t>Applications that require enforcement SHOULD implement
        additional policy layers on top of UPIP evidence.</t>
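
        <t>A hypothetical policy layer might consume the evidence as
        follows; the field and policy names here are illustrative and
        application-defined, not part of UPIP:</t>

        <sourcecode type="python"><![CDATA[
def policy_gate(verify_record, policy):
    """Application-level enforcement on top of UPIP evidence.
    UPIP records; the consuming application decides."""
    if policy.get("reject_tampered") and verify_record.get("tamper_evidence"):
        return False
    if policy.get("reject_expired") and verify_record.get("expired"):
        return False
    if (policy.get("require_capabilities")
            and verify_record.get("missing_capabilities")):
        return False
    return True
]]></sourcecode>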
      </section>

      <!-- Section 9.3: Memory Hash for AI Actors -->
      <section anchor="memory-hash-ai">
        <name>Memory Hash for AI Actors</name>

        <t>When fork_type is "ai_to_ai", the active_memory_hash
        represents the SHA-256 of the serialized AI context window.
        This raises unique considerations:</t>

        <ul>
          <li>Context serialization format is model-dependent</li>
          <li>The blob may contain sensitive information</li>
          <li>Exact reproduction of AI state is generally not possible</li>
        </ul>

        <t>Implementations SHOULD encrypt memory blobs at rest.
        Implementations MUST NOT require exact memory reproduction
        for fork validation. The memory hash serves as evidence of
        state at fork time, not as a reproducibility guarantee.</t>
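
        <t>A sketch of memory-hash computation, assuming sorted-key
        JSON as the serialization; in practice the format is
        model-dependent, and the blob itself would be encrypted at
        rest with only the hash traveling in the fork token:</t>

        <sourcecode type="python"><![CDATA[
import hashlib
import json

def memory_hash(context_window):
    """Hash a serialized AI context window for an ai_to_ai fork.
    The hash is evidence of state at fork time, not a
    reproducibility guarantee."""
    blob = json.dumps(context_window, sort_keys=True,
                      separators=(",", ":")).encode("utf-8")
    return "sha256:" + hashlib.sha256(blob).hexdigest()
]]></sourcecode>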
      </section>

      <!-- Section 9.4: Capability Verification -->
      <section anchor="capability-verification">
        <name>Capability Verification</name>

        <t>Capability requirements in fork tokens are self-reported by
        the forking actor. The receiving actor SHOULD independently
        verify capabilities rather than trusting the requirement
        specification alone.</t>

        <t>Package version verification SHOULD use installed package
        metadata. GPU availability SHOULD be verified via hardware
        detection, not configuration claims.</t>
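
        <t>An illustrative verification routine; the capability key
        scheme and the nvidia-smi probe are assumptions for the sake
        of the example, not part of this specification:</t>

        <sourcecode type="python"><![CDATA[
import shutil
from importlib import metadata

def verify_capabilities(required):
    """Independently verify self-reported capability requirements.
    `required` maps names to constraints, e.g.
    {"package:numpy": "1.26", "gpu": True}. Packages are checked
    against installed metadata; GPU presence via a hardware probe,
    never via configuration claims."""
    missing = []
    for name, constraint in required.items():
        if name.startswith("package:"):
            try:
                metadata.version(name.split(":", 1)[1])
            except metadata.PackageNotFoundError:
                missing.append(name)
        elif name == "gpu" and constraint:
            # Crude but honest probe: is the NVIDIA driver tool present?
            if shutil.which("nvidia-smi") is None:
                missing.append(name)
    return missing
]]></sourcecode>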
      </section>

      <!-- Section 9.5: Replay Attacks -->
      <section anchor="replay-attacks">
        <name>Replay Attacks</name>

        <t>Fork tokens include fork_id and forked_at fields to mitigate
        replay attacks. Implementations SHOULD track consumed fork_ids
        and reject duplicate fork_ids within a configurable time
        window.</t>

        <t>The expires_at field provides time-based expiration. Agents
        SHOULD set expires_at for forks that are time-sensitive.</t>
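
        <t>One possible replay cache, shown as a non-normative
        sketch:</t>

        <sourcecode type="python"><![CDATA[
import time

class ForkReplayCache:
    """Track consumed fork_ids within a sliding time window
    (seconds)."""

    def __init__(self, window=3600.0):
        self.window = window
        self._seen = {}  # fork_id -> consumption time

    def consume(self, fork_id, now=None):
        """Return True if fork_id is fresh, False if it is a replay."""
        now = time.time() if now is None else now
        # Evict entries that have aged out of the window.
        self._seen = {fid: t for fid, t in self._seen.items()
                      if now - t < self.window}
        if fork_id in self._seen:
            return False
        self._seen[fork_id] = now
        return True
]]></sourcecode>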
      </section>

      <section anchor="regulatory-alignment">
        <name>Regulatory Alignment</name>

        <t>UPIP's evidence-based design aligns with requirements from
        the <xref target="EU-AI-ACT"/>, <xref target="NIST-AI-RMF"/>,
        and <xref target="ISO42001"/>. The complete process capture at
        each layer provides the audit trail required by these
        frameworks for AI system transparency and accountability.</t>
      </section>
    </section>

    <!-- Section 10: IANA Considerations -->
    <section anchor="iana">
      <name>IANA Considerations</name>

      <t>This document requests that IANA register the following:</t>

      <section anchor="media-types">
        <name>Media Type Registrations</name>

        <ul>
          <li>application/upip+json (file extension: .upip.json) for
          UPIP stack bundles</li>

          <li>application/upip-fork+json (file extension: .fork.json)
          for fork tokens</li>
        </ul>
      </section>

      <section anchor="header-fields">
        <name>HTTP Header Field Registrations</name>

        <ul>
          <li>X-UPIP-Stack-Hash</li>
          <li>X-UPIP-Fork-ID</li>
          <li>X-UPIP-Fork-Hash</li>
        </ul>
      </section>
    </section>
  </middle>

  <back>
    <references>
      <name>References</name>

      <references>
        <name>Normative References</name>

        <reference anchor="RFC2119"
                   target="https://www.rfc-editor.org/info/rfc2119">
          <front>
            <title>Key words for use in RFCs to Indicate
                   Requirement Levels</title>
            <author fullname="S. Bradner" initials="S."
                    surname="Bradner"/>
            <date month="March" year="1997"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
        </reference>

        <reference anchor="RFC8174"
                   target="https://www.rfc-editor.org/info/rfc8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in
                   RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B."
                    surname="Leiba"/>
            <date month="May" year="2017"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
        </reference>

        <reference anchor="RFC8259"
                   target="https://www.rfc-editor.org/info/rfc8259">
          <front>
            <title>The JavaScript Object Notation (JSON) Data
                   Interchange Format</title>
            <author fullname="T. Bray" initials="T." surname="Bray"/>
            <date month="December" year="2017"/>
          </front>
          <seriesInfo name="RFC" value="8259"/>
        </reference>

        <reference anchor="RFC4122"
                   target="https://www.rfc-editor.org/info/rfc4122">
          <front>
            <title>A Universally Unique IDentifier (UUID) URN
                   Namespace</title>
            <author fullname="P. Leach" initials="P." surname="Leach"/>
            <author fullname="M. Mealling" initials="M."
                    surname="Mealling"/>
            <author fullname="R. Salz" initials="R." surname="Salz"/>
            <date month="July" year="2005"/>
          </front>
          <seriesInfo name="RFC" value="4122"/>
        </reference>

        <reference anchor="FIPS180-4">
          <front>
            <title>Secure Hash Standard (SHS)</title>
            <author>
              <organization>National Institute of Standards and
              Technology (NIST)</organization>
            </author>
            <date month="August" year="2015"/>
          </front>
          <seriesInfo name="FIPS PUB" value="180-4"/>
        </reference>
      </references>

      <references>
        <name>Informative References</name>

        <reference anchor="TIBET">
          <front>
            <title>TIBET: Transaction/Interaction-Based Evidence
                   Trail</title>
            <author fullname="J. van de Meent" initials="J."
                    surname="van de Meent"/>
            <author fullname="Root AI" surname="Root AI"/>
            <date month="January" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft"
                      value="draft-vandemeent-tibet-provenance-00"/>
        </reference>

        <reference anchor="JIS">
          <front>
            <title>JIS: JTel Identity Standard</title>
            <author fullname="J. van de Meent" initials="J."
                    surname="van de Meent"/>
            <author fullname="Root AI" surname="Root AI"/>
            <date month="January" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft"
                      value="draft-vandemeent-jis-identity-00"/>
        </reference>

        <reference anchor="EU-AI-ACT">
          <front>
            <title>Regulation (EU) 2024/1689 laying down harmonised
                   rules on artificial intelligence (Artificial
                   Intelligence Act)</title>
            <author>
              <organization>European Commission</organization>
            </author>
            <date year="2024"/>
          </front>
        </reference>

        <reference anchor="NIST-AI-RMF">
          <front>
            <title>Artificial Intelligence Risk Management Framework
                   (AI RMF 1.0)</title>
            <author>
              <organization>National Institute of Standards and
              Technology (NIST)</organization>
            </author>
            <date month="January" year="2023"/>
          </front>
        </reference>

        <reference anchor="ISO42001">
          <front>
            <title>Information technology - Artificial intelligence -
                   Management system</title>
            <author>
              <organization>International Organization for
              Standardization (ISO)</organization>
            </author>
            <date year="2023"/>
          </front>
          <seriesInfo name="ISO/IEC" value="42001:2023"/>
        </reference>
      </references>
    </references>

    <!-- Appendix A: UPIP Stack JSON Schema -->
    <section anchor="appendix-a">
      <name>UPIP Stack JSON Schema</name>

      <sourcecode type="json"><![CDATA[
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["protocol", "version", "stack_hash",
                "state", "deps", "process", "result"],
  "properties": {
    "protocol": {"const": "UPIP"},
    "version": {"type": "string"},
    "title": {"type": "string"},
    "created_by": {"type": "string"},
    "created_at": {"type": "string", "format": "date-time"},
    "stack_hash": {
      "type": "string",
      "pattern": "^upip:sha256:[a-f0-9]{64}$"
    },
    "state": {
      "type": "object",
      "required": ["state_type", "state_hash"],
      "properties": {
        "state_type": {
          "enum": ["git", "files", "image", "empty"]
        },
        "state_hash": {"type": "string"}
      }
    },
    "deps": {
      "type": "object",
      "required": ["deps_hash"],
      "properties": {
        "python_version": {"type": "string"},
        "packages": {"type": "object"},
        "deps_hash": {"type": "string"}
      }
    },
    "process": {
      "type": "object",
      "required": ["command", "intent", "actor"],
      "properties": {
        "command": {"type": "array", "items": {"type": "string"}},
        "intent": {"type": "string"},
        "actor": {"type": "string"}
      }
    },
    "result": {
      "type": "object",
      "required": ["success", "exit_code", "result_hash"],
      "properties": {
        "success": {"type": "boolean"},
        "exit_code": {"type": "integer"},
        "result_hash": {"type": "string"}
      }
    },
    "fork_chain": {
      "type": "array",
      "items": {"type": "object"}
    }
  }
}
      ]]></sourcecode>
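
      <t>A minimal structural check against this schema, using only
      the Python standard library (a full implementation would use a
      JSON Schema validator):</t>

      <sourcecode type="python"><![CDATA[
import re

STACK_HASH_RE = re.compile(r"^upip:sha256:[a-f0-9]{64}$")
REQUIRED_KEYS = ("protocol", "version", "stack_hash",
                 "state", "deps", "process", "result")

def check_stack(bundle):
    """Return a list of structural errors; an empty list means the
    bundle passes these basic checks."""
    errors = ["missing: " + key for key in REQUIRED_KEYS
              if key not in bundle]
    if bundle.get("protocol") not in (None, "UPIP"):
        errors.append("protocol: must be 'UPIP'")
    if "stack_hash" in bundle and not STACK_HASH_RE.match(bundle["stack_hash"]):
        errors.append("stack_hash: bad format")
    return errors
]]></sourcecode>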
    </section>

    <!-- Appendix B: Fork Token JSON Schema -->
    <section anchor="appendix-b">
      <name>Fork Token JSON Schema</name>

      <sourcecode type="json"><![CDATA[
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["fork_id", "fork_type", "fork_hash",
                "active_memory_hash", "forked_at"],
  "properties": {
    "fork_id": {"type": "string", "pattern": "^fork-"},
    "parent_hash": {"type": "string"},
    "parent_stack_hash": {
      "type": "string",
      "pattern": "^upip:sha256:"
    },
    "continuation_point": {"type": "string"},
    "intent_snapshot": {"type": "string"},
    "active_memory_hash": {
      "type": "string",
      "pattern": "^sha256:"
    },
    "memory_ref": {"type": "string"},
    "fork_type": {
      "enum": ["script", "ai_to_ai", "human_to_ai", "fragment"]
    },
    "actor_from": {"type": "string"},
    "actor_to": {"type": "string"},
    "actor_handoff": {"type": "string"},
    "capability_required": {"type": "object"},
    "forked_at": {"type": "string", "format": "date-time"},
    "expires_at": {"type": "string"},
    "fork_hash": {
      "type": "string",
      "pattern": "^fork:sha256:[a-f0-9]{64}$"
    },
    "partial_layers": {"type": "object"},
    "metadata": {"type": "object"}
  }
}
      ]]></sourcecode>
    </section>

    <!-- Appendix C: Use Case Examples -->
    <section anchor="appendix-c">
      <name>Use Case Examples</name>

      <!-- C.1: Multi-Agent AI Task Delegation -->
      <section anchor="use-case-multi-agent">
        <name>Multi-Agent AI Task Delegation</name>

        <t>An AI orchestrator (Agent A) analyzes a dataset, creates a UPIP
        bundle, forks it to a specialist AI (Agent B) for deep analysis,
        and receives the result with cryptographic proof.</t>

        <sourcecode type="pseudocode"><![CDATA[
Agent A:
  stack = capture_and_run(["python", "scan.py"], intent="Initial scan")
  fork = fork_upip(stack, actor_from="A", actor_to="B",
                   intent="Deep analysis")
  deliver_fork(fork, to_agent="B")

Agent B:
  fork = pull_forks()
  stack = resume_upip(fork, command=["python", "deep_analyze.py"])
  ack_fork(fork, resume_hash=stack.hash, success=True)
        ]]></sourcecode>

        <t>Result: Both agents have UPIP stacks linked by fork_chain.
        Any auditor can verify the complete chain.</t>
      </section>

      <!-- C.2: Drone Swarm Coordination -->
      <section anchor="use-case-drone-swarm">
        <name>Drone Swarm Coordination</name>

        <t>A command station dispatches N reconnaissance tasks to N drones.
        Each drone receives a fragment fork token, executes its
        assigned sector scan, and returns the result.</t>

        <sourcecode type="pseudocode"><![CDATA[
Command Station:
  base_stack = capture_and_run(["mission_plan.py"])
  for i in range(N):
    fork = fork_upip(base_stack,
                     actor_from="command",
                     actor_to=f"drone-{i}",
                     fork_type="fragment",
                     metadata={"sector": sectors[i]})
    deliver_fork(fork, to_agent=f"drone-{i}")

Each Drone:
  fork = pull_forks()
  stack = resume_upip(fork, command=["scan_sector.py"])
  ack_fork(fork, resume_hash=stack.hash)

Command Station:
  # Verify all N results, reconstruct combined map
  for ack in collect_acks():
    verify(ack.resume_hash)
        ]]></sourcecode>
      </section>

      <!-- C.3: Scientific Experiment Reproduction -->
      <section anchor="use-case-scientific">
        <name>Scientific Experiment Reproduction</name>

        <t>Lab A publishes an experiment as a UPIP bundle. Lab B
        reproduces it independently and gets cryptographic proof
        that results match (or don't).</t>

        <sourcecode type="pseudocode"><![CDATA[
Lab A:
  stack = capture_and_run(
    ["python", "train_model.py"],
    source_dir="./experiment",
    intent="Train model v3 on dataset-2026Q1"
  )
  save_upip(stack, "experiment-2026Q1.upip.json")
  # Publish to journal / data repository

Lab B:
  stack = load_upip("experiment-2026Q1.upip.json")
  verify = reproduce_upip(stack)
  # verify.match == True: exact reproduction
  # verify.match == False: divergence (investigate)
        ]]></sourcecode>
      </section>
    </section>

    <!-- Acknowledgements -->
    <section anchor="acknowledgements" numbered="false">
      <name>Acknowledgements</name>
      <t>The UPIP protocol was developed as part of HumoticaOS,
      an AI governance framework built on human-AI symbiosis.
      UPIP builds on concepts from the TIBET evidence trail protocol
      and extends them into the domain of process integrity and
      multi-actor continuation.</t>
      <t>The Fork Token mechanism was inspired by the need for
      cryptographic chain of custody in multi-agent AI systems,
      where processes move between heterogeneous actors across
      trust boundaries.</t>
    </section>
  </back>
</rfc>
