<?xml version='1.0' encoding='utf-8'?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     version="3"
     docName="draft-condrey-rats-pop-appraisal-04"
     ipr="trust200902"
     category="exp"
     submissionType="independent"
     sortRefs="true"
     symRefs="true"
     tocInclude="true"
     tocDepth="4">

  <front>
    <title abbrev="PoP Appraisal">Proof of Process (PoP): Forensic Appraisal and Security Model</title>
    <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-appraisal-04"/>
    <author fullname="David Condrey" initials="D." surname="Condrey">
      <organization abbrev="WritersLogic">WritersLogic Inc</organization>
      <address>
        <postal>
          <city>San Diego</city>
          <region>California</region>
          <country>United States</country>
        </postal>
        <email>david@writerslogic.com</email>
      </address>
    </author>
    <date year="2026" month="February" day="18"/>

    <area>Security</area>
    <workgroup>Individual Submission</workgroup>

    <keyword>attestation</keyword>
    <keyword>forensics</keyword>
    <keyword>biometrics</keyword>
    <keyword>security economics</keyword>

    <abstract>
      <t>
        This document specifies the forensic appraisal methodology and quantitative security model for the Proof of Process (PoP) framework. It defines how Verifiers evaluate behavioral entropy, perform liveness detection, and calculate forgery cost bounds. Additionally, it establishes the taxonomy for Absence Proofs and the Writers Authenticity Report (WAR) format, as well as the Tool Receipt protocol for artificial intelligence (AI) attribution within the linear human authoring process.
      </t>
    </abstract>

    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/writerslogic/draft-condrey-rats-pop"/>.</t>
    </note>
  </front>

  <middle>
    <section anchor="introduction">
      <name>Introduction</name>
      <t>
        The value of Proof of Process (PoP) evidence lies in the Verifier's ability to distinguish biological effort from algorithmic simulation. While traditional RATS <xref target="RFC9334"/> appraisals verify system state, PoP appraisal verifies a continuous physical process. This document provides the normative framework for forensic appraisal, defining the logic required to generate a Writers Authenticity Report (WAR).
      </t>
      <t>
        This document is a companion to <xref target="PoP-Protocol"/>,
        which defines the Evidence Packet wire format and Attester
        procedures. The present document specifies the Verifier's appraisal
        logic, Attestation Result (WAR) wire format, and forensic
        methodology. Implementers of Verifier components require both
        documents.
      </t>
      <t>
        At T3/T4 attestation tiers, platform integrity verification as described in the SEAT use cases <xref target="SEAT-UseCases"/> provides the trust anchor for PoP's hardware-bound claims. When PoP Evidence is delivered over an attested TLS channel <xref target="SEAT-EXPAT"/>, the Verifier gains assurance that the Attesting Environment's platform was trustworthy during evidence generation.
      </t>
    </section>

    <section anchor="terminology">
      <name>Terminology</name>
      <t>
        This document uses the following terms in addition to those defined
        in <xref target="RFC9334"/> and <xref target="PoP-Protocol"/>:
      </t>
      <dl>
        <dt>Synthetic Authoring:</dt>
        <dd>Content generated by AI or automated tools that is subsequently attributed to a human author.</dd>
        <dt>Evidence Quantization:</dt>
        <dd>The process of reducing timing resolution in behavioral data to protect author privacy while maintaining forensic utility.</dd>
        <dt>IKI (Inter-Keystroke Interval):</dt>
        <dd>The time elapsed between consecutive keystrokes, measured in milliseconds.</dd>
        <dt>C_intra:</dt>
        <dd>Pearson correlation between pause duration and subsequent edit complexity within a single checkpoint interval. Values near 0.0 indicate robotic pacing; values above 0.3 indicate human-like variable effort.</dd>
        <dt>CLC (Cognitive Load Correlation):</dt>
        <dd>Statistical correlation between content semantic complexity and typing cadence, used to distinguish original composition from retyping.</dd>
        <dt>SNR (Signal-to-Noise Ratio) Analysis:</dt>
        <dd>Spectral analysis of jitter intervals to distinguish biological motor noise patterns from synthetic injection.</dd>
      </dl>
    </section>

    <section anchor="requirements-language">
      <name>Requirements Language</name>
      <t>
        The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
        "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
        "OPTIONAL" in this document are to be interpreted as described in
        BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and
        only when, they appear in all capitals, as shown here.
      </t>
    </section>

    <section anchor="verification-procedure">
      <name>Step-by-Step Verification Procedure</name>
      <t>
        A Verifier MUST perform the following procedure to appraise a PoP Evidence Packet:
      </t>
      <ol>
        <li><em>Structural Validation:</em> The Verifier MUST reject with verdict invalid (4) any Evidence Packet that: (a) fails CBOR decoding, (b) lacks CBOR tag 1347571280, (c) has version != 1, (d) is missing mandatory fields (keys 1-6 in evidence-packet, keys 1-9 in each checkpoint), or (e) contains CBOR types that do not match the CDDL schema.</li>
        <li><em>Chain Integrity:</em> Verify the SHA-256 hash link between all checkpoints. Any break invalidates the entire Evidence Packet. The Verifier MUST set the verdict to invalid (4). The warnings field SHOULD include the checkpoint sequence number where the break was detected.</li>
        <li><em>Temporal Order:</em> For each process-proof, recompute Argon2id from the declared seed to obtain state_0, then verify sampled Merkle proofs against the committed root (process-proof key 4, merkle-root). Verify that claimed-duration is within [0.5x, 3.0x] of the expected wall-clock time for the declared proof-params on reference hardware (defined as a system with DDR4 memory providing approximately 25 GB/s sustained bandwidth). Expected times are defined in <xref target="PoP-Protocol"/>, Mandatory SWF Parameters section.</li>
        <li><em>Entropy Threshold:</em> Independently estimate entropy from the jitter-binding intervals array using a standard entropy estimator (e.g., NIST SP 800-90B most common value estimator). Verify the independent estimate meets or exceeds 3.0 bits per inter-keystroke interval. The Attester's self-reported entropy-estimate field MUST NOT be relied upon. Low-entropy segments (below threshold) MUST be flagged as "Non-Biological."</li>
        <li><em>Entanglement:</em> Verify the HMAC value (entangled-mac) over the combined document, jitter, and physical state.</li>
        <li><em>State Matching:</em> Verify that the final checkpoint's content-hash matches the document-ref content-hash. Verify that the cumulative char-count from edit-deltas is consistent with the document-ref char-count.</li>
        <li><em>Channel Binding:</em> If the Evidence Packet contains a channel-binding field and was received over TLS, verify that the binding-value matches the locally-computed TLS Exported Keying Material. Reject the Evidence Packet on mismatch.</li>
      </ol>
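      <t>
        The chain-integrity (Step 2) and entropy-threshold (Step 4) checks
        can be sketched as follows. This is an illustrative fragment, not a
        conforming implementation: the prev_hash and payload field names are
        hypothetical stand-ins for the CBOR keys defined in
        <xref target="PoP-Protocol"/>, and the MCV estimate omits the
        confidence-interval adjustment applied by the full NIST SP 800-90B
        procedure.
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib
import math
from collections import Counter

def verify_chain(checkpoints):
    """Step 2: verify the SHA-256 hash link between checkpoints.

    Returns the sequence number of the first break, or None if the
    chain is intact (an int result -> verdict invalid)."""
    prev = b"\x00" * 32  # hypothetical genesis link
    for seq, cp in enumerate(checkpoints):
        if cp["prev_hash"] != prev:
            return seq
        prev = hashlib.sha256(cp["payload"]).digest()
    return None

def mcv_entropy(samples):
    """Step 4: simplified most-common-value entropy estimate,
    -log2(p_max), in bits per inter-keystroke interval."""
    p_max = Counter(samples).most_common(1)[0][1] / len(samples)
    return -math.log2(p_max)
```
]]></sourcecode>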
      <t>
        Steps 4 and 5 apply only when jitter-binding and entangled-mac fields
        are present (ENHANCED and MAXIMUM profiles). For CORE Evidence Packets
        lacking these fields, the Verifier MUST skip Steps 4 and 5 and note
        in the WAR warnings that behavioral analysis was not performed.
      </t>
    </section>

    <section anchor="forensic-assessment">
      <name>Forensic Assessment Mechanisms</name>
      <t>
        The appraisal logic is designed to detect "Synthetic Authoring" -- content generated by AI and subsequently "back-filled" with timing and hardware attestation.
      </t>
      <dl spacing="normal">
        <dt>SNR (Signal-to-Noise Ratio) Analysis:</dt>
        <dd>Verifiers MUST compute the power spectral density of jitter intervals. Human motor signals exhibit characteristic noise patterns consistent with biological motor control <xref target="Monrose2000"/>. Evidence exhibiting spectral flatness greater than 0.9 (indicating white noise rather than biological 1/f-like noise) MUST be flagged as potentially synthetic.</dd>

        <dt>Cognitive Load Correlation (CLC):</dt>
        <dd>Verifiers MUST correlate timing patterns with semantic complexity. Human authors exhibit increased inter-keystroke intervals (IKI) and pause frequency during composition of semantically complex segments compared to simple connective text. Verifiers MUST compute the Pearson correlation between segment semantic complexity and mean IKI. Evidence with r &lt; 0.2 (or r &lt; 0.1 in assistive mode) MUST be flagged as a Semantic Mismatch.</dd>

        <dt>Mechanical Turk Detection:</dt>
        <dd>Verifiers MUST compute C_intra (Pearson correlation between pause duration and subsequent edit complexity within each checkpoint). C_intra values below 0.15 MUST be flagged as indicating robotic pacing, where an automated system maintains a machine-clocked editing rate independent of content demands.</dd>

        <dt>Error Topology Analysis:</dt>
        <dd>Verifiers SHOULD analyze error patterns for consistency with human cognitive processing <xref target="Salthouse1986"/>: localized corrections near recent insertions, fractal self-similarity in revision patterns, and deletion-to-insertion ratios consistent with natural composition. Evidence exhibiting unnaturally low error rates (below 1 correction per 500 characters) or randomly distributed errors lacking positional correlation SHOULD be flagged.</dd>

        <dt>QR Presence Challenge (OOB-PC):</dt>
        <dd>When presence-challenge structures are present in the Evidence Packet, Verifiers MUST verify that the response-time is within the corresponding checkpoint's time window and MUST validate the device-signature. NOTE: The Attester-side procedure for issuing presence challenges is specified in <xref target="PoP-Protocol"/>.</dd>

        <dt>Session Consistency Analysis:</dt>
        <dd>Verifiers MUST analyze cross-checkpoint behavioral trends. IKI distributions should exhibit gradual drift consistent with fatigue effects. An abrupt change is defined as a shift in mean IKI between consecutive checkpoints exceeding 2 standard deviations of the session-wide IKI distribution. Verifiers MUST flag transitions exceeding this threshold as potential data source switching. Jitter-binding intervals across consecutive checkpoints MUST be checked for statistical independence (cross-checkpoint correlation below 0.3). Edit-delta patterns SHOULD be checked for non-stationarity consistent with human creative flow.</dd>
      </dl>
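      <t>
        The spectral-flatness computation used by the SNR analysis can be
        sketched with a direct DFT. A production Verifier would use an FFT
        library with windowing and Welch averaging; flatness computed from a
        single raw periodogram, as here, is noisier than the averaged
        spectrum the 0.9 threshold presumes.
      </t>
      <sourcecode type="python"><![CDATA[
```python
import cmath
import math

def spectral_flatness(intervals):
    """Geometric mean / arithmetic mean of the one-sided PSD of a
    jitter-interval series. Values near 1.0 indicate a white (flat)
    spectrum; structured biological signals score lower."""
    n = len(intervals)
    mean = sum(intervals) / n
    x = [v - mean for v in intervals]            # drop the DC component
    psd = []
    for k in range(1, n // 2 + 1):               # one-sided, skip DC
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        psd.append(max(abs(s) ** 2 / n, 1e-30))  # guard log(0)
    gm = math.exp(sum(math.log(p) for p in psd) / len(psd))
    am = sum(psd) / len(psd)
    return gm / am
```
]]></sourcecode>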
      <t>
        A conforming Verifier MUST evaluate all forensic mechanisms for
        which the Evidence Packet contains sufficient data. Any single
        triggered flag is sufficient to assign the suspicious verdict.
        Verifiers MAY implement additional analysis mechanisms beyond
        those defined in this specification.
      </t>

      <section anchor="snr-computation">
        <name>SNR Computation (Informative)</name>
        <t>
          The signal-to-noise ratio measures productive editing activity versus
          idle or mechanical noise within each evidence window:
        </t>
        <artwork><![CDATA[
  SNR = 10 * log10(P_signal / P_noise)

  where:
    P_signal = (keystroke_count + revision_count) / window_duration
    P_noise  = (pause_total_ms + idle_intervals) / window_duration
        ]]></artwork>
        <t>
          Typical ranges observed in human authorship:
        </t>
        <ul>
          <li>Human sessions: -3 dB to +12 dB, with variation reflecting
            cognitive processing cycles.</li>
          <li>Automated input (copy-paste, scripted typing): consistently
            above +15 dB due to minimal pause behavior.</li>
          <li>Sessions above +20 dB across all windows SHOULD be flagged
            as potentially non-human.</li>
          <li>Sessions below -10 dB across all windows indicate predominantly
            idle behavior and SHOULD be flagged as potentially fabricated
            padding.</li>
        </ul>
        <t>
          The Verifier SHOULD compute per-window SNR and session-wide SNR
          statistics (mean, variance, trend) as forensic indicators.
        </t>
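        <t>
          A minimal sketch of the per-window SNR formula above. Parameter
          names are illustrative, and units follow the formula as given
          (event counts for the signal term, milliseconds for the noise
          term).
        </t>
        <sourcecode type="python"><![CDATA[
```python
import math

def window_snr(keystrokes, revisions, pause_ms, idle_ms, duration_s):
    """SNR = 10 * log10(P_signal / P_noise) for one evidence window."""
    p_signal = (keystrokes + revisions) / duration_s
    p_noise = (pause_ms + idle_ms) / duration_s
    return 10 * math.log10(p_signal / p_noise)

def session_snr_stats(window_snrs):
    """Session-wide mean and variance, used as forensic indicators."""
    mean = sum(window_snrs) / len(window_snrs)
    var = sum((s - mean) ** 2 for s in window_snrs) / len(window_snrs)
    return mean, var
```
]]></sourcecode>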
      </section>

      <section anchor="clc-computation">
        <name>Cadence Trend Statistics (Informative)</name>
        <t>
          Two session-level trend statistics complement the CLC check
          defined above. The cadence divergence rate measures how quickly
          inter-keystroke interval (IKI) behavior evolves over the session,
          analogous to Lyapunov exponents in dynamical systems:
        </t>
        <artwork><![CDATA[
  CDR = (1/n) * sum_{i=1}^{n} ln(|delta_IKI[i]| / |delta_IKI[i-1]|)

  where:
    delta_IKI[i] = IKI_mean[i] - IKI_mean[i-1]
    n = number of consecutive window pairs
        ]]></artwork>
        <t>
          The delta compressibility ratio measures the informational
          complexity added per window:
        </t>
        <artwork><![CDATA[
  DCR[i] ~= compressed_size(delta_content[i]) / raw_size(delta_content[i])
        ]]></artwork>
        <t>
          Typical ranges: human authorship exhibits positive cadence
          divergence rates (0.01 to 0.5), reflecting natural creative
          divergence; values near zero indicate mechanical regularity.
          Delta compressibility ratios for human writing typically range
          from 0.3 to 0.8; values consistently near 1.0 suggest random
          content insertion, and values near 0.0 suggest verbatim copying.
        </t>
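        <t>
          The two trend statistics in this section can be sketched directly
          from per-window mean inter-keystroke intervals and raw content
          deltas; zero-valued deltas are skipped here to keep the log-ratio
          defined.
        </t>
        <sourcecode type="python"><![CDATA[
```python
import math
import zlib

def divergence_rate(iki_means_ms):
    """Mean log-ratio of successive changes in per-window mean IKI
    (the Lyapunov-style trend statistic above)."""
    deltas = [b - a for a, b in zip(iki_means_ms, iki_means_ms[1:])]
    ratios = [abs(deltas[i]) / abs(deltas[i - 1])
              for i in range(1, len(deltas))
              if deltas[i] != 0 and deltas[i - 1] != 0]
    return sum(math.log(r) for r in ratios) / len(ratios)

def delta_compressibility(delta_content: bytes) -> float:
    """Compressed-to-raw size ratio of one window's content delta."""
    return len(zlib.compress(delta_content)) / len(delta_content)
```
]]></sourcecode>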
      </section>

      <section anchor="turk-scoring">
        <name>Mechanical Turk Scoring (Informative)</name>
        <t>
          Indicators of mechanical turk behavior include:
        </t>
        <ul>
          <li>Paste-to-keystroke ratio exceeding 0.7 across a session.</li>
          <li>Burst insertion: more than 200 characters appearing in under
            2 seconds, characteristic of clipboard paste operations.</li>
          <li>Low compressibility variance: pasted content with uniformly
            high compressibility (compression ratio below 0.2), consistent
            with LLM-generated prose.</li>
          <li>Absence of cognitive pause patterns before and after complex
            sentences.</li>
          <li>Temporal clustering: paste events at regular intervals suggesting
            a prompt-copy-paste workflow.</li>
        </ul>
        <t>
          Verifiers SHOULD compute a mechanical turk probability score
          from 0.0 (no indicators) to 1.0 (all indicators present). A
          score exceeding 0.6 SHOULD trigger a recommendation for tool
          receipt documentation.
        </t>
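        <t>
          A minimal scoring sketch, assuming equal indicator weights (a
          production Verifier would likely weight indicators by
          reliability); the indicator names are illustrative.
        </t>
        <sourcecode type="python"><![CDATA[
```python
def turk_score(indicators):
    """Fraction of triggered indicators, in [0.0, 1.0]."""
    return sum(indicators.values()) / len(indicators)

session = {
    "paste_ratio_above_0_7": True,   # paste-to-keystroke ratio > 0.7
    "burst_insertion": True,         # >200 chars in under 2 seconds
    "uniform_compressibility": False,
    "missing_cognitive_pauses": True,
    "regular_paste_intervals": False,
}
score = turk_score(session)            # 3 of 5 indicators -> 0.6
recommend_tool_receipt = score > 0.6   # threshold from this section
```
]]></sourcecode>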
      </section>

      <section anchor="error-topology-model">
        <name>Error Topology Model (Informative)</name>
        <t>
          Error topology analysis constructs a directed graph of error and
          correction patterns. The error graph G = (V, E) has vertices V
          representing edit operations and edges E representing temporal
          succession. Human error topology exhibits:
        </t>
        <ul>
          <li>Power-law distribution of error cluster sizes.</li>
          <li>Short-range temporal locality (errors corrected within 5
            seconds).</li>
          <li>Increasing error rates at cognitive load boundaries (end of
            paragraphs, section transitions).</li>
          <li>Fractal self-similarity in revision patterns.</li>
        </ul>
        <t>
          Simulated error injection produces uniform error distribution,
          regular correction intervals, and no correlation between error
          rates and structural boundaries. A graph clustering coefficient
          below 0.1 combined with uniform correction latency is flagged
          as potentially synthetic.
        </t>
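        <t>
          The uniform-correction-interval indicator can be sketched as a
          coefficient-of-variation check over correction latencies; the
          full error-graph clustering analysis is beyond the scope of this
          fragment, and the sample latencies are illustrative.
        </t>
        <sourcecode type="python"><![CDATA[
```python
def correction_latency_cv(latencies_ms):
    """Coefficient of variation of correction latencies. Human
    corrections vary widely; near-constant latencies (CV close to
    zero) match the 'regular correction intervals' signature of
    simulated error injection."""
    n = len(latencies_ms)
    mean = sum(latencies_ms) / n
    var = sum((x - mean) ** 2 for x in latencies_ms) / n
    return (var ** 0.5) / mean

human = [300, 1200, 450, 5000, 800, 2100]   # ms, illustrative
robot = [500, 505, 498, 502, 501, 499]
```
]]></sourcecode>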
      </section>
    </section>

    <section anchor="economic-model">
      <name>Forgery Cost Bounds (Quantified Security)</name>
      <t>
        Forgery cost bounds provide a Verifier with a lower bound on the computational resources required to forge an Evidence Packet. The cost (<em>C_total</em>) is computed as:
      </t>
      <artwork><![CDATA[
  C_total = C_swf + C_entropy + C_hardware
      ]]></artwork>
      <section anchor="cost-swf">
        <name>Sequential Work Function Cost (C_swf)</name>
        <t>
          The SWF cost component provides a lower bound on the computational
          time an adversary must expend:
        </t>
        <artwork><![CDATA[
  C_swf >= n * t_checkpoint

  where:
    n = number of checkpoints in the Evidence chain
    t_checkpoint = wall-clock time for one SWF computation
        ]]></artwork>
        <t>
          The memory-hard nature of Argon2id ensures that an adversary with
          k parallel processors achieves at most O(sqrt(k)) speedup due to
          memory bandwidth constraints. The minimum forgery time equals the
          sum of SWF claimed-durations across all checkpoints. At T1 tier
          without hardware binding, C_swf represents an economic cost only
          (the adversary must spend real time, but has no hardware constraint).
        </t>
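        <t>
          A sketch of the C_swf bound and the parallelism limit, with the
          constant factor in the O(sqrt(k)) bound taken as 1 for
          illustration.
        </t>
        <sourcecode type="python"><![CDATA[
```python
def c_swf_lower_bound(claimed_durations_s):
    """Minimum forgery time: sum of SWF claimed-durations (seconds)."""
    return sum(claimed_durations_s)

def max_parallel_speedup(k):
    """Memory-hardness bound: at most ~sqrt(k) speedup with k
    parallel processors (constant factor taken as 1)."""
    return k ** 0.5

# 120 checkpoints of 30 s each: an adversary must expend at least
# 3600 s of memory-hard work, reducible only by ~sqrt(k).
floor_s = c_swf_lower_bound([30] * 120)
```
]]></sourcecode>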
      </section>

      <section anchor="cost-entropy">
        <name>Behavioral Evidence Synthesis Cost (C_entropy)</name>
        <t>
          The entropy cost component estimates the resources required to
          synthesize behavioral noise satisfying all forensic constraints:
        </t>
        <artwork><![CDATA[
  C_entropy = O(d * n * log(1/epsilon))

  where:
    d = number of independent forensic dimensions
    n = number of checkpoints
    epsilon = target false-negative rate
        ]]></artwork>
        <t>
          At T1/T2, only basic entropy and timing are checked (d = 2). For
          T3/T4, the full forensic assessment applies (d >= 7, including CLC,
          IKI, error topology, SNR dynamics, session consistency, and
          cross-checkpoint correlation), making synthesis exponentially more
          expensive in the number of correlated dimensions the adversary must
          simultaneously satisfy.
        </t>
        <t>
          The cost of synthesizing behavioral noise that satisfies all
          forensic constraints is inherently uncertain and depends on
          adversary capability. Verifiers SHOULD set C_entropy conservatively.
          When the Verifier cannot independently assess AI synthesis costs,
          C_entropy SHOULD be set to 0 and the WAR warnings field SHOULD
          note that entropy cost was not estimated.
        </t>
      </section>

      <section anchor="cost-hardware">
        <name>Hardware Attestation Cost (C_hardware)</name>
        <ul>
          <li><em>T1/T2:</em> C_hardware = 0. No hardware root of trust;
            keys are software-managed.</li>
          <li><em>T3 (Hardware-Bound):</em> Requires compromise of TPM or
            platform Secure Element. Estimated cost: USD 10,000-100,000
            per device class, depending on the specific hardware and
            attack methodology.</li>
          <li><em>T4 (Hardware-Hardened):</em> Requires invasive hardware
            attacks, manufacturer collusion, or firmware exploits targeting
            PUF-bound keys. Estimated cost: USD 100,000 or more.</li>
        </ul>
        <t>
          Verifiers MUST include these estimates in the WAR to allow Relying Parties to set trust thresholds based on objective economic risk.
        </t>
        <t>
          The c-total field in the forgery-cost-estimate MUST equal the sum
          of c-swf, c-entropy, and c-hardware. All component costs within a
          single forgery-cost-estimate MUST be expressed in the same
          cost-unit.
        </t>
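        <t>
          The consistency rule can be sketched as a relative-tolerance
          check (component values are float32, so exact equality is too
          strict); the dictionary keys are illustrative names for CDDL
          keys 1 through 4.
        </t>
        <sourcecode type="python"><![CDATA[
```python
def check_cost_estimate(est, rel_tol=1e-6):
    """c-total must equal c-swf + c-entropy + c-hardware, all in the
    same cost-unit; float32 storage motivates a relative tolerance."""
    total = est["c_swf"] + est["c_entropy"] + est["c_hardware"]
    return abs(est["c_total"] - total) <= rel_tol * max(1.0, abs(total))

est = {"c_swf": 3600.0, "c_entropy": 0.0,
       "c_hardware": 10000.0, "c_total": 13600.0}  # cost-unit: usd
```
]]></sourcecode>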
      </section>
    </section>

    <section anchor="absence-proofs">
      <name>Absence Proofs: Negative Evidence Taxonomy</name>
      <t>
        Absence proofs assert that certain events did NOT occur during the monitored session. They are divided into categories based on verifiability:
      </t>
      <dl>
        <dt>Type 1: Computationally-Bound Claims</dt>
        <dd>Verifiable from the Evidence Packet alone (e.g., "Max single delta size &lt; 500 bytes" or "No checkpoint timestamps out of order").</dd>
        <dt>Type 2: Monitoring-Dependent Claims</dt>
        <dd>Require trust in the AE's event monitoring (e.g., "No paste from unauthorized AI tool" or "No clipboard activity detected"). Trust in these claims MUST be weighted by the declared Attestation Tier (T1-T4).</dd>
        <dt>Type 3: Environmental Claims</dt>
        <dd>Assertions about the execution environment (e.g., "No debugger attached" or "Hardware temperature remained within stable physical bounds").</dd>
      </dl>
      <t>
        Type 1 (Computationally-Bound) claims MUST be verified computationally
        by the Verifier from the Evidence Packet data alone. Type 3
        (Environmental) claims SHOULD be evaluated against physical-state
        markers when present, and MUST be treated as unverifiable when
        physical-state is absent.
      </t>
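      <t>
        Two Type 1 checks can be sketched directly over Evidence Packet
        data, matching the examples given above ("Max single delta size
        &lt; 500 bytes" and timestamp ordering); the function names are
        illustrative.
      </t>
      <sourcecode type="python"><![CDATA[
```python
def max_delta_below(edit_deltas, threshold=500):
    """Type 1 claim: every single delta is smaller than the
    threshold, verifiable from the Evidence Packet alone."""
    return all(len(delta) < threshold for delta in edit_deltas)

def timestamps_in_order(timestamps):
    """Type 1 claim: no checkpoint timestamps out of order."""
    return all(a <= b for a, b in zip(timestamps, timestamps[1:]))
```
]]></sourcecode>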
    </section>

    <section anchor="war-wire-format">
      <name>Attestation Result Wire Format</name>
      <t>
        The Writers Authenticity Report (WAR) is a CBOR-encoded
        <xref target="RFC8949"/> Attestation Result identified by semantic
        tag 1463894560 (encoding ASCII "WAR "). The CDDL notation
        <xref target="RFC8610"/> defines the wire format:
      </t>
      <artwork type="cddl"><![CDATA[
pop-war = #6.1463894560(attestation-result)

attestation-result = {
    1 => uint,                    ; version (MUST be 1)
    2 => hash-value,              ; evidence-ref
    3 => verdict,                 ; appraisal verdict
    4 => attestation-tier,        ; assessed assurance level
    5 => uint,                    ; chain-length
    6 => uint,                    ; chain-duration (seconds)
    ? 7 => entropy-report,        ; entropy assessment (omit for CORE)
    ? 8 => forgery-cost-estimate, ; quantified forgery cost
    ? 9 => [+ absence-claim],     ; absence claims (1+ when present)
    ? 10 => [* tstr],             ; warnings
    11 => bstr,                   ; verifier-signature (COSE_Sign1)
    12 => pop-timestamp,          ; created (appraisal timestamp)
    * int => any,                 ; extension fields
}

verdict = &(
    authentic: 1,                 ; consistent with human authorship
    inconclusive: 2,              ; insufficient evidence
    suspicious: 3,                ; anomalies detected
    invalid: 4,                   ; chain broken or forged
)

entropy-report = {
    1 => float32,                 ; timing-entropy (bits/sample)
    2 => float32,                 ; revision-entropy (bits)
    3 => float32,                 ; pause-entropy (bits)
    4 => bool,                    ; meets-threshold
}

forgery-cost-estimate = {
    1 => float32,                 ; c-swf
    2 => float32,                 ; c-entropy
    3 => float32,                 ; c-hardware
    4 => float32,                 ; c-total
    5 => cost-unit,               ; currency
}

cost-unit = &(
    usd: 1,
    cpu-hours: 2,
)

absence-claim = {
    1 => absence-type,            ; proof category
    2 => time-window,             ; claimed window
    3 => tstr,                    ; claim-id
    ? 4 => any,                   ; threshold/parameter
    5 => bool,                    ; assertion
}

absence-type = &(
    computationally-bound: 1,     ; verifiable from Evidence alone
    monitoring-dependent: 2,      ; requires trust in AE monitoring
    environmental: 3,             ; environmental assertions
)

time-window = {
    1 => pop-timestamp,           ; start
    2 => pop-timestamp,           ; end
}

; Shared type definitions reproduced from [PoP-Protocol] for reader
; convenience. In case of conflict, [PoP-Protocol] is authoritative.
pop-timestamp = #6.1(float32)      ; CBOR tag 1 (epoch-based, float32)
hash-value = {
    1 => hash-algorithm,
    2 => bstr,
}
hash-algorithm = &(
    sha256: 1,
    sha384: 2,
    sha512: 3,
)
attestation-tier = &(
    software-only: 1,             ; T1: AAL1
    attested-software: 2,         ; T2: AAL2
    hardware-bound: 3,            ; T3: AAL3
    hardware-hardened: 4,         ; T4: LoA4
)
      ]]></artwork>
      <t>
        The evidence-ref field MUST contain a hash-value computed as
        SHA-256 over the CBOR-encoded evidence-packet structure
        (including CBOR tag 1347571280), excluding any COSE_Sign1
        wrapper. This binds the Attestation Result to a specific
        Evidence Packet.
      </t>
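      <t>
        A sketch of the evidence-ref computation, assuming the caller
        supplies the tagged, CBOR-encoded evidence-packet bytes (CBOR
        encoding itself is out of scope here); the returned map uses the
        hash-value keys from the CDDL above.
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib

SHA256 = 1  # hash-algorithm codepoint from the CDDL above

def evidence_ref(tagged_evidence_cbor: bytes) -> dict:
    """hash-value map binding the WAR to one Evidence Packet:
    SHA-256 over the tagged CBOR bytes, excluding any COSE wrapper."""
    return {1: SHA256,
            2: hashlib.sha256(tagged_evidence_cbor).digest()}
```
]]></sourcecode>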
      <t>
        In the absence-claim structure, claim-id is a unique textual
        identifier for the claim (e.g., "no-paste-event",
        "max-delta-below-500"). The assertion field is true if the claim
        holds and false if the Verifier determined it does not hold. The
        time-window specifies the temporal scope of the claim within the
        Evidence Packet's session.
      </t>
      <t>
        When appraising CORE Evidence Packets that lack jitter-binding data,
        the Verifier SHOULD omit the entropy-report field from the
        Attestation Result and include a warning indicating that behavioral
        entropy analysis was not performed.
      </t>
      <t>
        The created field (key 12) MUST contain the timestamp at which the
        Verifier completed the appraisal. Relying Parties use this field
        to evaluate the freshness of the Attestation Result.
      </t>

      <section anchor="entropy-report-computation">
        <name>Entropy Report Computation</name>
        <t>
          The Verifier MUST compute entropy-report fields as follows:
        </t>
        <dl>
          <dt>timing-entropy:</dt>
          <dd>Shannon entropy of quantized jitter intervals across all
            checkpoints, expressed in bits per sample.</dd>
          <dt>revision-entropy:</dt>
          <dd>Shannon entropy of edit-delta sizes (chars-added values)
            across all checkpoints, expressed in bits.</dd>
          <dt>pause-entropy:</dt>
          <dd>Shannon entropy of inter-checkpoint pause durations,
            expressed in bits.</dd>
          <dt>meets-threshold:</dt>
          <dd>True if and only if timing-entropy is at or above the
            minimum threshold (3.0 bits per sample) AND revision-entropy
            is at or above 3.0 bits AND pause-entropy is at or above 2.0
            bits. These thresholds are calibrated for the NIST SP 800-90B
            most common value estimator. Implementations using alternative
            entropy estimators MUST provide equivalent assurance levels.</dd>
        </dl>
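        <t>
          A sketch of the entropy-report computation, using a plain Shannon
          estimate over already-quantized samples as a stand-in for the full
          estimator pipeline; the returned map uses the entropy-report keys
          from the CDDL above.
        </t>
        <sourcecode type="python"><![CDATA[
```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (bits) of a discrete sample distribution."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(samples).values())

def entropy_report(jitter_intervals, delta_sizes, pause_durations):
    """Keys 1-4 of the entropy-report map defined above."""
    timing = shannon_entropy(jitter_intervals)   # bits per sample
    revision = shannon_entropy(delta_sizes)
    pause = shannon_entropy(pause_durations)
    meets = timing >= 3.0 and revision >= 3.0 and pause >= 2.0
    return {1: timing, 2: revision, 3: pause, 4: meets}
```
]]></sourcecode>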
      </section>

      <section anchor="verdict-assignment">
        <name>Verdict Assignment</name>
        <t>
          The Verifier MUST assign the verdict based on the appraisal
          outcome:
        </t>
        <dl>
          <dt>authentic (1):</dt>
          <dd>All verification steps passed. Evidence is consistent with
            human authorship. No forensic flags triggered.</dd>
          <dt>inconclusive (2):</dt>
          <dd>Verification steps passed but insufficient behavioral data
            available for forensic assessment (e.g., CORE profile without
            jitter-binding).</dd>
          <dt>suspicious (3):</dt>
          <dd>One or more forensic flags triggered (low entropy, failed
            CLC correlation, mechanical pacing detected) but chain integrity
            is intact. When multiple forensic checks produce contradictory
            results, the Verifier MUST assign the more conservative verdict
            (suspicious over authentic).</dd>
          <dt>invalid (4):</dt>
          <dd>Chain integrity broken, SWF verification failed, or
            structural validation error. Evidence cannot be trusted.</dd>
        </dl>
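        <t>
          The verdict assignment can be sketched as a precedence chain; the
          ordering shown (invalid over suspicious over inconclusive) is an
          assumption consistent with the conservative-verdict rule above.
        </t>
        <sourcecode type="python"><![CDATA[
```python
AUTHENTIC, INCONCLUSIVE, SUSPICIOUS, INVALID = 1, 2, 3, 4

def assign_verdict(chain_valid, has_behavioral_data, flags):
    """Most conservative outcome first: a broken chain dominates,
    then any triggered forensic flag, then missing behavioral data."""
    if not chain_valid:
        return INVALID
    if flags:
        return SUSPICIOUS
    if not has_behavioral_data:
        return INCONCLUSIVE
    return AUTHENTIC
```
]]></sourcecode>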
      </section>
    </section>

    <section anchor="tool-receipt-protocol">
      <name>Tool Receipt Protocol (AI Attribution)</name>
      <t>
        NOTE: This section is informational. The complete CDDL wire format for
        Tool Receipts, including signature algorithms and binding mechanisms,
        will be specified in a future revision. Implementations SHOULD treat
        this section as guidance only.
      </t>
      <t>
        When external tools (e.g., large language models) contribute content, the framework enables a "compositional provenance" model:
      </t>
      <ol>
        <li>Receipt Signing: The Tool signs a "Receipt" containing its tool_id, an output_commit (SHA-256 hash of generated text), and an optional input_ref (SHA-256 hash of the prompt).</li>
        <li>Binding: The human Attester records a PASTE event in the transcript referencing the Tool Receipt's output_commit.</li>
        <li>Countersigning: The Attester binds the Receipt into the next human-driven checkpoint, anchoring the automated work into the linear human effort.</li>
      </ol>
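      <t>
        Since the normative wire format is not yet specified, the following
        is a purely hypothetical sketch of the Receipt content and the
        paste-event binding check; all field names are placeholders.
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib

def make_receipt(tool_id: str, output_text: str, prompt: str = None):
    """Hypothetical Receipt content: tool_id, output_commit, and an
    optional input_ref (placeholder field names throughout)."""
    receipt = {"tool_id": tool_id,
               "output_commit":
                   hashlib.sha256(output_text.encode()).hexdigest()}
    if prompt is not None:
        receipt["input_ref"] = hashlib.sha256(prompt.encode()).hexdigest()
    return receipt

def paste_matches_receipt(receipt, pasted_text: str) -> bool:
    """A PASTE event may reference the Receipt only if the pasted
    content hashes to the committed output."""
    digest = hashlib.sha256(pasted_text.encode()).hexdigest()
    return digest == receipt["output_commit"]
```
]]></sourcecode>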
      <t>
        Verifiers appraise the ratio of human-to-machine effort based on these receipts and the intervening SWF-proved intervals.
      </t>
    </section>

    <section anchor="adversary-model">
      <name>Adversary Model</name>
      <t>
        This document inherits the adversary model defined in the Threat Model section of <xref target="PoP-Protocol"/>. The appraisal procedures defined herein assume the adversarial Attester capabilities and constraints specified there. The primary threat is an adversarial Attester -- an author who controls the Attesting Environment and seeks to generate Evidence for content they did not authentically author.
      </t>
      <t>
        The following adversary tiers characterize the appraisal-specific
        threat landscape. Each tier defines the adversary capabilities that
        the corresponding Attestation Tier is designed to resist:
      </t>
      <dl>
        <dt>Tier 1 Adversary (Casual):</dt>
        <dd>Can manipulate system clocks and intercept local IPC. Cannot
          perform real-time behavioral simulation exceeding basic cadence
          matching. The T1 appraisal policy accepts the risk of basic
          retype attacks; SWF time-binding provides the primary defense.</dd>
        <dt>Tier 2 Adversary (Motivated):</dt>
        <dd>Can invest computational resources up to the cost of a
          high-end workstation and study the verification algorithm to
          craft evidence targeting specific thresholds. The T2 appraisal
          policy defends through multi-dimensional behavioral analysis
          (SNR + CLC + mechanical turk detection).</dd>
        <dt>Tier 3 Adversary (Professional):</dt>
        <dd>Has access to custom hardware (FPGAs, specialized ASICs) for
          SWF acceleration and sophisticated behavioral models trained on
          human authorship data. The T3 appraisal policy defends through
          HAT cross-validation and advanced forensic metrics (CLC, IKI,
          error topology, and SNR dynamics).</dd>
        <dt>Tier 4 Adversary (Nation-State):</dt>
        <dd>Has all Tier 3 capabilities and, in addition, can
          potentially compromise hardware manufacturer endorsement
          chains, deploy large-scale parallel computation, and employ
          teams of human operators for sophisticated retype attacks.
          The T4 appraisal policy defends through the combined cost of
          SWF sequentiality, multi-dimensional behavioral evidence
          synthesis (d >= 7 correlated dimensions), and hardware
          attestation integrity. Even a Tier 4 adversary faces a
          minimum forgery cost equal to the claimed authorship
          duration plus the hardware compromise cost.</dd>
      </dl>
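      <t>
        The additive forgery cost bound that these tiers reference can
        be illustrated with a short non-normative sketch. The hardware
        cost figures below mirror the T3/T4 estimates in the per-tier
        constraint summary of this document; the fixed C_entropy figure
        and the function interface are placeholder assumptions for
        illustration only.
      </t>

```python
# Non-normative sketch of the additive forgery cost bound
# C_total = C_swf + C_entropy + C_hardware. The c_entropy_usd default
# and the dict layout are placeholder assumptions; the hardware
# figures mirror the T3/T4 estimates elsewhere in this document.

def forgery_cost_bound(tier: int, claimed_duration_s: float,
                       c_entropy_usd: float = 500.0) -> dict:
    # SWF sequentiality: the minimum wall-clock forgery time equals
    # the claimed authorship duration, regardless of adversary budget.
    c_hardware_usd = {1: 0.0, 2: 0.0, 3: 10_000.0, 4: 100_000.0}[tier]
    return {
        "min_time_s": claimed_duration_s,
        "min_cost_usd": c_entropy_usd + c_hardware_usd,
    }
```

      <t>
        Even a Tier 4 adversary's bound therefore grows linearly with
        the claimed authorship duration, which is the property the
        tier descriptions above rely on.
      </t>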
    </section>

    <section anchor="privacy-considerations">
      <name>Privacy Considerations</name>
      <section anchor="privacy">
        <name>Evidence and Attestation Result Privacy</name>
        <t>
          High-resolution behavioral data poses a stylometric de-anonymization
          risk <xref target="Goodman2007"/>. Implementations SHOULD support
          Evidence Quantization, reducing timing resolution to a level that
          maintains forensic confidence while breaking unique author fingerprints.
        </t>
        <t>
          The entropy-report in Attestation Results (timing-entropy,
          revision-entropy, pause-entropy) may enable cross-document author
          identification by Relying Parties. Verifiers SHOULD quantize
          entropy-report values to reduce fingerprinting precision while
          preserving forensic utility. Relying Parties MUST NOT correlate
          entropy reports across multiple Attestation Results to identify
          or track authors.
        </t>
      </section>
      <section anchor="privacy-quantization">
        <name>Evidence Quantization Requirements</name>
        <t>
          Attestation Results MUST quantize forensic indicator values to
          the following resolutions:
        </t>
        <ul>
          <li>Cadence (IKI) values: millisecond resolution. Sub-millisecond
            data MUST NOT be included.</li>
          <li>Entropy values: 0.01 bit resolution (two decimal places).</li>
          <li>SNR values: 0.5 dB resolution.</li>
          <li>Derived CLC and IKI metrics (correlation coefficients and
            summary statistics): two decimal places.</li>
        </ul>
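        <t>
          The resolutions above amount to simple rounding rules. The
          following non-normative sketch applies them; the input field
          names are illustrative and do not define a schema.
        </t>

```python
# Quantize forensic indicators to the privacy-preserving resolutions
# listed above. Field names are illustrative only, not a schema.

def quantize_indicators(report: dict) -> dict:
    out = dict(report)
    if "iki_ms" in out:
        # Millisecond resolution; sub-millisecond data is dropped.
        out["iki_ms"] = [round(v) for v in out["iki_ms"]]
    if "entropy_bits" in out:
        # 0.01-bit resolution (two decimal places).
        out["entropy_bits"] = round(out["entropy_bits"], 2)
    if "snr_db" in out:
        # 0.5 dB resolution: round to the nearest half decibel.
        out["snr_db"] = round(out["snr_db"] * 2) / 2
    if "clc_r" in out:
        # Two decimal places.
        out["clc_r"] = round(out["clc_r"], 2)
    return out
```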
        <t>
          These quantization levels are calibrated to preserve the
          forensic utility of all assessment mechanisms defined in
          <xref target="forensic-assessment"/> while limiting the
          precision available for stylometric fingerprinting.
        </t>
      </section>
      <section anchor="privacy-retention">
        <name>Data Retention and Behavioral Profiles</name>
        <t>
          Verifiers MUST NOT maintain per-author behavioral profile
          databases. Attestation Results SHOULD NOT include raw forensic
          indicator values; tier-level pass/fail determinations are
          sufficient for Relying Parties. Evidence retention SHOULD NOT
          exceed 90 days (the default validity period). Implementations
          SHOULD support anonymous Evidence submission to prevent
          linking authorship sessions to real-world identities.
        </t>
      </section>
    </section>

    <section anchor="accessibility">
      <name>Accessibility and Assistive Modes</name>
      <t>
        Verifiers MUST NOT automatically reject evidence based solely on atypical timing patterns. Implementations MUST support "Assistive Modes" that adjust SNR and CLC thresholds for authors with motor disabilities or those using assistive technologies (eye-tracking, dictation).
      </t>
      <t>
        To signal assistive mode usage, the Attester SHOULD include an assistive-mode indicator in the profile-declaration structure of the Evidence Packet. When this indicator is present, Verifiers MUST apply adjusted thresholds as follows:
      </t>
      <section anchor="assistive-eye-tracking">
        <name>Eye-Tracking Mode</name>
        <t>
          Eye-tracking input produces IKI ranges of 500-3000 ms (versus
          100-300 ms for keyboard). Adjusted thresholds:
        </t>
        <ul>
          <li>Entropy: 2.0 to 4.0 bits/sample (reduced from 3.0 minimum)</li>
          <li>SNR: -5 dB to +5 dB (narrower than keyboard range). SNR
            anomaly threshold: +15 dB.</li>
          <li>CLC correlation: r &gt; 0.1 (reduced from r &gt; 0.2)</li>
          <li>Error topology: Adjusted for gaze drift corrections, which
            produce characteristic error patterns distinct from keyboard
            errors.</li>
        </ul>
      </section>

      <section anchor="assistive-dictation">
        <name>Dictation Mode</name>
        <t>
          Dictation input produces burst patterns with higher cadence
          variance than keyboard. Adjusted thresholds:
        </t>
        <ul>
          <li>SNR: -8 dB to +8 dB (wider range reflecting speech pauses)</li>
          <li>CLC correlation: r &gt; 0.1 (range 0.1 to 0.8)</li>
          <li>Paste-to-keystroke ratio threshold: disabled (dictation
            engines produce burst insertions by design)</li>
          <li>Error topology: waived (dictation corrections follow
            speech-recognition patterns, not typing patterns)</li>
        </ul>
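        <t>
          The keyboard-default and assistive-mode thresholds can be
          expressed as a single policy lookup, sketched
          non-normatively below. The eye-tracking and dictation rows
          mirror the lists above; the keyboard SNR window and the
          dictation entropy bounds are placeholder assumptions, since
          their normative values are defined elsewhere.
        </t>

```python
# Per-mode appraisal thresholds. The eye-tracking and dictation rows
# mirror the adjustments above; the keyboard SNR window and dictation
# entropy bounds are placeholder assumptions for this sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModeThresholds:
    entropy_min_bits: float
    entropy_max_bits: float
    snr_db: tuple               # (low, high) acceptance window
    clc_r_min: float
    paste_ratio_check: bool     # disabled for dictation by design
    error_topology_check: bool  # waived for dictation

THRESHOLDS = {
    "keyboard":     ModeThresholds(3.0, float("inf"), (-6.0, 6.0),
                                   0.2, True, True),
    "eye-tracking": ModeThresholds(2.0, 4.0, (-5.0, 5.0),
                                   0.1, True, True),
    "dictation":    ModeThresholds(3.0, float("inf"), (-8.0, 8.0),
                                   0.1, False, False),
}
```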
      </section>

      <section anchor="assistive-accommodations">
        <name>Additional Accommodations</name>
        <ul>
          <li>Switch-access input: minimum event count per checkpoint
            reduced to 1 (from default of 5).</li>
          <li>Head-tracking and mouth-stick input: apply eye-tracking
            thresholds.</li>
          <li>When assistive mode thresholds produce anomalous results,
            the Verifier SHOULD flag the inconsistency in the WAR
            warnings rather than reject the Evidence.</li>
        </ul>
        <t>
          The WAR MUST indicate when assistive mode thresholds were applied.
          Assistive mode is signaled through the profile-declaration
          structure in the Evidence Packet. Implementations MAY include
          an assistive-mode feature flag (flag number 60) in the
          feature-flags array; its associated value identifies the
          mode: 0 (none), 1 (motor-disability), 2 (eye-tracking), or
          3 (dictation). A future revision of
          <xref target="PoP-Protocol"/> will formalize this signaling
          mechanism.
        </t>
      </section>
    </section>

    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>
        This document has no IANA actions. All IANA registrations for the PoP framework are defined in <xref target="PoP-Protocol"/>.
      </t>
    </section>

    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>
        This document defines forensic appraisal procedures that inherit and extend the security model from <xref target="PoP-Protocol"/>. The broader RATS security considerations <xref target="Sardar-RATS"/> also apply. Implementers should consider the following security aspects:
      </t>
      <section anchor="sec-entropy-manipulation">
        <name>Entropy Manipulation Attacks</name>
        <t>
          An adversary may attempt to inject synthetic jitter patterns that satisfy entropy thresholds while lacking biological origin. The use of multi-dimensional analysis (SNR, CLC, Error Topology) rather than single metrics provides defense-in-depth against high-fidelity simulation.
        </t>
      </section>
      <section anchor="sec-verifier-trust">
        <name>Verifier Trust Model</name>
        <t>
          The forensic assessments defined in this document produce probabilistic confidence scores, not binary determinations. Relying Parties MUST understand that forgery cost bounds represent economic estimates, not cryptographic guarantees. Trust decisions SHOULD incorporate the declared Attestation Tier (T1-T4) and the specific absence proof types claimed.
        </t>
      </section>
      <section anchor="sec-stylometric-risk">
        <name>Stylometric De-anonymization</name>
        <t>
          High-resolution behavioral data (keystroke timing, pause patterns) can enable author identification even when document content is not disclosed. Implementations SHOULD support Evidence Quantization to reduce timing resolution while maintaining forensic utility. The trade-off between forensic confidence and privacy should be documented for Relying Parties.
        </t>
      </section>
      <section anchor="sec-assistive-bypass">
        <name>Assistive Mode Abuse</name>
        <t>
          Adversaries may falsely claim assistive technology usage to bypass behavioral entropy checks. Verifiers SHOULD require consistent assistive mode declarations across sessions and MAY request additional out-of-band verification for mode changes. The WAR MUST indicate when assistive modes were active, as required by <xref target="accessibility"/>.
        </t>
      </section>
    </section>
  </middle>

  <back>
    <references>
      <name>References</name>
      <references>
        <name>Normative References</name>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml"/>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml"/>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8610.xml"/>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8949.xml"/>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9334.xml"/>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.5869.xml"/>
        <reference anchor="PoP-Protocol">
          <front>
            <title>Proof of Process (PoP): Architecture and Evidence Format</title>
            <author fullname="David Condrey" initials="D." surname="Condrey"/>
            <date year="2026" month="February"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-protocol-05"/>
        </reference>
      </references>
      <references>
        <name>Informative References</name>
        <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9052.xml"/>
        <reference anchor="Monrose2000" target="https://doi.org/10.1145/351427.351438">
          <front>
            <title>Keystroke dynamics as a biometric for authentication</title>
            <author fullname="F. Monrose" initials="F." surname="Monrose"/>
            <author fullname="A. Rubin" initials="A." surname="Rubin"/>
            <date year="2000"/>
          </front>
        </reference>
        <reference anchor="Goodman2007" target="https://doi.org/10.1007/978-3-540-77343-6_14">
          <front>
            <title>Using Stylometry for Biometric Keystroke Dynamics</title>
            <author fullname="A. Goodman" initials="A." surname="Goodman"/>
            <author fullname="V. Zabala" initials="V." surname="Zabala"/>
            <date year="2007"/>
          </front>
        </reference>
        <reference anchor="Salthouse1986" target="https://doi.org/10.1037/0033-295X.93.3.303">
          <front>
            <title>Perceptual, Cognitive, and Motoric Aspects of Transcription Typing</title>
            <author fullname="Timothy A. Salthouse" initials="T.A." surname="Salthouse"/>
            <date year="1986"/>
          </front>
          <seriesInfo name="Psychological Review" value="93(3), 303-319"/>
        </reference>
        <reference anchor="Sardar-RATS" target="https://datatracker.ietf.org/doc/html/draft-sardar-rats-sec-cons-02">
          <front>
            <title>Security Considerations for Remote ATtestation procedureS (RATS)</title>
            <author fullname="Muhammad Usama Sardar" initials="M.U." surname="Sardar"/>
            <date year="2026" month="February"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-sardar-rats-sec-cons-02"/>
        </reference>
        <reference anchor="SEAT-EXPAT" target="https://datatracker.ietf.org/doc/html/draft-fossati-seat-expat-01">
          <front>
            <title>Remote Attestation with Exported Authenticators</title>
            <author fullname="Muhammad Usama Sardar" initials="M.U." surname="Sardar"/>
            <author fullname="Thomas Fossati" initials="T." surname="Fossati"/>
            <author fullname="Tirumaleswar Reddy" initials="T." surname="Reddy"/>
            <author fullname="Yaron Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="Hannes Tschofenig" initials="H." surname="Tschofenig"/>
            <author fullname="Ionut Mihalcea" initials="I." surname="Mihalcea"/>
            <date year="2026" month="January"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-fossati-seat-expat-01"/>
        </reference>
        <reference anchor="SEAT-UseCases" target="https://datatracker.ietf.org/doc/html/draft-mihalcea-seat-use-cases-01">
          <front>
            <title>Use Cases and Properties for Integrating Remote Attestation with Secure Channel Protocols</title>
            <author fullname="Ionut Mihalcea" initials="I." surname="Mihalcea"/>
            <author fullname="Muhammad Usama Sardar" initials="M.U." surname="Sardar"/>
            <author fullname="Thomas Fossati" initials="T." surname="Fossati"/>
            <author fullname="Tirumaleswar Reddy" initials="T." surname="Reddy"/>
            <author fullname="Yuning Jiang" initials="Y." surname="Jiang"/>
            <author fullname="Meiling Chen" initials="M." surname="Chen"/>
            <date year="2026" month="January"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-mihalcea-seat-use-cases-01"/>
        </reference>
      </references>
    </references>

    <section anchor="verification-checklist" numbered="false">
      <name>Verification Constraint Summary</name>
      <t>
        The following constraints summarize the verification requirements defined in the preceding sections:
      </t>
      <section anchor="structural-checks" numbered="false">
        <name>Structural Integrity</name>
        <ol>
          <li>Chain Integrity: SHA-256 hash chain is unbroken from genesis to final checkpoint.</li>
          <li>Temporal Monotonicity: All checkpoint timestamps strictly exceed their predecessors.</li>
          <li>SWF Continuity: Recompute Argon2id from seed; verify sampled Merkle proofs.</li>
          <li>Content Binding: Final document hash matches document-ref in Evidence Packet.</li>
        </ol>
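        <t>
          The first two checks can be performed with a single pass over
          the checkpoint chain. The sketch below is non-normative; the
          checkpoint field names ("prev_hash", "ts", "body") and the
          genesis value are illustrative assumptions, not the Evidence
          Packet encoding.
        </t>

```python
# Non-normative sketch of checks 1-2: SHA-256 chain integrity and
# strict temporal monotonicity. Field names and the genesis value
# are illustrative assumptions, not the normative encoding.
import hashlib

def verify_chain(checkpoints: list) -> bool:
    prev_hash = "0" * 64       # assumed genesis link for this sketch
    prev_ts = float("-inf")
    for cp in checkpoints:
        if cp["prev_hash"] != prev_hash:
            return False       # hash chain broken
        if cp["ts"] <= prev_ts:
            return False       # timestamps must strictly increase
        prev_hash = hashlib.sha256(
            (cp["prev_hash"] + cp["body"]).encode()).hexdigest()
        prev_ts = cp["ts"]
    return True
```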
      </section>
      <section anchor="behavioral-checks" numbered="false">
        <name>Behavioral Analysis (ENHANCED/MAXIMUM profiles)</name>
        <ol>
          <li>Entropy Threshold: Independent entropy estimate >= 3.0 bits per inter-keystroke interval per checkpoint.</li>
          <li>SNR Analysis: Jitter exhibits characteristic biological noise patterns, not periodic or spectrally flat patterns.</li>
          <li>CLC Correlation: Semantic complexity correlates with timing (r > 0.2, or r > 0.1 for assistive mode).</li>
          <li>Error Topology: Correction patterns consistent with human cognitive processing.</li>
          <li>Mechanical Turk Detection: No robotic pacing (machine-clocked editing rate).</li>
        </ol>
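        <t>
          The independent entropy estimate in check 1 can be computed
          from binned inter-keystroke intervals. The sketch below is
          non-normative; the 10 ms bin width is an illustrative
          assumption, not a calibrated choice.
        </t>

```python
# Shannon entropy (bits/sample) over binned inter-keystroke
# intervals, computed independently of any Attester-reported value.
# The 10 ms bin width is an illustrative assumption.
import math
from collections import Counter

def iki_entropy_bits(iki_ms: list, bin_ms: int = 10) -> float:
    counts = Counter(int(v // bin_ms) for v in iki_ms)
    n = len(iki_ms)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

        <t>
          A constant cadence yields 0 bits and fails the 3.0-bit
          threshold, while genuinely variable human timing spreads
          probability mass across many bins.
        </t>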
      </section>
      <section anchor="absence-checks" numbered="false">
        <name>Absence Proof Validation</name>
        <ol>
          <li>Type 1 Claims: Verify computationally from Evidence Packet (delta sizes, timestamp ordering).</li>
          <li>Type 2 Claims: Weight by Attestation Tier (T1-T4).</li>
          <li>Type 3 Claims: Evaluate environmental assertions against physical-state markers.</li>
        </ol>
      </section>
      <section anchor="tool-receipt-checks" numbered="false">
        <name>Tool Receipt Validation (when present)</name>
        <ol>
          <li>Verify Tool signature over Receipt.</li>
          <li>Verify PASTE event references correct output_commit.</li>
          <li>Calculate human-to-machine effort ratio from SWF-proved intervals.</li>
        </ol>
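        <t>
          The effort ratio in check 3 reduces to summing SWF-proved
          interval durations by origin. The sketch below is
          non-normative; the (duration, source) tuple layout is an
          illustrative assumption, not the Tool Receipt format.
        </t>

```python
# Non-normative sketch of check 3: human-to-machine effort ratio
# over SWF-proved intervals. The (duration_s, source) tuple layout
# is an illustrative assumption.

def effort_ratio(intervals: list) -> float:
    human = sum(d for d, src in intervals if src == "human")
    machine = sum(d for d, src in intervals if src == "machine")
    if machine == 0:
        return float("inf")    # no tool-generated content claimed
    return human / machine
```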
      </section>
    </section>

    <section anchor="per-tier-constraints" numbered="false">
      <name>Per-Tier Verification Constraints</name>
      <t>
        This appendix summarizes the verification thresholds and
        constraints for each Attestation Tier. These values are the
        normative defaults; deployment profiles MAY adjust them within
        the ranges specified.
      </t>
      <section anchor="constraints-t1" numbered="false">
        <name>T1 (Software-Only) Constraints</name>
        <ul>
          <li>Chain integrity: prev-hash linkage required.</li>
          <li>Temporal ordering: monotonic timestamps required; SWF
          claimed-duration within [0.5x, 3.0x] of expected time.</li>
          <li>Entropy: minimum 3.0 bits/sample when jitter-binding is
          present. No upper bound enforced.</li>
          <li>Entanglement: jitter seal presence required when
          jitter-binding is present; HMAC verification SHOULD be
          performed (MAC key derivable from public merkle-root).</li>
          <li>State matching: final content hash match required.</li>
          <li>Forensic assessment: SNR computation OPTIONAL; CLC, error
          topology, and Mechanical Turk detection RECOMMENDED for
          ENHANCED+ profiles.</li>
          <li>Forgery cost bound: C_total = C_swf + C_entropy (no
          hardware component). Physical-state fields are self-reported
          and provide no additional assurance.</li>
        </ul>
      </section>
      <section anchor="constraints-t2" numbered="false">
        <name>T2 (Attested Software) Constraints</name>
        <ul>
          <li>Chain integrity: all T1 requirements.</li>
          <li>Temporal ordering: all T1 requirements; SWF
          claimed-duration within [0.5x, 3.0x] of expected time on
          reference hardware.</li>
          <li>Entropy: 3.0 to 6.0 bits/sample per checkpoint.
          Values above 6.0 suggest injected randomness and SHOULD
          be flagged.</li>
          <li>Entanglement: jitter seal and entangled-mac presence
          required for ENHANCED+ profiles. HMAC verification
          SHOULD be performed.</li>
          <li>State matching: final content hash match required;
          intermediate content hash progression SHOULD be
          verified for monotonic growth.</li>
          <li>Forensic assessment: SNR, CLC, and Mechanical Turk
          detection required. Error topology OPTIONAL.</li>
          <li>Forgery cost bound: C_total = C_swf + C_entropy.
          Minimum forgery time equals the sum of SWF
          claimed-durations.</li>
        </ul>
      </section>
      <section anchor="constraints-t3" numbered="false">
        <name>T3 (Hardware-Bound) Constraints</name>
        <ul>
          <li>Chain integrity: all T2 requirements. COSE_Sign1
          signature MUST verify against hardware-bound key.</li>
          <li>Temporal ordering: all T2 requirements; HAT delta
          cross-validation SHOULD be performed when TPM monotonic
          counter data is available.</li>
          <li>Entropy: 3.0 to 5.5 bits/sample, reflecting tighter
          calibration against verified human authorship baselines.</li>
          <li>Entanglement: HMAC verification MUST be performed.
          Device attestation certificate chain SHOULD be validated
          against known Endorser roots.</li>
          <li>State matching: all T2 requirements; intermediate
          content hash progression MUST be verified for monotonic
          growth. Non-monotonic changes (document size decreasing
          by more than 50% between consecutive checkpoints) MUST
          be flagged.</li>
          <li>Forensic assessment: all T2 requirements plus error
          topology analysis required. QR presence challenge
          OPTIONAL.</li>
          <li>Forgery cost bound: C_total = C_swf + C_entropy +
          C_hardware. Hardware compromise cost estimated at
          USD 10,000-100,000.</li>
        </ul>
      </section>
      <section anchor="constraints-t4" numbered="false">
        <name>T4 (Hardware-Hardened) Constraints</name>
        <ul>
          <li>Chain integrity: all T3 requirements.</li>
          <li>Temporal ordering: all T3 requirements; HAT delta
          cross-validation MUST be performed; HAT-SWF agreement
          within 5% tolerance required.</li>
          <li>Entropy: 3.0 to 5.0 bits/sample; entropy trajectory
          standard deviation MUST exceed 0.1 bits across the
          session. A constant-entropy session is a strong indicator
          of synthetic generation.</li>
          <li>Entanglement: all T3 requirements; timing vector
          entropy consistency check required (within 0.5 bits of
          reported entropy-estimate).</li>
          <li>State matching: all T3 requirements.</li>
          <li>Forensic assessment: all T3 requirements;
          cross-correlation analysis between entropy and SNR
          required. QR presence challenge RECOMMENDED.</li>
          <li>Forgery cost bound: C_total = C_swf + C_entropy +
          C_hardware. Hardware compromise cost estimated at
          USD 100,000 or more. Total minimum forgery cost exceeds
          sum of claimed-durations plus hardware procurement.</li>
        </ul>
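        <t>
          The entropy constraints across all four tiers can be
          combined into one non-normative check: the acceptance window
          narrows from T1 to T4, and T4 additionally requires the
          per-checkpoint entropy trajectory to vary. Two assumptions in
          this sketch: the 0.1-bit requirement is interpreted as the
          population standard deviation, and flag-versus-reject
          handling is collapsed to a boolean for brevity.
        </t>

```python
# Per-tier entropy windows from the constraints above, plus the T4
# trajectory-variance requirement. Population stddev is an assumed
# interpretation; flagging is collapsed to pass/fail for brevity.
import statistics

ENTROPY_WINDOW = {  # (min, max) bits/sample; T1 has no upper bound
    1: (3.0, float("inf")),
    2: (3.0, 6.0),
    3: (3.0, 5.5),
    4: (3.0, 5.0),
}

def entropy_ok(tier: int, per_checkpoint_bits: list) -> bool:
    lo, hi = ENTROPY_WINDOW[tier]
    if not all(lo <= e <= hi for e in per_checkpoint_bits):
        return False
    if tier == 4 and statistics.pstdev(per_checkpoint_bits) <= 0.1:
        return False  # constant entropy suggests synthetic generation
    return True
```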
      </section>
    </section>

    <section anchor="acknowledgements" numbered="false">
      <name>Acknowledgements</name>
      <t>
        The author thanks the participants of the RATS working group for
        their ongoing work on remote attestation architecture and security
        considerations that informed this specification.
      </t>
    </section>
  </back>
</rfc>