
Prompt to Coherence Ops

BRYAN DAVID WHITE edited this page Feb 23, 2026 · 5 revisions

# Prompt Engineering → Coherence Ops Translator

*How we stop "prompt vibes" and ship institutional reliability.*

Source doc: `docs/17-prompt-to-coherence-ops.md`


## The Problem

Prompts are handwritten runtime code. They drift, they break, and they can't be audited.

When a prompt says "always," "never," "must," or "if X then Y" — that's a policy. When it's embedded in prose, it's invisible to tests, unreachable by tooling, and impossible to version.

- Prompt engineering optimizes one call.
- Coherence Engineering operationalizes every call.
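
To make the difference concrete, here is a minimal sketch (all names hypothetical) of a prose rule turned into a predicate that tooling can call and tests can assert on:

```python
# Hypothetical sketch: the prose rule "always cite sources" becomes a
# typed invariant. Embedded in a prompt, the same rule is invisible to
# tests; as a function, it is versionable and checkable on every call.

def has_cited_sources(claim: dict) -> bool:
    """True iff the claim carries at least one evidence reference."""
    return bool(claim.get("evidenceRefs"))

# A test can now exercise the rule directly:
assert has_cited_sources({"text": "Q3 revenue rose", "evidenceRefs": ["doc:42"]})
assert not has_cited_sources({"text": "Q3 revenue rose"})
```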

## The Translation Move

We don't "write better prompts." We compile prompts into an operating system:

```mermaid
flowchart LR
  PROMPT["Prompt<br/>(prose)"] --> EXTRACT["Extract<br/>rules + claims"]
  EXTRACT --> TYPES["1 · Types<br/>Claim, Evidence,<br/>Source, Assumption"]
  EXTRACT --> POLICIES["2 · Policies<br/>Policy Pack<br/>invariants"]
  EXTRACT --> EVENTS["3 · Events<br/>DriftEvent<br/>state machine"]
  EXTRACT --> RENDER["4 · Renderer<br/>Lens + Objective<br/>+ Context → Schema"]
  TYPES --> EPISODE["Sealed<br/>DecisionEpisode"]
  POLICIES --> EPISODE
  EVENTS --> EPISODE
  RENDER --> EPISODE

  style PROMPT fill:#c0392b,color:#fff
  style EPISODE fill:#27ae60,color:#fff
  style TYPES fill:#2980b9,color:#fff
  style POLICIES fill:#8e44ad,color:#fff
  style EVENTS fill:#d35400,color:#fff
  style RENDER fill:#16a085,color:#fff
```

## 1) Types (Data Model)

Every claim an LLM touches must have structure:

| Type | Purpose | RAL Mapping |
| --- | --- | --- |
| `Claim` | Assertion the system makes or consumes | `DecisionEpisode.plan`, tool output value |
| `Evidence` | Data backing a claim | `evidenceRefs[]`, feature records with `capturedAt` |
| `Source` | Origin + freshness of evidence | `sourceRef`, `capturedAt`, `ttlMs` |
| `Assumption` | Unstated belief with an expiry | `ttlMs`, `maxFeatureAgeMs`, `assumption.halfLife` |
| `Drift` | Detected divergence from expected state | `DriftEvent` (typed, fingerprinted) |
| `Patch` | Corrective change triggered by drift | `Patch` node in Memory Graph |
| `Memory` | Sealed, immutable record of a decision | `DecisionEpisode` (sealed, hashed) |

See: Concepts, Coherence Ops Mapping
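
The table above can be sketched as frozen dataclasses; field names here follow the RAL mapping column but are illustrative, not the actual implementation:

```python
# Illustrative sketch of the typed data model. Frozen dataclasses make
# instances immutable, echoing the sealed-record discipline.
from dataclasses import dataclass

@dataclass(frozen=True)
class Source:
    source_ref: str     # origin of the evidence
    captured_at: float  # unix timestamp of capture
    ttl_ms: int         # freshness window before the evidence goes stale

@dataclass(frozen=True)
class Evidence:
    value: object       # the data backing a claim
    source: Source      # where it came from, and how fresh it is

@dataclass(frozen=True)
class Claim:
    text: str
    evidence: tuple     # tuple of Evidence; empty means uncited

@dataclass(frozen=True)
class Assumption:
    text: str
    ttl_ms: int         # even unstated beliefs carry an expiry
```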

## 2) Policies (Invariants)

Anything a prompt expresses as a rule belongs in a Policy Pack, not prose:

| Prompt Pattern | Policy Translation | Enforcement |
| --- | --- | --- |
| "Always cite sources" | Claim → Evidence → Source chain | Verifier rejects episodes missing `evidenceRefs` |
| "Never overwrite previous answers" | Seal → Version → Patch (append-only) | `sealHash` immutability; patches link to prior episode |
| "Information may be outdated" | Assumption TTL / half-life | `ttlMs` gate; stale → degrade ladder fires |
| "Do not execute, only recommend" | `authorization.mode = recommend_only` | Safe Action Contract blocks auto dispatch |

See: Policy Packs, Degrade Ladder

## 3) Events (State Machine)

Prompts hide control flow in natural language. Coherence Ops makes it explicit:

```mermaid
stateDiagram-v2
  [*] --> Claim_Made : LLM asserts claim

  Claim_Made --> Evidence_Check : Verify evidence exists
  Evidence_Check --> Source_Fresh : Check TTL / TOCTOU
  Evidence_Check --> Drift_Verify : Evidence missing

  Source_Fresh --> Plan : Evidence fresh
  Source_Fresh --> Drift_Freshness : TTL breached

  Drift_Verify --> Ask_Questions : Request clarification
  Ask_Questions --> Evidence_Check : Evidence provided

  Drift_Freshness --> Degrade : Trigger degrade ladder
  Degrade --> Cache_Bundle : Step 1
  Degrade --> Small_Model : Step 2
  Degrade --> Rules_Only : Step 3
  Degrade --> HITL : Step 4
  Degrade --> Abstain : Step 5

  Plan --> Act : Execute action
  Act --> Verify : Post-condition check
  Verify --> Seal : Verification passed
  Verify --> Drift_Outcome : Verification failed

  Seal --> [*] : DecisionEpisode sealed

  Drift_Outcome --> Patch : Emit DriftEvent
  Drift_Freshness --> Patch
  Drift_Verify --> Patch
  Patch --> Memory_Graph : Update Memory Graph
  Memory_Graph --> [*]
```

See: Drift → Patch, Runtime Flow
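
The first branch of the diagram (evidence check → freshness check) can be sketched as one explicit transition function; state names and dict shapes here are illustrative:

```python
# Minimal sketch of the explicit control flow: each step either advances
# toward Plan/Seal or routes to a typed drift state. No branching hides
# in prose; every edge is inspectable and testable.

DEGRADE_LADDER = ["cache_bundle", "small_model", "rules_only", "hitl", "abstain"]

def step_claim(claim: dict, now_ms: float) -> str:
    """Route a freshly asserted claim to its next state."""
    refs = claim.get("evidenceRefs")
    if not refs:
        return "drift_verify"      # evidence missing → ask questions
    stale = any(now_ms - r["capturedAtMs"] > r["ttlMs"] for r in refs)
    if stale:
        return "drift_freshness"   # TTL breached → degrade ladder fires
    return "plan"                  # evidence fresh → plan / act / verify / seal
```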

## 4) Renderers (Prompt Compiler)

When a prompt is still needed (the final LLM call), it is compiled, not authored:

**Lens + Objective + Allowed Context → JSON Schema output**

| Component | Maps to | Purpose |
| --- | --- | --- |
| Lens | `decisionType` | Role / perspective the model adopts |
| Objective | DTE plan stage | What the model must produce |
| Allowed Context | `evidenceRefs` (TTL-gated) | Evidence that passed freshness gates |
| Output Schema | JSON Schema | Structure the response must conform to |

No schema = no trust. The renderer is deterministic given its inputs; the LLM fills in the reasoning.
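
A determinism-preserving renderer can be sketched in a few lines; the payload keys are illustrative, but the key property is real: identical inputs yield a byte-identical rendered prompt, which makes the renderer golden-testable:

```python
# Deterministic renderer sketch: same Lens + Objective + Context in,
# same compiled prompt out. The LLM fills in reasoning; the structure
# of what it may see and must emit is fixed here.
import json

def render(lens: str, objective: str, context: list, schema: dict) -> str:
    payload = {
        "lens": lens,                # role the model adopts
        "objective": objective,      # what it must produce
        "allowedContext": context,   # evidence that passed TTL gates
        "outputSchema": schema,      # structure the response must match
    }
    # sort_keys makes the rendered prompt byte-stable for identical inputs
    return json.dumps(payload, sort_keys=True)
```

Determinism is the point: `render(x) == render(x)` always, so a drifting prompt is a code change, never an accident.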


## What You Get

| Capability | Prompt Engineering | Coherence Engineering |
| --- | --- | --- |
| Repeatability | Hope + `temperature=0` | Policy Pack + DTE + sealed episodes |
| Testability | Manual spot checks | Golden tests against `DecisionEpisode` schema |
| Auditability | Grep the prompt | DLR / RS / DS / MG with provenance chains |
| Portability | Rewrite per model | Model-agnostic; swap LLM, keep policies |
| Reliability | "It usually works" | Contractual: passes verification or degrades gracefully |

## Rule of Thumb

If a prompt says "always / never / must / if X then Y", it belongs in **policy + events + tests**, not prose.

## Translation Checklist

  1. Extract claims — every assertion → typed Claim with required Evidence.
  2. Identify sources — every piece of evidence → sourceRef, capturedAt, ttlMs.
  3. Surface assumptions — anything unstated → explicit TTL / half-life.
  4. Encode rules as policies — "always/never/must" → Policy Pack invariants.
  5. Map control flow to events — "if X then Y" → DriftEvent triggers + degrade ladder.
  6. Define the output schema — renderer's JSON Schema replaces freeform output.
  7. Write golden tests — expected DecisionEpisode shape for known inputs.
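
Step 7 can be sketched as a golden test over a sealed episode; `seal` and its field names are hypothetical stand-ins for the real sealing step:

```python
# Golden-test sketch: seal an episode by hashing its canonical JSON,
# then assert the exact shape for a known input. Same input → same hash,
# which is what makes the episode a repeatable, auditable record.
import hashlib, json

def seal(episode: dict) -> dict:
    body = json.dumps(episode, sort_keys=True)
    return {**episode, "sealHash": hashlib.sha256(body.encode()).hexdigest()}

def test_golden_episode():
    episode = seal({"plan": "recommend_only", "evidenceRefs": ["doc:42"]})
    assert set(episode) == {"plan", "evidenceRefs", "sealHash"}
    assert len(episode["sealHash"]) == 64  # sha256 hex digest
    # sealing the same input twice yields the same hash (repeatability)
    again = seal({"plan": "recommend_only", "evidenceRefs": ["doc:42"]})
    assert episode["sealHash"] == again["sealHash"]
```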

## Result

Prompt engineering becomes Coherence Engineering:

Truth · Reasoning · Memory — operationalized.

The prompt doesn't disappear — it becomes the last mile of a system that has already enforced freshness, verified evidence, sealed decisions, and prepared a degrade path before the LLM ever sees a token.

