Merged
25 changes: 16 additions & 9 deletions README.md
@@ -155,13 +155,20 @@ Execution systems cannot make or alter decisions.

---

## Comparison

How does governed architecture differ from conventional agent platforms? This framework evaluates systems on architecture, not claims.

- [Governance Models: Governed vs. Conventional](docs/comparison/governance-models.md)

---

## Whitepapers

- [Cryptographically Governed AI Execution (CGAE) — v1.0.0 · Canonical](docs/whitepapers/cgae/v1.0.0.md)
- [Whitepapers index (canonical + archive links)](docs/whitepapers/index.md)
- [Archived: Governed Execution for Operational AI — v1.1](docs/whitepapers/archive/governed-execution-v1.0.md)


## Digital Forensics and AI Investigations with Keon

@@ -258,7 +265,7 @@ Recommended reading order:
4. **Proof campaigns** — what is proven and how to verify it
5. **Evidence artifacts** — audit- and court-grade outputs

If a claim cannot be traced to code, a tag, or a proof artifact, it should be treated as incomplete.
If a claim cannot be traced to code, a tag, or a proof artifact, it should be treated as incomplete. [KS-EVIDENCE-004]

### Next Steps
- Explore the [docs folder](./docs/) for in-depth guides.
@@ -282,7 +289,7 @@ Keon documentation follows these rules:

## What's Proven

Every public claim in the Keon governance model is backed by sealed, tagged, independently verifiable proof campaigns. These are not demos — they are **immutable evidence bundles** with SHA256-hashed manifests.
Every public claim in the Keon governance model is backed by sealed, tagged, independently verifiable proof campaigns. These are not demos — they are **immutable evidence bundles** with SHA256-hashed manifests. [KS-EVIDENCE-004]

| Capability | What It Proves | Status |
|-----------|---------------|--------|
5 changes: 3 additions & 2 deletions START_HERE.md
@@ -201,8 +201,9 @@ Governed runtimes integrate Keon to:
| Goal | Time | Start here |
|-----------------------------------|------------|--------------------------------------------------------------|
| Understand Keon quickly | 5–10 min | This page |
| Grasp the full architecture | 15–30 min | [Whitepaper](./whitepaper/WHITEPAPER.md) |
| Verify every public claim | 30–60 min | [Claims Registry](./canon/CLAIMS_REGISTRY.yaml) |
| Grasp the full architecture | 15–30 min | [CGAE v1.0.0 (canonical whitepaper)](./docs/whitepapers/cgae/v1.0.0.md) |
| Explore the full whitepaper library | 30 min | [Whitepapers index (canonical + archive)](./docs/whitepapers/index.md) |
| Verify every public claim | 30–60 min | [Claims Registry](./canon/CLAIMS_REGISTRY.yaml) [KS-EVIDENCE-004] |
| See real production usage | 30+ min | [OMEGA](https://github.com/m0r6aN/omega-docs) |
| Audit / forensic deep-dive | 1–2 hrs | [Proof maps + sealed Evidence Packs](./EVIDENCE/README.md) |
| Verify proven capabilities | 15–30 min | [Proof Campaign Status](https://github.com/m0r6aN/omega-docs/blob/main/REPORT/PROOFS/PROOF_CAMPAIGN_STATUS.md) |
14 changes: 14 additions & 0 deletions docs/whitepapers/archive/governed-execution-v1.0.md
@@ -0,0 +1,14 @@
# Governed Execution for Operational AI — v1.1 (Archived)

**Status:** Archived
**Published:** February 8, 2026
**Superseded by:** [CGAE v1.0.0](../cgae/v1.0.0.md)

This directory preserves the previously canonical whitepaper along with its addenda and verification artifacts. Review it when you need historical context or traceability for the Governed Execution thesis.

## Artifacts

- [PDF download (Historical canonical)](../../../whitepaper/WHITEPAPER.pdf)
- [Change log & addenda](../../../whitepaper/CHANGELOG.md)
- [Claims registry snapshots](../../../canon/CLAIMS_REGISTRY.yaml)
- [Proof map](../../../canon/PROOF_MAP.yaml)
324 changes: 324 additions & 0 deletions docs/whitepapers/cgae/v1.0.0.md
@@ -0,0 +1,324 @@
We are not describing a product.

We are defining infrastructure.

---

# **Cryptographically Governed AI Execution**

*Published: February 12, 2026*
*Version: v1.0.0*
*Status: Canonical*

### A New Category for Enterprise-Grade Autonomous Systems

---

## Executive Abstract

Artificial intelligence systems are rapidly moving from advisory roles to operational authority. They draft code, deploy infrastructure, approve transactions, interact with customers, and initiate downstream automation. Yet the dominant execution model remains opaque.

Decisions are logged after the fact.
Policies are evaluated loosely or not at all.
Multi-tenant isolation is assumed rather than cryptographically enforced.
Audit trails are reconstructive, not deterministic.

Modern AI execution environments optimize for capability and speed—but not for verifiability.

This paper introduces a new category:

> **Cryptographically Governed AI Execution (CGAE)**

CGAE defines a model in which every AI-initiated action is:

* Evaluated against enforceable policy before execution
* Bound cryptographically to its governing decision
* Scoped to a specific tenant and authority context
* Emitted as portable, verifiable evidence
* Deterministically reproducible and externally auditable

In CGAE systems, governance is not documentation.
It is not logging.
It is not configuration.

Governance is enforced at execution time and sealed as cryptographic proof.

This category reframes AI systems from “intelligent assistants” into **governed execution substrates** capable of meeting enterprise, regulatory, and acquisition-grade scrutiny.

---

## Core Thesis

AI systems will not be trusted at scale until execution is inseparable from verifiable governance.

Logging is insufficient.
Monitoring is insufficient.
Human approval gates are insufficient.

Trust in autonomous systems requires:

* Pre-execution policy enforcement
* Cryptographic authority binding
* Immutable evidence emission
* Deterministic verification

Cryptographically Governed AI Execution defines this standard.

---

## Architectural Pillars of CGAE

### 1. Intent as a First-Class Primitive

Execution begins with explicit, structured **Intent**.

Intent is not a prompt.
It is a typed declaration of desired action, scope, and context.

Intent must be:

* Canonically serialized
* Tenant-scoped
* Authority-attributed

Without formal intent, governance cannot bind execution.
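
As an illustration only (the field names here are hypothetical, not Keon's actual schema), a typed, canonically serialized, tenant-scoped intent might be sketched as:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Intent:
    """Typed declaration of a desired action -- not a free-form prompt."""
    tenant_id: str   # tenant scope the action is confined to
    authority: str   # principal the intent is attributed to
    action: str      # operation being requested
    scope: str       # resource boundary for the operation

    def canonical(self) -> bytes:
        # Canonical serialization: sorted keys, fixed separators, so every
        # party computes identical bytes -- and therefore identical hashes.
        return json.dumps(
            asdict(self), sort_keys=True, separators=(",", ":")
        ).encode()

intent = Intent(tenant_id="t-001", authority="agent:planner",
                action="deploy", scope="svc/api")
```

Canonical bytes are what downstream policy decisions and receipts bind to; any serialization ambiguity here would break hash equality later.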

---

### 2. Pre-Execution Policy Evaluation

Policy is enforced before action.

Every intent must pass evaluation against:

* Authority boundaries
* Scope constraints
* Termination guards
* Quorum or approval requirements

Policy outcomes are not advisory.
They produce binding decisions.
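
A minimal sketch of pre-execution evaluation, assuming a hypothetical in-memory policy table (real policy engines would express far richer constraints, quorum rules, and termination guards):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

# Hypothetical policy table: per-tenant permitted actions and scope prefixes.
POLICY = {
    "t-001": {"actions": {"deploy", "read"}, "scope_prefix": "svc/"},
}

def evaluate(tenant_id: str, action: str, scope: str) -> Decision:
    """Evaluate BEFORE execution; the outcome is binding, not advisory."""
    rules = POLICY.get(tenant_id)
    if rules is None:
        return Decision(False, "unknown tenant")
    if action not in rules["actions"]:
        return Decision(False, f"action '{action}' outside authority boundary")
    if not scope.startswith(rules["scope_prefix"]):
        return Decision(False, f"scope '{scope}' outside tenant constraint")
    return Decision(True, "policy satisfied")
```

The essential property is ordering: `evaluate` runs before any side effect, and a `Decision(False, ...)` means the action never happens.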

---

### 3. Receipt as Minted Authority

Policy evaluation emits a **Receipt**.

A receipt is:

* Cryptographically signed
* Bound to the original intent
* Scoped to a tenant and execution window
* Single-use or constrained by policy

Execution without a valid receipt is invalid by definition.

Receipts transform governance from configuration into authority.
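
One way to sketch receipt minting and validation. HMAC is used here purely for brevity; a production system would use asymmetric signatures, and every name below is illustrative rather than Keon's implementation:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative only; real systems use managed keys

def _payload(body: dict) -> bytes:
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def mint_receipt(intent_bytes: bytes, tenant_id: str, ttl_s: int = 60) -> dict:
    """Emit a signed receipt bound to the intent's hash, tenant, and window."""
    body = {
        "intent_hash": hashlib.sha256(intent_bytes).hexdigest(),
        "tenant_id": tenant_id,
        "not_after": int(time.time()) + ttl_s,
        "single_use": True,
    }
    body["signature"] = hmac.new(
        SIGNING_KEY, _payload(body), hashlib.sha256
    ).hexdigest()
    return body

def receipt_valid(receipt: dict, intent_bytes: bytes, tenant_id: str) -> bool:
    """Execution without a valid receipt is invalid by definition."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    expected = hmac.new(SIGNING_KEY, _payload(body), hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(receipt["signature"], expected)
        and receipt["intent_hash"] == hashlib.sha256(intent_bytes).hexdigest()
        and receipt["tenant_id"] == tenant_id
        and time.time() <= receipt["not_after"]
    )
```

Note that validity is conjunctive: a receipt presented with a different intent, a different tenant, or outside its window fails, even if the signature itself is genuine.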

---

### 4. Deterministic Ledgering

All intents, evaluations, receipts, and actions are:

* Canonically serialized
* Hash-addressable
* Immutable

The ledger is not a log file.
It is a deterministic chain of execution state transitions.

Reconstruction is not required.
Verification is mathematical.
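
A deterministic chain of this kind can be sketched in a few lines (illustrative only; the actual ledger format is not specified here). Each entry's hash covers both its record and its predecessor's hash, so tampering with any entry breaks every hash after it:

```python
import hashlib
import json

GENESIS = "0" * 64

class Ledger:
    """Append-only hash chain of execution state transitions."""
    def __init__(self):
        self.entries = []

    def _hash(self, prev: str, record: dict) -> str:
        payload = json.dumps(
            {"prev": prev, "record": record},
            sort_keys=True, separators=(",", ":"),
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        h = self._hash(prev, record)
        self.entries.append({"prev": prev, "record": record, "hash": h})
        return h

    def verify(self) -> bool:
        # Verification is pure recomputation -- no trust in the writer needed.
        prev = GENESIS
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != self._hash(prev, e["record"]):
                return False
            prev = e["hash"]
        return True
```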

---

### 5. Manifest and Pack Sealing

Execution emits portable evidence artifacts:

* Intent
* Receipt
* Execution result
* Hash bindings
* Verification metadata

These are assembled into a **Manifest** and sealed as an evidence pack.

Evidence is:

* Self-verifying
* Cryptographically sealed
* Transferable across environments
* Independently verifiable

This moves auditability from “request access to logs” to “verify this artifact.”
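
Sealing and re-verifying an evidence pack can be sketched as follows (artifact names are hypothetical; the manifest is just hashes over hashes, so anyone holding the pack can re-check it):

```python
import hashlib
import json

def seal_manifest(artifacts: dict[str, bytes]) -> dict:
    """Assemble per-artifact hashes, then seal the whole set with a root hash."""
    entries = {
        name: hashlib.sha256(data).hexdigest()
        for name, data in artifacts.items()
    }
    canonical = json.dumps(entries, sort_keys=True, separators=(",", ":"))
    return {"entries": entries, "seal": hashlib.sha256(canonical.encode()).hexdigest()}

def verify_pack(manifest: dict, artifacts: dict[str, bytes]) -> bool:
    """Self-verifying: recompute the manifest from the artifacts and compare."""
    return seal_manifest(artifacts) == manifest
```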

---

### 6. Tenant Isolation by Cryptographic Boundary

Multi-tenant risk is addressed structurally.

Every execution path is:

* Tenant-scoped
* Authority-bound
* Context-validated

Replay attacks across tenants are prevented by cryptographic binding, not convention.

Isolation is not assumed—it is enforced.
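
A sketch of such an execution gate, assuming the single-use receipts described earlier (illustrative, not Keon's implementation): cross-tenant presentation fails the binding check, and re-presentation fails the spent check.

```python
class ExecutionGate:
    """Admit an action only if its receipt matches the tenant context
    and has not already been consumed."""
    def __init__(self):
        self.spent = set()  # receipt signatures already consumed

    def admit(self, receipt: dict, tenant_id: str) -> bool:
        sig = receipt["signature"]
        if receipt["tenant_id"] != tenant_id:
            return False  # cross-tenant use: binding fails structurally
        if sig in self.spent:
            return False  # replay: single-use receipts cannot be re-presented
        self.spent.add(sig)
        return True
```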

---

### 7. External Verification

A CGAE system must allow third-party validation.

An external verifier must be able to:

* Recompute canonical hashes
* Validate signatures
* Confirm receipt binding
* Confirm manifest integrity

Trust is not granted by vendor assurance.
It is proven through verification.
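
The verifier's checks reduce to pure recomputation over the artifacts in hand. A sketch, with artifact shapes assumed for illustration (a real verifier would also validate signatures against published keys):

```python
import hashlib

def external_verify(intent_bytes: bytes, receipt: dict, manifest: dict) -> list[str]:
    """Third-party checks needing no access to the producing system."""
    failures = []
    intent_hash = hashlib.sha256(intent_bytes).hexdigest()
    # Confirm receipt binding: the receipt must reference exactly this intent.
    if receipt.get("intent_hash") != intent_hash:
        failures.append("receipt not bound to this intent")
    # Confirm manifest integrity: the intent must appear in the sealed entries.
    if intent_hash not in manifest.get("entries", {}).values():
        failures.append("intent hash absent from manifest")
    return failures
```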

---

## Competitive Contrast

### Versus Traditional CI/CD

CI/CD ensures code integrity and deployment automation.
It does not govern autonomous decision-making at runtime.

CI/CD verifies artifacts.
CGAE verifies intent-to-execution authority chains.

---

### Versus Audit Logging Systems

Logging systems record what happened.

CGAE proves:

* Whether it was allowed
* Who authorized it
* Whether policy was satisfied
* Whether scope was respected

Logs are descriptive.
CGAE is enforceable.

---

### Versus Agentic Orchestration Frameworks

Agent frameworks coordinate tasks and tool calls.

They typically rely on:

* Soft policy enforcement
* Runtime checks
* Post-hoc logging

CGAE embeds governance into the execution substrate itself.

Agents may generate intent.
They cannot bypass authority.

---

### Versus AI Copilots

Copilots suggest actions.

CGAE systems execute actions with verifiable authority.

This distinction is foundational.

---

## Why Now

Three forces converge:

1. **AI is gaining operational authority.**
Autonomous systems are approving transactions, deploying code, managing infrastructure.

2. **Regulators are increasing scrutiny.**
AI accountability, auditability, and traceability are becoming statutory requirements.

3. **Enterprise buyers demand proof, not assurances.**
Security reviews now require deterministic evidence, not policy documents.

As AI shifts from advisory to autonomous, execution must become governable infrastructure.

CGAE defines that infrastructure layer.

---

## Enterprise and Strategic Relevance

Cryptographically Governed AI Execution is:

* **Compliance-ready**
Produces verifiable artifacts suitable for audit and regulatory review.

* **Acquirer-relevant**
Demonstrates systemic governance maturity beyond logging.

* **Platform-defining**
Enables safe deployment of high-authority AI systems.

* **Risk-reducing**
Eliminates replay, scope drift, and unauthorized execution paths.

Organizations deploying autonomous AI without cryptographic governance will face escalating operational and regulatory risk.

CGAE provides the structural solution.

---

## Intel-Inside Positioning Bridge

Cryptographically Governed AI Execution is not a product.

It is an execution substrate.

Applications can be:

* Customer-facing platforms
* Automation systems
* AI copilots
* Enterprise workflow engines

When powered by a CGAE substrate, they inherit:

* Pre-execution governance
* Cryptographic authority binding
* Portable evidence emission
* Deterministic verification

This enables a new positioning model:

> Powered by autonomous intelligence.
> Governed by cryptographic execution.
> Verified by evidence.

In this model, governance is not a feature layered on top.

It is the foundation beneath every action.

---

## Conclusion

We forecast **Cryptographically Governed AI Execution** as inevitable infrastructure for the autonomous era.