diff --git a/README.md b/README.md
index 4a3917a..27f02ba 100644
--- a/README.md
+++ b/README.md
@@ -1,443 +1,332 @@
-# ACR Framework
-## Autonomous Control & Resilience for Runtime AI Governance
+
-**An open reference architecture for governing autonomous AI systems in production environments.**
+

-Agentic AI needs a control plane. ACR provides six operational control pillars enforced at runtime.
-Learn more at https://autonomouscontrol.io/control-plane.
+# **ACR Framework**
+### Autonomous Control & Resilience for Runtime AI Governance
-[](LICENSE)
-[](https://github.com/SynergeiaLabs/acr-framework/releases)
-[]()
-[](https://github.com/SynergeiaLabs/acr-framework/actions/workflows/link-check.yml)
+**The control plane your autonomous AI agents need before you let them touch production.**
-π [Read the Docs](./docs) | π― [Use Cases](./docs/guides/acr-use-cases.md) | π [Threat Model](./docs/security/acr-strike-threat-model.md) | πΊοΈ [NIST Mapping](./docs/compliance/acr-nist-ai-rmf-mapping.md) | π [Adopt ACR](./ADOPTION.md)
+[](LICENSE)
+[](https://github.com/AdamDiStefanoAI/acr-framework/releases)
+[]()
+[]()
-## Try the ACR Control Plane
-Run a runnable reference control plane that demonstrates ACR six-pillar runtime enforcement.
+[](docs/compliance/acr-nist-ai-rmf-mapping.md)
+[](docs/compliance)
+[](docs/compliance)
+[]()
+[]()
+[]()
+[]()
+[]()
-```bash
-cd implementations/acr-control-plane
-cp .env.example .env
-docker-compose up --build
-
-# Verify health
-curl http://localhost:8000/acr/health
-```
-
-Open the operator console at `http://localhost:8000/console` (Operator API key: `dev-operator-key`; Kill switch secret: `killswitch_dev_secret_change_me`).
-
----
-## What Youβll Find Here
-
-- `docs/`: the ACR framework specifications (the βwhyβ and βwhat must be enforcedβ)
-- `implementations/acr-control-plane/`: a runnable ACR Control Plane reference implementation (FastAPI + OPA + Postgres + Redis)
-- `implementations/`: a landing page to jump straight into the control plane demo
-
-## Keywords (for search & discovery)
-
-- agentic AI governance
-- runtime control plane
-- policy as code
-- Open Policy Agent (OPA) / Rego
-- human-in-the-loop approvals
-- drift detection
-- kill-switch containment
-- execution observability / audit evidence
-
----
-
-## Overview
-
-The **ACR Framework** defines a runtime governance architecture for autonomous AI systems operating in enterprise production environments.
-
-As AI systems evolve from static models into autonomous agentsβcapable of accessing data, invoking tools, and making operational decisionsβtraditional governance approaches centered on policy documentation and pre-deployment reviews no longer provide sufficient control.
-
-**ACR establishes architectural patterns for enforcing governance during live system operation**, enabling organizations to maintain control over AI behavior in production.
+**[📖 Docs](./docs)** · **[🚀 Quickstart](#-quickstart-60-seconds)** · **[🏛️ For Executives](#-for-security--risk-executives)** · **[⚙️ For Engineers](#%EF%B8%8F-for-engineers)** · **[🛡️ Threat Model](docs/security/acr-strike-threat-model.md)** · **[📈 Adopt ACR](./ADOPTION.md)**
---
-## The Governance Gap
-
-Traditional AI governance programs focus on design-time controls:
-
-- **Policy frameworks:** NIST AI RMF, ISO/IEC 42001, organizational AI policies
-- **Pre-deployment reviews:** Model validation, impact assessments, approval workflows
-- **Risk classification:** High/medium/low risk categorization, use case evaluation
-
-**These controls stop at deployment.**
+
-Once an AI system enters production, most organizations lack architectural mechanisms to:
+## 🎯 Pick Your Path
-- **Enforce behavioral constraints** during inference operations
-- **Detect drift** when system behavior deviates from design intent
-- **Respond automatically** to policy violations or anomalous actions
-- **Maintain audit trails** with decision-level visibility
-- **Intervene in real-time** when high-risk actions are attempted
-
-This gap creates operational risk as autonomous systems interact with enterprise infrastructure, access sensitive data, and influence business processes.
-
-**ACR addresses this gap by defining runtime control patterns adapted from proven infrastructure governance architectures.**
+| You are a... | Start here |
+|---|---|
+| 🏛️ **CISO, CRO, GRC lead, board member** – *I need to understand the risk and the answer* | **[For Security & Risk Executives →](#-for-security--risk-executives)** |
+| ⚙️ **Engineer, SRE, security architect** – *I need to read the code and run the thing* | **[For Engineers →](#%EF%B8%8F-for-engineers)** |
---
-## Why ACR?
-
-Like **OWASP** for application security and **MITRE ATT&CK** for threat modeling, ACR provides a **shared vocabulary and reference architecture** for runtime AI governance. It is:
+# 🏛️ For Security & Risk Executives
-- **Framework-agnostic** β Works with any model provider, stack, or policy engine
-- **Standards-aligned** β Maps to NIST AI RMF, ISO/IEC 42001, SOC 2
-- **Implementation-flexible** β API gateway, SDK, sidecar, or control-plane service patterns
-- **Adoption-ready** β Maturity levels, adoption guide, citation and badge for implementers ([ADOPTION.md](./ADOPTION.md))
+## The problem in one sentence
-Whether you are building in-house controls, evaluating vendors, or preparing for audits, ACR gives you a consistent way to design, implement, and assess runtime governance.
+> **You are about to give software the authority to act on behalf of your business, and your existing AI governance program stops at the moment that software goes into production.**
----
-
-## Framework Principles
+Traditional AI governance (model cards, pre-deployment reviews, NIST AI RMF documentation, ISO 42001 paperwork) happens **before** an AI system runs. It produces binders, not brakes. The moment an autonomous agent starts accessing customer data, calling APIs, issuing refunds, or filing tickets on your behalf, your design-time controls have nothing to enforce against.
-ACR is built on three foundational principles:
+That gap is where the risk lives: a perfectly compliant agent on day one can drift, be manipulated, or simply behave unexpectedly on day ninety, and there is nothing in your stack to notice, throttle, or stop it.
-### 1. Governance Must Execute at Runtime
-Policy compliance cannot be verified only at design time. Controls must operate during system execution, enforcing boundaries as the AI system processes requests, accesses resources, and generates outputs.
+## What ACR is, in plain terms
-### 2. Defense in Depth Through Layered Controls
-No single control mechanism is sufficient. ACR defines six complementary control layers that work together to detect, prevent, and respond to governance violations.
+**ACR is the seatbelt, airbag, ABS, and crumple zone for autonomous AI.** It is a runtime control plane that sits between your AI agents and the systems they touch, and it enforces your governance policies on every single action the agent attempts, before that action ever leaves the building.
-### 3. Adaptation of Proven Patterns to AI Context
-ACR does not invent runtime governanceβit adapts established infrastructure control patterns (observability, policy enforcement, circuit breakers, least privilege) to the unique characteristics of non-deterministic AI systems.
+It does not replace your AI governance program. It is the part of the program that **executes**.
----
-
-## ACR Architecture
-
-ACR defines a **control plane** that mediates between autonomous AI systems and enterprise resources:
-
-```
-βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
-β Enterprise Applications β
-β (CRM, ERP, Data Warehouses, APIs) β
-ββββββββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββββ
- β
-ββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββ
-β AI Systems / Agents β
-β (LLMs, Autonomous Agents, AI Workflows) β
-ββββββββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββββ
- β
-ββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββ
-β ACR RUNTIME CONTROL PLANE β
-β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β Identity & Purpose Binding Layer β β
-β β (Service identity, operational scope, capability β β
-β β authorization, resource access control) β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β Behavioral Policy Enforcement Layer β β
-β β (Input validation, output filtering, action β β
-β β authorization, data handling rules) β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β Autonomy Drift Detection Layer β β
-β β (Behavioral baselines, statistical monitoring, β β
-β β anomaly detection, deviation alerts) β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β Execution Observability Layer β β
-β β (Structured telemetry, audit trails, decision β β
-β β lineage, compliance evidence) β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β Self-Healing & Containment Layer β β
-β β (Automated response, capability restriction, β β
-β β circuit breakers, escalation triggers) β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β Human Authority Layer β β
-β β (Manual intervention, approval workflows, β β
-β β override mechanisms, kill switches) β β
-β ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
-β β
-ββββββββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββββ
- β
-ββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββ
-β Enterprise Systems & Data β
-β (Databases, File Systems, External APIs, Tools) β
-βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
-```
-
-**The control plane enforces governance policy at runtime through six complementary control layers.**
-
-See [ACR Control Plane Architecture](./docs/architecture/acr-control-plane-architecture.md) for detailed design.
-
----
-
-## Control Layer Overview
-
-### Layer 1: Identity & Purpose Binding
-
-Every AI system operates with a cryptographically-bound identity tied to specific business purposes and authorized resources. Prevents operational scope expansion without approval.
-
-**[Full Specification β](./docs/pillars/01-identity-purpose-binding.md)**
-
----
-
-### Layer 2: Behavioral Policy Enforcement
-
-Governance policies translate into machine-enforceable runtime rules for input validation, output filtering, action authorization, and data handling.
-
-**[Full Specification β](./docs/pillars/02-behavioral-policy-enforcement.md)**
-
----
-
-### Layer 3: Autonomy Drift Detection
-
-Establishes behavioral baselines and monitors for statistical deviations indicating the system is operating outside intended parameters.
-
-**[Full Specification β](./docs/pillars/03-autonomy-drift-detection.md)**
-
----
+## What ACR gives you
-### Layer 4: Execution Observability
+| Outcome | What it means in practice |
+|---|---|
+| 🔌 **A real "off switch."** | A single command, kept on a separate system from the agent itself, instantly disables an agent globally. Even a compromised control plane cannot silently re-enable it. |
+| 🚦 **Graduated containment, not just on/off.** | If an agent starts behaving abnormally, the system automatically throttles it, then restricts its tools, then escalates everything to a human, and only kills it as a last resort. Four tiers, and no operator gets paged at 3 a.m. for the small stuff. |
+| 🗂️ **A complete, signed audit trail of every decision.** | Every action an agent attempted, every policy that fired, every approval, every override: all hashed and exportable as a tamper-evident evidence bundle for auditors and incident response. |
+| 🧑‍⚖️ **Human-in-the-loop where it matters.** | High-risk actions (refunds above a threshold, schema changes, cross-tenant access) automatically pause and route to a named human approver with an SLA. If the human doesn't respond in time, the action auto-denies. |
+| 📈 **Drift detection on agent *behavior*, not just model accuracy.** | The system continuously baselines what "normal" looks like for each agent and raises a drift score when an agent starts using new tools, denying more often, or behaving more erratically than its baseline allows. |
+| 📋 **Compliance you can actually evidence.** | Pre-built mappings to NIST AI RMF, ISO/IEC 42001, and SOC 2, with the audit artifacts to back them up. Auditors get a download, not a meeting. |
-Captures structured telemetry for all AI operations, enabling audit trails, decision reconstruction, and compliance evidence generation.
+## The six pillars (the executive version)
-**[Full Specification β](./docs/pillars/04-execution-observability.md)**
+ACR enforces six categories of control on every agent action:
----
+| # | Pillar | What it answers |
+|---|---|---|
+| 1️⃣ | **Identity & Purpose Binding** | *"Is this really our agent, and is it doing what we hired it to do?"* |
+| 2️⃣ | **Behavioral Policy Enforcement** | *"Is this specific action allowed by our policy, right now?"* |
+| 3️⃣ | **Autonomy Drift Detection** | *"Is this agent behaving the way it did last week?"* |
+| 4️⃣ | **Execution Observability** | *"Can we prove what happened, to whom, when, and why?"* |
+| 5️⃣ | **Self-Healing Containment** | *"If it starts misbehaving, does it stop itself before we have to?"* |
+| 6️⃣ | **Human Authority** | *"When the stakes are high, does a human still get the final say?"* |
-### Layer 5: Self-Healing & Containment
+## Why this matters now
-Enables automated response to policy violations and drift through capability restriction, workflow interruption, system isolation, and escalation.
+Three forces are converging:
-**[Full Specification β](./docs/pillars/05-self-healing-containment.md)**
+1. **Regulators are catching up.** The EU AI Act, NIST AI RMF, and emerging US state laws all expect *operational* AI controls, not just paperwork.
+2. **Insurers are pricing AI risk.** Carriers are starting to ask whether you have runtime governance β not whether you have a model card.
+3. **The agents are getting more autonomous.** Tool use, multi-step planning, and agent-to-agent delegation mean the blast radius of a single bad decision is growing fast.
----
+Doing nothing is itself a decision. ACR gives you a defensible, standards-aligned, open-source answer.
-### Layer 6: Human Authority
+## How to evaluate ACR in 30 minutes
-Preserves human oversight through intervention mechanisms, approval workflows, override capabilities, and defined escalation paths.
+1. **Read this section.** ✅ You are most of the way there.
+2. **Skim the [Six Pillars overview](docs)**: one paragraph per pillar.
+3. **Look at the [NIST AI RMF mapping](docs/compliance/acr-nist-ai-rmf-mapping.md)** to see how it lines up with the framework you are already being measured against.
+4. **Hand the engineering section below to your security architect.** They will be able to tell you in an afternoon whether this fits your stack.
-**[Full Specification β](./docs/pillars/06-human-authority.md)**
+> 💡 **Bottom line for the board:** ACR is the difference between *"we have an AI policy"* and *"we can prove our AI followed the policy on every one of last quarter's 14 million decisions, and we stopped it the three times it tried not to."*
---
-## Implementation Approaches
+# ⚙️ For Engineers
-ACR is an **architectural framework**, not a prescriptive implementation. Organizations can implement ACR controls using multiple approaches:
+## TL;DR
-### Deployment Patterns
+**ACR is a policy-enforcing gateway for AI agents.** Every action an agent wants to take goes through `POST /acr/evaluate` and gets an `allow | deny | escalate` decision in **<200ms**. It's a single FastAPI service backed by Postgres + Redis + OPA, with a separate kill-switch sidecar, deployable on Kubernetes or with `docker compose up`.
-**API Gateway Pattern**
-- Deploy control plane as reverse proxy (Envoy, Kong, NGINX)
-- Intercept all model API traffic
-- Centralized enforcement, language-agnostic
-- Trade-off: Network hop adds latency, single point of failure
+## 🚀 Quickstart (60 seconds)
-**SDK/Library Pattern**
-- Embed control logic in application code
-- Wrap model API clients with governance layer
-- Low latency, distributed failure domain
-- Trade-off: Requires per-language SDK, version management
-
-**Sidecar Pattern**
-- Deploy control plane as sidecar container
-- Intercept traffic at network layer (service mesh)
-- Infrastructure-enforced, platform-native
-- Trade-off: Kubernetes/mesh dependency, complexity
+```bash
+git clone https://github.com/AdamDiStefanoAI/acr-framework.git
+cd acr-framework/implementations/acr-control-plane
+cp .env.example .env
+docker compose up --build
+```
-**Control Plane Service Pattern**
-- Separate governance service layer
-- Applications call control plane for policy decisions
-- Centralized logic, flexible integration
-- Trade-off: Additional network calls, latency sensitive
+Migrations run automatically on startup (one-shot `acr-migrate` service). When the gateway is healthy:
-Organizations select patterns based on infrastructure, latency requirements, and operational constraints.
+```bash
+curl http://localhost:8000/acr/health
+# → {"status":"healthy","version":"1.0","env":"development"}
-See [Implementation Guide](./docs/guides/acr-implementation-guide.md) for detailed deployment architectures.
+# Open the operator console
+open http://localhost:8000/console
+# Operator API key: dev-operator-key
+# Kill-switch secret: killswitch_dev_secret_change_me
+```
----
+That's it. You now have a working six-pillar control plane on your laptop.
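From here, an agent's side of the conversation is a single authenticated POST per action. A minimal Python sketch (the payload follows the `{action, params}` shape from the hot-path diagram; the response fields and helper names are illustrative assumptions, not the authoritative schema served at `/docs`):

```python
# Sketch of an agent-side call to POST /acr/evaluate.
# Only {action, params} comes from the hot-path diagram; everything else
# (field names, helper functions) is an illustrative assumption.
import json
import urllib.request

ACR_URL = "http://localhost:8000/acr/evaluate"

def build_request(action: str, params: dict) -> dict:
    """Payload for POST /acr/evaluate, per the hot-path diagram."""
    return {"action": action, "params": params}

def interpret(status_code: int, body: dict) -> str:
    """Map gateway responses to the three ACR decisions."""
    if status_code == 202:
        return "escalate"   # APPROVAL_PENDING: a human approver was notified
    if status_code == 403:
        return "deny"
    if status_code == 200:
        return body.get("decision", "allow")
    return "deny"           # fail-secure: anything unexpected is a deny

payload = build_request("issue_refund", {"amount_usd": 500, "customer_id": "c-123"})

def send(jwt: str) -> str:
    """Send the evaluate request with the agent's JWT (needs a running stack)."""
    req = urllib.request.Request(
        ACR_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {jwt}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return interpret(resp.status, json.load(resp))
```

A `202` means the action is parked until a human approves it or the SLA expires and it auto-denies.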
+
+## The hot path
+
+```mermaid
+sequenceDiagram
+ autonumber
+ participant A as Agent (JWT)
+ participant G as ACR Gateway
+ participant R as Redis
+ participant DB as Postgres
+ participant O as OPA
+ participant K as Kill-switch (sidecar)
+ participant H as Human approver
+
+ A->>G: POST /acr/evaluate {action, params}
+ G->>G: [0] JWT subject == agent_id
+ G->>R: [1] is_killed? (~0.5ms)
+ G->>R: [1b] containment tier?
+ G->>DB: [2] agent active + lifecycle ok
+ G->>R: [3] rate-limit bucket
+ G->>O: [4] Rego evaluate
+ O-->>G: allow / deny / escalate
+ G->>G: [5] PII filter on params
+ alt escalate
+ G->>DB: create approval request
+ G->>H: signed webhook
+ G-->>A: 202 APPROVAL_PENDING
+ else allow / deny
+ G-->>A: 200 / 403
+ end
+    Note over G,DB: BackgroundTasks (non-blocking):<br/>persist telemetry → drift sample →<br/>recompute drift score every N calls
+    Note over K: Kill writes go through<br/>independent service.<br/>Reads are Redis-direct.
+```
-## Standards & Compliance Alignment
+**Latency budget:** ~5ms identity, ~10ms rate limit, ~20–50ms OPA, <5ms PII filter. Total p95 target: **<200ms**; telemetry and drift bookkeeping never block the response.
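The ordering above reduces to a guard pipeline: cheapest checks first, OPA last, and any infrastructure failure resolving to deny. A toy sketch with stubbed checks (every function name and the context dict are illustrative; the real path lives in `gateway/router.py`):

```python
# Toy model of the /acr/evaluate guard ordering: cheapest checks run first,
# and any exception anywhere resolves to "deny" (fail-secure).
# All check functions are stubs standing in for Redis/Postgres/OPA calls.
from typing import Callable

def check_kill_switch(ctx: dict) -> str | None:
    return "deny" if ctx.get("killed") else None        # ~0.5ms Redis read

def check_lifecycle(ctx: dict) -> str | None:
    return None if ctx.get("lifecycle") == "active" else "deny"

def check_rate_limit(ctx: dict) -> str | None:
    return "deny" if ctx.get("bucket_exhausted") else None

def check_policy(ctx: dict) -> str | None:
    return ctx["opa_decision"]   # KeyError if OPA unreachable → fail-secure deny

PIPELINE: list[Callable[[dict], str | None]] = [
    check_kill_switch, check_lifecycle, check_rate_limit, check_policy,
]

def evaluate(ctx: dict) -> str:
    try:
        for check in PIPELINE:
            decision = check(ctx)
            if decision in ("deny", "escalate"):
                return decision                         # short-circuit
        return "allow"
    except Exception:
        return "deny"                                   # never silently allow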
-ACR complements established AI governance and security frameworks:
+## Architecture at a glance
-### NIST AI Risk Management Framework (AI RMF)
-- **GOVERN:** Organizational structures, policies β *ACR enforcement mechanisms*
-- **MAP:** Risk identification, context β *Identity binding, threat modeling*
-- **MEASURE:** Metrics, monitoring β *Observability, drift detection*
-- **MANAGE:** Risk response, mitigation β *Policy enforcement, containment*
+```
+  ┌───────────────────────────────────────────────────────────┐
+  │                     ACR Control Plane                     │
+  │                                                           │
+ Agent ──JWT──▶ FastAPI gateway ────▶ OPA (Rego policies)     │
+  │       │                                                   │
+  │       ├──▶ Postgres (agents, telemetry, approvals,        │
+  │       │    drift baselines, policy releases)              │
+  │       │                                                   │
+  │       ├──▶ Redis (kill switch, rate limits, drift cache)  │
+  │       │                                                   │
+  │       └──▶ Background tasks (telemetry, drift, sweeps)    │
+  │                                                           │
+  └───────────────────────────────────────────────────────────┘
+           │                                │
+           ▼                                ▼
+  ┌──────────────────┐           ┌──────────────────────┐
+  │ Kill-switch SVC  │           │ Human approvers      │
+  │ (independent,    │           │ (HMAC-signed webhook │
+  │  separate auth)  │           │  + operator console) │
+  └──────────────────┘           └──────────────────────┘
+```
-### ISO/IEC 42001 (AI Management Systems)
-- **Clause 6.1 (Risk Management):** Risk assessment β *Drift detection, threat model*
-- **Clause 8.2 (Operation):** Operational controls β *Policy enforcement, observability*
-- **Clause 9.1 (Monitoring):** Performance monitoring β *Telemetry, metrics*
-- **Clause 10.1 (Nonconformity):** Corrective action β *Containment, incident response*
+## The six pillars (the engineering version)
+
+| # | Pillar | Module | Backed by |
+|---|---|---|---|
+| 1 | **Identity & Purpose Binding** | `pillar1_identity/` | JWT (HS256/RS256/ES256, alg-pinned), Postgres `agents` table, lifecycle state machine (`draft → active → deprecated → retired`), lineage chain, capability tags, heartbeat sweep loop |
+| 2 | **Behavioral Policy Enforcement** | `pillar2_policy/` | OPA via pooled `httpx.AsyncClient`, exponential-backoff retry, `aiobreaker` circuit breaker (5 fails / 60s → fail-secure deny). Policies authored in Rego with a draft → release → activate Studio lifecycle. |
+| 3 | **Autonomy Drift Detection** | `pillar3_drift/` | Four signals: denial rate (0.35), call frequency (0.25), error rate (0.20), action diversity / Shannon entropy (0.20). Two-tier baselines: live ungoverned + versioned governed (`candidate → approved → active`) with dual approval. |
+| 4 | **Execution Observability** | `pillar4_observability/` | Structured `ACRTelemetryEvent` per request, correlation-ID indexed, OTLP + Prometheus exporters, SHA256-signed evidence bundles via `/acr/evidence/{correlation_id}`. |
+| 5 | **Self-Healing Containment** | `pillar5_containment/` | Four graduated tiers: Throttle (0.60) → Restrict (0.75) → Isolate (0.90) → Kill (0.95). Kill writes go through an **independent sidecar** with its own `KILLSWITCH_SECRET`, so a compromised gateway cannot silently re-enable agents. |
+| 6 | **Human Authority** | `pillar6_authority/` | Persisted approval requests with configurable SLA (default 240m), background expiry loop, HMAC-SHA256-signed webhooks with idempotency keys, break-glass override (security_admin only, mandatory reason, WARN-logged). |
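As a concrete illustration of the Pillar 3 signals, here is a minimal sketch of how the four weights might blend into one score. The weights come from the table above; the entropy normalization and signal scaling are assumptions, not the shipped algorithm in `pillar3_drift/`:

```python
# Sketch of a Pillar-3-style drift score: four signals blended with the
# documented weights (0.35 / 0.25 / 0.20 / 0.20). Entropy normalization
# and the exact signal definitions are assumptions, not the shipped code.
import math
from collections import Counter

WEIGHTS = {"denial_rate": 0.35, "call_frequency": 0.25,
           "error_rate": 0.20, "action_diversity": 0.20}

def action_diversity(actions: list[str]) -> float:
    """Shannon entropy of the action mix, normalized to [0, 1]."""
    if len(set(actions)) < 2:
        return 0.0
    counts = Counter(actions)
    total = len(actions)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts))  # divide by max possible entropy

def drift_score(signals: dict[str, float]) -> float:
    """Weighted blend; each signal is assumed pre-scaled to [0, 1]."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
```

Against the containment thresholds above, a blended score of 0.80 would land the agent in the Restrict tier.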
+
+## Stack & ops posture
+
+- **Language:** Python 3.11+, FastAPI, async SQLAlchemy 2 + asyncpg, Pydantic v2
+- **Stores:** PostgreSQL (monthly partitioned `telemetry_events` and `drift_metrics`, 90-day retention sweep), Redis 7
+- **Policy engine:** Open Policy Agent (Rego), with a draft/release/activate Policy Studio
+- **Auth:** JWT for agents (alg allowlist), API keys *and* OIDC for operators, four RBAC roles (`agent_admin`, `security_admin`, `approver`, `auditor`)
+- **Observability:** structlog JSON logs, OpenTelemetry OTLP traces + metrics, `/acr/metrics` Prometheus endpoint, SHA256-signed evidence bundles
+- **Deployment:** `docker compose` for local, Kustomize for Kubernetes (HPAs, PDBs, NetworkPolicies, RBAC, init container that runs `alembic upgrade head` automatically)
+- **CI:** GitHub Actions with lint, test, `pip-audit` security scanning
+
+## Design choices that matter
+
+- **Fail-secure everywhere.** OPA down → deny. Redis down → deny. Unexpected exception → deny. Never silently allow.
+- **The kill switch is a separate process.** Reads are Redis-direct for the hot path; writes go through an independent service with its own secret. A compromised gateway cannot un-kill an agent.
+- **Hot path is non-blocking.** Telemetry persistence, drift sampling, and drift score recomputation are all `BackgroundTasks`. The 200ms latency budget covers only the decision, not the bookkeeping.
+- **Policy is code, not config.** All allow/deny/escalate logic is Rego, versioned and shipped through the Policy Studio's release lifecycle. No policy decisions buried in Python `if` statements.
+- **Drift baselines have governance.** Without it, a slowly misbehaving agent could simply re-baseline itself into "normal." Governed baselines require proposal → approval → activation by separate operators.
+- **Migrations are automatic.** Both K8s and `docker compose` run `alembic upgrade head` before the gateway starts. You cannot forget to migrate.
+- **Weak secrets are rejected at startup.** Production environments refuse to boot with default JWT keys, weak kill-switch secrets, or missing executor HMAC keys.
+
+## The agent registry (Pillar 1, expanded)
+
+The registry is more than a key-value store. It carries the metadata the other pillars need to make smart decisions:
+
+| Field | Why it exists |
+|---|---|
+| `agent_id`, `owner`, `purpose`, `risk_tier` | Identity and accountability |
+| `version` | Audit trail of which version of an agent did what |
+| `parent_agent_id` | Lineage for orchestrator → subagent relationships (non-FK, so retirement preserves history) |
+| `capabilities[]` | Declared skills, distinct from the tool allowlist; used by `/acr/agents/discover` |
+| `lifecycle_state` | `draft → active → deprecated → retired` state machine; gates the evaluate hot path |
+| `health_status` + `last_heartbeat_at` | Stale agents auto-downgrade to `unhealthy` via the sweep loop |
+| `allowed_tools[]` / `forbidden_tools[]` | Authoritative tool boundaries enforced by Rego |
+| `boundaries` | Rate, spend, region, and credential rotation limits |
+
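The `lifecycle_state` column above implies a strictly forward-moving state machine. A minimal sketch of transition validation, assuming only the transitions named in this README are legal (the real rules live in `pillar1_identity/` and may permit more):

```python
# Sketch of the agent lifecycle state machine. The allowed-transition table
# is inferred from the draft → active → deprecated → retired chain described
# above; the shipped implementation may differ (e.g. allow draft → retired).
ALLOWED = {
    "draft": {"active"},
    "active": {"deprecated"},
    "deprecated": {"retired"},
    "retired": set(),            # terminal: retirement is permanent
}

def transition(current: str, target: str) -> str:
    """Validate and apply a lifecycle transition."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal lifecycle transition: {current} -> {target}")
    return target

def gates_hot_path(state: str) -> bool:
    """Only active agents may pass /acr/evaluate."""
    return state == "active"
```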
+## API surface (selected)
-### SOC 2 Type II Controls
-- **CC6.1 (Logical Access):** Access controls β *Identity binding, RBAC*
-- **CC7.2 (System Monitoring):** Monitoring controls β *Observability, alerting*
-- **CC8.1 (Change Management):** Change controls β *Drift detection, baselines*
-- **PI1.4 (Privacy):** Data privacy β *Output filtering, PII redaction*
+```
+POST /acr/evaluate # the hot path
+POST /acr/agents # register
+GET /acr/agents/discover?capability=... # discovery
+POST /acr/agents/{id}/lifecycle # state transitions
+POST /acr/agents/{id}/heartbeat # health
+GET /acr/agents/{id}/lineage # ancestor chain + children
+POST /acr/agents/{id}/token # issue short-lived JWT
+
+GET /acr/drift/{id} # current drift score
+POST /acr/drift/{id}/baseline/propose # governed baseline workflow
+POST /acr/drift/{id}/baseline/{ver}/approve
+POST /acr/drift/{id}/baseline/{ver}/activate
+
+GET /acr/events/{correlation_id} # decision chain replay
+GET /acr/evidence/{correlation_id} # signed evidence bundle (SHA256)
+
+POST /acr/approvals/{id}/approve # human authority
+POST /acr/approvals/{id}/override # break-glass (WARN-logged)
+
+GET /acr/health /acr/live /acr/ready # k8s probes
+GET /acr/metrics # Prometheus text
+```
-See [NIST AI RMF Mapping](./docs/compliance/acr-nist-ai-rmf-mapping.md) for detailed control mappings.
+Full OpenAPI is served at `/docs` and `/redoc` (operator-authenticated outside development).
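Pillar 6 webhooks are HMAC-SHA256-signed and carry idempotency keys; a receiver-side sketch of what verification could look like (the header names, hex-digest signature format, and handler flow are assumptions, not the documented contract):

```python
# Sketch of verifying an HMAC-SHA256-signed approval webhook on the
# receiving side. The signature format (hex digest over the raw body)
# and the duplicate-handling flow are assumptions, not the shipped contract.
import hashlib
import hmac

def sign(secret: bytes, body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, body: bytes, received_sig: str) -> bool:
    # compare_digest prevents timing attacks on the signature check
    return hmac.compare_digest(sign(secret, body), received_sig)

SEEN_KEYS: set[str] = set()

def handle_webhook(secret: bytes, body: bytes, sig: str, idem_key: str) -> str:
    if not verify(secret, body, sig):
        return "rejected"        # signature mismatch: drop it
    if idem_key in SEEN_KEYS:
        return "duplicate"       # idempotency: process each delivery once
    SEEN_KEYS.add(idem_key)
    return "processed"
```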
----
+## Repo layout
-## Use Cases
+```
+acr-framework/
+├── docs/                              ← framework spec (the "why")
+├── implementations/
+│   └── acr-control-plane/             ← runnable reference implementation
+│       ├── src/acr/
+│       │   ├── pillar1_identity/      ← registry, lifecycle, lineage, JWT
+│       │   ├── pillar2_policy/        ← OPA client, circuit breaker, output filter
+│       │   ├── pillar3_drift/         ← signals, baselines, governance
+│       │   ├── pillar4_observability/ ← telemetry, OTLP, evidence bundles
+│       │   ├── pillar5_containment/   ← graduated tiers, kill switch
+│       │   ├── pillar6_authority/     ← approvals, webhooks, override
+│       │   ├── policy_studio/         ← draft → release → activate lifecycle
+│       │   ├── gateway/               ← FastAPI router, middleware, executor
+│       │   ├── operator_console/      ← read-only web UI
+│       │   └── db/migrations/         ← Alembic (head: 0012)
+│       ├── policies/                  ← Rego policies + tests
+│       ├── deploy/k8s/base/           ← Kustomize manifests
+│       ├── tests/                     ← 160+ unit + integration tests
+│       ├── docker-compose.yml
+│       └── Dockerfile{,.killswitch}
+└── README.md                          ← you are here
+```
-ACR applies to autonomous AI systems across enterprise contexts:
+## Where to look first if you're reading the code
-- **Customer Service Agents:** Enforce data privacy, prevent unauthorized discounts, detect tone drift
-- **Data Analysis Agents:** Control database access, prevent SQL injection, monitor query patterns
-- **Code Generation Agents:** Restrict file system access, block credential exposure, detect malicious code patterns
-- **Document Processing Agents:** Enforce PII handling, control external API calls, monitor extraction accuracy
-- **Multi-Agent Workflows:** Coordinate inter-agent authorization, maintain workflow audit trails, contain cascading failures
+1. **`src/acr/main.py`** – application wiring, lifespan, background loops
+2. **`src/acr/gateway/router.py`** – the `/acr/evaluate` hot path
+3. **`src/acr/pillar2_policy/engine.py`** – the OPA client with retry + circuit breaker
+4. **`src/acr/pillar5_containment/graduated.py`** – the four-tier containment ladder
+5. **`policies/`** – the Rego policies that actually make the allow/deny calls
-See [ACR Use Cases](./docs/guides/acr-use-cases.md) for detailed scenarios and control applications.
+## Compliance mappings
----
+| Standard | Mapping |
+|---|---|
+| 🇺🇸 NIST AI RMF | [`docs/compliance/acr-nist-ai-rmf-mapping.md`](docs/compliance/acr-nist-ai-rmf-mapping.md) |
+| 🌐 ISO/IEC 42001 | [`docs/compliance/`](docs/compliance) |
+| 📋 SOC 2 | [`docs/compliance/`](docs/compliance) |
+| 🛡️ Threat model | [`docs/security/acr-strike-threat-model.md`](docs/security/acr-strike-threat-model.md) |
-## Documentation
-
-### Core Framework
-- **[Framework README](./README.md)** - This document
-- **[Control Plane Architecture](./docs/architecture/acr-control-plane-architecture.md)** - Technical architecture
-- **[Runtime Architecture](./docs/architecture/acr-runtime-architecture.md)** - Deployment patterns
-- **[Production Lifecycle](./docs/architecture/acr-production-lifecycle.md)** - End-to-end workflow
-
-### Pillar Specifications
-- **[Pillars Overview](./docs/pillars/README.md)** - All six control layers
-- **[Layer 1: Identity & Purpose Binding](./docs/pillars/01-identity-purpose-binding.md)**
-- **[Layer 2: Behavioral Policy Enforcement](./docs/pillars/02-behavioral-policy-enforcement.md)**
-- **[Layer 3: Autonomy Drift Detection](./docs/pillars/03-autonomy-drift-detection.md)**
-- **[Layer 4: Execution Observability](./docs/pillars/04-execution-observability.md)**
-- **[Layer 5: Self-Healing & Containment](./docs/pillars/05-self-healing-containment.md)**
-- **[Layer 6: Human Authority](./docs/pillars/06-human-authority.md)**
-
-### Technical Specifications
-- **[Telemetry Schema](./docs/specifications/telemetry-schema.md)** - JSON schema for observability
-- *Policy DSL Requirements* (planned v1.1)
-- *Drift Detection Requirements* (planned v1.1)
-
-### Security & Compliance
-- **[STRIKE Threat Model](./docs/security/acr-strike-threat-model.md)** - AI-specific threats
-- **[NIST AI RMF Mapping](./docs/compliance/acr-nist-ai-rmf-mapping.md)** - Compliance alignment
-- **[Glossary](./docs/security/acr-glossary.md)** - Term definitions
-
-### Getting Started
-- **[FAQ](./docs/guides/FAQ.md)** - Frequently asked questions
-- **[Implementation Guide](./docs/guides/acr-implementation-guide.md)** - Deployment guidance
-- **[Use Cases](./docs/guides/acr-use-cases.md)** - Real-world scenarios
+## Project status
----
+- ✅ Six pillars implemented and tested
+- ✅ Production-grade Kubernetes manifests
+- ✅ Automated migrations (no manual `alembic upgrade head`)
+- ✅ Compliance mappings for NIST AI RMF, ISO 42001, SOC 2
+- ✅ 160+ unit tests, evidence-bundle integration tests
+- 🔗 [Roadmap](./ROADMAP.md) · [Changelog](./CHANGELOG.md) · [Adoption guide](./ADOPTION.md)
## Contributing
-ACR is an open framework designed to evolve with the autonomous AI ecosystem.
-
-**Contribution areas:**
-- Control layer refinements and extensions
-- Implementation pattern documentation
-- Threat model expansion (new attack vectors, mitigations)
-- Standards mappings (EU AI Act, sector-specific regulations)
-- Case studies and deployment experiences
-- Research on drift detection, policy languages, observability schemas
-
-**Contribution process:**
-1. Review existing issues and discussions
-2. Open issue describing proposed contribution
-3. Discuss approach and alignment with ACR principles
-4. Submit pull request with documentation updates
-5. Community review and merge
-
-**Code contributions:**
-ACR is an architectural framework; reference implementations are welcome but maintained separately. The core framework repository focuses on specifications, design patterns, and architectural guidance.
-
-See [CONTRIBUTING.md](./CONTRIBUTING.md) for detailed guidelines.
+We welcome PRs, issues, and threat-model discussions. See [`CONTRIBUTING.md`](./CONTRIBUTING.md), [`GOVERNANCE.md`](./GOVERNANCE.md), and [`CODE_OF_CONDUCT.md`](./CODE_OF_CONDUCT.md). To report a vulnerability, see [`SECURITY.md`](./SECURITY.md).
---
-## Framework Governance
-
-**Maintainer:** Adam DiStefano ([@SynergeiaLabs](https://github.com/SynergeiaLabs))
-
-**Roadmap:**
-- **v1.0 (Current):** Core six-layer architecture, NIST/ISO mappings, threat model
-- **v1.1 (Q2 2026):** Expanded implementation patterns, telemetry schema standardization
-- **v1.2 (Q3 2026):** Multi-model orchestration patterns, federated governance
-- **v2.0 (2027):** Extensions for emerging AI architectures, regulatory compliance modules
+
-See [ROADMAP.md](./ROADMAP.md) for detailed development plan. For project structure, decision-making, and working groups, see [GOVERNANCE.md](./GOVERNANCE.md).
+## **Like OWASP for AppSec. Like MITRE ATT&CK for threats.**
+## **ACR is the reference architecture for runtime AI governance.**
-**Community:**
-- GitHub Discussions: Architecture questions, use case sharing
-- Monthly community calls: Framework evolution, implementation experiences (planned Q2 2026)
-- Working groups: Drift detection, policy languages, observability standards
+[](LICENSE)
+[](./ADOPTION.md)
+[](https://github.com/AdamDiStefanoAI/acr-framework)
----
-
-## Implementations
-
-Organizations and vendors implementing ACR-aligned solutions are listed below. To **add your implementation**, see [ADOPTION.md](./ADOPTION.md) (criteria, maturity levels, and how to submit a listing).
-
-**Open Source:**
-- [ACR Control Plane](./implementations/acr-control-plane) β reference runtime control-plane implementation (FastAPI + OPA + Postgres + Redis) with trust-path enforcement ([site](https://autonomouscontrol.io/control-plane)).
-- *[Add community implementations as they emerge]*
-
-**Commercial:**
-- *[Products implementing ACR patterns can self-register here]*
-
-**Research:**
-- *[Academic implementations and extensions]*
-
-**Note:** ACR is a framework specification. Listing does not imply endorsement; implementations self-declare alignment. Maturity levels (1–3) are defined in [ADOPTION.md](./ADOPTION.md).
-
----
-
-## License
-
-Apache 2.0 License - see [LICENSE](./LICENSE)
-
-This framework is freely available for use, modification, and distribution. Commercial implementations are encouraged.
-
----
-
-## Citation and Branding
-
-If you reference ACR in research, standards, or product documentation, please cite:
-
-```bibtex
-@misc{acr-framework-2026,
- author = {DiStefano, Adam},
- title = {ACR Framework: Autonomous Control \& Resilience for Runtime AI Governance},
- year = {2026},
- publisher = {GitHub},
- journal = {GitHub repository},
- howpublished = {\url{https://github.com/SynergeiaLabs/acr-framework}},
- version = {1.0}
-}
-```
-
-Implementers may use the **ACR Aligned** badge and refer to [ADOPTION.md](./ADOPTION.md) for citation text, logo usage, and maturity levels.
-
----
-
-## Resources
-
-- **Website:** [autonomouscontrol.io](https://autonomouscontrol.io)
-- **Documentation:** [docs](./docs)
-- **Adoption & maturity:** [ADOPTION.md](./ADOPTION.md)
-- **Governance:** [GOVERNANCE.md](./GOVERNANCE.md)
-- **Contributing:** [CONTRIBUTING.md](./CONTRIBUTING.md)
-- **Discussions:** [GitHub Discussions](https://github.com/SynergeiaLabs/acr-framework/discussions)
-- **Issues:** [GitHub Issues](https://github.com/SynergeiaLabs/acr-framework/issues)
-
----
+**[autonomouscontrol.io/control-plane](https://autonomouscontrol.io/control-plane)**
-**ACR Framework v1.0** | March 2026 | Runtime Governance for Autonomous AI | [Cite](README.md#citation-and-branding) | [Adopt](ADOPTION.md)
+