Agentic Compliance Auditor is a version-aware audit system for checking whether internal policies, procedures, control libraries, and external guidance remain aligned over time. It ingests policy material, parses documents into sections, extracts normalized control statements, compares document versions and source pairs, generates typed findings with citations, routes those findings into review tasks, preserves audit lineage, and exposes evaluation and observability outputs through API and UI surfaces.
Policy environments do not fail only because content is missing. They also fail because related documents evolve at different speeds. A policy may require one timeline, a control standard may require another, and a procedure may continue referencing an older control version. Those mismatches create operational, governance, and regulatory risk even when every document appears reasonable in isolation.
Agentic Compliance Auditor makes that drift inspectable. It centers deterministic comparison rules as the authoritative layer, uses AI assistance as a secondary aid, and keeps every finding grounded in version history, citations, review actions, and audit events.
- Version-aware ingestion of internal and external policy material
- Section parsing and normalized control statement extraction
- Rules-first contradiction and drift detection
- Typed findings with cited source and target evidence
- Review-task routing with reviewer actions and audit logging
- Evaluation reporting and degraded-mode visibility
- A seeded end-to-end workflow rendered in a React UI
Agentic Compliance Auditor is implemented as a modular monolith with a Django backend and a React frontend.
The backend is built with Django 5.1.15, Django REST Framework, drf-spectacular, and SimpleJWT. It is organized into domain apps that separate ingestion, lineage, statement extraction, comparison, findings, review operations, audit logging, evaluation, observability, and health concerns.
Core backend apps:
`core`, `accounts`, `documents`, `lineage`, `sectioning`, `statements`, `comparisons`, `findings`, `reviews`, `audits`, `evals`, `observability`, `health`
The API exposes endpoints for document ingestion, lineage inspection, comparison runs, findings, review tasks, audit events, eval reports, and operational metrics.
The frontend is a React, TypeScript, and Vite application organized by feature area. It renders seeded workflow data for documents, lineage, comparisons, findings, review queue, metrics, eval results, and admin utilities.
- PostgreSQL via `pgvector/pgvector:pg16`
- Redis 7 for queue and cache support
- Celery running with `-P solo` on Windows
- Django Channels configured for WebSocket readiness in v1

`pgvector` is enabled only for statement-level similarity support on `ControlStatement.embedding`.
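Statement-level similarity over `ControlStatement.embedding` reduces to nearest-neighbor search in vector space. As an illustration of the underlying comparison (assuming cosine distance; pgvector also offers L2 and inner-product operators), the core computation is:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In the real system this comparison runs inside PostgreSQL via the pgvector operators rather than in Python.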
The intended local runtime is:
- backend running locally
- frontend running locally
- PostgreSQL running in Docker Compose
- Redis running in Docker Compose
- Create or upload a policy document.
- Persist the document with checksum-based deduplication.
- Parse the document into sections.
- Extract normalized control statements.
- Launch a comparison run against one or more targets.
- Apply deterministic contradiction and drift rules.
- Generate typed findings with citations and memo output.
- Route reviewable findings into review tasks.
- Record audit events for traceability.
- Expose metrics and eval results for inspection.
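The persistence step above relies on checksum-based deduplication. A minimal in-memory sketch of the idea, assuming a SHA-256 digest as the dedup key (the actual implementation persists to PostgreSQL and may differ):

```python
import hashlib

class DocumentStore:
    """Illustrative in-memory store with checksum-based deduplication."""

    def __init__(self) -> None:
        self._by_checksum: dict[str, bytes] = {}

    def ingest(self, content: bytes) -> tuple[str, bool]:
        # The digest of the raw bytes acts as the document's identity.
        key = hashlib.sha256(content).hexdigest()
        created = key not in self._by_checksum
        if created:
            self._by_checksum[key] = content
        return key, created
```

Re-ingesting identical bytes returns the same checksum with `created=False`, so repeated uploads never create duplicate records.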
Frontend routes:

- `/` — document library
- `/documents/:id` — document metadata, sections, statements, and lineage
- `/lineage` — version and relationship chains
- `/comparisons/new` — comparison builder
- `/findings` — findings dashboard
- `/findings/:id` — finding detail with citations and memo
- `/review-queue` — reviewer work queue
- `/metrics` — system metrics and operational overview
- `/evals` — latest evaluation metrics
- `/admin-tools` — admin and script entry points
- Python 3.13
- Django 5.1.15
- Django REST Framework
- drf-spectacular
- SimpleJWT
- Celery
- Redis
- Channels
- pgvector Python client
- React
- TypeScript
- Vite
- React Router
- TanStack Query
- Axios
- Vitest
- Testing Library
- PostgreSQL with pgvector
- Redis 7
- Docker Compose
- Ruff
- Black
- pytest
- npm
- GitHub Actions
```
agentic-compliance-auditor/
├── .github/
│   └── workflows/
├── backend/
│   ├── apps/
│   ├── config/
│   ├── requirements/
│   └── tests/
├── demo_data/
├── docs/
│   ├── adr/
│   ├── architecture/
│   ├── demos/
│   ├── diagrams/
│   ├── domain/
│   └── screenshots/
├── evals/
│   ├── datasets/
│   ├── reports/
│   ├── runs/
│   └── schemas/
├── frontend/
│   ├── package.json
│   ├── vite.config.ts
│   ├── vitest.config.ts
│   ├── tsconfig.json
│   ├── src/
│   └── tests/
├── infra/
│   ├── nginx/
│   └── scripts/
├── .editorconfig
├── .env.example
├── .gitignore
├── .pre-commit-config.yaml
├── docker-compose.yml
├── LICENSE
├── Makefile
├── package.json
├── pyproject.toml
└── README.md
```
- Python 3.13
- Node 24
- npm
- Docker Desktop with Docker Compose
- Git
```powershell
cd D:\AI-Projects
git clone <your-repository-url> agentic-compliance-auditor
cd D:\AI-Projects\agentic-compliance-auditor
Copy-Item .env.example .env -Force
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor
docker compose up -d db redis
docker compose ps
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
python -m venv .venv
& .\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
pip install -r requirements\dev.txt
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\frontend
npm install
```

Core configuration is defined in `.env.example`.
- Database: `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_TEST_DB`
- Redis: `REDIS_HOST`, `REDIS_PORT`
- Django: `DJANGO_SECRET_KEY`, `DJANGO_DEBUG`, `DJANGO_ALLOWED_HOSTS`, `DJANGO_SETTINGS_MODULE`
- Ports: `BACKEND_PORT`, `FRONTEND_PORT`
- CORS and CSRF: `CORS_ALLOWED_ORIGINS`, `CSRF_TRUSTED_ORIGINS`
- JWT: `JWT_ACCESS_MINUTES`, `JWT_REFRESH_DAYS`
- AI provider: `AI_PROVIDER`, `AI_MODEL_NAME`, `OPENAI_API_KEY`
- Workflow defaults: `DEFAULT_REVIEW_QUEUE`, `DEFAULT_SLA_HOURS`, `PGVECTOR_DIMENSIONS`
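A small helper shows how settings like these are typically consumed in Django settings modules; the default values below are placeholders for illustration, not the project's actual defaults:

```python
import os

def env(name: str, default: str) -> str:
    """Read a configuration value from the environment, falling back to a default."""
    return os.environ.get(name, default)

# Placeholder defaults, not the project's real values:
JWT_ACCESS_MINUTES = int(env("JWT_ACCESS_MINUTES", "30"))
PGVECTOR_DIMENSIONS = int(env("PGVECTOR_DIMENSIONS", "1536"))
```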
```powershell
cd D:\AI-Projects\agentic-compliance-auditor
docker compose up -d db redis
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
& .\.venv\Scripts\Activate.ps1
python manage.py migrate
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
& .\.venv\Scripts\Activate.ps1
python ..\infra\scripts\seed_demo_data.py
python ..\infra\scripts\generate_eval_cases.py
python ..\infra\scripts\run_eval_suite.py
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
& .\.venv\Scripts\Activate.ps1
python manage.py runserver
```

This is a long-running process. Stop it with CTRL + C.

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
& .\.venv\Scripts\Activate.ps1
celery -A config worker -l info -P solo
```

This is a long-running process. Stop it with CTRL + C.

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\frontend
npm run dev
```

This is a long-running process. Stop it with CTRL + C.

```powershell
cd D:\AI-Projects\agentic-compliance-auditor
docker compose down
```

The repository includes deterministic seeded data so the full workflow can be inspected without manual data preparation. The seed script creates:
- 3 users
- 12 documents
- 12 sections
- 12 statements
- 1 comparison run
- 1 finding
- 2 citations
- 1 memo
- 1 review task
- 13 audit events
- 1 eval run
A seeded comparison highlights a timeline mismatch between:
- source: `Complaints Escalation Policy v3`
- target: `Complaints Control Standard v5`
The source requires acknowledgment within 10 business days. The target requires 5 business days. That mismatch produces a typed finding with citations, memo output, and a routed review task.
The seeded dataset also includes:
- internal policies
- control-library content
- procedure references
- external guidance examples
- lineage relationships
- observability prompt versions
- evaluation baseline metrics
Health:

- `GET /health/live`
- `GET /health/ready`
- `GET /health/deps`

Documents:

- `GET /api/documents/`
- `POST /api/documents/`
- `GET /api/documents/{id}/`
- `PATCH /api/documents/{id}/`
- `GET /api/documents/{id}/sections/`
- `GET /api/documents/{id}/statements/`
- `GET /api/documents/{id}/lineage/`

Lineage:

- `GET /api/lineage/`
- `POST /api/lineage/`
- `GET /api/lineage/version-chains/`

Comparisons:

- `GET /api/comparisons/runs/`
- `POST /api/comparisons/runs/`
- `GET /api/comparisons/runs/{id}/`
- `POST /api/comparisons/runs/{id}/retry/`
- `POST /api/comparisons/runs/{id}/replay/`

Findings:

- `GET /api/findings/`
- `GET /api/findings/{id}/`
- `GET /api/findings/{id}/citations/`
- `GET /api/findings/{id}/memo/`
- `GET /api/findings/{id}/export_packet/`

Review tasks:

- `GET /api/review-tasks/`
- `POST /api/review-tasks/`
- `GET /api/review-tasks/{id}/`
- `POST /api/review-tasks/{id}/assign/`
- `POST /api/review-tasks/{id}/approve/`
- `POST /api/review-tasks/{id}/override/`
- `POST /api/review-tasks/{id}/dismiss/`
- `POST /api/review-tasks/{id}/escalate/`

Audit events:

- `GET /api/audit-events/`
- `GET /api/audit-events/{id}/`

Evals and metrics:

- `GET /api/evals/runs/`
- `POST /api/evals/runs/`
- `GET /api/evals/runs/{id}/`
- `GET /api/evals/reports/latest/`
- `GET /api/metrics/overview/`
- `GET /api/metrics/review-ops/`
- `GET /api/metrics/conflicts/`

Schema and docs:

- `GET /api/schema/`
- `GET /api/docs/`
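For example, launching a comparison run over the API could look like the following. The payload field names are assumptions to be checked against `GET /api/schema/`, and the port assumes the default `runserver` address:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # assumption: default Django runserver port

def build_comparison_request(token: str, source_id: int,
                             target_ids: list[int]) -> request.Request:
    """Construct (but do not send) an authenticated POST to start a comparison run."""
    # Field names are illustrative; the real contract lives in the OpenAPI schema.
    body = json.dumps({"source_document": source_id, "targets": target_ids}).encode()
    return request.Request(
        f"{BASE_URL}/api/comparisons/runs/",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",  # SimpleJWT access token
            "Content-Type": "application/json",
        },
    )

# Send with request.urlopen(...) once the backend is running.
```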
The eval pipeline uses synthetic datasets to test contradiction detection, drift behavior, stale-reference handling, no-conflict cases, adversarial wording, and fallback behavior.
`contradiction_cases`, `drift_cases`, `stale_reference_cases`, `no_conflict_cases`, `adversarial_cases`, `fallback_cases`
- contradiction precision
- contradiction recall
- stale-reference accuracy
- citation validity rate
- review routing accuracy
Seeded baseline results:

- contradiction precision: `1.00`
- contradiction recall: `1.00`
- stale-reference accuracy: `1.00`
- citation validity rate: `1.00`
- review routing accuracy: `1.00`
The latest machine-readable report is written to:
`evals/reports/latest.json`
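The precision and recall figures follow the standard definitions over true positives, false positives, and false negatives; a minimal reference computation:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard precision/recall; a perfect detector yields (1.0, 1.0)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```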
The workflow is designed to continue operating when AI assistance is unavailable or intentionally disabled.
If the contradiction-analysis prompt configuration is inactive, the comparison flow can still:
- run deterministic rules
- create review-required findings
- attach citations
- create review tasks
- record degraded execution status in observability outputs
Deterministic behavior is preserved across failure scenarios such as:

- provider unavailable
- schema validation failure
- model timeout
- stale or missing prompt configuration
- policy and control drift without model support
AI assistance can improve explanation quality, but deterministic rules remain authoritative for core contradiction detection and review routing.
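The rules-first contract above can be sketched as a wrapper that treats AI output as optional enrichment; the names here are illustrative, not the project's actual interfaces:

```python
from typing import Callable, Optional

def compare(rule_engine: Callable[[], list[dict]],
            ai_explain: Optional[Callable[[list[dict]], str]] = None) -> dict:
    """Run deterministic rules first; degrade gracefully if the AI step fails."""
    findings = rule_engine()  # authoritative, always runs
    explanation, degraded = None, False
    if ai_explain is not None:
        try:
            explanation = ai_explain(findings)
        except Exception:
            # AI failure is recorded as degraded status, never blocks findings.
            degraded = True
    return {"findings": findings, "explanation": explanation, "degraded": degraded}
```

An AI timeout or schema failure therefore yields the same findings as a successful run, just without an explanation and with `degraded` set for observability.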
- This system is a policy-control audit workflow, not a legal advice system.
- It is not a regulatory interpretation engine.
- It does not claim production compliance accuracy.
- The current extraction and contradiction logic is intentionally narrow and deterministic.
- `pgvector` support is limited to statement similarity storage on `ControlStatement.embedding`; it is not the primary audit mechanism.
- The default AI provider is mock-backed, and deterministic rules remain the source of truth.
- The seeded workflow is intended to make system behavior inspectable and reproducible in local development.
Architecture:

- `docs/architecture/system-overview.md`
- `docs/architecture/ingestion-pipeline.md`
- `docs/architecture/version-lineage.md`
- `docs/architecture/contradiction-model.md`
- `docs/architecture/review-workflow.md`
- `docs/architecture/evaluation-design.md`
- `docs/architecture/failure-modes.md`

ADRs:

- `docs/adr/0001-modular-monolith.md`
- `docs/adr/0002-postgresql-primary-store.md`
- `docs/adr/0003-structured-control-statements.md`
- `docs/adr/0004-rules-first-ai-assisted.md`
- `docs/adr/0005-version-lineage-first.md`

Domain:

- `docs/domain/policy-taxonomy.md`
- `docs/domain/contradiction-types.md`
- `docs/domain/severity-model.md`
- `docs/domain/synthetic-data-spec.md`

Demos:

- `docs/demos/seeded-policy-packs.md`
- `docs/demos/failure-demo.md`
```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
& .\.venv\Scripts\Activate.ps1
pytest .\tests\unit
pytest .\tests\integration
pytest .\tests\api
pytest .\tests\workflows
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\frontend
npx vitest run --config .\vitest.config.ts
```

```powershell
cd D:\AI-Projects\agentic-compliance-auditor\backend
& .\.venv\Scripts\Activate.ps1
python ..\infra\scripts\generate_eval_cases.py
python ..\infra\scripts\run_eval_suite.py
```

The repository includes separate workflows for:
- backend CI
- frontend CI
- eval regression
- dependency security checks
This project is licensed under the MIT License.
Copyright (c) 2026 Cherry Augusta
See the LICENSE file for full details.















