AI-powered process discovery and documentation engine – from raw text to BPMN 2.0, SIPOC, and RACI. Self-hosted with bring-your-own LLM.
🌐 Website & Enterprise Options: processace.com
⚠️ Status: Beta. Core ingestion, generation, and editing features are functional. APIs may evolve.
ProcessAce turns raw process evidence into standard, tool-agnostic process documentation in minutes.
- Ingest Evidence:
- Text documents (SOPs, meeting notes, emails).
- Planned: Audio/Video recordings, Images.
- Analyze & Normalize:
- Uses LLMs (OpenAI, Google Gemini, Anthropic Claude) to extract steps, actors, and systems.
- Normalizes data into a structured evidence model.
- Generate Artifacts:
- BPMN 2.0 Diagrams: Auto-generated with professional layout (Manhattan routing, grid system).
- SIPOC Tables: Supplier-Input-Process-Output-Customer matrices.
- RACI Matrices: Responsible-Accountable-Consulted-Informed matrices.
- Narrative Docs: Markdown-based process descriptions.
- Interactive Editing:
- BPMN Viewer/Editor: View and modify diagrams directly in the browser (`bpmn-js` v18).
- Rich Text: Edit narrative docs with a WYSIWYG Markdown editor (`EasyMDE`).
- Tables: Interactive SIPOC/RACI editing with add/delete row support.
- Export Artifacts:
- BPMN: Export as XML (for tools) or PNG/SVG (for presentations).
- SIPOC/RACI: Export tables as CSV.
- Narrative: Download as Markdown or Print/Save as PDF.
- User Authentication & Workspaces:
- Secure Login: Email/password with JWT (HTTP-only cookies).
- Role-Based Access: Admin, Editor, and Viewer roles. First registered user becomes Admin.
- Workspaces: Create, switch, and share workspaces for organizing projects (Admin/Editor/Viewer roles).
- User Data Isolation: Jobs and artifacts scoped per user and workspace.
- Multi-Provider LLM Support:
- Choose provider and model for each processing job (OpenAI, Google GenAI, Anthropic).
- API keys are stored encrypted (AES-256-CBC) in the database.
- Robust Architecture:
- Dockerized: Easy deployment with Docker Compose (App + Redis).
- Async Processing: Redis-backed job queue (BullMQ) for long-running generative tasks.
- Persistence: SQLite database (`better-sqlite3`, WAL mode).
- Docker & Docker Compose (Recommended)
- An LLM API key (OpenAI, Google GenAI, or Anthropic)
- A 32-byte hex string, used as `ENCRYPTION_KEY` to encrypt stored API keys
- Clone the repository:

  ```
  git clone https://github.com/jgleiser/ProcessAce.git
  cd ProcessAce
  ```

- Configure Environment:

  ```
  cp .env.example .env
  # Edit .env and set ENCRYPTION_KEY (required for secure API key storage)
  ```

- Run with Docker Compose:

  ```
  docker compose up -d --build
  ```

  Note (Windows/Mac/WSL2): If you encounter `SQLITE_IOERR_SHMOPEN` errors, ensure the environment variable `DISABLE_SQLITE_WAL=true` is set in `docker-compose.yml` (it is by default).

- Open the Web UI: Navigate to `http://localhost:3000`.

- Create an Account: Go to `/register.html` to create your first user account (becomes Admin), then log in.

- Configure LLM Provider: Go to App Settings (`/app-settings.html`) to set your LLM provider and API key.

- Test the Magic: Drop the provided `samples/sample_process.txt` file into the upload zone on your dashboard to see your first BPMN diagram and SIPOC table generated instantly!
The base Docker stack is cloud-only by default. Bundled Ollama is now opt-in through a Compose override, and host-native Ollama remains supported through environment variables.
For the full setup and troubleshooting guide, see docs/ollama_guide.md.
Use the dedicated Ollama override:
```
docker compose -f docker-compose.yml -f docker-compose.ollama.yml up -d --build
```

In this mode, the app container uses:

```
OLLAMA_BASE_URL_DEFAULT=http://ollama:11434/v1
OLLAMA_PULL_HOST=http://ollama:11434
```
If you only want OpenAI, Google GenAI, or Anthropic, the default stack stays lean:
```
docker compose up -d --build
```

No bundled `ollama` container is started in this mode.
Docker Desktop on Windows does not currently provide a stable AMD passthrough path for the bundled Ollama container. For Windows hosts with AMD GPUs, run Ollama on the host and point the app container to it:
- Install and start Ollama on Windows.

- Set the following in `.env`:

  ```
  OLLAMA_BASE_URL_DEFAULT=http://host.docker.internal:11434/v1
  OLLAMA_PULL_HOST=http://host.docker.internal:11434
  ```

- Start the stack normally:

  ```
  docker compose up -d --build
  ```
The App Settings page and Ollama model manager will use the host Ollama instance.
For Linux hosts with ROCm-capable AMD GPUs, use both the Ollama override and the AMD override:
```
docker compose -f docker-compose.yml -f docker-compose.ollama.yml -f docker-compose.ollama-amd.yml up -d --build
```

This override switches the Ollama image to `ollama/ollama:rocm` and passes through `/dev/kfd` and `/dev/dri`.
Host prerequisites:
- Linux host running Docker Engine
- ROCm-capable AMD GPU with a working host driver stack
- Docker access to `/dev/kfd` and `/dev/dri`
Bundled or host Ollama:
- Open `/app-settings.html`
- Select `Ollama (Local)`
- Use `Load Models` or `Check Status` to verify connectivity
- Manage curated local generation models in `2.1 Local Model Manager`
Important:
- Ollama is supported for artifact generation.
- Transcription remains on OpenAI-compatible STT providers.
Linux AMD Docker:
- Check device passthrough: `docker compose exec ollama ls /dev/kfd /dev/dri`
- Run a model and verify it is loaded: `docker compose exec ollama ollama ps`
Windows host fallback:
- Confirm the settings page loads models through `http://host.docker.internal:11434/v1`
- Verify GPU activity on the Windows host while Ollama runs
- If the Linux AMD container cannot see `/dev/kfd` or `/dev/dri`, the host ROCm or graphics stack is not exposed to Docker correctly.
- If you expected bundled Ollama but no `ollama` container exists, start the stack with `docker-compose.ollama.yml`.
- If model pulls still hit the wrong Ollama endpoint, check `OLLAMA_BASE_URL_DEFAULT` and `OLLAMA_PULL_HOST` in `.env`.
- If Ollama is unreachable from Docker in host mode, confirm the host Ollama service is listening on port `11434` and reachable through `host.docker.internal`.
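A quick way to rule out the endpoint misconfigurations above is to parse the two variables and confirm they point at the same server, with the base URL carrying the `/v1` suffix. The helper below is illustrative and not part of ProcessAce:

```javascript
// Sanity-check OLLAMA_BASE_URL_DEFAULT and OLLAMA_PULL_HOST for the common
// mistakes: missing /v1 suffix, or the two variables targeting different hosts.
function checkOllamaEnv(baseUrl, pullHost) {
  const base = new URL(baseUrl);
  const pull = new URL(pullHost);
  const problems = [];
  if (!base.pathname.replace(/\/$/, "").endsWith("/v1")) {
    problems.push("OLLAMA_BASE_URL_DEFAULT should end with /v1 (OpenAI-compatible endpoint)");
  }
  if (base.host !== pull.host) {
    problems.push("base URL and pull host point at different servers");
  }
  return problems;
}

console.log(checkOllamaEnv("http://ollama:11434/v1", "http://ollama:11434"));
// → [] (consistent)
console.log(checkOllamaEnv("http://host.docker.internal:11434", "http://ollama:11434"));
// → two problems: missing /v1 and mismatched hosts
```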
ProcessAce does not bundle or resell any LLM. You configure your own provider and keys via the App Settings page. The application natively supports:
- OpenAI (default: `gpt-5-nano-2025-08-07`)
- Google GenAI (default: `gemini-2.5-flash-lite`)
- Anthropic (default: `claude-haiku-4-5-20251001`)
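Since the provider and model can be chosen per processing job, the defaults above act as fallbacks. A sketch of that resolution logic (`resolveModel` and the provider keys are hypothetical; the model names are the documented defaults):

```javascript
// Default model per provider, as documented in this README (illustrative lookup).
const DEFAULT_MODELS = {
  openai: "gpt-5-nano-2025-08-07",
  google: "gemini-2.5-flash-lite",
  anthropic: "claude-haiku-4-5-20251001",
};

// Resolve a job's model: an explicit per-job choice wins, otherwise fall back
// to the provider default.
function resolveModel(provider, requestedModel) {
  if (requestedModel) return requestedModel;
  const fallback = DEFAULT_MODELS[provider];
  if (!fallback) throw new Error(`Unknown provider: ${provider}`);
  return fallback;
}

console.log(resolveModel("openai")); // → gpt-5-nano-2025-08-07
```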
ProcessAce is built for reliability and process mining readiness:
- Frontend: Vanilla HTML5/JS/CSS Single Page Application.
- Backend: Node.js Express API.
- Database: SQLite (`better-sqlite3`).
- Queue & Workers: Redis (BullMQ) for background job processing.
- Audit Trails: Structured, event-style logging (Pino) for events like `job_queued`, `llm_call`, and `artifact_version_created`.
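Pino-style structured events like those named above are single-line JSON records, one per event. A minimal sketch of the shape (ProcessAce uses Pino itself; every field here other than `event` is an illustrative assumption):

```javascript
// Emit a structured event as one JSON line, in the style of Pino's output.
// The field names besides `event` are illustrative, not ProcessAce's schema.
function logEvent(event, fields = {}) {
  const record = { time: Date.now(), event, ...fields };
  console.log(JSON.stringify(record));
  return record;
}

logEvent("job_queued", { jobId: "abc123", queue: "generation" });
logEvent("artifact_version_created", { artifact: "bpmn", version: 2 });
```

One JSON object per line keeps the audit trail both grep-friendly and trivially parseable by log aggregators.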
See docs/architecture.md for a deep dive.
- User Guide: How to use the application.
- API Reference: REST API endpoint documentation.
- Architecture: System design and component details.
- Agent Guidelines: Coding standards for AI agents.
- Roadmap: Development phases and what's coming next.
ProcessAce is source-available under the ProcessAce Sustainable Use License.
- Free to use internally, self-host, and modify for internal use.
- You may not run ProcessAce as a multi-tenant SaaS/platform or resell it without a commercial license.
See LICENSE.md for the full terms. For commercial/enterprise licensing, visit processace.com or see COMMERCIAL_LICENSE.md.
Contributions are welcome! Please check CONTRIBUTING.md and CODE_OF_CONDUCT.md. By contributing, you agree that your contributions may be used in both the Sustainable Use edition and any future commercial editions of ProcessAce.