Your AI coding companion. Build features, fix bugs, and ship faster – with autonomous agents that plan, code, and validate for you.
This project is a Codex-based fork derived from https://github.com/AndyMik90/Auto-Claude.
Auto-Codex is a desktop app that supercharges your AI coding workflow. Whether you're a vibe coder just getting started or an experienced developer, Auto-Codex meets you where you are.
Powered by the OpenAI Codex CLI for reliable, sandboxed agent execution.
- Autonomous Tasks – Describe what you want to build, and agents handle planning, coding, and validation while you focus on other work
- Agent Terminals – Run Codex CLI in up to 12 terminals with a clean layout, smart naming based on context, and one-click task context injection
- Safe by Default – All work happens in git worktrees, keeping your main branch undisturbed until you're ready to merge
- Self-Validating – Built-in QA agents check their own work before you review
The result? 10x your output while maintaining code quality.
- Parallel Agents: Run multiple builds simultaneously while you focus on other work
- Context Engineering: Agents understand your codebase structure before writing code
- Self-Validating: Built-in QA loop catches issues before you review
- Isolated Workspaces: All work happens in git worktrees – your code stays safe
- AI Merge Resolution: Intelligent conflict resolution when merging back to main – no manual conflict fixing
- Memory Layer: Agents remember insights across sessions for smarter decisions
- Cross-Platform: Desktop app runs on Mac, Windows, and Linux
- Any Project Type: Build web apps, APIs, CLIs – works with any software project
The Desktop UI is the recommended way to use Auto-Codex. It provides visual task management, real-time progress tracking, and a Kanban board interface.
- Node.js 18+ - Download Node.js
- Python 3.10+ - Download Python (3.12+ recommended)
- Docker Desktop - Required only if you enable the optional Memory Layer
- Codex CLI - Install with `npm install -g @openai/codex`
- OpenAI Account - Required for Codex CLI access (API key or OAuth token for non-interactive use)
- Git Repository - Your project must be initialized as a git repository
```bash
# Verify install
codex --version

# Interactive login (recommended)
codex login
```

If you use a third-party gateway/activator that authenticates Codex CLI via API key (commonly stored in `~/.codex/auth.json` + `~/.codex/config.toml`), Auto-Codex will treat that as authenticated as long as those files are present and valid. Note: GUI-launched apps often do not inherit `.zshrc` environment variables, so prefer the Codex config files over shell-only exports.
Headless/CI: set one of these instead of interactive login.

```bash
# Shell env (one-off)
export OPENAI_API_KEY=sk-...

# Or use an OAuth token from Codex CLI login (e.g. `codex login --device-auth`)
export CODEX_CODE_OAUTH_TOKEN=...

# Or point to an existing Codex CLI config directory
export CODEX_CONFIG_DIR=/path/to/codex/config

# Or rely on the default Codex CLI config at ~/.codex
# (disable with AUTO_CODEX_DISABLE_DEFAULT_CODEX_CONFIG_DIR=1)

# Or add any of the above to auto-codex/.env for repeat usage
OPENAI_API_KEY=sk-...
```

Auto-Codex requires a git repository to create isolated worktrees for safe parallel development. If your project isn't a git repo yet:

```bash
cd your-project
git init
git add .
git commit -m "Initial commit"
```

Why git? Auto-Codex uses git branches and worktrees to isolate each task in its own workspace, keeping your main branch clean until you're ready to merge. This allows you to work on multiple features simultaneously without conflicts.
Docker runs the FalkorDB database that powers Auto-Codex's cross-session memory.
| Operating System | Download Link |
|---|---|
| Mac (Apple Silicon M1/M2/M3/M4) | Download for Apple Chip |
| Mac (Intel) | Download for Intel Chip |
| Windows | Download for Windows |
| Linux | Installation Guide |
Not sure which Mac? Click the Apple menu → "About This Mac". Look for "Chip": M1/M2/M3/M4 = Apple Silicon, otherwise Intel.
After installing: Open Docker Desktop and wait for the whale icon (🐳) to appear in your menu bar/system tray.
Using the Desktop UI? It automatically detects Docker status and offers one-click FalkorDB setup. No terminal commands needed!
For detailed installation steps, troubleshooting, and advanced configuration, see guides/DOCKER-SETUP.md
For backup/restore, rollback, and operational procedures, see guides/OPERATIONS.md.
The Desktop UI runs Python scripts behind the scenes. Set up the Python environment:

```bash
cd auto-codex

# Using uv (recommended)
uv venv && uv pip install -r requirements.txt

# Or using standard Python
python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
```

Production (deterministic installs): use the lock file that matches your Python version.

```bash
# Example for Python 3.12/3.13
PY_VER_NODOT=$(python3 -c 'import sys; print(f"{sys.version_info[0]}{sys.version_info[1]}")')
uv pip install -r requirements-py${PY_VER_NODOT}.lock
```

The Auto-Codex Memory Layer provides cross-session context retention using a graph database:
```bash
# Pin image tags (required in production)
export FALKORDB_IMAGE_TAG=<version>
export GRAPHITI_MCP_IMAGE_TAG=<version>

# Make sure Docker Desktop is running, then:
docker-compose up -d falkordb
```

Then install and launch the Desktop UI:

```bash
cd auto-codex-ui

# Install dependencies (pnpm recommended, npm works too)
pnpm install
# or: npm install

# Dev mode (hot reload)
pnpm run dev
# or: npm run dev

# Production build + start
pnpm run build && pnpm run start
# or: npm run build && npm run start
```

Windows users: if installation fails with node-gyp errors, follow the steps below.
Auto-Codex automatically downloads prebuilt binaries for Windows. If prebuilts aren't available for your Electron version yet, you'll need Visual Studio Build Tools:
- Download Visual Studio Build Tools 2022
- Select "Desktop development with C++" workload
- In "Individual Components", add "MSVC v143 - VS 2022 C++ x64/x86 Spectre-mitigated libs"
- Restart your terminal and run `npm install` again
- Add your project in the UI
- Create a new task describing what you want to build
- Watch as Auto-Codex creates a spec, plans, and implements your feature
- Review changes and merge when satisfied
`.kiro/` (including `.kiro/specs`) is deprecated legacy tooling and is intentionally removed/ignored. Use `.auto-codex/` for specs and task data.
Queue up tasks and let AI handle the planning, coding, and validation – all in a visual interface. Track progress from "Planning" to "Done" while agents work autonomously.
Spawn up to 12 AI-powered terminals for hands-on coding. Inject task context with a click, reference files from your project, and work rapidly across multiple sessions.
Power users: Connect multiple OpenAI accounts to run even more agents in parallel – perfect for teams or heavy workloads.
Have a conversation about your project in a ChatGPT-style interface. Ask questions, get explanations, and explore your codebase through natural dialogue.
Based on your target audience, AI anticipates the most impactful features to build next and plans them for you. Prioritize what matters most to your users.
Let AI help you create a project that shines. Rapidly understand your codebase and discover:
- Code improvements and refactoring opportunities
- Performance bottlenecks
- Security vulnerabilities
- Documentation gaps
- UI/UX enhancements
- Overall code quality issues
Write professional changelogs effortlessly. Generate release notes from completed Auto-Codex tasks, or integrate with GitHub to create polished changelogs automatically.
See exactly what Auto-Codex understands about your project – the tech stack, file structure, patterns, and insights it uses to write better code.
When your main branch evolves while a build is in progress, Auto-Codex automatically resolves merge conflicts using AI – no manual `<<<<<<< HEAD` fixing required.
How it works:
- Git Auto-Merge First – Simple non-conflicting changes merge instantly without AI
- Conflict-Only AI – For actual conflicts, AI receives only the specific conflict regions (not entire files), achieving ~98% prompt reduction
- Parallel Processing – Multiple conflicting files resolve simultaneously for faster merges
- Syntax Validation – Every merge is validated before being applied
The result: A build that was 50+ commits behind main merges in seconds instead of requiring manual conflict resolution.
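The conflict-only step can be illustrated with a small sketch that pulls just the `<<<<<<<` / `=======` / `>>>>>>>` regions out of a conflicted file, so only those regions (rather than whole files) need to be sent to a model. This is an illustrative sketch, not the actual Auto-Codex resolver, and it ignores diff3-style base sections:

```python
def extract_conflict_regions(text: str) -> list[dict]:
    """Return each git conflict as {'ours': ..., 'theirs': ...}."""
    regions = []
    lines = text.splitlines()
    i = 0
    while i < len(lines):
        if lines[i].startswith("<<<<<<<"):
            i += 1
            ours = []
            while i < len(lines) and not lines[i].startswith("======="):
                ours.append(lines[i])
                i += 1
            i += 1  # skip the ======= separator
            theirs = []
            while i < len(lines) and not lines[i].startswith(">>>>>>>"):
                theirs.append(lines[i])
                i += 1
            regions.append({"ours": "\n".join(ours), "theirs": "\n".join(theirs)})
        i += 1
    return regions
```

On a file where only a few lines conflict, the extracted regions are a tiny fraction of the file, which is where the large prompt reduction comes from.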
For terminal-based workflows, headless servers, or CI/CD integration, see guides/CLI-USAGE.md.
Run a quick Python import validation across the repo:

```bash
./scripts/validate-imports.sh
```

Auto-Codex focuses on three core principles: context engineering (understanding your codebase before writing code), good coding standards (following best practices and patterns), and validation logic (ensuring code works before you see it).
Phase 1: Spec Creation (3-8 phases based on complexity)
Before any code is written, agents gather context and create a detailed specification:
- Discovery – Analyzes your project structure and tech stack
- Requirements – Gathers what you want to build through interactive conversation
- Research – Validates external integrations against real documentation
- Context Discovery – Finds relevant files in your codebase
- Spec Writer – Creates a comprehensive specification document
- Spec Critic – Self-critiques using extended thinking to find issues early
- Planner – Breaks work into subtasks with dependencies
- Validation – Ensures all outputs are valid before proceeding
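The flow above can be sketched as a gated pipeline: each phase runs in order, and its output must validate before the next phase starts. The phase names come from the list above, but the orchestration code here is purely illustrative, not Auto-Codex's actual runner:

```python
# Illustrative sketch of a phase pipeline with validation gates.
PHASES = ["discovery", "requirements", "research", "context_discovery",
          "spec_writer", "spec_critic", "planner", "validation"]

def run_pipeline(run_phase, validate):
    """Run each phase in order; abort if any output fails validation.

    run_phase(name) -> output, and validate(name, output) -> bool are
    hypothetical callables standing in for the real agents.
    """
    outputs = {}
    for phase in PHASES:
        result = run_phase(phase)
        if not validate(phase, result):
            raise RuntimeError(f"phase {phase!r} produced invalid output")
        outputs[phase] = result
    return outputs
```

Failing fast at each gate is what keeps a bad discovery or research step from contaminating the spec that the implementation phase later executes.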
Phase 2: Implementation
With a validated spec, coding agents execute the plan:
- Planner Agent – Creates subtask-based implementation plan
- Coder Agent – Implements subtasks one-by-one with verification
- QA Reviewer – Validates all acceptance criteria
- QA Fixer – Fixes issues in a self-healing loop (up to 50 iterations)
Each session runs with a fresh context window. Progress is tracked via `implementation_plan.json` and Git commits.
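Because progress lives in a plan file rather than in any one session's context, a fresh session can pick up where the last one left off. The snippet below uses a hypothetical plan shape to show the idea – the real `implementation_plan.json` schema may differ:

```python
import json

# Hypothetical plan shape; the actual schema may differ.
plan_json = """
{
  "subtasks": [
    {"id": 1, "title": "Add API route", "status": "done"},
    {"id": 2, "title": "Write tests", "status": "in_progress"},
    {"id": 3, "title": "Update docs", "status": "pending"}
  ]
}
"""

plan = json.loads(plan_json)
done = sum(1 for t in plan["subtasks"] if t["status"] == "done")
print(f"{done}/{len(plan['subtasks'])} subtasks complete")  # → 1/3 subtasks complete
```

A new session only needs to read this file (plus the git log) to know which subtask to tackle next.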
Phase 3: Merge
When you're ready to merge, AI handles any conflicts that arose while you were working:
- Conflict Detection – Identifies files modified in both main and the build
- 3-Tier Resolution – Git auto-merge → Conflict-only AI → Full-file AI (fallback)
- Parallel Merge – Multiple files resolve simultaneously
- Staged for Review – Changes are staged but not committed, so you can review before finalizing
Three-layer defense keeps your code safe:
- OS Sandbox – Bash commands run in isolation
- Filesystem Restrictions – Operations limited to the project directory
- Command Allowlist – Only approved commands, based on your project's stack
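The allowlist layer can be sketched as a simple first-token check. This is an illustrative sketch only – the allowed set here is hypothetical, and the real sandbox derives its list from your project's stack:

```python
import shlex

# Hypothetical allowlist; the real one is derived from the project's stack.
ALLOWED = {"git", "npm", "pnpm", "python3", "pytest", "ls", "cat"}

def is_command_allowed(command: str) -> bool:
    """Allow only commands whose first token is on the allowlist."""
    try:
        tokens = shlex.split(command)
    except ValueError:  # unbalanced quotes and similar malformed input
        return False
    return bool(tokens) and tokens[0] in ALLOWED
```

Checking the parsed first token (rather than doing a substring match) is the important design choice: it means `git status` passes while `gitx` or a command smuggled inside quotes does not.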
The Memory Layer is a hybrid RAG system combining graph nodes with semantic search to deliver the best possible context during AI coding. Agents remember insights from previous sessions, discovered codebase patterns persist and are reusable, and historical context helps agents make smarter decisions.
Architecture:
- Backend: FalkorDB (graph database) via Docker
- Library: Graphiti for knowledge graph operations
- Providers: OpenAI, Anthropic, Azure OpenAI, Google AI, or Ollama (local/offline)
| Setup | LLM | Embeddings | Notes |
|---|---|---|---|
| OpenAI | OpenAI | OpenAI | Simplest - single API key |
| Anthropic + Voyage | Anthropic | Voyage AI | High quality |
| Google AI | Gemini | Gemini | Single API key, fast inference |
| Ollama | Ollama | Ollama | Fully offline |
| Azure | Azure OpenAI | Azure OpenAI | Enterprise |
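A small sanity check over these provider settings can be sketched as follows. This is illustrative only – the function, its defaults, and the required-key mapping are assumptions (Azure's extra settings are omitted), not Auto-Codex's actual validation:

```python
VALID_LLM = {"openai", "anthropic", "azure_openai", "ollama", "google"}
VALID_EMBEDDER = {"openai", "voyage", "azure_openai", "ollama", "google"}

# API key each provider needs (sketch; Azure and Ollama omitted -- Azure
# uses several settings and Ollama runs locally without a key).
REQUIRED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "voyage": "VOYAGE_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def check_memory_config(env: dict) -> list[str]:
    """Return human-readable problems with a Memory Layer config; [] means OK."""
    problems = []
    llm = env.get("GRAPHITI_LLM_PROVIDER", "openai")
    emb = env.get("GRAPHITI_EMBEDDER_PROVIDER", "openai")
    if llm not in VALID_LLM:
        problems.append(f"unknown LLM provider: {llm}")
    if emb not in VALID_EMBEDDER:
        problems.append(f"unknown embedder provider: {emb}")
    for provider in (llm, emb):
        key = REQUIRED_KEYS.get(provider)
        if key and not env.get(key):
            problems.append(f"{provider} requires {key}")
    return problems
```

For example, the "Anthropic + Voyage" row from the table needs both `ANTHROPIC_API_KEY` and `VOYAGE_API_KEY` set before the Memory Layer can start.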
```
your-project/
├── .worktrees/                  # Created during build (git-ignored)
│   └── <spec-name>/             # Isolated workspace per spec (git worktree)
├── .auto-codex/                 # Per-project data (specs, plans, QA reports)
│   ├── specs/                   # Task specifications
│   ├── roadmap/                 # Project roadmap
│   └── ideation/                # Ideas and planning
├── auto-codex/                  # Python backend (framework code)
│   ├── run.py                   # Build entry point
│   ├── runners/spec_runner.py   # Spec creation orchestrator
│   ├── prompts/                 # Agent prompt templates
│   └── ...
├── auto-codex-ui/               # Electron desktop application
│   └── ...
└── docker-compose.yml           # FalkorDB for Memory Layer
```
You don't create these folders manually – they serve different purposes:

- `auto-codex/` – the framework repository itself (clone this once from GitHub)
- `.auto-codex/` – created automatically in YOUR project when you run Auto-Codex (stores specs, plans, QA reports)
- `.worktrees/` – isolated workspaces created during builds (git-ignored; clean up via `--discard`/`--cleanup-worktrees` when you no longer need them)
When using Auto-Codex on your project:

```bash
cd your-project/   # Your own project directory
python3 /path/to/auto-codex/run.py --spec 001
# Auto-Codex creates .auto-codex/ automatically in your-project/
```

When developing Auto-Codex itself:

```bash
git clone https://github.com/tytsxai/Auto-Codex.git
cd auto-codex/     # You're working in the framework repo
```

The `.auto-codex/` directory is gitignored and project-specific – you'll have one per project you use Auto-Codex on.
Desktop UI users: These are configured through the app settings – no manual setup needed.
Existing users migrating from Claude SDK should read MIGRATION.md.
| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | One of | OpenAI API key for Codex CLI and OpenAI-backed memory providers |
| `CODEX_CODE_OAUTH_TOKEN` | One of | OAuth token from Codex CLI login (e.g. `codex login --device-auth`) |
| `CODEX_CONFIG_DIR` | One of | Path to Codex CLI config directory for profile-based auth |
| `AUTO_CODEX_DISABLE_DEFAULT_CODEX_CONFIG_DIR` | No | Set to `1` to ignore the default `~/.codex` config directory |
| `AUTO_CODEX_BYPASS_CODEX_SANDBOX` | No | Set to `0` to keep Codex CLI sandboxing enabled |
| `AUTO_CODEX_LEGACY_SECURITY` | No | Set to `true` to disable security flag enforcement (backwards compatibility) |
| `AUTO_BUILD_MODEL` | No | Model override (default: `gpt-5.2-codex`). Note: reasoning is a runtime parameter; legacy suffix input like `-xhigh` is accepted but discouraged (it is not a real model ID) |
| `AUTO_BUILD_REASONING_EFFORT` | No | Reasoning effort override (`low`/`medium`/`high`/`xhigh`) |
| `GRAPHITI_ENABLED` | Recommended | Set to `true` to enable the Memory Layer |
| `GRAPHITI_LLM_PROVIDER` | For Memory | LLM provider: `openai`, `anthropic`, `azure_openai`, `ollama`, `google` |
| `GRAPHITI_EMBEDDER_PROVIDER` | For Memory | Embedder: `openai`, `voyage`, `azure_openai`, `ollama`, `google` |
| `ANTHROPIC_API_KEY` | For Anthropic | Required for the Anthropic LLM provider |
| `VOYAGE_API_KEY` | For Voyage | Required for Voyage embeddings |
| `GOOGLE_API_KEY` | For Google | Required for the Google AI (Gemini) provider |
See auto-codex/.env.example for complete configuration options.
Join our Discord to get help, share what you're building, and connect with other Auto-Codex users:
We welcome contributions! Whether it's bug fixes, new features, or documentation improvements.
See CONTRIBUTING.md for guidelines on how to get started.
This framework was inspired by Anthropic's Autonomous Coding Agent. Thank you to the Anthropic team for their innovative work on autonomous coding systems.
AGPL-3.0 - GNU Affero General Public License v3.0
This software is licensed under AGPL-3.0, which means:
- Attribution Required: You must give appropriate credit, provide a link to the license, and indicate if changes were made. When using Auto-Codex, please credit the project.
- Open Source Required: If you modify this software and distribute it or run it as a service, you must release your source code under AGPL-3.0.
- Network Use (Copyleft): If you run this software as a network service (e.g., SaaS), users interacting with it over a network must be able to receive the source code.
- No Closed-Source Usage: You cannot use this software in proprietary/closed-source projects without open-sourcing your entire project under AGPL-3.0.
In simple terms: You can use Auto-Codex freely, but if you build on it, your code must also be open source under AGPL-3.0 and attribute this project. Closed-source commercial use requires a separate license.
For commercial licensing inquiries (closed-source usage), please contact the maintainers.


