Codex-native prompts, templates, scripts, and agents that bring the neural-claude workflow to the Codex CLI. Everything is file-based, repo-local, and designed for repeatable iteration with clear state.
- No Claude-specific hooks, status lines, or TTS
- All state lives in `plans/` and `.codex/`
- Prompts are namespaced as `neural.*`
- Loop control: `neural.loop-start`, `neural.loop-plan`, `neural.loop-status`, `neural.loop-cancel`
- Planning: `neural.plan`, `neural.plan-execute`
- Memory: `neural.memory`, `neural.recall`
- Routing & analysis: `neural.route`, `neural.question`, `neural.pv`, `neural.evolve`
- Research: `neural.research`, `neural.gh-learn`, `neural.yt-learn`
- Sync & changelog: `neural.sync`, `neural.changelog-architect`
- Task tracking: `neural.todo-new`, `neural.todo-check`
- Meta creation: `neural.meta.agent`, `neural.meta.skill`, `neural.meta.prompt`, `neural.meta.improve`, `neural.meta.eval`, `neural.meta.brain`
- Output styles: `neural.output-style` (default/concise/table/yaml/html/genui)
- Skills & config: `neural.skill`, `neural.profile`, `neural.test`
Project-scoped skills in `.codex/skills/`:
- autonomous-loop: Ralph loop usage and guardrails
- worktree-manager: parallel worktrees for multi-session work
- code-reviewer: production-minded reviews
- memory-system: progress-log memory
- pattern-detector: PRD/progress pattern analysis
- prompt-engineering: prompt creation/refinement
- plan-execute: structured planning and execution
- youtube-learner: transcript-based summaries
- skill-creator: bootstrap new skills with SKILL.md template
- skill-installer: install external skills from URLs/registries
- deep-research: multi-source comprehensive research
- test-runner: smart test execution with Ralph integration
Templates and plan files:
- `plans/prd.json` and `plans/progress.jsonl`
- `expertise.template.yaml`
- `todo-workflow.md`

Scripts:
- `scripts/ralph-loop.sh` and `scripts/ralph-once.sh`
- `scripts/memory_read.py` / `scripts/memory_write.py`
- `scripts/youtube-transcript.py`
- `scripts/setup-global.sh` / `scripts/setup-project.sh`

Agents:
- `agents/multi-ai/AGENTS.md`
- `agents/dispatcher/AGENTS.md`
- `agents/meta-agent/AGENTS.md`
Quick start:
- Run the global install from this repo: `scripts/setup-global.sh`
- Restart Codex so `/prompts:neural.*` are picked up.
- In any project, run the project install: `scripts/setup-project.sh`
- Verify prompts: `/prompts:neural.loop-start`
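The same steps as shell commands, run from a clone of this repo (the target path is illustrative; `--path` is the project-install option described below):

```bash
# Install global prompts/templates/skills, then seed a target project.
scripts/setup-global.sh
scripts/setup-project.sh --path /path/to/your/project
# Restart Codex, then verify inside Codex with: /prompts:neural.loop-start
```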
The Ralph loop requires `flock` and `timeout`.

macOS (Homebrew):
- `brew install util-linux coreutils`
- `export PATH="/opt/homebrew/opt/util-linux/bin:/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"`

Linux:
- Ensure `flock` (util-linux) and `timeout` (coreutils) are available in `PATH`.
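A quick preflight check that both tools resolve on the current `PATH`:

```bash
# Prints the resolved path of each tool; exits non-zero if either is missing.
command -v flock timeout || echo "flock and/or timeout not found in PATH" >&2
```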
Running `scripts/setup-global.sh` installs:
- `~/.codex/neural-codex/` (prompts, templates, skills, scripts, config stub)
- `~/.codex/prompts/` (so `/prompts:neural.*` appear)
- `~/.codex/skills/` (optional autodiscovery)
Use `--force` to overwrite existing files: `scripts/setup-global.sh --force`

Running `scripts/setup-project.sh` seeds a project with:
- `.codex/prompts/`
- `.codex/templates/`
- `.codex/skills/`
- `.codex/config.toml` (MCP stubs)
- `scripts/neural-codex/` (loop + helpers)
- `plans/prd.json`, `plans/progress.jsonl` (from templates)
Install into another path: `scripts/setup-project.sh --path /path/to/project`

Run the Ralph loop: `TEST_CMD="npm test" scripts/neural-codex/ralph-loop.sh 5`

Notes:
- The loop claims one task per iteration from `plans/prd.json`.
- It writes progress to `plans/progress.jsonl`.
- It commits only when tests pass.
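Between iterations, loop state can be inspected straight from the files it reads and writes (the `jq` pretty-print is optional):

```bash
# Remaining backlog, latest progress entries, and the commits the loop has made.
jq . plans/prd.json | head -n 40
tail -n 5 plans/progress.jsonl
git log --oneline -n 5
```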
Memory:
- Use `/prompts:neural.memory` to append notes to `plans/progress.jsonl`.
- Use `/prompts:neural.recall` to search the log.
- For direct CLI usage: `scripts/memory_write.py` and `scripts/memory_read.py`.
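Because the log is plain JSONL, the data flow behind those helpers amounts to append-and-search. The sketch below illustrates that shape only; the field names are invented here, and this is not the scripts' actual command-line interface:

```bash
# Append one JSON object per line, then search the log (field names are illustrative).
echo '{"ts":"2025-01-01T00:00:00Z","type":"note","text":"auth bug traced to token refresh"}' >> plans/progress.jsonl
grep -i "auth" plans/progress.jsonl
```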
Profiles are named configuration sets for different workflows. Switch with `codex --profile <name>`:
| Profile | Model | Approval | Use Case |
|---|---|---|---|
| default | gpt-5.2-codex | on-failure | Standard development |
| fast | gpt-4.1-mini | on-failure | Quick tasks, low cost |
| autonomous | gpt-5.2-codex | never | Ralph loop, unattended work |
| careful | gpt-5.2-codex | untrusted | Sensitive changes |
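Profiles are defined in `.codex/config.toml` (see the advanced options below). A minimal sketch of what the `autonomous` profile might look like, assuming the Codex CLI's standard `[profiles.<name>]` keys; treat the installed config stub and the advanced-config docs as authoritative:

```bash
# Append an illustrative profile section (keys assumed: model, approval_policy).
cat >> .codex/config.toml <<'EOF'
[profiles.autonomous]
model = "gpt-5.2-codex"
approval_policy = "never"
EOF
```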
Example: `codex --profile autonomous exec "Fix the auth bug"`

Supported MCP servers are stubbed in `.codex/config.toml` and include:
- chrome-devtools
- github
- search (Exa)
- optional playwright
Set tokens in your shell as needed (e.g., `GITHUB_PERSONAL_ACCESS_TOKEN`), for example:
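```bash
# Token for the github MCP server (per the note above); replace the placeholder value.
export GITHUB_PERSONAL_ACCESS_TOKEN="ghp_xxxxxxxxxxxx"
codex
```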
The config file supports advanced options (see `.codex/config.toml`):
- Profiles: Named config sets with different models/approval policies
- Notifications: Webhooks, desktop alerts, CI integration
- History: Session transcripts with size caps
- Telemetry: OpenTelemetry for observability
- TUI: Clickable file citations (vscode, cursor, windsurf)
Reference: https://developers.openai.com/codex/config-advanced/
.
├── .codex/
│ ├── prompts/
│ ├── skills/
│ ├── templates/
│ └── config.toml
├── agents/
├── plans/
├── scripts/
└── README.md
Troubleshooting:

Prompts not showing:
- Run `scripts/setup-global.sh` and restart Codex (a quick file check is sketched below).
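```bash
# The global install copies prompts into ~/.codex/prompts/ (see the install section above).
ls ~/.codex/prompts/ | grep '^neural\.'
```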
Ralph loop fails immediately:
- Ensure `flock` and `timeout` are in `PATH`.
- Ensure the `codex` CLI is installed and logged in.
Tests not running:
- Set `TEST_CMD` explicitly for your project (example below).
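Same invocation as the loop example above, with a project-specific test command (values are illustrative):

```bash
# Substitute your project's real test command for pytest -q.
TEST_CMD="pytest -q" scripts/neural-codex/ralph-loop.sh 5
```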
The static site lives in `docs/`. Enable GitHub Pages with source `main` / `docs/`.