Generate usage insights from your Codex CLI sessions — inspired by Claude Code's `/insights` command.

Scans all your Codex sessions, extracts structured facets via an LLM, aggregates stats, and generates a rich, self-contained HTML report.
## How it works

- Scans `~/.codex/sessions/` for all `rollout-*.jsonl` session files
- Parses each session to extract metadata (project, duration, messages, model, tool calls, etc.) — sketched after this list
- Extracts facets per session using Codex CLI (`codex exec`) — goals, satisfaction, friction, outcomes — results are cached in `~/.codex/usage-data/facets/`
- Aggregates stats across all qualifying sessions
- Runs 7 analysis prompts: project areas, interaction style, what works, friction, suggestions, future opportunities, fun moments
- Generates a self-contained HTML report at `~/.codex/usage-data/report.html`
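For illustration, here is a minimal sketch of the scan and parse steps, assuming Node's `fs/promises` and a hypothetical `SessionMeta` shape; the real parser's record and field names may differ:

```typescript
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";
import { homedir } from "node:os";

// Hypothetical shape; the real parser tracks more fields (project, model, tool calls).
interface SessionMeta {
  file: string;
  userMessages: number;
  startedAt?: number; // epoch ms
  endedAt?: number;   // epoch ms
}

const SESSIONS_DIR = join(homedir(), ".codex", "sessions");

// Recursively collect rollout-*.jsonl files under ~/.codex/sessions/.
async function findSessionFiles(dir: string = SESSIONS_DIR): Promise<string[]> {
  const files: string[] = [];
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) files.push(...(await findSessionFiles(full)));
    else if (/^rollout-.*\.jsonl$/.test(entry.name)) files.push(full);
  }
  return files;
}

// A JSONL session is one JSON record per line: count user messages and
// track first/last timestamps to derive duration (field names assumed).
async function parseSession(file: string): Promise<SessionMeta> {
  const meta: SessionMeta = { file, userMessages: 0 };
  for (const line of (await readFile(file, "utf8")).split("\n")) {
    if (!line.trim()) continue;
    const record = JSON.parse(line);
    if (record.type === "message" && record.role === "user") meta.userMessages++;
    if (record.timestamp) {
      const t = Date.parse(record.timestamp);
      meta.startedAt ??= t;
      meta.endedAt = t;
    }
  }
  return meta;
}
```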
## Requirements

- Node.js 18+
- Codex CLI installed and authenticated (used for LLM calls via `codex exec`)
- Codex CLI sessions in `~/.codex/sessions/`
## Installation

```bash
git clone https://github.com/bigx333/codex-insights.git
cd codex-insights
pnpm install
```

## Usage

```bash
# Generate full report
pnpm insights

# Dry run — parse sessions and show stats without LLM calls
pnpm insights --dry-run

# Limit to N most recent sessions
pnpm insights --limit 50

# Use a different model (default: gpt-5.2)
CODEX_INSIGHTS_MODEL=gpt-4.1-mini pnpm insights
```

The report will be generated at `~/.codex/usage-data/report.html`. Open it in a browser.
## Flags

| Flag | Description |
|---|---|
| `--dry-run` | Parse and count sessions without making LLM calls |
| `--limit N` | Only process the N most recent qualifying sessions |
| `--help` | Show help |
## Environment variables

| Variable | Default | Description |
|---|---|---|
| `CODEX_INSIGHTS_MODEL` | `gpt-5.2` | Model to use for facet extraction and analysis |
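For illustration, the flags and environment variable above could be read along these lines (a sketch, not the tool's actual option handling):

```typescript
// Illustrative only; not the tool's actual option parsing.
const args = process.argv.slice(2);

const dryRun = args.includes("--dry-run");
const showHelp = args.includes("--help");

// --limit N: only process the N most recent qualifying sessions.
const limitIdx = args.indexOf("--limit");
const limit = limitIdx !== -1 ? Number.parseInt(args[limitIdx + 1], 10) : Infinity;

// CODEX_INSIGHTS_MODEL overrides the default model.
const model = process.env.CODEX_INSIGHTS_MODEL ?? "gpt-5.2";
```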
## Session qualification

Sessions must have:

- ≥ 3 user messages
- ≥ 10 minutes duration

Warmup/minimal sessions are excluded from aggregated stats.
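A sketch of this filter, where `SessionMeta` mirrors the hypothetical shape from the parsing example above:

```typescript
// Hypothetical shape from the earlier parsing sketch.
interface SessionMeta {
  userMessages: number;
  startedAt?: number; // epoch ms
  endedAt?: number;
}

const MIN_USER_MESSAGES = 3;
const MIN_DURATION_MS = 10 * 60 * 1000; // 10 minutes

function qualifies(meta: SessionMeta): boolean {
  const duration = (meta.endedAt ?? 0) - (meta.startedAt ?? 0);
  return meta.userMessages >= MIN_USER_MESSAGES && duration >= MIN_DURATION_MS;
}
```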
## Caching

Facets are cached per session in `~/.codex/usage-data/facets/<session_id>.json`.
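A minimal sketch of a read-through cache under these paths; `extractFacets` is a stand-in for the `codex exec` call sketched under Notes below:

```typescript
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";
import { homedir } from "node:os";

const FACETS_DIR = join(homedir(), ".codex", "usage-data", "facets");

// Read-through cache: return cached facets if present, otherwise extract
// and persist them for the next run. extractFacets is a stand-in for the
// codex exec call.
async function getFacets(
  sessionId: string,
  extractFacets: (id: string) => Promise<unknown>,
): Promise<unknown> {
  const cachePath = join(FACETS_DIR, `${sessionId}.json`);
  try {
    return JSON.parse(await readFile(cachePath, "utf8"));
  } catch {
    const facets = await extractFacets(sessionId);
    await mkdir(FACETS_DIR, { recursive: true });
    await writeFile(cachePath, JSON.stringify(facets, null, 2));
    return facets;
  }
}
```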
Delete the cache dir to force re-extraction:

```bash
rm -rf ~/.codex/usage-data/facets
```

## Report sections

- Stats Overview — sessions, messages, hours coded
- Models & Projects — which models and codebases you use most
- Goals & Outcomes — what you work on and how well it goes
- Satisfaction & Strengths — how happy you are and what the AI does well
- Project Areas — LLM-identified areas of work
- Interaction Style — how you use the tool
- What Works — your best workflows
- Friction Points — where things go wrong
- Suggestions — `AGENTS.md` additions, features to try, usage patterns
- On the Horizon — future opportunities
- Memorable Moment — something fun from your sessions
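Self-contained here means all data is inlined into a single HTML file. A minimal sketch of that idea, using a hypothetical template (the real report is far richer):

```typescript
import { writeFile } from "node:fs/promises";
import { join } from "node:path";
import { homedir } from "node:os";

const REPORT_PATH = join(homedir(), ".codex", "usage-data", "report.html");

// Inline the aggregated data into one HTML file so the report needs no
// server or network access. Hypothetical template for illustration.
async function writeReport(stats: Record<string, unknown>): Promise<void> {
  // Escape "<" so embedded data can never close the <script> tag early.
  const payload = JSON.stringify(stats).replace(/</g, "\\u003c");
  const html = `<!doctype html>
<html>
<head><meta charset="utf-8"><title>Codex Insights</title></head>
<body>
<h1>Codex Insights</h1>
<pre id="out"></pre>
<script>
  const STATS = ${payload};
  document.getElementById("out").textContent = JSON.stringify(STATS, null, 2);
</script>
</body>
</html>`;
  await writeFile(REPORT_PATH, html, "utf8");
}
```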
## Notes

LLM calls go through `codex exec` (not the OpenAI API directly), so the tool uses your existing Codex CLI auth and model access. Facet extraction runs with concurrency capped at 3 workers (see the sketch below).
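A minimal sketch of both pieces: shelling out to `codex exec`, and a small pool that caps concurrency at 3. The exact `codex exec` flags are assumptions; check `codex exec --help` for the real interface:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);
const MODEL = process.env.CODEX_INSIGHTS_MODEL ?? "gpt-5.2";

// One LLM call through the Codex CLI (flags assumed, not verified).
async function askCodex(prompt: string): Promise<string> {
  const { stdout } = await run("codex", ["exec", "--model", MODEL, prompt]);
  return stdout.trim();
}

// Simple worker pool: at most `limit` calls in flight at once.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    async () => {
      while (next < items.length) {
        const i = next++; // single-threaded event loop makes this safe
        results[i] = await fn(items[i]);
      }
    },
  );
  await Promise.all(workers);
  return results;
}

// e.g. const answers = await mapWithConcurrency(prompts, 3, askCodex);
```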
## Tech stack

- TypeScript + tsx
- Codex CLI (`codex exec`) for LLM calls
- Codex JSONL session format (`rollout-*.jsonl`)
## License

MIT