
codex-insights

Generate usage insights from your Codex CLI sessions — inspired by Claude Code's /insights command.

It scans all your Codex sessions, extracts structured facets via an LLM, aggregates statistics, and generates a rich, self-contained HTML report.

How It Works

  1. Scans ~/.codex/sessions/ for all rollout-*.jsonl session files
  2. Parses each session to extract metadata (project, duration, messages, model, tool calls, etc.)
  3. Extracts facets per session using Codex CLI (codex exec) — goals, satisfaction, friction, outcomes — results are cached in ~/.codex/usage-data/facets/
  4. Aggregates stats across all qualifying sessions
  5. Runs 7 analysis prompts: project areas, interaction style, what works, friction, suggestions, future opportunities, fun moments
  6. Generates a self-contained HTML report at ~/.codex/usage-data/report.html
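Steps 1–2 above can be sketched as a pair of small helpers (a hypothetical sketch — the repo's actual function names and JSONL event fields may differ):

```typescript
// Step 1: pick out rollout-*.jsonl session files from a directory listing.
function isSessionFile(name: string): boolean {
  return name.startsWith("rollout-") && name.endsWith(".jsonl");
}

// Step 2: split a JSONL session body into one parsed event per line.
// The event shape is left open here; the real parser reads project,
// duration, messages, model, and tool-call metadata from these events.
function parseJsonlEvents(body: string): unknown[] {
  return body
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}
```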

Requirements

  • Node.js 18+
  • Codex CLI installed and authenticated (used for LLM calls via codex exec)
  • Codex CLI sessions in ~/.codex/sessions/

Setup

git clone https://github.com/bigx333/codex-insights.git
cd codex-insights
pnpm install

Usage

# Generate full report
pnpm insights

# Dry run — parse sessions and show stats without LLM calls
pnpm insights --dry-run

# Limit to N most recent sessions
pnpm insights --limit 50

# Use a different model (default: gpt-5.2)
CODEX_INSIGHTS_MODEL=gpt-4.1-mini pnpm insights

The report will be generated at ~/.codex/usage-data/report.html. Open it in a browser.

Options

Flag        Description
--dry-run   Parse and count sessions without making LLM calls
--limit N   Only process the N most recent qualifying sessions
--help      Show help

Environment Variables

Variable               Default   Description
CODEX_INSIGHTS_MODEL   gpt-5.2   Model to use for facet extraction and analysis
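Resolving the model with its documented default might look like this (illustrative only; the function name is hypothetical):

```typescript
// Fall back to the documented default model when the variable is unset.
function resolveModel(env: Record<string, string | undefined>): string {
  return env.CODEX_INSIGHTS_MODEL ?? "gpt-5.2";
}
```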

Session Filtering

Sessions must have:

  • ≥ 3 user messages
  • ≥ 10 minutes duration

Warmup/minimal sessions are excluded from aggregated stats.
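The filtering rule reduces to a simple predicate (a sketch with illustrative names, not the repo's actual code):

```typescript
interface SessionMeta {
  userMessages: number;
  durationMinutes: number;
}

// A session qualifies for aggregation when it has at least 3 user
// messages AND lasted at least 10 minutes; anything shorter is treated
// as a warmup/minimal session and skipped.
function qualifies(s: SessionMeta): boolean {
  return s.userMessages >= 3 && s.durationMinutes >= 10;
}
```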

Caching

Facets are cached per session in ~/.codex/usage-data/facets/<session_id>.json. Delete the cache directory to force re-extraction:

rm -rf ~/.codex/usage-data/facets
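The per-session cache layout described above amounts to a simple path scheme (a hypothetical sketch; `base` stands in for ~/.codex/usage-data/facets):

```typescript
// Build the per-session facet cache path: <base>/<session_id>.json.
// A cache hit means the JSON file already exists and extraction is skipped.
function facetCachePath(base: string, sessionId: string): string {
  return `${base}/${sessionId}.json`;
}
```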

Report Sections

  • Stats Overview — sessions, messages, hours coded
  • Models & Projects — which models and codebases you use most
  • Goals & Outcomes — what you work on and how well it goes
  • Satisfaction & Strengths — how happy you are and what the AI does well
  • Project Areas — LLM-identified areas of work
  • Interaction Style — how you use the tool
  • What Works — your best workflows
  • Friction Points — where things go wrong
  • Suggestions — AGENTS.md additions, features to try, usage patterns
  • On the Horizon — future opportunities
  • Memorable Moment — something fun from your sessions

Architecture

All LLM calls go through codex exec rather than the OpenAI API directly, so the tool reuses your existing Codex CLI authentication and model access. Facet extraction runs with concurrency capped at 3 workers.
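A concurrency cap like that is typically implemented with a small worker-pool helper (a generic sketch under that assumption, not the repo's actual code):

```typescript
// Run `fn` over `items` with at most `limit` calls in flight at once.
// The README caps facet extraction at 3 workers, so limit would be 3.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Each worker repeatedly claims the next unprocessed index.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

Results come back in input order regardless of which worker finishes first, which keeps downstream aggregation simple.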

Stack

  • TypeScript + tsx
  • Codex CLI (codex exec) for LLM calls
  • Codex JSONL session format (rollout-*.jsonl)

License

MIT
