Hyntx is a CLI tool that analyzes your Claude Code prompts and helps you become a better prompt engineer through retrospective analysis and actionable feedback.
🚧 NOT READY FOR USE: This project is under active development. The published npm package does not produce output yet. Check back for updates.
Hyntx reads your Claude Code conversation logs and uses AI to detect common prompt engineering anti-patterns. It provides you with:
- Pattern detection: Identifies recurring issues in your prompts (missing context, vague instructions, etc.)
- Actionable suggestions: Specific recommendations with concrete "Before/After" rewrites
- Privacy-first: Automatically redacts secrets and defaults to local AI (Ollama)
- Zero configuration: Interactive setup on first run with auto-save to shell config
Think of it as a retrospective code review for your prompts.
- Offline-first analysis with local Ollama (privacy-friendly, cost-free)
- Multi-provider support: Ollama (local), Anthropic Claude, Google Gemini with automatic fallback
- Before/After rewrites: Concrete examples showing how to improve your prompts
- Automatic secret redaction: API keys, emails, tokens, credentials
- Flexible date filtering: Analyze today, yesterday, specific dates, or date ranges
- Project filtering: Focus on specific Claude Code projects
- Multiple output formats: Beautiful terminal output or markdown reports
- Watch mode: Real-time monitoring and analysis of prompts as you work
- Smart reminders: Oh-my-zsh style periodic reminders (configurable)
- Auto-configuration: Saves settings to your shell config automatically
- Dry-run mode: Preview what will be analyzed before sending to AI
Install with npm, npx, or pnpm:

```bash
npm install -g hyntx
# or run once without installing
npx hyntx
# or
pnpm add -g hyntx
```

Run Hyntx with a single command:

```bash
hyntx
```

On first run, Hyntx will guide you through an interactive setup:
- Select one or more AI providers (Ollama recommended for privacy)
- Configure models and API keys for selected providers
- Set reminder preferences
- Auto-save configuration to your shell (or get manual instructions)
That's it! Hyntx will analyze today's prompts and show you improvement suggestions with concrete "Before/After" examples.
```bash
# Analyze today's prompts
hyntx

# Analyze yesterday
hyntx --date yesterday

# Analyze a specific date
hyntx --date 2025-01-20

# Analyze a date range
hyntx --from 2025-01-15 --to 2025-01-20

# Filter by project name
hyntx --project my-awesome-app

# Save report to file
hyntx --output report.md

# Preview without sending to AI
hyntx --dry-run

# Check reminder status
hyntx --check-reminder

# Watch mode - real-time analysis
hyntx --watch

# Watch specific project only
hyntx --watch --project my-app
```

Flags can be combined:

```bash
# Analyze last week for a specific project
hyntx --from 2025-01-15 --to 2025-01-22 --project backend-api
# Generate markdown report for yesterday
hyntx --date yesterday --output yesterday-analysis.md
```

Hyntx uses environment variables for configuration. The interactive setup can auto-save these to your shell config (`~/.zshrc`, `~/.bashrc`).
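Conceptually, auto-save just appends `export` lines to your shell rc file. A minimal sketch of that idea (hypothetical names, not the actual `src/utils/shell-config.ts`):

```typescript
import { appendFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Hypothetical sketch: append Hyntx settings to the user's shell config.
// Assumes values are shell-safe; the real implementation may quote/validate.
function saveToShellConfig(vars: Record<string, string>, rcFile = '.zshrc'): void {
  const lines = Object.entries(vars)
    .map(([key, value]) => `export ${key}=${value}`)
    .join('\n');
  appendFileSync(join(homedir(), rcFile), `\n# Added by Hyntx\n${lines}\n`);
}

saveToShellConfig({ HYNTX_SERVICES: 'ollama', HYNTX_OLLAMA_MODEL: 'llama3.2' });
```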
Configure one or more providers in priority order. Hyntx will try each provider in order and fall back to the next if unavailable.
```bash
# Single provider (Ollama only)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=llama3.2

# Multi-provider with fallback (tries Ollama first, then Anthropic)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=llama3.2
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here

# Cloud-first with local fallback
export HYNTX_SERVICES=anthropic,ollama
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_OLLAMA_MODEL=llama3.2
```

Ollama:
| Variable | Default | Description |
|---|---|---|
| `HYNTX_OLLAMA_MODEL` | `llama3.2` | Model to use |
| `HYNTX_OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
Anthropic:

| Variable | Default | Description |
|---|---|---|
| `HYNTX_ANTHROPIC_MODEL` | `claude-3-5-haiku-latest` | Model to use |
| `HYNTX_ANTHROPIC_KEY` | - | API key (required) |
Google:

| Variable | Default | Description |
|---|---|---|
| `HYNTX_GOOGLE_MODEL` | `gemini-2.0-flash-exp` | Model to use |
| `HYNTX_GOOGLE_KEY` | - | API key (required) |
```bash
# Set reminder frequency (7d, 14d, 30d, or never)
export HYNTX_REMINDER=7d
```

A complete configuration looks like this:

```bash
# Add to ~/.zshrc or ~/.bashrc (or let Hyntx auto-save it)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=llama3.2
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_REMINDER=14d

# Optional: Enable periodic reminders
hyntx --check-reminder 2>/dev/null
```

Then reload your shell:

```bash
source ~/.zshrc  # or source ~/.bashrc
```

Ollama runs AI models locally for privacy and cost savings.
1. Install Ollama: ollama.ai

2. Pull a model:

   ```bash
   ollama pull llama3.2
   ```

3. Verify it's running:

   ```bash
   ollama list
   ```

4. Run Hyntx (it will auto-configure on first run):

   ```bash
   hyntx
   ```
Anthropic:

1. Get an API key from console.anthropic.com

2. Run Hyntx and select Anthropic during setup, or set manually:

   ```bash
   export HYNTX_SERVICES=anthropic
   export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
   ```
Google:

1. Get an API key from ai.google.dev

2. Run Hyntx and select Google during setup, or set manually:

   ```bash
   export HYNTX_SERVICES=google
   export HYNTX_GOOGLE_KEY=your-google-api-key
   ```
Configure multiple providers for automatic fallback:
```bash
# If Ollama is down, automatically try Anthropic
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=llama3.2
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
```

When running, Hyntx will show fallback behavior:

```
⚠️ ollama unavailable, trying anthropic...
✅ anthropic connected
```
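Under the hood, the idea is to walk the configured provider list in order until one responds. A minimal sketch of that pattern (hypothetical interface and names, not the actual `src/providers/index.ts` API):

```typescript
// Hypothetical fallback chain; the Provider interface is illustrative only.
interface Provider {
  name: string;
  isAvailable(): Promise<boolean>;
}

async function pickProvider(chain: Provider[]): Promise<Provider> {
  for (const provider of chain) {
    if (await provider.isAvailable()) {
      console.log(`✅ ${provider.name} connected`);
      return provider;
    }
    console.warn(`⚠️ ${provider.name} unavailable, trying next...`);
  }
  throw new Error('No configured AI provider is reachable');
}
```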
A typical report looks like this:

```
📊 Hyntx - 2025-01-20
──────────────────────────────────────────────────

📈 Statistics
Prompts: 15
Projects: my-app, backend-api
Score: 6.5/10

⚠️ Patterns (3)

🔴 Missing Context (60%)
• "Fix the bug in auth"
• "Update the component"
💡 Include specific error messages, framework versions, and file paths

Before:
❌ "Fix the bug in auth"
After:
✅ "Fix authentication bug in src/auth/login.ts where users get
   'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5."

🟡 Vague Instructions (40%)
• "Make it better"
• "Improve this"
💡 Define specific success criteria and expected outcomes

Before:
❌ "Make it better"
After:
✅ "Optimize the database query to reduce response time from 500ms
   to under 100ms. Focus on adding proper indexes."

──────────────────────────────────────────────────
💎 Top Suggestion
"Add error messages and stack traces to debugging requests for
10x faster resolution."
──────────────────────────────────────────────────
```
Hyntx takes your privacy seriously:
- Local-first: Defaults to Ollama for offline analysis
- Automatic redaction: Removes API keys, credentials, emails, tokens before analysis
- Read-only: Never modifies your Claude Code logs
- No telemetry: Hyntx doesn't send usage data anywhere
Redacted automatically before analysis:

- OpenAI/Anthropic API keys (`sk-*`, `claude-*`)
- AWS credentials (`AKIA*`, secret keys)
- Bearer tokens
- HTTP credentials in URLs
- Email addresses
- Private keys (PEM format)
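As an illustration of the approach, regex-based redaction can be sketched like this (simplified, hypothetical patterns; not the real rules in `src/core/sanitizer.ts`):

```typescript
// Illustrative redaction pass; patterns are simplified examples, not Hyntx's actual rules.
const REDACTIONS: Array<[RegExp, string]> = [
  [/sk-[A-Za-z0-9_-]{16,}/g, '[REDACTED_API_KEY]'],       // OpenAI/Anthropic-style keys
  [/AKIA[0-9A-Z]{16}/g, '[REDACTED_AWS_KEY]'],            // AWS access key IDs
  [/Bearer\s+[A-Za-z0-9._~+/=-]+/g, 'Bearer [REDACTED]'], // Bearer tokens
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[REDACTED_EMAIL]'],      // Email addresses
];

function sanitize(prompt: string): string {
  return REDACTIONS.reduce((text, [pattern, label]) => text.replace(pattern, label), prompt);
}

// Example: sanitize('my key is sk-abcdefghijklmnop1234') → 'my key is [REDACTED_API_KEY]'
```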
How it works:

1. Read logs: Parses Claude Code conversation logs from `~/.claude/projects/`
2. Extract prompts: Filters user messages from conversations
3. Sanitize: Redacts sensitive information automatically
4. Analyze: Sends sanitized prompts to the AI provider for pattern detection
5. Report: Displays findings with examples and suggestions
Requirements:

- Node.js: 22.0.0 or higher
- Claude Code: Must have Claude Code installed and used
- AI Provider: Ollama (local) or Anthropic/Google API key
Make sure you've used Claude Code at least once. Logs are stored in:

```
~/.claude/projects/<project-hash>/logs.jsonl
```
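Each file is JSON Lines (one JSON object per line), so you can inspect it yourself. A minimal reader might look like this (the entry shape is an assumption; the real schema is validated by `schema-validator.ts`):

```typescript
import { readFileSync } from 'node:fs';

// Minimal JSONL reader for inspecting a Claude Code log file.
function readJsonl(path: string): Array<Record<string, unknown>> {
  return readFileSync(path, 'utf8')
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Record<string, unknown>);
}

// Usage (expand `~` and <project-hash> to a real path first):
// readJsonl('/home/you/.claude/projects/<project-hash>/logs.jsonl')
```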
If Ollama isn't reachable:

- Check Ollama is running: `ollama list`
- Start Ollama: `ollama serve`
- Verify the host: `echo $HYNTX_OLLAMA_HOST` (default: `http://localhost:11434`)
If a date query returns nothing:

- Check the date format: `YYYY-MM-DD`
- Verify you used Claude Code on those dates
- Try `--dry-run` to see which logs are being read
To work on Hyntx locally:

```bash
# Clone the repository
git clone https://github.com/jmlweb/hyntx.git
cd hyntx

# Install dependencies
pnpm install

# Run in development mode
pnpm dev

# Build
pnpm build

# Test the CLI
pnpm start
```

Project structure:

```
hyntx/
├── src/
│ ├── index.ts # CLI entry point
│ ├── core/ # Core business logic
│ │ ├── setup.ts # Interactive setup (multi-provider)
│ │ ├── reminder.ts # Reminder system
│ │ ├── log-reader.ts # Log parsing
│ │ ├── schema-validator.ts # Log schema validation
│ │ ├── sanitizer.ts # Secret redaction
│ │ ├── analyzer.ts # Analysis orchestration + batching
│ │ ├── reporter.ts # Output formatting (Before/After)
│ │ ├── watcher.ts # Real-time log file monitoring
│ │ └── history.ts # Analysis history management
│ ├── providers/ # AI providers
│ │ ├── base.ts # Interface & prompts
│ │ ├── ollama.ts # Ollama integration
│ │ ├── anthropic.ts # Claude integration
│ │ ├── google.ts # Gemini integration
│ │ └── index.ts # Provider factory with fallback
│ ├── utils/ # Utility functions
│ │ ├── env.ts # Environment config
│ │ ├── shell-config.ts # Shell auto-configuration
│ │ ├── paths.ts # System path constants
│ │ └── terminal.ts # Terminal utilities
│ └── types/
│ └── index.ts # TypeScript type definitions
├── docs/
│ └── SPECS.md # Technical specifications
└── package.json
```
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes using Conventional Commits
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
For detailed development roadmap, planned features, and implementation status, see docs/ROADMAP.md.
MIT License - see LICENSE file for details.
- Built for Claude Code users
- Inspired by retrospective practices in Agile development
- Privacy-first approach inspired by local-first software movement
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ for better prompt engineering