Self-improving skills for Claude Code — no infra required.
Your CLAUDE.md rules, slash commands, and memory files degrade over time. Codebases change, models update, and the corrections you gave Claude last month are forgotten by next week.
/skill-review closes the loop. It reads everything Claude knows about your preferences — feedback memories, rules, commands — finds what's broken, stale, or missing, and suggests targeted fixes.
Think of it as a linter for your AI workflow.
Most Claude Code setups look like this:
```
~/.claude/
├── commands/
│   ├── my-skill.md       # wrote this 3 months ago
│   └── another-skill.md  # works... sometimes
├── CLAUDE.md             # 47 rules, some contradictory
└── memory/
    ├── feedback_1.md     # "don't do X" (said this 5 times now)
    └── feedback_2.md     # outdated, references deleted project
```
Skills are static. Your environment is not.
- A rule you wrote for an old project still fires on new ones
- You've corrected Claude on the same thing 4 times but it's not a rule yet
- Two rules contradict each other and Claude picks randomly
- Half your memory files reference projects you abandoned
Nobody audits this. Until now.
/skill-review runs a 4-step self-improvement loop:
```
┌──────────┐     ┌───────────┐     ┌──────────┐     ┌─────────┐
│  GATHER  │────▶│  INSPECT  │────▶│  REPORT  │────▶│  AMEND  │
│          │     │           │     │          │     │         │
│ memories │     │ patterns  │     │ ranked   │     │ apply   │
│ rules    │     │ staleness │     │ findings │     │ fixes   │
│ commands │     │ conflicts │     │          │     │         │
└──────────┘     └───────────┘     └──────────┘     └─────────┘
```
**Gather.** Reads all your feedback memories, CLAUDE.md rules, slash commands, and memory index files.
**Inspect.** Analyzes everything for:
- Recurring feedback — same correction given multiple times
- Stale rules — references to things that no longer exist
- Conflicting rules — two rules that contradict each other
- Missing rules — feedback that should've been promoted to a rule
- Underperforming skills — commands with known failure modes
- Redundant entries — duplicates across memories/rules
- Outdated project info — dead projects cluttering your memory
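Two of these checks can be sketched as plain heuristics. The function names, normalization scheme, and threshold below are invented for illustration; in practice Claude performs this analysis by reading your files, not by running code:

```python
import os
import re
from collections import Counter

def find_recurring(feedback_entries, threshold=3):
    """Flag corrections given in several separate feedback entries.

    Normalizes each entry to lowercase words so trivial wording or
    punctuation differences still count as the same correction.
    """
    normalized = [" ".join(re.findall(r"[a-z]+", e.lower()))
                  for e in feedback_entries]
    return [text for text, n in Counter(normalized).items()
            if n >= threshold]

def find_stale(project_refs):
    """Flag referenced project paths that no longer exist on disk."""
    return [p for p in project_refs if not os.path.isdir(p)]

feedback = [
    "Don't kill occupied ports.",
    "don't kill occupied ports",
    "Don't kill occupied ports!",
    "Use tabs, not spaces.",
]
print(find_recurring(feedback))  # -> ['don t kill occupied ports']
print(find_stale([os.getcwd(), "/no/such/project"]))  # -> ['/no/such/project']
```

The real skill is fuzzier than this, of course: Claude can recognize that "stop killing ports" and "never terminate processes on busy ports" are the same correction, which no regex will.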
**Report.** Produces a severity-ranked report:
| Severity | Meaning |
|---|---|
| A (Amend Now) | Actively causing problems — fix immediately |
| P (Promote) | Feedback should become a permanent rule |
| S (Stale) | Outdated — remove or update |
| I (Insight) | Worth noting, no action needed yet |
**Amend.** Offers batch fixes you can apply with one command:
- Batch A — All critical amendments
- Batch P — Promote feedback to rules
- Batch S — Clean up stale entries
- All — Apply everything
- Custom — Cherry-pick specific items
All changes require your approval. Nothing is modified without confirmation.
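The batch options map naturally onto grouping findings by their severity prefix. A minimal sketch, where the `group_findings` helper and the `(id, description)` tuple format are invented for illustration (the actual skill works through conversation, not code):

```python
from collections import defaultdict

def group_findings(findings):
    """Group findings by their severity code (A, P, S, I) so that
    each batch can be applied, or skipped, as a unit."""
    batches = defaultdict(list)
    for fid, description in findings:
        batches[fid[0]].append((fid, description))
    return dict(batches)

findings = [
    ("A1", "port rule conflicts with feedback"),
    ("P1", "promote tsc-before-push to a rule"),
    ("P2", "promote inline-editing preference"),
    ("S1", "remove dead project from MEMORY.md"),
]
batches = group_findings(findings)
print(sorted(batches))    # -> ['A', 'P', 'S']
print(len(batches["P"]))  # -> 2
```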
Copy the skill file to your Claude Code commands directory:
```sh
# User-level (available in all projects)
cp skill-review.md ~/.claude/commands/skill-review.md
```

Or, for a specific project:
```sh
# Project-level
cp skill-review.md .claude/commands/skill-review.md
```

That's it. No dependencies, no packages, no config.
In any Claude Code session:
```
/skill-review
```
Claude will read your entire skill surface, analyze it, and present findings. You decide what to fix.
```markdown
## Skill Review Report — 2026-03-18

### Summary
- Total findings: 5
- Amend Now (A): 1
- Promote (P): 2
- Stale (S): 1
- Insights (I): 1

### Amend Now (A)
A1. [Source: CLAUDE.md] — "Use port 3000" conflicts with feedback memory
    "never kill occupied ports"
    Evidence: feedback_never_kill_ports.md (given 3x)
    Suggested amendment: Remove hardcoded port rule, add "use next
    available port" instead

### Promote (P)
P1. [Source: feedback_testing.md] — "Always run tsc before pushing"
    mentioned in 4 separate feedback entries but not in CLAUDE.md
    Pattern: User corrected this in 4 different sessions
    Suggested rule: Add to CLAUDE.md: "Run tsc --noEmit before every push"

P2. [Source: conversation history] — User consistently prefers inline
    editing over modals
    Pattern: Corrected modal implementations 3 times
    Suggested rule: Add to CLAUDE.md: "Prefer inline editing (Enter=save,
    Escape=cancel) over modal dialogs"

### Stale (S)
S1. [Source: MEMORY.md] — Project "old-landing-page" no longer exists
    on disk
    Action: Remove from Active Projects list

### Insights (I)
I1. [Pattern] — 80% of feedback memories are about UI behavior, not logic
    Note: Consider adding a UI conventions section to CLAUDE.md
```
The cognee-skills approach (graph DB + observation pipeline + automatic amendment + evaluation) is powerful but heavy. It makes sense at team scale with dozens of agents running hundreds of skills.
For individual developers using Claude Code, /skill-review gives you 90% of the value with 0% of the infra:
| | cognee-skills | /skill-review |
|---|---|---|
| Infra needed | Graph DB, Python package, evaluation pipeline | One markdown file |
| Observation | Automatic execution logging | Your existing memory files |
| Inspection | Graph traversal algorithms | Claude reads your files |
| Amendment | Automatic with rollback | Human-approved batches |
| Best for | Teams, autonomous agent fleets | Solo devs, Claude Code users |
The insight is the same — skills must evolve — the implementation is just right-sized for the tool.
```
You use Claude Code
        │
        ▼
Something goes wrong
        │
        ▼
You say "don't do that" ──▶ Claude saves feedback memory
        │
        ▼
  (time passes)
        │
        ▼
/skill-review ──▶ "You've said 'don't do that' 4 times.
        │          This should be a CLAUDE.md rule."
        ▼
Apply amendment ──▶ Rule added. Claude stops doing that. Forever.
```
Without /skill-review, those feedback memories just sit there. Claude reads them per-session but never connects the dots across sessions. This command is the missing feedback loop.
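The promotion step at the end of that loop is simple in principle: once a correction has been repeated often enough, it becomes a permanent line in CLAUDE.md. A sketch with an invented `promote` helper and threshold (in reality you approve the change and Claude edits the file):

```python
def promote(correction, claude_md, times_seen, threshold=3):
    """Append a repeated correction to CLAUDE.md as a permanent rule,
    but only once it has been seen `threshold` times, and never twice."""
    if times_seen < threshold or correction in claude_md:
        return claude_md
    return claude_md.rstrip("\n") + f"\n- {correction}\n"

md = "# CLAUDE.md\n- Run tests before committing.\n"
md = promote("Never kill occupied ports.", md, times_seen=4)
md = promote("Never kill occupied ports.", md, times_seen=5)  # no duplicate
print(md.count("Never kill occupied ports."))  # -> 1
```

The duplicate check matters: without it, every future review run would re-propose rules that were already promoted.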
This skill was inspired by @tricalt's post on self-improving agent skills and the cognee project. The core idea — that skills should be living components, not static files — is theirs. This is a zero-infra implementation of that idea for Claude Code users.
MIT