# /skill-review

Self-improving skills for Claude Code — no infra required.

Your CLAUDE.md rules, slash commands, and memory files degrade over time. Codebases change, models update, and the corrections you gave Claude last month are forgotten by next week.

/skill-review closes the loop. It reads everything Claude knows about your preferences — feedback memories, rules, commands — finds what's broken, stale, or missing, and suggests targeted fixes.

Think of it as a linter for your AI workflow.


## The Problem

Most Claude Code setups look like this:

```
~/.claude/
  commands/
    my-skill.md        # wrote this 3 months ago
    another-skill.md   # works... sometimes
  CLAUDE.md            # 47 rules, some contradictory
  memory/
    feedback_1.md      # "don't do X" (said this 5 times now)
    feedback_2.md      # outdated, references deleted project
```

Skills are static. Your environment is not.

- A rule you wrote for an old project still fires on new ones
- You've corrected Claude on the same thing 4 times but it's not a rule yet
- Two rules contradict each other and Claude picks randomly
- Half your memory files reference projects you abandoned

Nobody audits this. Until now.

## How It Works

/skill-review runs a 4-step self-improvement loop:

```
┌─────────┐     ┌─────────┐     ┌────────┐     ┌───────┐
│ GATHER  │────▶│ INSPECT │────▶│ REPORT │────▶│ AMEND │
│         │     │         │     │        │     │       │
│ memories│     │ patterns│     │ ranked │     │ apply │
│ rules   │     │staleness│     │findings│     │ fixes │
│ commands│     │conflicts│     │        │     │       │
└─────────┘     └─────────┘     └────────┘     └───────┘
```

### 1. Gather

Reads all your feedback memories, CLAUDE.md rules, slash commands, and memory index files.
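Since the skill itself is a markdown prompt, the gather step has no real code; as a rough sketch, it amounts to collecting every file under `~/.claude` that Claude Code reads. The function name and the exact directory layout here are assumptions based on the structure shown above:

```python
# Hypothetical sketch of the Gather step: collect the files that make up
# the "skill surface". Paths mirror the ~/.claude layout shown earlier;
# adjust for your own setup.
from pathlib import Path


def gather_skill_surface(root: Path) -> dict[str, list[Path]]:
    """Return rules, commands, and feedback memories found under `root`."""
    surface = {
        "rules": [root / "CLAUDE.md"],
        "commands": sorted((root / "commands").glob("*.md")),
        "memories": sorted((root / "memory").glob("*.md")),
    }
    # Keep only files that actually exist on disk.
    return {k: [p for p in v if p.is_file()] for k, v in surface.items()}
```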

### 2. Inspect

Analyzes everything for:

- **Recurring feedback** — same correction given multiple times
- **Stale rules** — references to things that no longer exist
- **Conflicting rules** — two rules that contradict each other
- **Missing rules** — feedback that should've been promoted to a rule
- **Underperforming skills** — commands with known failure modes
- **Redundant entries** — duplicates across memories/rules
- **Outdated project info** — dead projects cluttering your memory
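In practice Claude does this inspection by reading your files, but the first check above can be sketched mechanically. This is an illustration, not the skill's actual logic; the crude whitespace-and-case normalization and the threshold of 3 are assumptions:

```python
# Hypothetical sketch of one Inspect check: flag a correction as
# "recurring" when near-identical feedback text appears in several
# memory entries.
from collections import Counter


def find_recurring(feedback_entries: list[str], threshold: int = 3) -> list[str]:
    """Return feedback texts that occur at least `threshold` times."""
    # Normalize case and whitespace so trivially different phrasings match.
    normalized = Counter(" ".join(e.lower().split()) for e in feedback_entries)
    return [text for text, count in normalized.items() if count >= threshold]
```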

### 3. Report

Produces a severity-ranked report:

| Severity | Meaning |
|---|---|
| **A** (Amend Now) | Actively causing problems — fix immediately |
| **P** (Promote) | Feedback should become a permanent rule |
| **S** (Stale) | Outdated — remove or update |
| **I** (Insight) | Worth noting, no action needed yet |

### 4. Amend

Offers batch fixes you can apply with one command:

- **Batch A** — All critical amendments
- **Batch P** — Promote feedback to rules
- **Batch S** — Clean up stale entries
- **All** — Apply everything
- **Custom** — Cherry-pick specific items

All changes require your approval. Nothing is modified without confirmation.
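Selecting a batch is just a filter over the ranked findings. A minimal sketch, assuming findings are records whose IDs carry the severity prefix from the report format (`A1`, `P2`, ...); the data shape here is hypothetical:

```python
# Hypothetical sketch of the Amend step's batch selection: narrow a
# ranked findings list down to the batch the user approved. Nothing is
# applied without this explicit selection.
def select_batch(findings: list[dict], choice: str) -> list[dict]:
    """Return findings matching a batch choice: 'A', 'P', 'S', or 'all'."""
    if choice.lower() == "all":
        return findings
    # Finding IDs like "A1" or "P2" start with their severity letter.
    return [f for f in findings if f["id"].startswith(choice.upper())]
```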

## Install

Copy the skill file to your Claude Code commands directory:

```sh
# User-level (available in all projects)
cp skill-review.md ~/.claude/commands/skill-review.md
```

Or for a specific project:

```sh
# Project-level
cp skill-review.md .claude/commands/skill-review.md
```

That's it. No dependencies, no packages, no config.

## Usage

In any Claude Code session:

```
/skill-review
```

Claude will read your entire skill surface, analyze it, and present findings. You decide what to fix.

## Example Output

```markdown
## Skill Review Report — 2026-03-18

### Summary
- Total findings: 5
- Amend Now (A): 1
- Promote (P): 2
- Stale (S): 1
- Insights (I): 1

### Amend Now (A)
A1. [Source: CLAUDE.md] — "Use port 3000" conflicts with feedback memory
    "never kill occupied ports"
    Evidence: feedback_never_kill_ports.md (given 3x)
    Suggested amendment: Remove hardcoded port rule, add "use next
    available port" instead

### Promote (P)
P1. [Source: feedback_testing.md] — "Always run tsc before pushing"
    mentioned in 4 separate feedback entries but not in CLAUDE.md
    Pattern: User corrected this in 4 different sessions
    Suggested rule: Add to CLAUDE.md: "Run tsc --noEmit before every push"

P2. [Source: conversation history] — User consistently prefers inline
    editing over modals
    Pattern: Corrected modal implementations 3 times
    Suggested rule: Add to CLAUDE.md: "Prefer inline editing (Enter=save,
    Escape=cancel) over modal dialogs"

### Stale (S)
S1. [Source: MEMORY.md] — Project "old-landing-page" no longer exists
    on disk
    Action: Remove from Active Projects list

### Insights (I)
I1. [Pattern] — 80% of feedback memories are about UI behavior, not logic
    Note: Consider adding a UI conventions section to CLAUDE.md
```

## Why Not cognee / Full Self-Improving Agents?

The cognee-skills approach (graph DB + observation pipeline + automatic amendment + evaluation) is powerful but heavy. It makes sense at team scale with dozens of agents running hundreds of skills.

For individual developers using Claude Code, /skill-review gives you 90% of the value with 0% of the infra:

| | cognee-skills | /skill-review |
|---|---|---|
| Infra needed | Graph DB, Python package, evaluation pipeline | One markdown file |
| Observation | Automatic execution logging | Your existing memory files |
| Inspection | Graph traversal algorithms | Claude reads your files |
| Amendment | Automatic with rollback | Human-approved batches |
| Best for | Teams, autonomous agent fleets | Solo devs, Claude Code users |

The insight is the same — skills must evolve — but the implementation is right-sized for the tool.

## How It Fits Into Your Workflow

```
You use Claude Code
        │
        ▼
Something goes wrong
        │
        ▼
You say "don't do that" ──▶ Claude saves feedback memory
        │
        ▼
    (time passes)
        │
        ▼
  /skill-review ──▶ "You've said 'don't do that' 4 times.
        │            This should be a CLAUDE.md rule."
        ▼
  Apply amendment ──▶ Rule added. Claude stops doing that. Forever.
```

Without /skill-review, those feedback memories just sit there. Claude reads them per-session but never connects the dots across sessions. This command is the missing feedback loop.

## Inspired By

This skill was inspired by @tricalt's post on self-improving agent skills and the cognee project. The core idea — that skills should be living components, not static files — is theirs. This is a zero-infra implementation of that idea for Claude Code users.

## License

MIT
