Skills teach Claude how to do things. Not prompts - actual workflows that trigger automatically.
We call them Potions because they're small, focused, and powerful. Brew carefully, drop them in, they do one thing well. Also sounds cooler than "claude-skill-collection".
And guess what? You can combine them. Chain a few potions together and you get an Elixir - a combo skill that orchestrates a full workflow. Like a debug elixir that won't let you jump to fixes until you actually understand the problem.
Just markdown files. No magic required.
```bash
# 1. Clone the repo
git clone https://github.com/ElliotJLT/Claude-Skill-Potions.git ~/.claude/skills

# 2. Copy a skill to your CLAUDE.md
cat ~/.claude/skills/skills/dont-be-greedy/SKILL.md >> ~/.claude/CLAUDE.md

# 3. Restart Claude Code
```
- **battle-plan** - Complete planning ritual before significant tasks. Orchestrates rubber-duck (scope) → pre-mortem (risks) → eta (estimate) → confirmation. No coding until the plan is approved.
- **pre-mortem** - Before starting significant tasks, imagines failure scenarios, assesses risks, and reorders the implementation plan to address high-risk items first. Based on Gary Klein's prospective hindsight research.
- **split-decision** - Before committing to architectural or technology choices, presents 2-4 viable options as a comparison table with trade-offs. States a lean with reasoning, requires explicit confirmation. No single-option "best way" answers.
- **you-sure** - Before destructive or irreversible actions (`rm -rf`, `DROP TABLE`, force push), pauses with a clear checklist of impact and requires explicit confirmation. Never auto-executes dangerous operations.
- **dont-be-greedy** - Prevents context overflow by estimating file sizes, chunking large data, and summarizing before loading. Never loads raw files without checking first.
- **breadcrumbs** - Leaves notes for future Claude sessions in `.claude/breadcrumbs.md`. Records what was tried, what worked, what failed. Session-to-session memory without manual updates.
- **rubber-duck** - When users describe problems vaguely, forces structured articulation through targeted questions before proposing solutions. Catches XY problems and handles frustrated users.
- **zero-in** - Before searching, forces you to zero in: what exactly are you looking for, what would it look like, where would it live, what else might it be called. Prevents grep-and-pray.
- **prove-it** - Before declaring tasks complete, actually verify the outcome. Addresses Claude's core limitation: optimizing for "looks right" over "works right." No victory laps without proof.
- **loose-ends** - Before declaring work done, sweeps for: unused imports, TODO comments created, missing tests, console.logs left in, stale references. The cleanup that always gets skipped.
- **trace-it** - Before modifying shared code (utils, types, configs), traces all callers first. Prevents "fixed one thing, broke three others."
- **stay-in-lane** - Before making changes, verifies they match what was asked. Catches scope creep before it happens - no "while I'm here" improvements.
- **sanity-check** - Before building on assumptions, validates them. Prevents assumption cascades where one wrong guess leads to a completely wrong solution.
- **keep-it-simple** - Before adding abstraction, asks "do we need this now?" Resists over-engineering. Three similar lines are better than a premature abstraction.
- **eta** - Estimates task completion time based on codebase scope, complexity keywords, and risk factors. Provides time ranges, not false precision.
- **learn-from-this** - When a session contains a significant failure, analyses the root cause and drafts a new skill to prevent it. The skill library grows from real pain, not theory.
- **retrospective** - After completing significant tasks, documents what worked, what failed, and key learnings. Failed attempts get documented first - they're read more than successes.
- **pair-mode** - Transforms Claude into a pair programming partner. Explains reasoning, teaches patterns, adjusts depth to your level. Balances shipping with building durable understanding. Activate with "let's pair on this."
- **drip** - Tracks and surfaces estimated water consumption per session (~0.5ml per 1,000 tokens). Makes the physical cost of AI visible. Not guilt - just awareness that intelligence has a footprint.
- **geordie** - Responds entirely in Geordie dialect with Newcastle United references. For when ye need a bit of Tyneside charm in yer codebase, pet. Howay the lads!
Elixirs are orchestrator skills that chain multiple skills or coordinate complex workflows. See the elixirs guide for the pattern.
Workflow Elixirs:
- **debug-to-fix** - Full debug cycle: clarify → investigate → fix → verify. Chains rubber-duck and prove-it with built-in investigation. Prevents jumping to fixes before understanding the problem.
- **safe-refactor** - Refactoring cycle: assess risk → prepare → implement → verify. Chains pre-mortem and prove-it. Prevents "refactor broke production" disasters.
- **careful-delete** - Destruction cycle: assess blast radius → explicit confirmation → document. Chains pre-mortem and you-sure. No `rm -rf` or `DROP TABLE` without ceremony.
- **battle-plan** - Also an elixir. Chains rubber-duck → pre-mortem → eta → you-sure for complete planning before significant tasks.
Orchestration Patterns:
- **fan-out** - Parallel processing for independent subtasks. Spawns concurrent work streams, aggregates results. Use for research, multi-file analysis, or any "do N things independently."
- **pipeline** - Sequential stages with explicit handoffs. Each stage completes before the next begins. Use for workflows where order matters: design → implement → test → review.
- **map-reduce** - Batch processing with aggregation. Map phase processes items independently, reduce phase combines results. Use for codebase-wide analysis, bulk transformations, or "check everything."
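The map-reduce pattern mirrors ordinary shell parallelism. A self-contained sketch under invented conditions (temporary files, counting TODO comments - not code from this repo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical inputs standing in for a codebase.
workdir=$(mktemp -d)
printf 'code\nTODO fix this\n' > "$workdir/one.txt"
printf 'TODO a\nTODO b\n' > "$workdir/two.txt"

# Map phase: each file is processed independently, in parallel.
for f in "$workdir"/*.txt; do
  ( grep -c TODO "$f" > "$f.count" ) &
done
wait

# Reduce phase: combine the independent per-file results.
total=0
for c in "$workdir"/*.count; do
  total=$((total + $(cat "$c")))
done
echo "total TODOs: $total"   # -> total TODOs: 3
```

The same shape applies when the "workers" are Claude subtasks instead of subshells: independent map steps, one aggregation step.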
Skills that improve the skill system itself. Together they form a flywheel: work → discover → forge skill → smarter future sessions.
- **skill-gate** - Forces explicit YES/NO evaluation of each relevant skill before proceeding. Increases activation from ~20% to 80%+. The commitment mechanism that makes skills reliable.
- **skill-forge** - When a session contains a non-obvious discovery or hard-won solution, auto-generates a new skill capturing it. The skill library grows from real work, not theory.
- **skill-creator** - Meta-skill for creating well-structured skills. Guides through purpose, triggers, instructions, guardrails. Ensures new skills meet the quality bar.
Skills for building and optimizing LLM agents.
- **agent-audit** - Systematically strip an agent to minimum viable form. Instruments usage, ablates components, identifies what's load-bearing vs cruft. Answers: "What can we remove without breaking it?" The leanest agent that still works.
A skill teaches Claude how to do something. It's the difference between:
- "Here's a prompt template for analysing data"
- "When a CSV is uploaded, immediately run `analyze.py`, generate stats, create charts, return summary"
Skills have three layers:
| Layer | What it contains | Token cost |
|---|---|---|
| Metadata | Name + description (the trigger) | ~100 tokens, always loaded |
| Instructions | Step-by-step workflow | Loaded when triggered |
| Resources | Python scripts, templates | Executed, not loaded into context |
The magic is Layer 3. Scripts execute and return output - they don't bloat context with code.
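To make the layers concrete, here is a hedged sketch of a skill file (the skill name, frontmatter fields, and `scripts/analyze.py` path are illustrative, not taken from this repo):

```markdown
---
name: csv-summary
description: When a CSV file is uploaded, immediately run analysis, generate stats, create charts, and return a summary.
---

## Instructions
1. Run `scripts/analyze.py <file>` to compute summary statistics.
2. Generate charts from the script's output.
3. Return a short written summary.

## Guardrails
NEVER load the raw CSV into context; always work from the script's output.
```

The `description` line is Layer 1 (always loaded), the instructions are Layer 2 (loaded on trigger), and `scripts/analyze.py` is Layer 3 (executed, never loaded into context).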
Global install:

```bash
git clone https://github.com/ElliotJLT/Claude-Skill-Potions.git ~/.claude/skills
cat ~/.claude/skills/skills/dont-be-greedy/SKILL.md >> ~/.claude/CLAUDE.md
```

Per-project install:

```bash
git clone https://github.com/ElliotJLT/Claude-Skill-Potions.git .claude-skills
cat .claude-skills/skills/dont-be-greedy/SKILL.md >> CLAUDE.md
```

To update:

```bash
cd ~/.claude/skills && git pull
```

Skills activate based on descriptions, but this only works ~20% of the time. For reliable activation, install the forced-eval hook:
```bash
# Copy hook
cp ~/.claude/skills/hooks/skill-forced-eval-hook.sh ~/.claude/hooks/
```

Add to `~/.claude/settings.json`:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/skill-forced-eval-hook.sh"
          }
        ]
      }
    ]
  }
}
```

This increases activation from ~20% to ~84%.
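For intuition: a `UserPromptSubmit` hook is just a command whose stdout Claude Code injects into context before handling the prompt. A minimal hypothetical stand-in (the real `skill-forced-eval-hook.sh` does more than this):

```shell
#!/usr/bin/env bash
# Hypothetical minimal forced-eval hook. Whatever a UserPromptSubmit hook
# prints to stdout is added to the model's context for that turn, so a
# printed checklist becomes a per-prompt commitment mechanism.
reminder="Before responding, evaluate every installed skill and state YES or NO: does it apply to this prompt?"
echo "$reminder"
```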
For context-aware suggestions (only suggests relevant skills based on what you're working on), use the context-loader hook instead. See hooks/README.md for details and trade-offs.
See the skill writing guide for the full spec.
Key rules:
- **Description is the trigger** - Write "When [condition], [actions]"
- **One job, done well** - If it has "and also", make two skills
- **Code in scripts, not markdown** - Reference `scripts/foo.py`, don't embed code
- **Override defaults** - Add NEVER/ALWAYS sections for proactive behavior
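To illustrate the first rule, compare two hypothetical description lines (both invented for this example):

```markdown
<!-- Weak trigger: names a topic, so it rarely activates -->
description: Helps with database migrations.

<!-- Strong trigger: "When [condition], [actions]" -->
description: When a migration file is added or edited, scan it for destructive operations and require a rollback plan before applying.
```

The second form gives Claude a concrete condition to match against the current task, which is what makes activation reliable.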
Quality external resources for Claude Code skills:
- **diet103/claude-code-infrastructure-showcase** - Production-tested activation hooks and `skill-rules.json` patterns. 6 months of testing across 50k+ lines of TypeScript.
- **obra/superpowers** - Complete development methodology with 20+ skills. Uses TDD for skill development.
- **ykdojo/claude-code-tips** - 40+ practical tips and workarounds.
- **spences10/svelte-claude-skills** - Research on activation reliability (200+ prompt tests). Source of our forced-eval hook.