A structured, project-agnostic specification process for Claude Code.
AI coding agents are excellent at implementation (HOW) but often jump straight to code before the requirements are clear (WHAT). This skill adds a lightweight specification phase that surfaces ambiguity upfront -- before it becomes expensive rework. It follows Claude Code's Explore -> Plan -> Implement -> Commit workflow.
When you invoke /spec, Claude follows a 5-phase process:
- **Phase 1: Explore** -- Understand the request, explore code, read project context
- **Phase 2: Specify** -- User stories, acceptance criteria, edge cases, assumptions
- **Phase 3: Clarify** -- Interactive disambiguation (max 5 questions, skippable)
- **Phase 4: Validate** -- 8-point spec quality checklist
- **Phase 5: Handoff** -- Summary report, TDD integration mapping
The output is a single markdown spec file with testable Given/When/Then acceptance criteria that feed directly into TDD implementation.
| Scope | Approach |
|---|---|
| Trivial (typo, config, rename) | Don't use this skill -- just ask Claude directly |
| Small (well-understood, single concern) | /spec with quick shortcut (skip clarification) |
| Medium (multi-concern, some ambiguity) | /spec full process |
| Large (cross-cutting, unfamiliar domain) | /spec full process, thorough clarification |
Rule of thumb: If you can describe the diff in one sentence and there's no ambiguity, skip the skill. If the feature has competing interpretations, use it.
```bash
# From your project root
mkdir -p .claude/skills/spec
cp -r /path/to/spec-skill/SKILL.md .claude/skills/spec/
cp -r /path/to/spec-skill/references/ .claude/skills/spec/
```

Or clone and copy:

```bash
git clone https://github.com/openkash/ai-agent-spec-skill.git /tmp/spec-skill
mkdir -p .claude/skills/spec
cp /tmp/spec-skill/SKILL.md .claude/skills/spec/
cp -r /tmp/spec-skill/references/ .claude/skills/spec/
```

Claude Code auto-discovers skills in `.claude/skills/`. Once copied, the skill appears in the `/` menu and Claude can invoke it automatically when relevant.
Copy the template and fill in your project's details:

```bash
cp /tmp/spec-skill/PROJECT.md .claude/skills/spec/PROJECT.md
```

Or start from an example config:

```bash
# Pick your stack
cp /tmp/spec-skill/project-configs/android-kotlin.md \
   .claude/skills/spec/PROJECT.md
```

Edit `PROJECT.md` with your domain context, architecture, and project-specific concerns.
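As a rough illustration only, a customized `PROJECT.md` for a calendar app might look something like the sketch below. The section headings and contents here are invented examples, not the template's actual schema; treat the shipped `PROJECT.md` template as the source of truth.

```markdown
<!-- Illustrative sketch only; the real template ships with the skill -->
# Project Context

## Domain
Calendar app with CalDAV sync: events, reminders, recurring rules.

## Architecture
Android / Kotlin / Compose; repository pattern over local storage
plus a CalDAV client.

## Domain-Specific Concerns
- Recurring-event edits: this occurrence vs. the whole series
- Sync conflicts between local edits and server state

## Quality Standards
- Every spec must name the affected sync paths
```

The "Domain-Specific Concerns" and "Quality Standards" sections are the ones the validation checklist and clarification category 9 draw from.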
Invoke the skill directly:

```
/spec add a feature that lets users export their data as CSV
/spec implement rate limiting on the public API endpoints
/spec add keyboard shortcuts for common actions
```

Or let Claude invoke it automatically when you describe work that needs clarification:

```
I need to add multi-calendar support. There are several approaches
and I'm not sure about the scope. Help me spec it out.
```
This skill is designed as a companion to TDD implementation skills like ai-agent-tdd-skill. The workflow:
```
/spec <feature>  -> Clarify WHAT to build (spec skill)
/tdd <feature>   -> Implement HOW with test-first (TDD skill)
```
The spec output feeds directly into TDD phases:
| Spec Output | TDD Input |
|---|---|
| User Stories | Phase 1: Analysis scope |
| Acceptance Criteria | Phase 3: Pre-test assertions |
| Edge Cases | Phase 3: Additional test cases |
| Affected Components | Phase 2: Chunk decomposition starting point |
| Assumptions | Phase 1: Architecture decision context |
The handoff includes the spec file path so TDD can load it:
```
Next: /tdd implement <feature> (spec: docs/specs/<feature>.md)
```
```
spec-skill/
├── SKILL.md                          # Core spec process (project-agnostic)
├── PROJECT.md                        # Template, copy and customize
├── references/
│   ├── spec-template.md              # Feature spec output format
│   ├── clarification-taxonomy.md     # 9-category ambiguity detection
│   └── validation-checklist.md       # 8-point spec quality criteria
└── project-configs/                  # Example PROJECT.md for common stacks
    ├── android-kotlin.md             # Android / Kotlin / Compose / Hilt
    ├── typescript-node.md            # TypeScript / Node.js / Express
    ├── python-django.md              # Python / Django / DRF
    └── rust-cli.md                   # Rust / CLI / clap
```
Note on naming: The TDD skill's project-configs use test framework names (python-pytest.md, rust-cargo.md). The spec skill uses application type names (python-django.md, rust-cli.md) because specifications describe user behavior, not test infrastructure.
| Content | Location | Committed? |
|---|---|---|
| Spec process (universal) | SKILL.md | Yes |
| Project config (your stack) | PROJECT.md | Your choice |
| Spec output template | references/spec-template.md | Yes |
| Ambiguity categories | references/clarification-taxonomy.md | Yes |
| Quality checklist | references/validation-checklist.md | Yes |
| Active spec (feature in progress) | Your specs directory | Your choice |
Every user story gets testable acceptance criteria in Given/When/Then format. This format converts directly to test cases during TDD:
```
Given a user with 50 events,
When they search for "meeting",
Then they see only events with "meeting" in the title (parameterized)
```

Each criterion includes a test complexity hint: `(1 test)` for simple assertions, `(parameterized)` for multiple input/output combinations, and `(integration)` for multi-step or external-dependency tests.
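To make the mapping concrete, here is a sketch of how a `(parameterized)` criterion like the one above could become a table-driven test during TDD. The `Event` type and `search_events` function are hypothetical stand-ins for your own code, not part of this skill:

```python
# Hypothetical sketch: turning a "(parameterized)" acceptance criterion
# into a table-driven test. `Event` and `search_events` stand in for
# whatever your project actually implements.
from dataclasses import dataclass


@dataclass
class Event:
    title: str


def search_events(events, query):
    """Return events whose title contains the query, case-insensitively."""
    return [e for e in events if query.lower() in e.title.lower()]


# Given / When / Then as a parameter table: (titles, query, expected titles)
CASES = [
    (["Team meeting", "Lunch", "Meeting notes"], "meeting",
     ["Team meeting", "Meeting notes"]),
    (["Lunch", "Gym"], "meeting", []),  # edge case: no matches
]

for titles, query, expected in CASES:
    # Given a user with these events, When they search for `query`,
    # Then they see only events with the query in the title.
    events = [Event(t) for t in titles]
    got = [e.title for e in search_events(events, query)]
    assert got == expected
```

In a real project the same table maps one-to-one onto something like `pytest.mark.parametrize` cases, which is exactly why the spec flags such criteria as `(parameterized)`.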
Instead of open-ended "any questions?", the skill uses 9 structured categories to systematically detect ambiguity:
- Scope & Boundaries
- Users & Roles
- Data & State
- Interaction Flow
- Error Handling
- Edge Cases & Boundaries
- Integration & Dependencies
- Non-Functional Concerns (performance, security, accessibility, observability)
- Domain-Specific Concerns (from PROJECT.md)
Questions are asked one at a time, with recommendations, max 5 total. See clarification-taxonomy.md.
Before handoff, the spec is validated against 8 quality criteria: testability, completeness, clarity, scope, independence, priority, edge cases, and resolution of open questions. Plus project-specific checks from PROJECT.md. See validation-checklist.md.
Small well-understood changes skip Phase 3 (Clarify) entirely. The process collapses to: Explore -> Specify -> Validate -> Handoff. Use when the feature can be described in one sentence and has no competing interpretations.
Running /spec on a feature that already has a spec file gives you
a choice: update the existing spec (preserving clarification history)
or start fresh (archiving the old one with .bak).
The skill includes 9 lessons from real specification sessions: scope creep starting in the spec, "simple" features hiding domain complexity, acceptance criteria that can't fail, and so on. They are applied on every run.
Add lessons to your PROJECT.md or append to your copy of SKILL.md in the Lessons Learned section. Keep the format:

```
10. **Your lesson title** --
    What happened, why it matters, what to do differently
```

The 8 points are intentionally generic. Customize them in your PROJECT.md under "Domain-Specific Concerns" and "Quality Standards". The checklist references those sections.
The 9 categories cover most projects. Customize category 9
(Domain-Specific Concerns) in your PROJECT.md. If your project
has unique ambiguity patterns not covered by categories 1-8, add
them to category 9.
Extracted from an open source calendar app where it was used to specify CalDAV sync features, widget behavior, and recurring event handling. Inspired by patterns from spec-kit (structured ambiguity taxonomy, interactive clarification with recommendations, self-validation checklists), adapted to be lightweight and single-file.
Apache License 2.0. See LICENSE.