The intuitive project spec. One interview, every AI tool.
An AI-powered TUI that interviews you about your project, extracts a structured .spec file, and exports to any AI coding tool's native format. The interview is the product. The file is the artifact.
Every AI coding tool invented its own format for the same information: "here's what I'm building and how I want it built."
- AGENTS.md (OpenAI/Linux Foundation standard, 60k+ repos, 20+ tools)
- .cursorrules / .cursor/rules/*.mdc (Cursor)
- CLAUDE.md (Claude Code)
- .windsurfrules (Windsurf)
- .clinerules (Cline)
- copilot-instructions.md (GitHub Copilot)
- ...and 15+ more formats
Your project knowledge is locked into whichever tool you picked. Switch tools, rewrite everything. Use two tools, maintain two specs. New teammate clones the repo — the AI has no idea what the project is.
What exists today is all prose. No schema, no validation, no structure. You can't tell if your AGENTS.md is "complete." You can't validate your code against it. It's a sticky note, not a contract.
| What exists | What's missing |
|---|---|
| Free-form Markdown files (AGENTS.md) | Structured, schema-validated project intelligence |
| Manual authoring ("just write it") | Smart interview that extracts what you'd never think to write |
| Static files that rot | Living contract with drift detection |
| Per-tool configs | Universal format that generates all others |
| Code-style rules only | Architecture, domain context, planning, constraints |
| Solo developer focus | Team-portable project brain that travels with the repo |
An AI business analyst in your terminal. You answer questions about what you're building, and it produces a structured project definition that any AI tool can consume.
For vibe coders: Spec it out before writing a line of code. The interview helps you think through what you're building — architecture, constraints, stack choices — then hands that context to whatever AI tool builds it.
For experienced devs: Stop re-explaining your project to every AI tool and session. One interview, one file, every tool aligned. Push to repo, clone on your laptop, your AI immediately knows the project.
For tool authors: A structured, schema-validated format with a reference parser. Zero effort to support .spec in your tool.
The make-or-break moment. This is the product.
First 10 seconds: Gorgeous ASCII art + animation. Immediate visual wow — this is not another janky CLI tool.
AI detection: Scans the system for existing AI tool auth (API keys, Ollama, etc.). Shows what it found: "I see you have Claude and GPT-4 configured. Which should I use for this interview?" User picks, or sets up a new provider.
The interview flow:
- Start with vision, not tech. "What are you building?" Not "what's your tech stack?" The interview begins where humans begin — with the idea. This is the opposite of every AI tool config that starts with framework choices.
- Adaptive depth. Start quick (5-10 core questions). After each section: "Want to go deeper on [architecture / constraints / style]?" User controls how deep each section goes. A vibe coder speccing a landing page goes shallow. An engineer speccing a distributed system goes deep.
- Hybrid intelligence. Scripted structure (defined sections and phases) with AI-generated questions within each section. The AI follows up on YOUR specific answers, not a fixed script. Says "you mentioned real-time updates — have you thought about WebSocket vs SSE vs polling?" because it heard you, not because it asks everyone.
- Codebase-aware (if applicable). If run in an existing project directory, it detects what it can: "I see Next.js 14, TypeScript, Tailwind, PostgreSQL, a src/ directory with 47 files." Then asks about what it can't detect: architecture decisions, constraints, domain context.
- Greenfield-first. Equally powerful with NO existing codebase. For vibe coders, this IS the starting point — plan the project before the first line of code exists. The interview helps you think through what you're building.
- Monorepo-adaptive. If you tell it you have multiple services, the interview structures accordingly — shared context + per-service sections. The spec format handles it; you don't need to know how.
- Optional roadmap. After the project is described, offers: "Want me to build out a development roadmap?" If yes, generates a phased build plan. User reviews, rearranges, approves, or skips entirely.
Output: A structured .spec file at project root.
Machine-readable, human-editable. Not free-form prose — structured sections with a JSON Schema for validation. But still editable by hand when needed.
Two views of the same file:
- Human reading order: Vision → goals → domain context → architecture → stack → style → constraints → phases
- Tool consumption: Any section queryable independently. Tools extract what they need.
Core sections:
```yaml
project:
  # name, description, goals, non-goals
domain:
  # business context, key concepts, terminology
  # (the stuff AI needs to know that isn't code)
architecture:
  # patterns, directory structure, module boundaries
  # services (for monorepos/microservices)
  # data flow, dependencies between modules
stack:
  # languages, frameworks, databases, infrastructure
  # package manager, build tools, deployment target
style:
  # formatting, naming conventions, component patterns
  # file organization rules, import ordering
constraints:
  # "never do X", "always do Y"
  # security requirements, performance budgets
  # compliance rules, accessibility requirements
phases:   # optional — generated by roadmap feature
  # ordered build plan with milestones
  # user-reviewed and approved
agents:   # optional — for multi-agent setups
  # module ownership/boundaries
  # coordination rules
```
Extensible: Custom sections via plugin system. A plugin can add ecommerce: with payment, inventory, and shipping context.
Format: YAML. TOML was also considered, but the open questions below resolve in favor of YAML (LLM familiarity, deep nesting, schema tooling).
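Assuming the YAML choice, a minimal `.spec` might look like the sketch below. Section names follow the outline above; every value is invented purely for illustration and is not part of any published schema:

```yaml
project:
  name: acme-dashboard
  description: Internal analytics dashboard for the sales team
  goals: [ship MVP in 6 weeks]
  non-goals: [mobile app]
stack:
  languages: [typescript]
  frameworks: [nextjs]
  databases: [postgresql]
constraints:
  - never: class components
  - always: strict TypeScript ("strict": true)
```

Even a file this small is already useful to a tool: each top-level section is independently addressable, so an exporter or validator can read just `stack:` or just `constraints:` without parsing prose.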
Generates the native config for any supported AI coding tool from one .spec file.
```
specit export                      # exports to all detected tools
specit export --format agents.md   # AGENTS.md specifically
specit export --format cursor      # .cursor/rules/*.mdc
specit export --format claude      # CLAUDE.md
```

AGENTS.md is an output, not a competitor. SpecIt sits above the AGENTS.md standard. The interview creates structured intelligence; AGENTS.md is one export format among many.
Community-contributed exporters via plugin system. One command to go from .spec → whatever your tool expects.
Checks your codebase against the spec. The living contract.
```
specit validate        # local check
specit validate --ci   # CI mode (exit code for pipelines)
```

Example findings:

- "Spec says TypeScript everywhere, but utils/helper.js exists"
- "Spec says max file length 300 lines, 4 files exceed this"
- "Spec says no class components, found 2 in src/legacy/"
- "Spec says REST API pattern, but 3 GraphQL resolvers appeared"
Machine-actionable constraints — not just prose suggestions, but rules that can be programmatically checked.
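One of those findings, the "TypeScript everywhere, but utils/helper.js exists" check, could be programmatically checked along these lines. This is a sketch only: the function name, the banned-extension rule shape, and the constraint representation are all assumptions, since the real rule format is defined by the (unpublished) `.spec` schema:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// disallowedFiles flags files whose extension violates a language
// constraint such as "TypeScript everywhere". The constraint shape
// (a list of banned extensions) is hypothetical; a real validator
// would derive it from the parsed .spec.
func disallowedFiles(paths, bannedExts []string) []string {
	var violations []string
	for _, p := range paths {
		ext := filepath.Ext(p)
		for _, banned := range bannedExts {
			if strings.EqualFold(ext, banned) {
				violations = append(violations, p)
			}
		}
	}
	return violations
}

func main() {
	tree := []string{"src/app.ts", "utils/helper.js", "src/api.ts"}
	for _, v := range disallowedFiles(tree, []string{".js"}) {
		fmt.Printf("spec says TypeScript everywhere, but %s exists\n", v)
	}
}
```

The point is the shape, not the check itself: every constraint in the spec maps to a small pure function over the file tree, which is what makes `--ci` mode (exit codes, no interaction) possible.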
This is the wow feature. Your code evolves. SpecIt notices.
```
⚠ Drift detected:

  Your spec says REST API, but 3 new GraphQL resolvers appeared
  in src/api/graphql/ (added in last 2 weeks).

  Your spec says "PostgreSQL only", but a Redis client was added
  to package.json 4 days ago.

Want to update the spec? [y/n/review each]
```
Auto-detection of when code has diverged from the spec. This creates ongoing value that keeps people coming back — the spec isn't a write-once file, it's a living document that evolves with the project.
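The core of drift detection is a set difference between what the spec declares and what the codebase actually contains, surfaced only past a configurable threshold (per the decisions table below). A minimal sketch, with invented names and a flat string representation standing in for the real parsed spec:

```go
package main

import "fmt"

// DriftItem records one divergence between spec and codebase.
type DriftItem struct{ Detail string }

// detectDrift compares technologies declared in the spec against those
// observed in the codebase, and only surfaces drift once the threshold
// is crossed ("not naggy"). All identifiers here are illustrative.
func detectDrift(declared, observed []string, threshold int) []DriftItem {
	have := map[string]bool{}
	for _, d := range declared {
		have[d] = true
	}
	var drift []DriftItem
	for _, o := range observed {
		if !have[o] {
			drift = append(drift, DriftItem{Detail: o + " found in codebase but absent from spec"})
		}
	}
	if len(drift) < threshold {
		return nil // below threshold: detect silently, say nothing
	}
	return drift
}

func main() {
	spec := []string{"postgresql"}
	code := []string{"postgresql", "redis", "graphql"}
	for _, d := range detectDrift(spec, code, 2) {
		fmt.Println("⚠ drift:", d.Detail)
	}
}
```

The threshold is what turns this from a linter that nags on every commit into a tool that speaks up only when the spec has genuinely fallen behind.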
Full plugin ecosystem — not just exporters.
| Plugin Type | What It Does | Example |
|---|---|---|
| Exporters | Convert .spec to tool-specific formats | .spec → .cursorrules, AGENTS.md |
| Interview plugins | Add domain-specific questions to the interview | E-commerce plugin asks about payments, inventory, shipping |
| Validators | Custom rules that check code against spec | Accessibility validator, security audit rules |
| Stack templates | Pre-built .spec starting points | "Next.js + Supabase + Tailwind" template |
| Analyzers | Read codebase and suggest spec updates | "I detected you're using tRPC now — add to spec?" |
Community-contributed. Think npm ecosystem for project intelligence.
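An exporter plugin, for instance, might reduce to a single small interface. This is a hypothetical sketch, not the actual plugin API (which does not exist yet); `Spec`, `Exporter`, and the toy AGENTS.md renderer are all invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// Spec is a loosely typed view of a parsed .spec file; real code
// would use schema-validated structs instead of a string map.
type Spec map[string]string

// Exporter is a hypothetical plugin interface: each exporter turns
// one Spec into one tool's native config file.
type Exporter interface {
	Target() string // output filename, e.g. "AGENTS.md"
	Render(s Spec) (string, error)
}

// agentsMD is a toy exporter emitting a minimal AGENTS.md.
type agentsMD struct{}

func (agentsMD) Target() string { return "AGENTS.md" }

func (agentsMD) Render(s Spec) (string, error) {
	var b strings.Builder
	fmt.Fprintf(&b, "# %s\n\n%s\n", s["name"], s["description"])
	return b.String(), nil
}

func main() {
	var e Exporter = agentsMD{}
	out, _ := e.Render(Spec{"name": "demo", "description": "A demo project."})
	fmt.Print(out)
}
```

A small, stable interface like this is what lets community exporters, validators, and analyzers ship independently of the core binary.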
- Not a replacement for AGENTS.md — it generates AGENTS.md (and every other format)
- Not a build system — it defines WHAT you're building, not HOW to build it
- Not opinionated — captures YOUR decisions, doesn't make them for you
- Not another config file to write — the interview writes it for you
- Not tool-locked — works with any AI coding tool, current or future
| Decision | Choice | Rationale |
|---|---|---|
| Language | Go | Single binary, fast, Charm ecosystem for TUI |
| TUI framework | Charm (Bubble Tea + Lip Gloss + Huh) | Gold standard for beautiful terminal UIs |
| AI model | Auto-detect system auth | Scans for existing API keys/Ollama, user picks. BYOK model. |
| File format | .spec (YAML) | Scales across AI agents. LLMs parse YAML reliably. Deep nesting for architecture specs. JSON Schema for validation. |
| Schema | Strict core, loose extensions | Core sections (project, stack, architecture) have strict JSON Schema. Plugin/tool sections are freeform. package.json model. |
| License | Open source (MIT or Apache 2.0) | Maximum adoption. Revenue from ecosystem, not the tool. |
| Distribution | go install, Homebrew, binary releases | Single binary, no runtime dependency |
| Plugin distribution | Git repos (v1), registry (v2) | specit plugin add github.com/user/plugin. Central registry when ecosystem matures. |
| Drift detection | Smart threshold, configurable | Detects silently. Surfaces when drift exceeds threshold (e.g., 3+ deviations). User-configurable sensitivity. |
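The "strict core, loose extensions" decision maps naturally onto JSON Schema. A rough sketch (the actual schema is not yet published; property names mirror the section outline, and everything else here is an assumption):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "project": {
      "type": "object",
      "required": ["name", "description"],
      "additionalProperties": false,
      "properties": {
        "name": { "type": "string" },
        "description": { "type": "string" },
        "goals": { "type": "array", "items": { "type": "string" } }
      }
    }
  },
  "additionalProperties": true
}
```

`additionalProperties: false` inside core sections gives IDE autocomplete and hard validation; `additionalProperties: true` at the top level is what leaves room for plugin namespaces like `ecommerce:`.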
| Project | What It Is | How SpecIt Differs |
|---|---|---|
| AGENTS.md | Industry standard prose file (60k repos) | SpecIt generates AGENTS.md. We're the intelligence layer above it. |
| Ruler (2.5k stars) | Central rules → generates 30+ tool configs | Translation layer, no interview, no validation, no drift detection |
| FAF / .faf | Structured YAML, IANA-registered | Tried structured format without the experience. 16 stars. |
| CCS / .context/ | Directory-based context spec | Dead (archived Oct 2025). No tool adoption, no killer experience. |
| Tool-specific configs | .cursorrules, CLAUDE.md, etc. | Locked to one tool. SpecIt is the universal source. |
Our moat: The interview experience. Nobody else is building the extraction layer — they all assume you'll write the config manually.
SpecIt solves the "shared project brain" problem:
- Solo dev, multiple machines: Push .spec to repo → clone on laptop → AI immediately aligned
- Teams: Every teammate clones, gets the same project context. No tribal knowledge.
- Multiple AI agents: All agents read from one .spec. Consistent understanding.
- Team overlays: Base .spec for the project + .spec.local (gitignored) for personal preferences (future)
- specit init — Adaptive AI interview with gorgeous TUI
- .spec file format — Schema-validated, structured sections
- specit export — AGENTS.md + 2-3 major tool formats
- specit validate — Check codebase against spec, CI mode
- Plugin architecture — Extensible from day one
- specit diff — Drift detection
- 3-5 stack templates (Next.js, Rails, Go API, etc.)
- Interview plugins for 2-3 domains
- Beautiful error messages and help text
- specit import — Reverse-engineer spec from codebase
- specit import --from agents.md — Import from existing configs
- Team features (overlays, org-level inheritance)
- Privacy/encryption for sensitive spec sections
- Web UI companion (for non-terminal users)
Based on research into how universal standards succeed (LSP, OpenAPI, .editorconfig, devcontainer.json, package.json):
- Experience-first, not standard-first. The interview is the forcing function. People adopt because the TUI is amazing, not because the format is correct.
- Generate AGENTS.md from day one. Ride the existing standard. Don't compete with the Linux Foundation. Be the best way to CREATE an AGENTS.md.
- CLI-first, tool-agnostic. No dependency on any AI coding tool. Standalone value from the interview + validation alone.
- Find the second adopter. Before public launch, get one AI tool author to read .spec natively. The announcement should be "SpecIt + [Tool] jointly support .spec" — not "SpecIt hopes tools adopt."
- Extensibility as adoption incentive. The plugin namespace means tools can add their own config to .spec. Selfish reason to support the format.
- Schema + reference parser. Publish a Go library (and maybe a JS/Python one) that parses .spec. Tool authors import it, zero effort to support.
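The reference parser is what makes "zero effort to support" credible. A sketch of its shape: the real library would accept YAML, but this version parses an equivalent JSON document so the sketch stays stdlib-only; `CoreSpec`, `Parse`, and the field layout are all assumptions, not the published API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// CoreSpec models the strict core; unknown top-level sections are kept
// loose, mirroring the "strict core, loose extensions" decision.
type CoreSpec struct {
	Project struct {
		Name        string   `json:"name"`
		Description string   `json:"description"`
		Goals       []string `json:"goals"`
	} `json:"project"`
	Extensions map[string]json.RawMessage `json:"-"`
}

// Parse decodes the core sections strictly, then collects every
// non-core section as a raw, uninterpreted extension.
func Parse(data []byte) (*CoreSpec, error) {
	var s CoreSpec
	if err := json.Unmarshal(data, &s); err != nil {
		return nil, err
	}
	var all map[string]json.RawMessage
	if err := json.Unmarshal(data, &all); err != nil {
		return nil, err
	}
	delete(all, "project") // core sections are not extensions
	s.Extensions = all
	return &s, nil
}

func main() {
	doc := []byte(`{"project":{"name":"demo","goals":["ship"]},"ecommerce":{"payments":"stripe"}}`)
	spec, err := Parse(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(spec.Project.Name, len(spec.Extensions))
}
```

A tool author who only cares about `stack:` reads one field; a plugin reads its own raw extension blob. Nobody has to understand the whole format to consume part of it.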
Someone runs specit init on Monday, answers 10 minutes of questions, gets a .spec file. On Wednesday they switch from Cursor to Claude Code — specit export gives them a perfect CLAUDE.md instantly. Their teammate clones the repo Friday and the AI immediately knows the project. On the following Monday, specit diff notices they added GraphQL and offers to update the spec.
The .spec file becomes the first thing you create in a new project. Before the README. Before the first commit. You spec it, then you build it.
All open questions from the user interview have been resolved:
| Question | Decision | Rationale |
|---|---|---|
| YAML vs TOML | YAML | LLM familiarity, deep nesting support, AI tooling ecosystem is YAML-native, JSON Schema validation |
| Schema strictness | Strict core, loose extensions | Core sections get JSON Schema (IDE autocomplete, validation). Plugin sections are freeform. package.json model. |
| Plugin distribution | Git repos first, registry later | specit plugin add <repo>. Build a registry/site when there are enough plugins. |
| Drift detection behavior | Smart threshold, configurable | Detects silently. Only surfaces when drift exceeds configurable threshold. Not naggy. |
| Naming | SpecIt (90% locked) | specit CLI, .spec file. TUI-first identity. Final name confirmation before public launch. |