
# Agentic Project Planner

*Agentic Project Planner Pipeline*

A Claude Code skill that turns feature ideas into implementation-ready plans. It combines parallel research agents, structured brainstorming, spec writing with automated review, and agent-team orchestration, where every task gets skill/MCP assignments and continuous verification via Ralph Loop.

No more jumping straight to code. This skill enforces a proven pipeline:

**Research** (parallel agents) → **Brainstorm** (interactive) → **Spec** (reviewed) → **Plan** (agent teams)

Each phase produces a deliverable. Each deliverable is reviewed. The result is a plan where every task has an assigned agent role, mapped skills and MCPs, and continuous verification.


## What It Does

### Phase 1: Parallel Research

The moment you describe a feature, the skill launches 3-5 background agents simultaneously — each researching a different aspect:

| Agent | Researches | Example Tools Used |
| --- | --- | --- |
| Codebase Explorer | Your existing code patterns, auth, data model | Grep, Glob, Read |
| API Researcher | External API docs, SDK capabilities, rate limits | read-website-fast, context7 |
| Competitive Analyst | How other products solve the same problem | WebSearch, apify |
| Security Researcher | Abuse prevention, compliance, policy constraints | WebSearch, read-website-fast |
| Domain Expert | Industry best practices, architecture patterns | context7, openaiDeveloperDocs |

All run in parallel while you continue designing.
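
Mechanically, this fan-out is concurrent dispatch with results collected as each agent completes. A rough TypeScript sketch of the idea (hypothetical names; the real skill launches Claude Code background agents, not functions):

```typescript
// Hypothetical sketch of the research fan-out. The real skill launches
// Claude Code background agents; plain functions stand in for them here.
type Finding = { agent: string; summary: string };

async function runAgent(name: string, topic: string): Promise<Finding> {
  // Stand-in for a background research agent working on one aspect.
  return { agent: name, summary: `findings on ${topic}` };
}

async function research(feature: string): Promise<Finding[]> {
  const assignments: [string, string][] = [
    ["Codebase Explorer", `existing code relevant to ${feature}`],
    ["API Researcher", "external API docs and rate limits"],
    ["Competitive Analyst", "how competitors solve this"],
  ];
  // Every agent starts immediately; nothing waits on a predecessor.
  return Promise.all(assignments.map(([name, topic]) => runAgent(name, topic)));
}
```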

### Phase 2: Interactive Brainstorming

One question at a time, the skill helps you nail down requirements:

- What exactly should this feature do?
- Who uses it? (Free users? Premium? Admin?)
- How does the user interact with it?
- What happens when things go wrong?
- How does this fit the business model?

Research results are woven into the conversation as agents complete: "My competitive analyst found that [Competitor X] charges $6.99/month for this..."

### Phase 3: Spec Writing & Review

A comprehensive design spec is written, covering user journeys, database schema, API design, security, edge cases, and business logic.

Then an automated spec reviewer checks for:

- Completeness gaps
- Consistency issues
- Security concerns
- Implementation blockers
- GDPR/compliance requirements

Issues are fixed before the plan begins.

### Phase 4: Agent Team Plan

The implementation plan assigns specialized agents to each task:

```markdown
## Phase 3: AI Intent Classification

**Agent:** AI Engineer (opus)
**Skills:** ai-sdk, test-driven-development
**MCPs:** openaiDeveloperDocs (search_openai_docs), context7

### Task 3.1: Intent Classifier
**Files:** Create: lib/whatsapp/intent.ts

- [ ] Step 1: Write the failing test
- [ ] Step 2: Implement classification with pre-filter cascade
- [ ] Step 3: Run type check: `npx tsc --noEmit`
- [ ] Step 4: Commit
```
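
A "pre-filter cascade" means cheap deterministic checks run before any model call, so unambiguous messages never spend tokens. A minimal sketch of the pattern (hypothetical helper names, not the spec's actual code in lib/whatsapp/intent.ts):

```typescript
type Intent = "search" | "save" | "help" | "unknown";

// Layer 1: cheap deterministic pre-filters. Unambiguous messages
// are classified here and never reach the model.
function preFilter(message: string): Intent | null {
  const text = message.trim().toLowerCase();
  if (text === "help" || text === "?") return "help";
  if (text.startsWith("save ")) return "save";
  if (text.startsWith("find ") || text.startsWith("search ")) return "search";
  return null; // ambiguous: fall through to the model
}

// Layer 2: only ambiguous messages pay for a model call
// (the gpt-5-nano classifier mentioned in the example below).
async function classifyIntent(
  message: string,
  modelClassify: (m: string) => Promise<Intent>
): Promise<Intent> {
  return preFilter(message) ?? modelClassify(message);
}
```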

Ralph Loop runs continuous verification throughout:

```shell
# Backend phases: type-check every 5 minutes
/ralph-loop 5m cd app && npx tsc --noEmit

# Frontend phases: build-check every 10 minutes
/ralph-loop 10m cd app && npm run build 2>&1 | tail -30

# Cancel when done
/cancel-ralph
```
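
Conceptually, Ralph Loop is a timer that keeps re-running a verification command and flags failures as soon as they appear. An illustrative sketch of that loop shape (not the plugin's implementation):

```typescript
// Illustrative periodic verifier, not the ralph-loop plugin's code.
// `run` would shell out to e.g. `npx tsc --noEmit` and report success.
function startLoop(
  run: () => Promise<boolean>,
  intervalMs: number,
  onFail: (iteration: number) => void
): () => void {
  let iteration = 0;
  const timer = setInterval(async () => {
    const n = ++iteration;
    if (!(await run())) onFail(n); // surface the failure immediately
  }, intervalMs);
  return () => clearInterval(timer); // the "/cancel-ralph" equivalent
}
```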

## Real Example: WhatsApp Bot Feature

Here's what the skill produced for a real project — an AI-powered WhatsApp bot for a SaaS app:

**Research phase** (4 parallel agents, ~3 minutes):

- Codebase Explorer → mapped existing search architecture (3-tier FTS + ilike + tags), auth system (Google OAuth + Supabase), credit economy
- WaSender API Researcher → documented full API capabilities, webhook format, rate limits, pricing ($6/mo)
- Abuse Prevention Researcher → found Meta's 2026 ban on general-purpose AI chatbots, identified WhatsApp's 24-hour messaging window, recommended 5-layer prompt injection defense
- Competitive Analyst → found Keepi.ai ($6.99/mo) as closest competitor, identified their weakness (no web dashboard), mapped UX patterns across 8 products

**Brainstorming** (7 questions → aligned design):

- Intent detection: AI-powered cascade (code pre-filters + gpt-5-nano classifier)
- Credit economy: 2 credits/search, 1 credit/capture, 5 messages/day cap
- Admin gating: whitelist-only access
- Provider abstraction: WaSender now, Twilio/Meta Cloud API later
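
The provider abstraction above boils down to a narrow interface that adapters implement, so swapping WaSender for Twilio or the Meta Cloud API later means adding an adapter rather than touching the rest of the bot. An illustrative shape (assumed method names, not the spec's actual interface):

```typescript
// Illustrative provider interface; the actual spec's shape may differ.
interface WhatsAppProvider {
  sendMessage(to: string, body: string): Promise<void>;
  parseWebhook(payload: unknown): { from: string; body: string } | null;
}

// WaSender adapter today. A Twilio or Meta Cloud API adapter would
// implement the same interface, leaving the pipeline untouched.
class WaSenderProvider implements WhatsAppProvider {
  async sendMessage(to: string, body: string): Promise<void> {
    // POST to the WaSender send endpoint (omitted here).
  }

  parseWebhook(payload: unknown): { from: string; body: string } | null {
    const p = payload as { from?: string; body?: string };
    return p?.from && p?.body ? { from: p.from, body: p.body } : null;
  }
}
```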

**Spec** (20 sections, 987 lines, reviewed with 3 critical issues found and fixed):

- Database: 4 new tables with RLS + SECURITY DEFINER RPCs
- Webhook pipeline: fire-and-forget with serverless-safe state management
- Security: GDPR compliance, prompt injection defense, ban avoidance
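
"Fire-and-forget with serverless-safe state" means the webhook route persists the work and acknowledges immediately, keeping all state in the database because serverless instances retain nothing in memory between invocations. A hedged sketch (hypothetical handler, not the spec's code):

```typescript
// Hypothetical webhook handler sketch; names are illustrative.
// `enqueue` persists the payload to a DB-backed queue, the only safe
// place for state on serverless: instances are recycled between
// invocations, so in-memory state disappears.
async function handleWebhook(
  payload: unknown,
  enqueue: (p: unknown) => Promise<void>
): Promise<{ status: number }> {
  await enqueue(payload); // persist first, before acknowledging
  // Acknowledge immediately so the provider neither retries nor
  // times out; actual processing happens from the queue.
  return { status: 200 };
}
```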

**Plan** (10 phases, 18 tasks, 8 agent roles):

| Agent | Role | Skills | MCPs |
| --- | --- | --- | --- |
| DB Architect | Migrations, RLS, RPCs | security-review | supabase |
| Backend Architect | Provider layer, webhook, routes | test-driven-development | context7, read-website-fast |
| AI Engineer | Intent classification | ai-sdk | openaiDeveloperDocs |
| Search Specialist | Search handler | test-driven-development | supabase |
| Capture Specialist | Save handler | test-driven-development | read-website-fast |
| Security Engineer | Rate limiting, credits | security-review | supabase |
| Frontend Engineer | Settings UI, admin dashboard | frontend-design, shadcn | chrome-devtools |
| QA Engineer | Integration tests | systematic-debugging | supabase, chrome-devtools |

## Installation

### Claude Code (Recommended)

```shell
# Clone the repo
git clone https://github.com/tomereiges/agentic-project-planner.git

# Copy to your Claude Code skills directory
cp -r agentic-project-planner ~/.claude/skills/

# Or add via Claude Code CLI
claude skill add ./agentic-project-planner
```

### Claude.ai

  1. Download the skill folder (or clone the repo)
  2. Zip the agentic-project-planner folder
  3. Go to Claude.ai → Settings → Capabilities → Skills
  4. Upload the zip file

### Manual Installation

Place the folder anywhere and reference it in your Claude Code settings:

```json
{
  "skills": [
    "/path/to/agentic-project-planner"
  ]
}
```

## Recommended Setup

The skill adapts to what you have installed — but the more tools available, the better the output. Here's what to install for the best experience.

### MCPs (Research & Verification)

These MCPs power the parallel research agents and implementation verification:

| MCP | What It Enables | Install |
| --- | --- | --- |
| read-website-fast | Fetch live API docs, competitor analysis, best practices | GitHub |
| context7 | Look up SDK/library documentation in real-time | GitHub |
| supabase | Apply migrations, test RPCs, verify data directly | Supabase MCP Plugin |
| chrome-devtools | Visual verification of UI changes via screenshots | GitHub |
| openaiDeveloperDocs | Search OpenAI API docs (for AI/LLM features) | GitHub |
| apify | Web scraping for competitive research | Apify MCP |

**Minimum recommended:** read-website-fast + context7. These two cover 80% of research needs.

### Skills (Agent Capabilities)

These skills are assigned to implementation agents:

| Skill | What It Enables | Install |
| --- | --- | --- |
| superpowers (brainstorming, writing-plans, subagent-driven-development, verification-before-completion, systematic-debugging, test-driven-development) | Core planning and execution pipeline | superpowers plugin |
| multi-agent-bus | Parallel agent coordination via shared JSON bus | Create locally or find in community skills |
| ralph-loop | Continuous verification during development | Ralph Loop plugin |
| frontend-design | High-quality UI component generation | Community skill |
| shadcn | shadcn/ui component patterns | Vercel plugin |
| security-review | Security audit for auth, payments, PII features | Community skill |
| ai-sdk | Vercel AI SDK integration patterns | Vercel plugin |
| web-design-guidelines | Accessibility and design compliance | Community skill |

**Minimum recommended:** superpowers + ralph-loop. These two power the core pipeline.

### What Happens Without MCPs/Skills?

The planner gracefully degrades:

| Missing | Impact | Workaround |
| --- | --- | --- |
| No research MCPs | Research agents can't fetch live docs → plan based on training data | User manually provides API docs |
| No supabase MCP | DB Architect can't apply migrations directly → writes SQL files | User applies migrations manually |
| No chrome-devtools | Frontend Engineer can't visually verify → relies on build checks | User verifies UI manually |
| No superpowers skills | Plan is written but lacks TDD flow and review checkpoints | Agents still work, just less structured |
| No ralph-loop | No continuous verification → errors caught later | User runs type checks manually |

## Usage

### Basic Usage

Just describe your feature:

```text
Plan a WhatsApp bot that lets users search their saved items using natural language
```

The skill activates automatically and starts the pipeline.

### With Context

Provide more context for better results:

```text
I want to add a notifications system to my Next.js app. Users should get email
and push notifications when items they saved get updated. We use Supabase for
the database and Resend for email. I want to think through abuse prevention,
rate limiting, and how this fits our existing credit system before we code anything.
```

### Trigger Phrases

The skill activates on:

- "plan this feature"
- "let's design this"
- "I need a plan for..."
- "brainstorm this with me"
- "create a spec for..."
- "plan implementation"
- "architect this"
- "think this through"
- Or any complex feature description that needs design before code

## How It Adapts

### Small Feature (< 5 tasks)

- 1-2 research agents (skip competitive analysis)
- 2-3 agent roles (combine Backend + Security)
- Simpler Ralph Loop (just `tsc --noEmit`)
- No multi-agent bus needed

### Medium Feature (5-15 tasks)

- 3-4 research agents
- 4-6 agent roles
- Phase-appropriate Ralph Loop
- Parallel tasks where possible

### Large Feature (> 15 tasks)

- 5+ research agents with specialized roles
- Full 7-8 agent team
- Multi-agent bus for parallel phases
- Sub-project decomposition recommended
- Review checkpoints between phases

## Output Structure

The skill produces these deliverables:

```text
docs/
├── superpowers/
│   ├── specs/
│   │   └── YYYY-MM-DD-<feature>-design.md    # Full design specification
│   └── plans/
│       └── YYYY-MM-DD-<feature>-implementation.md  # Agent team plan
```

The spec is reviewed automatically. The plan is ready for execution via `superpowers:subagent-driven-development`.


## FAQ

**Q: Does this work without any MCPs installed?** Yes. The skill degrades gracefully — research agents use WebSearch and WebFetch instead of specialized MCPs. The plan is still created with agent roles, but skill/MCP assignments reflect what's available.

**Q: How long does the full pipeline take?** It depends on feature complexity. A medium feature typically takes 15-30 minutes of interactive brainstorming, with research running in parallel. The spec and plan writing take another 10-15 minutes.

**Q: Can I skip phases?** Yes. If you already have a spec, jump to Phase 4. If you already know what you want, skip the brainstorming questions and go straight to spec writing. The skill adapts.

**Q: Does this work with non-JavaScript projects?** Yes. The agent roles, research patterns, and planning structure are language-agnostic. Ralph Loop commands and skill assignments adapt to your stack (Python, Go, Rust, etc.).

**Q: Can I use this with Claude.ai (not Claude Code)?** The skill works in Claude.ai, but without subagents the parallel research phase runs sequentially and Ralph Loop isn't available. The brainstorming, spec writing, and plan structure still work.

**Q: How is this different from just asking Claude to "make a plan"?** Three key differences:

1. Parallel research — 3-5 agents gather real data simultaneously instead of Claude guessing
2. Structured review — an automated spec reviewer catches gaps before implementation
3. Agent team assignments — every task has the right agent role, skills, MCPs, and verification, not just "implement this somehow"

## Contributing

Issues and PRs welcome. If you use this skill and find patterns that improve it, please share.

## License

MIT
