---
name: agentic-project-planner
description: >-
  Orchestrates feature planning with parallel research agents, brainstorming,
  spec review, and implementation plans with agent team assignments, skill/MCP
  mapping, and Ralph Loop. Use when: "plan this feature", "let's design this",
  "I need a plan for", "brainstorm this with me", "create a spec", "architect
  this", "think this through". Also triggers when a feature idea is complex
  enough to need design before code.
metadata:
  author: Tomer Ezri
  version: 1.0.0
  license: MIT
---

# Agentic Project Planner

Turn feature ideas into implementation-ready plans using parallel research agents, structured design, and agent team orchestration — all before writing a single line of code.

## Why This Exists

Complex features fail when you jump straight to code. This skill enforces a proven pipeline: Research (parallel) → Brainstorm (interactive) → Spec (reviewed) → Plan (agent teams + Ralph Loop).

Each phase produces a deliverable. Each deliverable is reviewed before the next phase begins. The result is an implementation plan where every task has an assigned agent role, the right skills and MCPs mapped to it, and continuous verification via Ralph Loop.
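The review gate between phases can be pictured as a small loop. This is an illustrative Python sketch, not part of the skill itself; `produce` and `review` are hypothetical stand-ins for the agent work and the review step:

```python
# The four phases of the pipeline and the deliverable each one produces.
PHASES = [
    ("Research", "research reports"),
    ("Brainstorm", "aligned design decisions"),
    ("Spec", "reviewed spec doc"),
    ("Plan", "implementation plan"),
]

def run_pipeline(produce, review):
    """produce(phase) -> deliverable; review(deliverable) -> bool.

    Returns (approved deliverable names, phase that blocked or None).
    """
    approved = []
    for phase, deliverable_name in PHASES:
        deliverable = produce(phase)
        if not review(deliverable):
            # A failed review blocks the pipeline at this phase;
            # the next phase never starts.
            return approved, phase
        approved.append(deliverable_name)
    return approved, None
```

The point of the sketch is the gate: nothing after a failed review runs, which is what keeps rework from compounding.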

## Prerequisites Check

Before starting, verify the user has what they need. Present this checklist early:

Required:

- Claude Code CLI with Agent tool support
- Git repository (for branch + commits)

Recommended (improves quality significantly):

- read-website-fast or apify MCP — for web research
- context7 MCP — for library/SDK documentation lookup
- supabase or database MCP — if feature involves data modeling
- chrome-devtools MCP — if feature involves UI work
- openaiDeveloperDocs or equivalent — if feature involves AI/LLM calls

Recommended Skills:

- superpowers:brainstorming — structured brainstorming flow
- superpowers:writing-plans — plan document format
- superpowers:subagent-driven-development — plan execution
- multi-agent-bus — parallel agent coordination
- ralph-loop — continuous verification loops
- security-review — for auth/data features
- frontend-design + shadcn — for UI features

If any recommended tools are missing, the skill still works — it adapts the plan to use what's available. Flag what's missing so the user can install it later.

## The Pipeline

```text
Phase 1: RESEARCH          Phase 2: BRAINSTORM         Phase 3: SPEC              Phase 4: PLAN
─────────────────          ───────────────────         ──────────────             ──────────────
Parallel agents            Interactive Q&A             Write spec doc             Agent teams
gather data on:            with user:                  with full design:          with roles:

• Existing codebase        • One question at a time    • User journeys            • Skill assignments
• Competitor products      • Multiple choice when      • Data model               • MCP assignments
• API capabilities           possible                  • API design               • Ralph Loop params
• Abuse/security           • Propose 2-3 approaches    • Security                 • Parallel vs sequential
• Best practices           • Get approval per section  • Edge cases               • Review checkpoints

Deliverable:               Deliverable:                Deliverable:               Deliverable:
Research reports           Aligned design decisions    Reviewed spec doc          Implementation plan
```

## Phase 1: Parallel Research

The moment a feature idea is described, identify 3-5 research questions that can be answered independently. Launch them as parallel background agents immediately — don't wait.

### Choosing Research Agent Roles

Every feature needs different research. Pick from these archetypes:

| Role | When to Use | Tools |
| --- | --- | --- |
| Codebase Explorer | Always — understand existing patterns | Explore subagent type, Grep, Glob, Read |
| API/SDK Researcher | Feature involves external APIs | read-website-fast MCP, context7 MCP, WebSearch |
| Competitive Analyst | Feature exists in other products | WebSearch, read-website-fast MCP, apify MCP |
| Security Researcher | Feature involves auth, payments, PII | WebSearch, read-website-fast MCP |
| Domain Expert | Feature needs specialized knowledge | WebSearch, context7 MCP, openaiDeveloperDocs MCP |
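As a rough illustration of the archetypes above, role selection can be expressed as a lookup keyed on feature traits. The trait names here are invented for the sketch; only the role names come from the table:

```python
# Hypothetical trait -> role mapping, derived from the table above.
RESEARCH_ROLES = {
    "external_api": "API/SDK Researcher",
    "exists_elsewhere": "Competitive Analyst",
    "sensitive_data": "Security Researcher",      # auth, payments, PII
    "specialized_domain": "Domain Expert",
}

def pick_research_roles(traits):
    # Codebase Explorer is always launched, regardless of traits.
    roles = ["Codebase Explorer"]
    roles += [role for trait, role in RESEARCH_ROLES.items() if trait in traits]
    return roles
```

A payments feature with an external API would yield Codebase Explorer plus the API/SDK and Security researchers, landing in the 3-5 agent range the phase targets.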

### Research Agent Dispatch Pattern

Agent prompt template:

```text
"You are a [ROLE] research agent. DO NOT write any code — only research.

Research [TOPIC] thoroughly. Use [SPECIFIC TOOLS] for each query:
1. Search: "[specific query 1]"
2. Search: "[specific query 2]"
3. Search: "[specific query 3]"

Report back with:
- [specific deliverable 1]
- [specific deliverable 2]
- [specific deliverable 3]
- Any limitations or gotchas discovered"
```

Launch all research agents with `run_in_background: true`. Continue to Phase 2 while they work. Their results feed into the design conversation as they complete.
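Filling the template programmatically can be sketched as below; the prompt wording comes from the template above, while the helper and its parameter names are hypothetical:

```python
from string import Template

# The dispatch template from this skill, with $-placeholders for the
# bracketed slots ([ROLE], [TOPIC], and so on).
RESEARCH_PROMPT = Template(
    "You are a $role research agent. DO NOT write any code — only research.\n\n"
    "Research $topic thoroughly. Use $tools for each query:\n"
    "$queries\n\n"
    "Report back with:\n"
    "$deliverables\n"
    "- Any limitations or gotchas discovered"
)

def build_research_prompt(role, topic, tools, queries, deliverables):
    return RESEARCH_PROMPT.substitute(
        role=role,
        topic=topic,
        tools=", ".join(tools),
        queries="\n".join(f'{i}. Search: "{q}"' for i, q in enumerate(queries, 1)),
        deliverables="\n".join(f"- {d}" for d in deliverables),
    )
```

One such prompt is built per research role, then each is dispatched as a background agent.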

### Codebase Explorer Is Mandatory

Always launch one Explore subagent (`subagent_type: "Explore"`) to understand:

- How the existing feature area works (auth, data model, API patterns)
- What can be reused vs. what needs to be built
- File paths and code patterns the implementation must follow

This agent's findings directly shape the spec and plan — without it, you're designing blind.

In Claude.ai or environments without the Agent tool, perform codebase exploration directly using Grep, Glob, and Read.

### If Research Returns Nothing

Empty research is still a signal — it means the space is novel or poorly documented. Note this in the brainstorming phase and adjust: lean more heavily on the user's domain knowledge and make assumptions explicit in the spec.

## Phase 2: Interactive Brainstorming

While research agents work in the background, start the brainstorming conversation with the user. This phase is interactive — one question at a time.

### Question Strategy

  1. Start with "what" — What exactly should this feature do? What's the core value?
  2. Then "who" — Who uses it? Free users? Premium? Admin?
  3. Then "how" — How does the user interact with it? What's the flow?
  4. Then "edges" — What happens when things go wrong? Abuse? Limits?
  5. Then "money" — How does this fit the business model? Free vs paid?

Prefer multiple-choice questions. Only ask open-ended when the answer space is truly open. Never ask more than one question per message.

### Incorporating Research Results

As background agents complete, weave their findings into the conversation:

- "My research agent just found that [competitor X] does this differently..."
- "The codebase explorer found that your existing [system] already has [capability]..."
- "Important policy finding: [constraint that affects design]..."

This creates a rich, data-informed design conversation rather than guessing.

### Approach Proposals

After enough questions (usually 3-7), propose 2-3 approaches with trade-offs:

```markdown
### Approach A: [Name]
- How it works: [2-3 sentences]
- Pros: [bullets]
- Cons: [bullets]

### Approach B: [Name]
- How it works: [2-3 sentences]
- Pros: [bullets]
- Cons: [bullets]

**My recommendation: [A/B/C] because [reason]**
```

Get explicit approval before moving to the spec.

## Phase 3: Spec Writing

Write the spec to `docs/superpowers/specs/YYYY-MM-DD-<feature>-design.md` (or user's preferred location).
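The naming convention can be sketched as a small helper. The date and `-design.md` layout come from the path above; the slug rules (lowercase, non-alphanumerics collapsed to dashes) are an assumption of this sketch:

```python
import re
from datetime import date

def spec_path(feature, on=None):
    """Build the dated spec filename for a feature idea."""
    on = on or date.today()
    # Assumed slug rule: lowercase, runs of non-alphanumerics become one dash.
    slug = re.sub(r"[^a-z0-9]+", "-", feature.lower()).strip("-")
    return f"docs/superpowers/specs/{on:%Y-%m-%d}-{slug}-design.md"
```

For example, `spec_path("User Search!")` on 2026-01-05 yields `docs/superpowers/specs/2026-01-05-user-search-design.md`.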

### Spec Structure

A good spec covers these sections (scale each to its complexity):

  1. Overview — What, why, scope boundaries
  2. User Journeys — Step-by-step flows for each actor
  3. Database Schema — New tables, columns, indexes, RLS policies, RPCs
  4. Architecture — Components, interfaces, data flow
  5. API Design — Routes, request/response shapes, auth
  6. Business Logic — Credit costs, rate limits, quotas, feature flags
  7. Security — Threat model, mitigations, GDPR
  8. Edge Cases — What happens when things fail
  9. Future Considerations — What's explicitly NOT in v1

### Spec Review

After writing, dispatch a spec reviewer subagent:

```text
"Review the design specification at [path] for [feature].
Check for: completeness, consistency, feasibility, security gaps, ambiguity.
Report issues with severity: Critical / Important / Minor / Suggestion."
```

Fix all Critical issues. Address Important issues that would block implementation. Commit the spec. Ask the user to review before proceeding to the plan.

## Phase 4: Implementation Plan with Agent Teams

This is where the skill's unique value lives. The plan assigns specialized agent roles with the right skills and MCPs for each task.

### Agent Role Assignment

For every task in the plan, assign an agent role based on the work:

| Work Type | Agent Role | Model | Key Skills | Key MCPs |
| --- | --- | --- | --- | --- |
| Database/migrations | DB Architect | sonnet | security-review | supabase |
| Backend/API routes | Backend Architect | opus | test-driven-development | context7, read-website-fast |
| AI/LLM integration | AI Engineer | opus | ai-sdk | openaiDeveloperDocs, context7 |
| Search/data logic | Data Specialist | sonnet | test-driven-development | supabase |
| Auth/security/limits | Security Engineer | opus | security-review | supabase |
| Frontend UI | Frontend Engineer | sonnet | frontend-design, shadcn, web-design-guidelines | chrome-devtools |
| Testing/verification | QA Engineer | opus | systematic-debugging, verification-before-completion | supabase, chrome-devtools |

These are defaults — adapt based on what the user's project actually uses. If they don't have supabase MCP but use Prisma, adjust. If they don't have chrome-devtools but have Playwright, adjust.

Model note: Model suggestions (sonnet/opus) reflect complexity tiers — mechanical tasks vs judgment-heavy tasks. Use the best model available to you; these are optimization hints, not hard requirements. On Claude.ai where you can't select per-agent models, ignore the model column.

### Skill/MCP Assignment Rationale

For each agent, explain WHY specific skills and MCPs are assigned. This helps the user understand what's needed and what to install:

```markdown
**Backend Architect** uses `read-website-fast` + `context7` to fetch live API
documentation for the external service being integrated. Without this, the agent
guesses at API signatures and gets them wrong.
```

### Ralph Loop Integration

Ralph Loop provides continuous verification during implementation. Assign different checks per phase:

| Phase Type | Ralph Command | Interval | Why |
| --- | --- | --- | --- |
| Backend development | `npx tsc --noEmit` | 5m | Catches type errors during rapid development |
| Frontend development | `npm run build 2>&1 \| tail -30` | 10m | Catches build errors and missing imports |
| Integration testing | `npm test 2>&1 \| tail -20` | 3m | Fast feedback on test regressions |
| API development | `curl -s localhost:3000/api/health` | 5m | Verifies dev server stays healthy |

Include Ralph Loop start/stop commands in the plan:

```shell
# Start for backend phases:
/ralph-loop 5m cd app && npx tsc --noEmit

# Switch for frontend phases:
/ralph-loop 10m cd app && npm run build 2>&1 | tail -30

# Cancel before final commit:
/cancel-ralph
```

### Parallel vs Sequential Tasks

Use the multi-agent bus when tasks are truly independent:

```text
Phase 5a: Search Handler  ──┐
                             ├── Both complete → Phase 6
Phase 5b: Save Handler    ──┘
```

Tasks that share files, types, or database tables must be sequential. Tasks that produce independent modules can run in parallel. The plan must explicitly state which.
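That rule can be sketched as a file-overlap check, plus a greedy pass that groups tasks into waves of mutually parallel work. This is illustrative only; the task names and file sets are made up:

```python
def can_parallelize(files_a, files_b):
    # Tasks touching any of the same files must run sequentially.
    return files_a.isdisjoint(files_b)

def schedule(tasks):
    """Greedily group tasks (name -> set of touched files) into waves.

    Tasks within a wave are mutually disjoint and may run in parallel;
    waves themselves run sequentially.
    """
    waves = []
    for name, files in tasks.items():
        for wave in waves:
            if all(can_parallelize(files, tasks[other]) for other in wave):
                wave.append(name)
                break
        else:
            waves.append([name])
    return waves
```

A shared types file is the classic serializer here: any task that touches it drops out of the parallel wave.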

### Plan Document Format

Save to `docs/superpowers/plans/YYYY-MM-DD-<feature>-implementation.md`. Follow this header:

```markdown
# [Feature] Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development
> to implement this plan task-by-task.

**Goal:** [one sentence]
**Architecture:** [2-3 sentences]
**Tech Stack:** [key technologies]
**Spec:** [path to spec document]

## Agent Team Roster
[Table with: Role, ID, Skills, MCPs, Phases, Model]

## Ralph Loop Integration
[Table with: Phase, Command, Interval, Purpose]

## Phase N: [Name]
**Agent:** [role]
**Skills:** [list]
**MCPs:** [list with specific tools]

### Task N.M: [Name]
**Files:** [create/modify paths]
- [ ] Step 1: ...
- [ ] Step 2: ...
- [ ] Step 3: Commit
```
## Adapting to the Project

This skill is not one-size-fits-all. Every project has different:

- Tech stack — Next.js vs Remix vs plain Node, Supabase vs Prisma vs raw Postgres
- Available MCPs — What the user has installed determines research capability
- Available skills — What the user has installed determines agent capability
- Scale — A 2-task feature doesn't need 8 agent roles
- Team size — Solo dev vs team affects review and branching strategy

Scaling Down

For small features (< 5 tasks):

  • Skip the multi-agent bus
  • Use 2-3 agent roles max
  • Ralph Loop still applies (it's cheap insurance)
  • Research phase can be 1-2 agents instead of 4-5

### Scaling Up

For large features (> 20 tasks):

- Break into sub-projects (each gets its own spec + plan cycle)
- Use the multi-agent bus for parallel phase execution
- Add review checkpoints between phases
- Consider a dedicated QA phase with its own agent team

## Anti-Patterns

| Anti-Pattern | Why It Fails | Do Instead |
| --- | --- | --- |
| Jumping to code without spec | Rework, misalignment, wasted time | Always spec first, even for "simple" features |
| Assigning all tasks to one agent | Context pollution, no specialization | Match agent role to work type |
| Skipping research phase | Design based on assumptions, not data | Always launch at least a codebase explorer |
| Making all phases sequential | Slow, doesn't use parallel agent capability | Identify independent tasks, run them in parallel |
| Over-engineering Ralph Loop | Checking too much too often wastes resources | Match check to phase: types for backend, build for frontend |
| Assigning MCPs the agent doesn't need | Confusing prompts, wasted context | Only assign MCPs the agent will actually use |
| Not explaining skill/MCP assignments | User doesn't know what to install or why | Always include rationale |