A structured framework for vetting app, product, and startup ideas. Platform-agnostic — works with any AI agent (OpenClaw, Cowork, Codex, Claude Code) or as a standalone prompt.
Cuts through AI-generated hype with structured competitive research, tech feasibility analysis, unit economics validation, legal risk assessment, and real market signal detection. Delivers an honest Go / Caution / Pass verdict.
When someone shares an app idea (or an AI conversation about one), this framework guides research through six dimensions:
- Direct Competitors — Who's already doing this? (App Store, Product Hunt, YC)
- Adjacent Competitors — Who's solving the same problem differently?
- Tech Feasibility — Can it be built today? Per-component ✅ / ⚠️ / ❌ ratings.
- Unit Economics — Real API costs, margin analysis, pricing reality check.
- Legal & Regulatory — HIPAA, deepfake laws, platform policies, etc.
- Market Signal — Reddit, HN, Twitter — do people actually want this?
Findings are synthesized into a scorecard and verdict, with a recommended MVP scope if warranted.
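The scorecard format itself isn't prescribed above; as a minimal sketch, assuming a 1-5 score per dimension and simple verdict thresholds (both of which are illustrative assumptions, not part of the skill), the synthesis might look like:

```python
from dataclasses import dataclass

# Hypothetical sketch (Python 3.9+): dimension names mirror the six
# sections above; the 1-5 scale and thresholds are illustrative only.
DIMENSIONS = [
    "direct_competitors", "adjacent_competitors", "tech_feasibility",
    "unit_economics", "legal_regulatory", "market_signal",
]

@dataclass
class Scorecard:
    scores: dict[str, int]  # dimension -> 1 (bad) .. 5 (strong)

    def verdict(self) -> str:
        avg = sum(self.scores.values()) / len(self.scores)
        if min(self.scores.values()) <= 1:
            return "Pass"  # any fatal dimension sinks the idea outright
        if avg >= 4.0:
            return "Go"
        return "Caution" if avg >= 2.5 else "Pass"

card = Scorecard({d: 3 for d in DIMENSIONS} | {"market_signal": 5})
print(card.verdict())  # -> "Caution"
```

The `min()` veto reflects the framework's spirit: a single fatal dimension (say, a legal blocker) should force a Pass no matter how strong the average is.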
Drop SKILL.md into your agent's context or project directory. Then:
- "Vet this app idea: [description]"
- "Is this a good product idea?"
- "My friend shared this concept, is anyone already doing this?"
- "Should we build this?"
- Quick Mode — Fast pass through the first three sections (~2 min). Good for "is this worth exploring?"
- Full Mode — All six sections with web research (~3-5 min). For serious vetting.
Includes a copy-paste task template for delegating to sub-agents, Codex tasks, or Cowork jobs.
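The shipped template isn't reproduced in this README; as a hedged illustration only (the wording below is hypothetical, not the actual template), a delegated task might read:

```
Task: Vet the following app idea using SKILL.md (Full Mode).
Idea: [one-paragraph description]
Research all six dimensions with live web searches.
Return: the scorecard, a Go / Caution / Pass verdict, and
(if Go) a recommended MVP scope. Cite sources for pricing
and competitor claims.
```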
Deliberately counters common AI assistant failure modes:
- ✅ Verifies cost claims with real API pricing pages
- ✅ Searches App Store / Play Store (not just web)
- ✅ Checks Reddit/HN for actual demand signal
- ✅ Flags "nobody is doing this" claims for extra scrutiny
- ✅ Calls out unrealistic timeline estimates
- ✅ Red-flags revenue projections that ignore churn (a worked sketch follows this list)
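To make the churn point concrete, here is a minimal sketch (all numbers hypothetical) of how monthly churn compounds against a naive "subscribers x price x 12" projection:

```python
# Hypothetical numbers: 1,000 subscribers at $10/mo with 8% monthly churn.
subs, price, churn = 1_000, 10.0, 0.08

naive_annual = subs * price * 12  # $120,000 if nobody ever leaves

# Each month m, only subs * (1 - churn)^m of the original cohort remains.
retained_annual = sum(subs * (1 - churn) ** m * price for m in range(12))
print(f"naive: ${naive_annual:,.0f}, churn-adjusted: ${retained_annual:,.0f}")
# churn-adjusted first-year revenue is about $79,000, roughly a third lower
```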
MIT