
# Startup Announcement Planner

A skill that helps founders turn announcements into coordinated distribution systems — not just blog posts.

Based on Anatomy of an Announcement by Allison Braley, Partner at Bain Capital Ventures.

Read the original on X

The full article is also included in `references/anatomy-of-an-announcement.md`.

## What it does

This skill walks founders through a four-step process for any startup announcement — funding rounds, product launches, coming out of stealth, or milestones:

  1. Pick one audience — customers, talent, or investors. Not all three.
  2. Craft the message — headline that names the change (not the product), narrative paragraph, three skeptic-proof proof points.
  3. Plan distribution — 1 owned asset + 1 earned amplifier + 1 social post, synchronized in a tight window. Evaluate media on both reach and depth.
  4. Execute — timeline adapted to your constraints, amplification network with specific timing asks.

The output is a complete plan document plus ready-to-use content drafts: blog post, headline options, social post, amplification ask, and media pitch.

## Install

```shell
npx skills add WellDunDun/startup-announcement
```

## What the skill prevents

Without this skill, Claude defaults to patterns the article explicitly warns against:

  • **Product-first openings** — "Today we're launching X" instead of leading with the problem
  • **"Introducing X" headlines** — naming the product instead of naming the change
  • **No trap awareness** — not warning founders about the corporate PR / AI slop / feature dump pitfalls
  • **Shallow media evaluation** — listing publications without evaluating the reach vs. depth tradeoff

## Benchmark results

Tested across 4 scenarios with 11 strategic-quality assertions each (44 total). Each assertion tests a specific principle from the article — not just "does a section exist" but "does the headline name the change, not the product?"

| Metric | With Skill | Without Skill | Delta |
|---|---|---|---|
| Pass Rate | 100% (44/44) | 72.7% (32/44) | +27.3% |
| Avg Time | 308s | 445s | -136s |
| Avg Tokens | 36K | 37.5K | -1.5K |

### Most discriminating assertions

| Assertion | Without-Skill Failure Rate | What happens without the skill |
|---|---|---|
| Explicit trap awareness | 3/4 evals failed | Claude doesn't proactively name pitfalls to avoid |
| Blog opens with problem, not product | 2/4 evals failed | Defaults to "Today we're launching..." |
| Headlines name the change | 2/4 evals failed | Falls into "Introducing [Product]" pattern |
| Media evaluated on reach AND depth | 1/4 evals failed | Lists outlets without evaluating the tradeoff |

### Test scenarios

| Eval | Scenario | With Skill | Without Skill |
|---|---|---|---|
| seed-cyber | $6M seed, AI threat detection, targeting CISOs | 11/11 | 8/11 |
| devtools-recruiting | Open-source observability tool, recruiting engineers, 1-week timeline | 11/11 | 7/11 |
| stealth-fintech | Coming out of stealth, payments for SMBs, zero budget | 11/11 | 7/11 |
| solo-ai-builder | Solo AI founder, audience mismatch (X followers vs. B2B ICP) | 11/11 | 10/11 |

### View full results

Open `evals/viewer.html` in your browser to see the complete outputs, assertion grades, and side-by-side comparisons for every test case.

## Evals

The `evals/` directory contains everything you need to run your own benchmarks or modify the assertions:

  • `evals.json` — 4 test prompts with 44 strategic-quality assertions
  • `viewer.html` — standalone HTML viewer with outputs, grades, and benchmark data
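As a quick sanity check before running a benchmark, a short script can confirm the file's shape. This is a minimal sketch, not part of the repo: the `summarize` helper and the inline `sample` payload are illustrative, and only the schema mirrors `evals/evals.json`.

```python
import json

def summarize(data):
    """Return (number of scenarios, total number of assertions)."""
    evals = data["evals"]
    return len(evals), sum(len(e["assertions"]) for e in evals)

# Inline payload in the same shape as evals/evals.json (illustrative values).
# For the real file, use: data = json.load(open("evals/evals.json"))
sample = {
    "evals": [
        {
            "id": 1,
            "prompt": "Seed-stage founder announcing a $6M round",
            "expected_output": "A plan that leads with the audience's problem",
            "assertions": [
                {
                    "id": "headline-names-change",
                    "text": "Headlines name the CHANGE being made, not just the product",
                }
            ],
        }
    ]
}

n_evals, n_assertions = summarize(sample)
print(f"{n_evals} scenario(s), {n_assertions} assertion(s)")  # 1 scenario(s), 1 assertion(s)
```

On the shipped file, the counts should match the benchmark table above: 4 scenarios and 44 assertions.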

### Assertion philosophy

The assertions test the article's strategic principles, not structural presence. For example:

  • **Not this:** "Does the plan include a blog post draft?"
  • **This:** "Does the blog post open with the audience's problem, not a product description?"

This matters because Claude naturally produces well-structured plans. The skill's value is in enforcing strategic quality — problem-first framing, change-naming headlines, explicit trap awareness, and reach-vs-depth media evaluation.

## Modify and contribute

Fork the repo, edit `SKILL.md`, and re-run the evals to test your changes.

### Editing the skill

`SKILL.md` is the entire skill — there's no build step. Edit it, reinstall, and test.

### Editing evals

The test cases and assertions live in `evals/evals.json`. The structure:

```json
{
  "evals": [
    {
      "id": 1,
      "prompt": "The scenario given to Claude",
      "expected_output": "What a good response looks like (human-readable)",
      "assertions": [
        {
          "id": "headline-names-change",
          "text": "Headlines name the CHANGE being made, not just the product..."
        }
      ]
    }
  ]
}
```

Each eval has a prompt (the founder's scenario) and a list of assertions (what the output should get right). To modify:

  • **Add a scenario:** Add a new object to the `evals` array with a unique `id`, a realistic founder prompt, and assertions that test what matters.
  • **Add an assertion:** Add to any eval's `assertions` array. Give it a descriptive `id` and a `text` that's specific enough to grade as pass/fail. Good assertions test strategic quality ("does the blog open with the problem, not the product?"), not structure ("does a blog post exist?").
  • **Remove or edit:** Delete or modify any assertion that doesn't match your use case. The assertions are tuned for the original article's principles, but you may want different ones.

After editing, re-run the skill against your prompts and grade the outputs to see if your changes improved things.
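Before re-running, it can help to lint the edited file for mistakes that silently break grading: duplicate ids and empty assertion text. The `validate_evals` function below is a hypothetical sketch, not shipped with the repo; it assumes only the schema shown above.

```python
def validate_evals(data):
    """Return a list of problems found in an evals.json-shaped payload."""
    problems = []
    eval_ids = [e["id"] for e in data["evals"]]
    if len(eval_ids) != len(set(eval_ids)):
        problems.append("duplicate eval ids")
    for e in data["evals"]:
        if not e.get("prompt"):
            problems.append(f"eval {e['id']}: empty prompt")
        assertion_ids = [a["id"] for a in e.get("assertions", [])]
        if len(assertion_ids) != len(set(assertion_ids)):
            problems.append(f"eval {e['id']}: duplicate assertion ids")
        for a in e.get("assertions", []):
            if not a.get("text", "").strip():
                problems.append(f"eval {e['id']}: assertion {a['id']} has empty text")
    return problems

# Example: a payload where one assertion id was accidentally reused.
broken = {
    "evals": [
        {
            "id": 1,
            "prompt": "A founder scenario",
            "assertions": [
                {"id": "headline-names-change", "text": "Headlines name the change"},
                {"id": "headline-names-change", "text": "Blog opens with the problem"},
            ],
        }
    ]
}

print(validate_evals(broken))  # ['eval 1: duplicate assertion ids']
```

An empty list means the structure is safe to run; the strategic quality of each assertion still has to be judged by hand.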

## License

MIT
