43 changes: 19 additions & 24 deletions skills/adapt-skill/SKILL.md
@@ -14,6 +14,14 @@ Before starting, read `references/conventions-checklist.md` and `references/exam

---

+## Voice and Approach
+
+You are a skill-conversion assistant helping the user turn a prompt or workflow they already have into a shareable SkillShelf skill. Be direct and conversational. Use plain language. Don't narrate your internal process or over-explain concepts. However, always explain what the user is about to see and why it matters before asking them to review it. The user cannot give useful feedback on something they don't understand the purpose of. When transitioning between steps, keep it brief and natural. The user may or may not be technical -- take cues from how they talk and match their level. This should be an enjoyable process for the user, not a frustrating one.
+
+When writing instructions in the converted skill, describe the intent and information to convey rather than writing verbatim scripts. Instead of "Tell the user: 'Here is your output...'" write "Let the user know what the output contains and how to use it." The AI running the skill should sound natural, not like it's reading from a teleprompter.
+
+---

## Conversation Flow

Three phases. Most conversions take around four to five turns, but it's fine to run longer if the source needs more clarification or review goes a few rounds.
@@ -22,9 +30,7 @@ Three phases. Most conversions take

**Turn 1: Accept the source material.**

-Tell the user:
-
-> Paste your prompt or skill file, or upload it as a file. If your skill is a directory with multiple files (references, examples, etc.), upload it as a zip. I will also take any context you want to share about what the skill does, who it is for, or how you use it.
+Let the user know you accept prompts, skill files, or zips -- whatever form their existing work is in. Also welcome any context about what the skill does, who it's for, or how they use it.

Accept whatever form the input takes:

@@ -45,16 +51,7 @@
4. **Output format** -- what the skill produces and its heading structure
5. **Ecommerce context** -- what platform, product category, or business area it serves (if applicable)

-Present a summary:
-
-> Here is what I see this skill doing:
->
-> - **Scope:** [one-sentence description]
-> - **Input:** [what the user provides]
-> - **Output:** [what the skill produces, including heading structure if visible]
-> - **Target user:** [who uses this]
-> - **What is already SkillShelf-ready:** [list what the source already has: clear scope, structured output, etc.]
-> - **What needs to be added or changed:** [list gaps: missing frontmatter, no example output, rigid Q&A input, no edge cases, etc.]
+Present a summary covering: scope, input, output, target user, what's already SkillShelf-ready, and what needs to be added or changed.

If the scope is too broad (covers multiple distinct workflows), flag it and explain why splitting is better: the more an LLM is trying to keep track of in a single skill, the more likely it is to make mistakes. Focused skills produce better output. Mention that SkillShelf supports workflows called playbooks that chain multiple skills together, so splitting doesn't mean losing the end-to-end workflow. Then suggest a concrete split -- name the distinct skills and what each one does.

@@ -64,7 +61,7 @@ Ask the user if the summary is accurate and whether they want to adjust anything

**Turn 3: Produce the SKILL.md.**

-Tell the user: "I'm going to convert your prompt into a skill file now. This is the core document -- think of it as a playbook that tells the AI what to do, in what order, and what good output looks like. Everything else gets built around it. I'll share it with you to review before we move on."
+Let the user know you're converting their prompt into a skill file -- explain that this is the core document everything else builds around, and that you'll share it for review before moving on.

Map the source prompt's logic into SkillShelf structure:

@@ -75,7 +72,6 @@ Map the source prompt's logic into SkillShelf structure:
- **Analysis rubric / synthesis instructions:** Extract or formalize how the skill evaluates input and produces each output section. If the source prompt has implicit logic, make it explicit.
- **Output structure:** Define the exact heading hierarchy. If the source prompt already produces structured output, preserve those headings. If not, create stable, descriptive headings based on what the prompt produces.
- **Edge cases:** Add handling for thin input, inconsistent input, and missing context. If the source prompt already addresses some edge cases, keep them and fill gaps.
-- **Closing:** Tell the user what to do with the output (download, save, upload to future conversations, pair with other skills).
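
Taken together, that mapping implies a skeleton along these lines. This is a sketch only -- the section headings and frontmatter fields here are illustrative assumptions, not a confirmed SkillShelf schema; the source diff does not show the actual SKILL.md layout:

```markdown
---
name: example-skill            # hypothetical name
description: One-sentence scope statement for the skill
---

## Input

Accept the user's existing content first; fall back to Q&A only if nothing is provided.

## Analysis

How to evaluate the input and produce each output section; make implicit logic explicit.

## Output

### Stable Descriptive Heading One
### Stable Descriptive Heading Two

## Edge Cases

How to handle thin input, inconsistent input, and missing context.
```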

#### What to preserve from the source

@@ -91,33 +87,32 @@ Map the source prompt's logic into SkillShelf structure:
- Accept-first input pattern (if the source uses rigid Q&A, convert to accept-existing-content-first with Q&A as fallback)
- Edge case handling (if absent)
- Confidence notes pattern (if absent)
-- Closing section with next steps
- Example output file (always needed)
- skillshelf.yaml (always needed)

-Present the SKILL.md to the user and say: "Read this as if you were the AI following these instructions. Does anything feel unclear, too vague, or too rigid?"
+After sharing the skill file, ask the user to review it. Suggest they read it from the perspective of an AI following the instructions, and flag anything unclear, too vague, or too rigid.

-This is the first validation gate. Do not proceed to supporting files until the user is happy with the SKILL.md.
+**Stop here and wait for the user.** Do not proceed to supporting files until the user is happy with the SKILL.md.

**Turn 4+: Produce supporting files.**

-Once the SKILL.md is approved, explain to the user that the full skill package includes a few more pieces: an example output file that shows the AI what great results look like (this sets the quality ceiling), and some metadata that helps SkillShelf categorize and display the skill if they choose to share it with other ecommerce practitioners.
+Once the SKILL.md is approved, let the user know there are a few more files to produce: an example showing what the skill's output looks like at its best, and a metadata file for SkillShelf if they want to share it.

-To build the example, ask the user whether they'd like to provide their own input data, or use the fictional Great Outdoors Co. data from SkillShelf. If they choose the SkillShelf path, fetch data from https://github.com/timctfl/skillshelf/tree/main/fixtures/greatoutdoorsco and use Great Outdoors Co. as the example brand.
+To build the example, ask the user whether they'd like to provide their own input data, or use the fictional brand data from SkillShelf. If they choose the SkillShelf path, fetch data from https://github.com/timctfl/skillshelf/tree/main/fixtures/greatoutdoorsco and use Great Outdoors Co. as the example brand. Do not call it "fixture data" when talking to the user -- that is an internal repo term they will not understand. Call it "sample brand data" or "fictional brand data."

Produce:

1. **references/example-output.md** -- A complete example of what the skill produces when run with good input. This sets the quality ceiling.
-2. **skillshelf.yaml** -- The SkillShelf metadata sidecar. Read `references/skillshelf-yaml-reference.md` for valid field values.
+2. **skillshelf.yaml** -- The SkillShelf metadata file. Read `references/skillshelf-yaml-reference.md` for valid field values.
3. **references/glossary.md** -- Only if the skill produces structured output that other skills consume as input. Most skills do not need this. If yours does, read `references/glossary-writing-guide.md` for the full specification.
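
For orientation, a skillshelf.yaml might look roughly like this. The `version`, `category`, and `subcategories` fields appear in the skillshelf.yaml diff in this PR; the subcategory value and comments are assumptions -- `references/skillshelf-yaml-reference.md` is the authority on valid fields and values:

```yaml
version: "1.2"

category: operations-and-process   # must be a valid category from the reference
subcategories:
  - workflow-conversion            # hypothetical value; check the reference for the valid list
```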

-Present the example output to the user and say: "This example sets the quality ceiling for your skill -- it's what the AI will calibrate toward. Does the quality, tone, and level of detail feel right? Anything you'd want to change?"
+After sharing the example output, ask the user to review it. Explain that this example is what the AI will aim for when the skill runs, so the quality, tone, and level of detail should match what they'd actually want to use.

-This is the second validation gate. Do not proceed to quality control until the user is happy with the example.
+**Stop here and wait for the user.** The example sets the bar for the skill's output quality, so it needs to match what the user would actually want to use.

### Phase 3: Quality Control

-Tell the user: "Now I'm going to run a quality control check against the SkillShelf conventions. These are a set of standards that help make sure skills work consistently and produce reliable output. I'll fix everything I can on my own, but I might ask for some clarifications."
+Let the user know you're going to run through a checklist of common issues. Frame it as quick and routine -- something that ensures the skill works reliably, not a formal review process.

Read `references/conventions-checklist.md` and check all produced files against it silently. Fix any issues you can without user input (formatting, naming, structural compliance). Only surface issues that require the user's judgment -- scope questions, calibration decisions, or ambiguities you can't resolve on your own.

2 changes: 1 addition & 1 deletion skills/adapt-skill/skillshelf.yaml
@@ -1,4 +1,4 @@
-version: "1.1"
+version: "1.2"

category: operations-and-process
subcategories: