26 changes: 25 additions & 1 deletion README.md
@@ -131,6 +131,7 @@ For both options, the coach will ask for your resume, target role, and timeline
| `resume` | Resume optimization (3 depth levels, JD-targeted when available) | ATS audit, section-by-section assessment, bullet rewrites, seniority calibration, keyword analysis, storybank-to-bullet pipeline |
| `pitch` | Core positioning statement + context variants | Core statement, constraint ladder, context-specific pitches, positioning consistency check |
| `outreach` | Networking outreach coaching (3 depth levels, 9 message types) | Message frameworks, draft critique + rewrite, follow-up sequences, multi-channel campaign strategy |
| `apply [company]` | Draft written answers to job application screening questions | Story selection menu per question (with domain match flagging), ready-to-paste answers, prior answer reuse, flagged gaps |

### Pre-Conversation

@@ -333,7 +334,28 @@ Then specify message type (cold LinkedIn, warm intro, recruiter reply, etc.) and
- Follow-up sequence with timing
- Earned secret hooks pulled from your storybank

### 11) Answer job application screening questions

```text
apply HireRight
```

Then provide:

- List of application questions (paste them directly)
- Optional: JD (used for domain matching and "why us" questions)
- Optional: word or character limits per question

Expected output:

- Per-question story selection menu (2-3 options, with `[Domain match]` flag when a story comes from the same industry as the company)
- Prior answer suggestions when a similar question was answered for another company
- Ready-to-paste answers in a written register (150-200 words each, tighter than spoken interview answers)
- Flagged gaps where storybank or resume evidence is missing — no answer gets fabricated

Answers are saved to `job-search/[company]_application.md` and reused as a library across future applications.

### 12) Post-offer negotiation

```text
negotiate
```
@@ -412,6 +434,7 @@ interview-coach-skill/
│ ├── progress.md
│ ├── negotiate.md
│ ├── feedback.md
│ ├── apply.md
│ ├── reflect.md
│ └── help.md
├── cross-cutting.md # Shared modules: gap-handling, signal-reading, differentiation, cultural awareness, psychological readiness, cross-command dependencies
@@ -443,6 +466,7 @@ interview-coach-skill/
10. Run `decode` before applying — analyze the JD's language, assess your fit, and decide if the role is worth your time. Use batch triage to compare multiple JDs at once.
11. Run `salary` before your first recruiter call — the recruiter screen is the highest-leverage comp moment, not the offer negotiation.
12. Run `present` before a presentation round — structure your content and prepare for Q&A before you ever open PowerPoint.
13. Run `apply` when a job application includes screening questions — it surfaces domain-relevant stories, checks your storybank for evidence before drafting, and builds a reusable answer library across applications.

---

5 changes: 4 additions & 1 deletion SKILL.md
@@ -345,6 +345,7 @@ Write to `coaching_state.md` whenever:
- prep starts a new company loop or updates interviewer intel, round formats, fit verdict, fit confidence, and structural gaps (add to Interview Loops)
- negotiate receives an offer (add to Outcome Log with Result: offer)
- reflect archives the coaching state (add Status: Archived header)
- apply saves drafted answers to `job-search/[company]_application.md` and logs to Session Log
- Meta-check conversations (record candidate's response and any coaching adjustment to Meta-Check Log)
- Any session where the candidate reveals coaching-relevant personal context — preferences, emotional patterns, interview anxieties, scheduling preferences, etc. (add to Coaching Notes)

@@ -399,6 +400,7 @@ Execute commands immediately when detected. Before executing, **read the referen
| `negotiate` | Post-offer negotiation coaching |
| `reflect` | Post-search retrospective + archive |
| `feedback` | Capture recruiter feedback, report outcomes, correct assessments, add context |
| `apply [company]` | Draft written answers to job application screening questions |
| `help` | Show this command list |

### File Routing
@@ -512,7 +514,8 @@ Use first match:
17. Progress/pattern intent -> `progress`
18. "I got an offer" / offer details present -> `negotiate`
19. "I'm done" / "accepted" / "wrapping up" -> `reflect`
20. Application questions present (list of screening questions from a job posting, with or without a JD) -> `apply`
21. Otherwise -> ask whether to run `kickoff` or `help`

### Multi-Step Intent Detection

149 changes: 149 additions & 0 deletions references/commands/apply.md
@@ -0,0 +1,149 @@
# apply — Job Application Question Drafting

### Inputs

- Required: Company name + list of application questions
- Optional: Word or character limits per question
- Optional: JD (used for "why us" tailoring and domain matching)
- Optional: Resume text (used as fallback for tools/experience questions not covered by storybank)

---

### Sequence

**Step 1: Parse and classify questions**

For each question, assign one type:
- **Behavioral**: "Tell me about a time...", "Describe a situation where..."
- **Process/method**: "How do you [prioritize / use data / manage stakeholders]..."
- **Tools/experience**: "Do you have experience with [tool/domain]..."
- **Why us**: "Why this company / role / industry..."
- **Other**: Hypotheticals, case-style, or open-ended

State the classification before drafting each answer.
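A minimal sketch of this first-match classification, assuming simple keyword heuristics; the regex patterns and the `classify` helper are illustrative, not part of the skill:

```python
import re

# First-match keyword heuristics per question type. Patterns are
# illustrative assumptions; the real classification is a judgment call.
PATTERNS = [
    ("Behavioral", r"tell me about a time|describe a (situation|time)"),
    ("Process/method", r"how do you"),
    ("Tools/experience", r"experience with"),
    ("Why us", r"why (this|our|us)"),
]

def classify(question: str) -> str:
    q = question.lower()
    for qtype, pattern in PATTERNS:
        if re.search(pattern, q):
            return qtype
    return "Other"  # hypotheticals, case-style, open-ended

print(classify("Describe a situation where you missed a deadline"))  # Behavioral
```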

---

**Step 2: Check for prior answers**

Before drafting, scan `job-search/` for existing application files from previous companies. For each question, check whether a semantically similar question was answered before. If yes:
- Surface the prior answer
- Ask: "I answered a similar question for [Company] — want me to adapt that, or draft fresh for [New Company]?"

This builds a reusable answer library across applications over time.
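A rough sketch of the scan, assuming the file layout from Step 7 and naive token-overlap similarity. The `find_prior_answers` helper and the 0.4 threshold are assumptions; in practice similarity is judged semantically:

```python
from pathlib import Path

def _tokens(text: str) -> set[str]:
    return set(text.lower().split())

def find_prior_answers(question: str, library: str = "job-search",
                       threshold: float = 0.4) -> list[tuple[str, str]]:
    """Return (application file, prior question heading) pairs similar to `question`."""
    q = _tokens(question)
    hits = []
    for path in Path(library).glob("*_application.md"):
        # Question headings follow the Step 7 schema: "## Q1 — [Question] [Type]"
        for line in path.read_text().splitlines():
            if not line.startswith("## Q"):
                continue
            p = _tokens(line)
            overlap = len(q & p) / (len(q | p) or 1)  # Jaccard similarity
            if overlap >= threshold:
                hits.append((path.stem, line))
    return hits
```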

---

**Step 3: Gap check before drafting**

For each question, verify that the storybank (in `coaching_state.md`) or the provided resume contains evidence to support an answer.

**If evidence is found**: proceed to Step 4.

**If evidence is not found** (e.g., a tool never used, a domain never worked in, an experience not in the storybank or resume):
- Do not invent or imply the experience.
- Flag it explicitly: "I don't see evidence in your storybank or resume for [X]. Can you tell me about a time you [did X]? Or should we note this gap and move on?"
- Don't refuse to proceed — draft what's supportable and mark the flagged question clearly.
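The intended control flow can be sketched as a simple filter, assuming storybank entries and resume text arrive as plain strings. Keyword containment here stands in for the real, semantic evidence check:

```python
def has_evidence(topic: str, storybank: list[str], resume: str) -> bool:
    needle = topic.lower()
    return needle in resume.lower() or any(needle in s.lower() for s in storybank)

def gap_check(topics: list[str], storybank: list[str], resume: str) -> list[str]:
    # Topics with no supporting evidence get flagged, never invented.
    return [t for t in topics if not has_evidence(t, storybank, resume)]
```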

---

**Step 4: Story selection (behavioral and process/method questions only)**

For each behavioral or process/method question, do not auto-select a story. Instead:

1. Identify 2-3 candidate stories from the storybank that could answer the question. Use the Quick Reference table in the storybank as the starting point.

2. **Check for domain relevance.** If the JD or company has a clear industry or domain (e.g., background verification, fintech, identity, edtech, marketplace), flag any story from a matching domain with a `[Domain match]` marker. Domain-relevant stories create an implicit signal of industry familiarity — a story from a cognate domain told to a company in that domain lands differently than the same structural story from an unrelated domain. Weight domain match as a tiebreaker when two stories are otherwise equal.

3. Present the options to the candidate with a one-line rationale for each, and the domain match flag where applicable:

```
For Q1 ("describe a time market insights influenced your strategy"):
a) [Story title] — [one-line rationale]
b) [Story title] — [one-line rationale] [Domain match — background verification / identity]
c) [Story title] — [one-line rationale]

Note: [Company] is in [domain] — story (b) signals direct domain familiarity. Worth considering.

Which story do you want to use? Or should I draft using [b] and you can swap later?
```

Wait for the candidate's choice before drafting. If they say "you choose" or don't respond with a preference, default to the domain-matched story if one exists, otherwise the Quick Reference recommendation. Note which story was used.

This step applies to behavioral and process/method questions only. For tools/experience and "why us" questions, proceed directly to Step 5.
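One way to picture the shortlist logic, assuming each story carries a domain tag and a separately judged relevance score (the `Story` fields are illustrative). The one real rule encoded below is that domain match breaks ties rather than outranking relevance:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    domain: str
    relevance: float  # structural fit for the question, judged separately

def shortlist(stories: list[Story], company_domain: str, n: int = 3):
    # Sort by relevance first; domain match only breaks ties.
    ranked = sorted(
        stories,
        key=lambda s: (s.relevance, s.domain == company_domain),
        reverse=True,
    )
    return [(s, s.domain == company_domain) for s in ranked[:n]]  # (story, flag)
```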

---

**Step 5: Draft answers**

Map each question type to its source:
- **Behavioral**: Use the chosen story from Step 4. Write in the written register — tighter than spoken, no filler phrases. Pull the core STAR spine from the story but compress. Application fields are read, not heard.
- **Process/method**: Use the chosen story from Step 4 as the anchor example. State the principle first, then ground it in the story with a metric.
- **Tools/experience**: Storybank first; resume as fallback. Name the tools directly, state scope of use. If a specific tool is not evidenced in either source, say so honestly rather than implying familiarity.
- **Why us**: Requires company context. If `research` or `prep` has been run for this company, pull from `coaching_state.md`. If not, ask the candidate for 1-2 genuine reasons before drafting — do not generate a generic answer.

**Default length**: 150-200 words per answer unless a limit is specified.

**Tone**: Written, not spoken. No filler transitions ("I think", "So basically", "At the end of the day"). Structured but not robotic. First-person, active voice.

---

**Step 6: Output**

Present all answers as a ready-to-paste block, labeled by question number and type:

```
## Q1 [Behavioral] — Story used: [Story title]
[Answer]

## Q2 [Process/Method] — Story used: [Story title]
[Answer]

## Q3 [Tools/Experience]
[Answer]
```

Flag any questions where evidence was missing and the candidate needs to provide context.

---

**Step 7: Save**

Save to `job-search/[company]_application.md` with:
- Company name and role at the top
- Date drafted
- Each question and its answer (including story label for behavioral/process answers)
- Any flagged gaps

Add to Session Log in `coaching_state.md`.
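A sketch of the save step, assuming the company name is slugified into the `job-search/[company]_application.md` pattern. The slug rule and overwrite-on-save behavior are assumptions the spec leaves open:

```python
import datetime
import re
from pathlib import Path

def save_application(company: str, role: str, body: str) -> Path:
    # Slugify the company name for the filename (assumed convention).
    slug = re.sub(r"[^a-z0-9]+", "_", company.lower()).strip("_")
    path = Path("job-search") / f"{slug}_application.md"
    path.parent.mkdir(exist_ok=True)
    header = f"# {company} Application — {role}\nDate: {datetime.date.today()}\n\n"
    path.write_text(header + body)
    return path
```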

---

### Output Schema

```markdown
# [Company] Application — [Role]
Date: [date]

## Q1 — [Question] [Type: Behavioral / Process / Tools / Why Us]
Story used: [Story title] (for Behavioral/Process only)
[Answer — 150-200 words unless limit specified]

## Q2 — [Question] [Type]
[Answer]

## Flagged Gaps
- Q[N]: [What's missing — what the candidate needs to provide before this answer can be drafted]
```

---

### Notes

- Never fabricate an experience. If the storybank and resume don't support an answer, flag it and ask.
- For "why us" questions: don't generate a generic answer. Either pull from `prep`/`research` output in `coaching_state.md`, or ask the candidate directly before drafting.
- Written answers read differently than spoken answers. Avoid narrative warmup ("So, this was a project where..."). Start at the situation or the insight.
- Proprietary tools (Aha, Productboard, Salesforce, niche platforms) may not be in the storybank. Check the provided resume explicitly before claiming familiarity.
- The domain match flag is a tiebreaker, not a mandate. A structurally weaker story from a matching domain is not better than a structurally stronger story from a different domain. Use judgement.

**Next commands**: `prep [company]` if not yet run, `concerns` if moving to interview stage, `practice` to prepare for follow-up questions on your written answers.