Releases: REMvisual/claude-handoff

v1.4.1 — Hotfix: genericize example bead IDs

22 Mar 17:05

Fixed

  • Replaced project-specific example bead IDs with generic ones in skill template
  • Updated sync script to catch project-specific patterns automatically
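The automatic pattern check might look like this minimal sketch; the `bd-[0-9]+` pattern, the `check_generic` name, and the skill path are assumptions for illustration, not the project's actual ID format or layout:

```shell
# Hypothetical sketch of the sync script's leak check. The "bd-[0-9]+"
# pattern is an assumed bead-ID format, not the project's real one.
check_generic() {
  # fail if a project-specific bead ID slipped into the template
  if grep -Eq 'bd-[0-9]+' "$1" 2>/dev/null; then
    echo "error: project-specific bead IDs found in $1" >&2
    return 1
  fi
}
```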

v1.4.0 — Two-Phase Write + Parent Cross-Referencing

22 Mar 05:12

What's new

Two-phase write process — The single biggest improvement. Phase 1 writes the handoff (~350-450 lines). Phase 2 reads it back, scans the conversation for uncaptured data, and uses Edit to expand toward the 800-line ceiling. In testing, Phase 2 added 91 lines of evidence tables (verification audits, test suites, 3-way comparisons, local model assessments).
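The Phase 2 gate can be approximated as a line-count check against the targets above; this is a sketch only, and `phase2_needed` plus the file argument are illustrative names, not part of the skill:

```shell
# Minimal sketch of the Phase 2 decision: if the Phase 1 draft is short of
# the 800-line ceiling, signal that the agent should re-scan the conversation
# and expand with Edit. The filename is a placeholder.
phase2_needed() {
  local lines
  lines=$(wc -l < "$1")
  if [ "$lines" -lt 800 ]; then
    echo "draft is $lines lines; expand toward the 800-line ceiling"
    return 0
  fi
  return 1
}
```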

Parent cross-referencing — New "Since Last Handoff" section compares the parent handoff's plan vs what actually happened. Answers the parent's open questions. Shows trajectory shifts. Gives the next session a sense of momentum, not just a snapshot.

Reference document scanning — Auto-detects project bibles, architecture docs, and CLAUDE.md. Lists them in a "Reference Documents" section so the next session knows where to look.

Raised ceilings — Extended target: 500-800 lines (was 300-600). Tier 3 target: 800. The instruction is now "target the CEILING, not the floor."

Validated

7 A/B test iterations on the same 550K-token session:

| Version | Lines | Tables | Data retention |
| --- | --- | --- | --- |
| Baseline (old skill) | 443 | 6 | 100% (reference) |
| V5 (floor enforcement) | 448 | 9 | 76% |
| V7 (two-phase) | 536 | 7+ | 88% |

Install / Update

```shell
git clone https://github.com/REMvisual/claude-handoff.git
cp -r claude-handoff/skills/handoff ~/.claude/skills/
cp -r claude-handoff/skills/handoffplan ~/.claude/skills/
```

Full changelog: CHANGELOG.md

v1.3.0 — Tiered Context Mining

22 Mar 04:36

What's new

Context-aware mining — The skill now automatically detects how much context you've used and selects the right extraction strategy:

  • Tier 1 (<100K tokens): Single checklist pass
  • Tier 2 (100K–500K): Two passes with middle-content gap-fill
  • Tier 3 (500K+): Map-reduce — chunks conversation, extracts per-chunk, merges and validates

This compensates for the "lost in the middle" problem where LLMs miss 30%+ of information from the middle of long contexts.
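The tier thresholds above can be sketched as a simple selection function; token counting itself is out of scope here, so the count is passed in directly, and `select_tier` is an illustrative name:

```shell
# Sketch of the tier thresholds from the release notes.
select_tier() {
  local tokens=$1
  if [ "$tokens" -lt 100000 ]; then
    echo "tier1"   # single checklist pass
  elif [ "$tokens" -lt 500000 ]; then
    echo "tier2"   # two passes with middle-content gap-fill
  else
    echo "tier3"   # map-reduce: chunk, extract per chunk, merge, validate
  fi
}
```

Note that a session at exactly 500K lands in Tier 3, matching the "500K+" boundary above.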

Required User Feedback section — The user's voice (corrections, preferences, frustrations, feature requests) is now a mandatory section that can never be omitted. A/B testing showed complete data loss when this was optional.

Raw data inlining — Small data blocks (<20 lines) like ground truth annotations and reference configs are now included inline as primary evidence, not just referenced by path.
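The under-20-line rule reduces to a small decision like the following sketch; the threshold comes from the notes, while the `maybe_inline` name and "See:" reference format are assumptions:

```shell
# Sketch of the inlining rule: data blocks under 20 lines are emitted
# inline as primary evidence; larger ones are referenced by path only.
maybe_inline() {
  local f=$1
  if [ "$(wc -l < "$f")" -lt 20 ]; then
    cat "$f"          # small: inline as primary evidence
  else
    echo "See: $f"    # large: reference by path
  fi
}
```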

Validated through 5 A/B iterations on the same 550K-token session, comparing output quality across each skill revision.

Fixes

  • Premature file splitting (agents were pre-splitting output into narrative + evidence files even when under the size threshold)
  • Trigger substring matching ("close this session" accidentally firing the skill)
  • Tier 3 line floor (450 minimum) moved into the validation check where the agent actually looks

Install / Update

```shell
git clone https://github.com/REMvisual/claude-handoff.git
cp -r claude-handoff/skills/handoff ~/.claude/skills/
cp -r claude-handoff/skills/handoffplan ~/.claude/skills/
```

Full changelog: CHANGELOG.md