diff --git a/.claude/commands/audit.md b/.claude/commands/audit.md
new file mode 100644
index 0000000..1035386
--- /dev/null
+++ b/.claude/commands/audit.md
@@ -0,0 +1,123 @@
+---
+allowed-tools: Bash(crosslink *), Bash(git *), Bash(ls *), Read
+description: Full context dump for debugging — use when stuck or disoriented
+---
+
+## Context
+
+- Session status: !`crosslink session status`
+- Open issues: !`crosslink issue list -s open`
+- Active locks: !`crosslink locks list 2>/dev/null`
+- Current branch: !`git branch --show-current`
+- Working tree: !`git status --short`
+
+## Your task
+
+You are stuck, confused, or need to re-orient. This skill dumps all available context so you can diagnose the problem. Run through each section and print the results.
+
+### 1. Project grounding (same as /preflight)
+
+Read core rules:
+```
+Read .crosslink/rules/global.md
+```
+
+Detect languages and read relevant rule files (check for `Cargo.toml`, `package.json`, `tsconfig.json`, `pyproject.toml`, `go.mod`, etc.).
+
+Read project-specific rules:
+```
+Read .crosslink/rules/project.md
+```
+
+Read tracking rules based on current mode:
+```bash
+crosslink config get tracking_mode
+```
+Then read `.crosslink/rules/tracking-<mode>.md`.
+
+### 2. Project tree scan
+
+```bash
+ls -1
+```
+
+Scan the project tree (max depth 3, max 50 entries) to ground yourself on actual paths.
+
+### 3. Dependency versions
+
+Read the primary manifest file to confirm actual dependency versions.
+
+### 4. Session state
+
+```bash
+crosslink session status
+```
+
+What issue are you working on? What was the last action?
+
+### 5. Active issue details
+
+If working on an issue, get full details:
+
+```bash
+crosslink issue show <id>
+```
+
+Review all comments, especially plan and decision comments.
+
+### 6. Related issues and blockers
+
+```bash
+crosslink issue blocked
+crosslink issue ready
+```
+
+Are there blocking dependencies? What's unblocked and available?
+
+### 7. Lock state
+
+```bash
+crosslink locks list 2>/dev/null
+```
+
+Are any issues locked by other agents?
+
+### 8. Recent interventions
+
+Check if there have been recent hook blocks or driver redirects by reviewing recent issue comments.
+
+### 9. Hook configuration
+
+```bash
+crosslink config show
+```
+
+What tracking mode is active? What commands are blocked/gated?
+
+### 10. Git state
+
+```bash
+git status
+git log --oneline -5
+git diff --stat HEAD
+```
+
+What's the current branch state? Any uncommitted changes?
+
+### 11. Print diagnostic summary
+
+```
+Audit summary:
+  Session: active / working on #<id>
+  Branch: <branch>
+  Tracking: <mode>
+  Languages: <detected languages>
+  Open issues: <count>
+  Blocked: <count>
+  Locks: <count>
+  Uncommitted: <n> files changed
+
+Loaded rules: global.md, <lang>.md, tracking-<mode>.md, project.md
+```
+
+You are now fully re-oriented. Decide your next action based on this context.
diff --git a/.claude/commands/check.md b/.claude/commands/check.md
index a706edd..64701fa 100644
--- a/.claude/commands/check.md
+++ b/.claude/commands/check.md
@@ -1,33 +1,79 @@
 ---
-allowed-tools: Bash(tmux *), Bash(cat *), Bash(test *), Bash(ls *), Bash(git *), Bash(tail *)
-description: Check on a background feature agent running in a tmux session
+allowed-tools: Bash(tmux *), Bash(docker *), Bash(crosslink *), Bash(cat *), Bash(test *), Bash(ls *), Bash(git *), Bash(tail *), Bash(grep *)
+description: Check on background feature agents running in containers or tmux sessions
 ---
 
 ## Context
 
-- Active tmux sessions: !`tmux list-sessions 2>/dev/null || echo "no tmux server running"`
+- Running containers: !`docker ps --filter label=crosslink-agent=true --format 'table {{.Names}}\t{{.Status}}' 2>/dev/null`
+- Active tmux sessions: !`tmux list-sessions 2>/dev/null`
 - Current worktrees: !`git worktree list`
 
 ## Your task
 
-The user optionally provides a tmux session name (e.g. `feat-add-batch-retry`). If no session name is given, check **all** active `feat-*` tmux sessions and report a summary for each.
+The user optionally provides an agent name (e.g. `crosslink-task-add-batch-retry` or `feat-add-batch-retry`). If no name is given, check **all** active feature agents (both containers and tmux sessions) and report a summary for each.
 
-### 1. Identify sessions to check
+### 1. Identify agents to check
 
-- If the user provided a session name, use that single session.
-- If no session name was provided, list all tmux sessions whose names start with `feat-`: `tmux list-sessions -F '#{session_name}' 2>/dev/null | grep '^feat-'`
-- If no `feat-*` sessions exist, report "No active feature agent sessions found."
+#### a. Find container-based agents
 
-### 2. For each session, perform these checks:
+1. Get this repo's worktree paths: `git worktree list --porcelain | grep '^worktree ' | sed 's/^worktree //'`
+2. List crosslink containers: `docker ps -a --filter label=crosslink-agent=true --format '{{.Names}} {{.Status}} {{.Label "crosslink-task"}}' 2>/dev/null`
+3. Match containers to this repo: for each container, check if its `crosslink-task` label value matches any of this repo's worktree directory names (the last path component)
+4. If the user provided a name starting with `crosslink-task-`, filter to that specific container
 
-#### a. Check the sentinel file
+#### b. Find tmux-based agents
 
-Find the worktree path for this session by checking `git worktree list` and matching the session name to a feature branch. The session name `feat-<slug>` corresponds to branch `feature/<slug>`.
+1. Get `feat-*` tmux sessions: `tmux list-sessions -F '#{session_name} #{session_path}' 2>/dev/null | grep '^feat-'`
+2. Only include sessions whose `session_path` matches one of this repo's worktree paths
+3. If the user provided a name starting with `feat-`, filter to that specific session
+
+If no agents found in either mode, report "No active feature agents for this repo."
+
+### 2. For each agent, perform these checks:
+
+#### For container-based agents:
+
+##### a. Check the sentinel file
+
+Get the worktree path by matching the container's `crosslink-task` label to a worktree directory name from `git worktree list`.
+
+- Check: `cat <worktree>/.kickoff-status 2>/dev/null`
+- If it contains `DONE`, mark as finished.
+- If it contains `CI_FAILED`, mark as CI failure.
+
+##### b. Check container status
+
+```bash
+docker inspect --format '{{.State.Status}} (exit {{.State.ExitCode}})' <container>
+```
+
+Possible states: `running`, `exited` (check exit code), `restarting`, `paused`.
+
+##### c. Capture recent output
+
+```bash
+docker logs --tail 80 <container> 2>&1
+```
+
+##### d. Analyze state
+
+- **Working**: Container status is `running`, no sentinel file, recent tool calls visible in logs
+- **Idle**: Container is `running` but no recent output changes — may be thinking or waiting for API
+- **Error**: Container `exited` with non-zero exit code, or error messages in recent logs
+- **Done**: Sentinel file says `DONE`, or container exited with code 0
+- **CI Failed**: Sentinel file says `CI_FAILED`
+
+#### For tmux-based agents:
+
+##### a. Check the sentinel file
+
+Get the worktree path for this session from tmux: `tmux display-message -t <session> -p '#{session_path}'`. Alternatively, match the session name to a feature branch in `git worktree list`.
 
 - Check if `.kickoff-status` exists in the worktree: `cat <worktree>/.kickoff-status 2>/dev/null`
 - If it contains `DONE`, mark this session as finished.
 
-#### b. Capture the terminal state
+##### b. Capture the terminal state
 
 ```bash
 tmux capture-pane -t <session> -p -S -80
@@ -35,7 +81,7 @@ tmux capture-pane -t <session> -p -S -80
 ```
 
 This captures the last ~80 lines of visible output.
 
-#### c. Analyze state
+##### c. Analyze state
 
 Read the captured output and determine the agent's current state:
 
@@ -46,38 +92,49 @@ Read the captured output and determine the agent's current state:
 
 ### 3. Report
 
-When checking **multiple sessions**, use a compact table format:
+When checking **multiple agents**, use a compact table format with a backend indicator:
 
 ```
 Feature Agents:
-  feat-add-retry             Working  Implementing retry logic in _sources.py
-  feat-fix-lens-bug          Done     All changes committed and reviewed
-  feat-new-cli-cmd           Waiting  Asking about CLI argument format
+  crosslink-task-add-retry   [container]  Working  Implementing retry logic in _sources.py
+  crosslink-task-fix-lens    [container]  Done     All changes committed and reviewed
+  feat-new-cli-cmd           [tmux]       Waiting  Asking about CLI argument format
 ```
 
-When checking a **single session**, use the detailed format:
+When checking a **single agent**, use the detailed format:
 
 ```
-Session: <name>
-Status: <state>
+Agent: <name>
+Backend: <container|tmux>
+Status: <state>
 <2-3 sentence summary of what the agent is currently doing or has accomplished>
 ```
 
+For container agents, also show resource usage:
+```bash
+docker stats --no-stream --format '  CPU: {{.CPUPerc}}  Memory: {{.MemUsage}}' <container> 2>/dev/null
+```
+
 ### 4. Offer actions
 
-Based on the status of each session, suggest relevant next steps:
+#### For container agents:
+
+- **If working/idle**: "Check back later, or view live logs: `crosslink container logs <name> -f`"
+- **If done**: "Agent finished. Review the changes: `cd <worktree> && git log --oneline <base>..HEAD`"
+- **If error**: Show the relevant error output. Suggest: "Debug with: `crosslink container shell <name>` or view full logs: `crosslink container logs <name> --tail 500`"
+- **If exited (non-zero)**: "Container exited with error. View logs: `crosslink container logs <name>`. Restart with: `crosslink container kill <name> && crosslink container start <name>`"
+
+#### For tmux agents:
 
 - **If working**: "Check back later, or attach directly: `tmux attach -t <session>`"
 - **If waiting for input**: Read the question, and ask the user what to answer. If the user provides an answer, send it: `tmux send-keys -t <session> "<answer>" Enter`
-- **If done**: "Agent finished. Review the changes: `cd <worktree> && git log --oneline develop..HEAD`"
+- **If done**: "Agent finished. Review the changes: `cd <worktree> && git log --oneline <base>..HEAD`"
 - **If error**: Show the relevant error output and suggest the user attach to debug: `tmux attach -t <session>`
 
-When multiple sessions are reported, only show detailed actions for sessions that need attention (Waiting or Error).
-
 ## Constraints
 
 - Do not modify any files in the worktree — this is a read-only check.
-- Do not kill the tmux session unless the user explicitly asks.
-- When relaying a user's answer to a waiting prompt, send exactly what the user provides — do not embellish or modify.
+- Do not kill containers or tmux sessions unless the user explicitly asks.
+- When relaying a user's answer to a waiting tmux prompt, send exactly what the user provides — do not embellish or modify.
diff --git a/.claude/commands/commit.md b/.claude/commands/commit.md
new file mode 100644
index 0000000..10bb8da
--- /dev/null
+++ b/.claude/commands/commit.md
@@ -0,0 +1,65 @@
+---
+allowed-tools: Bash(git *), Bash(crosslink *)
+description: Commit changes and auto-document the result on the active crosslink issue
+---
+
+## Context
+
+- Working tree status: !`git status --short`
+- Current branch: !`git branch --show-current`
+- Active session: !`crosslink session status 2>/dev/null`
+
+## Your task
+
+The user wants to commit their current changes. You will create a well-formed git commit AND automatically record a result comment on the active crosslink issue.
+
+### 1. Review changes
+
+Run `git diff --cached --stat` and `git diff --stat` to see staged and unstaged changes. If nothing is staged, stage the relevant files (ask the user if unclear which files to include). Never use `git add -A` blindly — stage specific files.
+
+### 2. Write the commit message
+
+- Summarize what changed and why (1-2 sentences)
+- Follow conventional commit style if the project uses it
+- Include the crosslink issue reference if an active issue exists (e.g. `[CL-5]`)
+
+### 3. Create the commit
+
+```bash
+git commit -m "<message>"
+```
+
+### 4. Auto-document the result on the active crosslink issue
+
+After a successful commit, check if there's an active crosslink session with an active issue:
+
+```bash
+crosslink session status
+```
+
+If an active issue exists, record the commit as a result comment:
+
+```bash
+crosslink issue comment <id> "Committed: <summary> | Files: <diffstat>" --kind result
+```
+
+For example:
+```bash
+crosslink issue comment 5 "Committed: Add typed comment support to schema | Files: 14 files changed, 312 insertions(+), 48 deletions(-)" --kind result
+```
+
+If no active session or issue, skip the comment silently.
+
+### 5. Show summary
+
+Display:
+- The commit hash and message
+- Files changed summary
+- Whether the result was recorded on a crosslink issue
+
+## Constraints
+
+- Never force-push or amend commits without explicit user request.
+- Never use `git add -A` or `git add .` without confirming with the user.
+- Always record the result comment after a successful commit when an active issue exists.
+- If the commit fails (e.g. pre-commit hook), fix the issue and retry — do NOT record a result comment for failed commits.
diff --git a/.claude/commands/crosslink-guide.md b/.claude/commands/crosslink-guide.md
new file mode 100644
index 0000000..f03fa48
--- /dev/null
+++ b/.claude/commands/crosslink-guide.md
@@ -0,0 +1,124 @@
+You are helping the user understand and use crosslink, an issue tracker designed for AI-assisted development. Read the project's CLAUDE.md for the full command reference, then answer the user's question using that context.
+
+## What is Crosslink?
+
+Crosslink is a local-first issue tracker that stores data in SQLite (`.crosslink/issues.db`) and syncs state via a git coordination branch (`crosslink/hub`). It's designed for AI agents working in Claude Code, providing:
+
+- **Issue tracking** that persists across AI sessions
+- **Session handoff** so new conversations pick up where the last left off
+- **Typed comments** for audit trails (plan, decision, observation, blocker, resolution, result)
+- **Multi-agent coordination** via locks, swarms, and kickoff agents
+- **Knowledge base** for shared documentation across sessions
+
+## Core Workflow
+
+Every work session follows this pattern:
+
+```bash
+# 1. Start session (see what the last session handed off)
+crosslink session start
+
+# 2. Create or pick an issue to work on
+crosslink quick "Fix login bug" -p high -l bug   # create + track in one step
+# OR
+crosslink issue list -s open                     # see existing issues
+crosslink session work <id>                      # pick one
+
+# 3. Document as you work
+crosslink issue comment <id> "Approach: ..." --kind plan
+crosslink issue comment <id> "Chose X over Y because ..." --kind decision
+
+# 4. End session with handoff notes
+crosslink session end --notes "Fixed the bug, PR ready for review"
+```
+
+## Common Tasks
+
+### Creating issues
+```bash
+crosslink issue create "Title" -p medium    # basic create
+crosslink quick "Title" -p high -l bug      # create + label + start working
+crosslink subissue "Child title"            # create under a parent
+```
+
+### Querying issues
+```bash
+crosslink issue list                  # open issues (default)
+crosslink issue list -s all           # all issues including closed
+crosslink issue list -l bug -p high   # filter by label and priority
+crosslink issue search "keyword"      # full-text search
+crosslink issue show <id>             # full details
+crosslink issue tree                  # hierarchy view
+crosslink issue next                  # suggest what to work on
+```
+
+### Issue lifecycle
+```bash
+crosslink issue close <id>            # close when done
+crosslink issue reopen <id>           # reopen if needed
+crosslink issue delete <id> --force   # permanent delete
+```
+
+### Comments (typed for audit trails)
+```bash
+crosslink issue comment <id> "text" --kind plan         # what you intend to do
+crosslink issue comment <id> "text" --kind decision     # why you chose this approach
+crosslink issue comment <id> "text" --kind observation  # something you discovered
+crosslink issue comment <id> "text" --kind blocker      # what's blocking progress
+crosslink issue comment <id> "text" --kind resolution   # how a blocker was resolved
+crosslink issue comment <id> "text" --kind result       # what was delivered
+```
+
+### Labels and dependencies
+```bash
+crosslink issue label <id> bug            # add a label
+crosslink issue block <id> <blocker-id>   # mark dependency
+crosslink issue blocked                   # show blocked issues
+crosslink issue ready                     # show unblocked issues
+```
+
+### Sessions
+```bash
+crosslink session start               # begin work, see last handoff
+crosslink session work <id>           # set current focus
+crosslink session status              # check what you're working on
+crosslink session action "did X"      # breadcrumb before compression
+crosslink session end --notes "context for next session"
+```
+
+### Multi-agent (kickoff)
+```bash
+crosslink kickoff run <issue-id>      # launch agent in worktree
+crosslink kickoff status              # check running agents
+crosslink kickoff logs <name>         # view agent output
+crosslink kickoff stop <name>         # stop an agent
+crosslink kickoff list                # list all worktrees
+```
+
+## Issue ID Formats
+
+- `#42` — hub-synced issue with positive display ID
+- `L3` — local-only issue (not yet pushed to remote)
+- Both formats work in all commands: `crosslink show 42`, `crosslink show L3`
+
+## Priorities
+
+`critical` > `high` > `medium` > `low`
+
+## Global Flags
+
+- `--quiet` / `-q` — minimal output (for scripting)
+- `--json` — machine-readable JSON output
+
+## Troubleshooting
+
+```bash
+crosslink sync                          # re-sync from remote
+crosslink integrity counters --repair   # fix counter drift
+crosslink integrity hydration --repair  # re-hydrate SQLite from JSON
+crosslink compact                       # compact event logs
+```
+
+## Now Answer the User's Question
+
+Read the project's CLAUDE.md file for any project-specific crosslink configuration, then answer the user's question about crosslink usage.
diff --git a/.claude/commands/design.md b/.claude/commands/design.md
new file mode 100644
index 0000000..9038d83
--- /dev/null
+++ b/.claude/commands/design.md
@@ -0,0 +1,204 @@
+---
+allowed-tools: Bash(crosslink *), Bash(git *), Bash(gh *), Bash(ls *), Bash(mkdir *), Read, Grep, Glob, Write
+description: Interactive, iterative design document authoring grounded in codebase exploration
+---
+
+## Context
+
+- Current repo root: !`git rev-parse --show-toplevel`
+- Current branch: !`git branch --show-current`
+- Active session: !`crosslink session status`
+- Existing design docs: !`ls .design/*.md 2>/dev/null`
+- Architecture files: !`ls README.md CLAUDE.md ARCHITECTURE.md ADR.md 2>/dev/null`
+
+## Your task
+
+You are an interactive design document author. You help the user go from a rough feature idea to a validated, codebase-grounded design document ready for `crosslink kickoff --doc`.
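A minimal sketch of the slug convention used for `.design/<slug>.md` paths (lowercase, special characters stripped, spaces to hyphens); the title here is an invented example, and the exact rules are whatever the command documents:

```shell
# Derive a design-doc slug from a feature title (a sketch):
# lowercase, drop special characters, collapse spaces, spaces -> hyphens.
title="Add batch retry logic"
slug="$(printf '%s' "$title" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -e 's/[^a-z0-9 -]//g' -e 's/  */ /g' -e 's/ /-/g')"
echo "$slug"   # add-batch-retry-logic
```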
+
+### Arguments
+
+The user may pass these after `/design`:
+
+- A quoted feature description: `/design "add batch retry logic for sync"`
+- `--issue <id>`: Pull context from a crosslink issue
+- `--gh-issue <id>`: Pull context from a GitHub issue
+- `--continue <slug>`: Resume iteration on an existing draft in `.design/<slug>.md`
+
+If no arguments are given, ask the user what feature they want to design.
+
+### Phase 1: Explore & Interview (skip if `--continue`)
+
+1. **Gather context** from all available sources:
+   - If `--issue <id>`: run `crosslink issue show <id>` to read the issue
+   - If `--gh-issue <id>`: run `gh issue view <id>`
+   - Read architecture files (README.md, CLAUDE.md, ARCHITECTURE.md) if they exist
+   - Search for related code using `Grep` and `Glob` — find modules, types, functions, and test patterns related to the feature
+   - Check existing knowledge: `crosslink knowledge search "<keywords>"`
+   - Check existing design docs in `.design/`
+
+2. **Ask 3-5 clarifying questions** grounded in what you found:
+   - Reference specific files, functions, or patterns you discovered
+   - Ask about ambiguities that affect architecture decisions
+   - Ask about scope boundaries
+   - Do NOT ask generic questions — every question must reference something concrete from the codebase
+
+3. **Wait for the user to answer** before proceeding to Phase 2.
+
+### Phase 2: Draft
+
+4. **Create the `.design/` directory** if it doesn't exist:
+   ```bash
+   mkdir -p .design
+   ```
+
+5. **Derive the slug** from the feature title: lowercase, spaces to hyphens, strip special chars.
+   Example: "Add batch retry logic" → `add-batch-retry-logic`
+
+6. **Write the design document** to `.design/<slug>.md` using this exact format:
+
+```markdown
+# Feature: <title>
+
+## Summary
+1-3 sentence overview of what this feature does and why.
+
+## Requirements
+- REQ-1: <specific, measurable requirement grounded in codebase>
+- REQ-2: ...
+
+## Acceptance Criteria
+- [ ] AC-1: <mechanically testable criterion>
+- [ ] AC-2: ...
+
+## Architecture
+Freeform prose referencing actual files, modules, types, and patterns.
+Describes what gets modified, how it fits existing architecture, key
+data structures, and error handling approach.
+
+## Open Questions
+
+<!-- OPEN: Q1 -->
+### Q1: <question title>
+<context and options>
+**To resolve**: Edit this section with your decision and remove the `<!-- OPEN -->` marker.
+<!-- /OPEN -->
+
+## Out of Scope
+- <explicit exclusion to prevent scope creep>
+```
+
+**Quality standards — enforce all of these:**
+- Requirements reference real codebase concepts (not generic "should handle errors")
+- Acceptance criteria are mechanically testable (a CI system could verify)
+- Architecture references actual file paths discovered during exploration
+- No placeholder text (`<...>`, `TODO`, `TBD`)
+- Every requirement maps to at least one acceptance criterion
+- Genuine ambiguities become `<!-- OPEN -->` blocks, not guesses
+
+### Phase 3: Resolve open questions (interactive)
+
+After the initial draft, if there are `<!-- OPEN -->` blocks:
+
+7. **Present each open question to the user directly** using conversational text output. For each `<!-- OPEN: question -->` block, ask the user to answer. Do NOT require the user to edit the file manually.
+
+8. **Collect answers**: Wait for the user's response to each question. If the user says "skip" or "later", leave the `<!-- OPEN -->` block in place.
+
+9. **Update the document** with the user's answers:
+   - Replace the `<!-- OPEN: question -->` block with the resolved content
+   - Adjust requirements and acceptance criteria based on the answers
+   - If answers change scope, re-explore the codebase for newly relevant code
+
+### Phase 4: Iterate (when `--continue` is used)
+
+10. **Read the existing draft**: `Read .design/<slug>.md`
+
+11. **Detect remaining open questions**: Scan for `<!-- OPEN: ... -->` blocks. If any remain, present them interactively (Phase 3 flow). If all resolved, proceed to strengthening.
+
+12. **Update the document**:
+    - Strengthen sections based on resolved questions
+    - Update requirements and acceptance criteria if scope changed
+    - Add new `<!-- OPEN -->` blocks if new ambiguities surfaced
+
+13. **Write the updated document** back to `.design/<slug>.md`
+
+### Validation
+
+After writing (or updating) the document, run validation and print results:
+
+```
+Design doc validation:
+  [PASS] Summary present
+  [PASS] Requirements: N items
+  [PASS] Acceptance Criteria: N items
+  [PASS] Architecture references real files
+  [WARN] REQ-X has no matching acceptance criterion (if applicable)
+  [PASS] No placeholder text
+  [OPEN] N unresolved open questions remain (if applicable)
+```
+
+Check these:
+- Summary section is non-empty
+- At least 2 requirements exist
+- At least 2 acceptance criteria exist
+- Architecture section references at least one real file path (verify with `ls`)
+- No `<...>`, `TODO`, or `TBD` in the document
+- Each REQ-N has at least one AC-N that addresses it
+- Count remaining `<!-- OPEN -->` blocks
+
+### Knowledge Integration
+
+After validation:
+
+1. **Store as knowledge page**:
+   ```bash
+   crosslink knowledge add "<slug>" --from-doc .design/<slug>.md --tag design-doc
+   ```
+
+2. **If `--issue` was provided**, comment on the issue:
+   ```bash
+   crosslink issue comment <id> "Design doc drafted: .design/<slug>.md" --kind plan
+   ```
+
+### Pipeline State Initialization
+
+After writing the design document, create the pipeline state file so the kickoff wizard can track it:
+
+```bash
+cat > .design/<slug>.pipeline.json << 'PIPELINE_EOF'
+{
+  "schema_version": 1,
+  "design_doc": ".design/<slug>.md",
+  "doc_hash": "<sha256 hash of the design doc content>",
+  "stage": "designed",
+  "plans": [],
+  "runs": []
+}
+PIPELINE_EOF
+```
+
+Compute the `doc_hash` as the SHA-256 hex digest of the design doc content, prefixed with `sha256:`. You can compute it with: `shasum -a 256 .design/<slug>.md | awk '{print "sha256:" $1}'`
+
+### Summary Output
+
+Print this summary after every invocation:
+
+```
+Design document written: .design/<slug>.md
+
+Validation: N requirements, N acceptance criteria, N open questions
+Knowledge: Stored as "<slug>" (tagged: design-doc)
+Issue: Commented on #<id> (if applicable)
+
+Next steps:
+  - Edit in your editor: $EDITOR .design/<slug>.md
+  - Continue iterating: /design --continue <slug>
+  - Launch pipeline: crosslink kickoff .design/<slug>.md
+```
+
+### Rules
+
+- Do NOT modify any source code files. You only write to `.design/`.
+- Do NOT automatically run `kickoff plan` or `kickoff run`. Suggest them in the output.
+- Do NOT auto-create crosslink issues. The user manages issue lifecycle.
+- Every question you ask must be grounded in specific codebase findings.
+- A document with unresolved `<!-- OPEN -->` blocks is valid but flagged.
diff --git a/.claude/commands/dev-release.md b/.claude/commands/dev-release.md
new file mode 100644
index 0000000..3d1261f
--- /dev/null
+++ b/.claude/commands/dev-release.md
@@ -0,0 +1,293 @@
+---
+allowed-tools: Bash(git *), Bash(crosslink *), Bash(cargo *), Bash(npm *), Bash(gh *), Bash(ls *), Bash(cat *), Bash(grep *), Bash(wc *), Read, Grep, Glob
+description: Guided release automation — version bump, changelog, docs, tests, PR, tag, publish
+---
+
+## Context
+
+- Project root: !`git rev-parse --show-toplevel`
+- Current branch: !`git branch --show-current`
+- Latest tag: !`git describe --tags --abbrev=0 2>/dev/null`
+- Active session: !`crosslink session status 2>/dev/null`
+- Remote: !`git remote get-url origin 2>/dev/null`
+- Main branch: !`git rev-parse --abbrev-ref origin/HEAD 2>/dev/null`
+
+## Your task
+
+You are a release automation assistant. You guide the user through a complete release flow, from creating the release branch through to a published GitHub release. The flow has both **agent steps** (you do them) and **user steps** (blocked by policy — you print the exact command and wait).
+
+### Arguments
+
+The user may pass these after `/dev-release`:
+
+- A version string: `/dev-release v0.6.0` or `/dev-release 0.6.0`
+- `--from <branch>`: Source branch to release from (default: current branch, or `develop` if on `main`)
+- `--to <branch>`: Target branch for the PR (default: `main`)
+- `--skip-docs`: Skip the documentation review step
+- `--skip-tests`: Skip the test/lint step (not recommended)
+
+If no version is given, ask the user what version this release should be.
+
+### Phase overview
+
+Print this at the start so the user knows what's coming:
+
+```
+Release pipeline for v<VERSION>:
+
+  Phase 1: Branch setup           [agent]
+  Phase 2: Version bump           [agent]
+  Phase 3: Changelog              [agent]
+  Phase 4: Documentation review   [agent]
+  Phase 5: Test & lint            [agent]
+  Phase 6: Commit & push          [agent + user]
+  Phase 7: Pull request           [agent]
+  Phase 8: CI & merge             [user]
+  Phase 9: Tag & publish          [user]
+  Phase 10: Back-merge            [user]
+  Phase 11: GitHub release        [agent]
+
+Phases marked [user] require manual commands (git push, merge, tag).
+I'll print the exact commands when we get there.
+```
+
+---
+
+### Phase 1: Branch setup
+
+1. Determine the source branch (from `--from` flag, or ask the user).
+2. Determine the target branch (from `--to` flag, default `main`).
+3. Check if a `release/v<VERSION>` branch already exists:
+   - If yes, ask whether to continue on it or start fresh.
+   - If no, create it: `git branch release/v<VERSION> <source-branch>`
+4. Check out the release branch.
+5. If the source branch has commits ahead of the release branch, inform the user and offer to merge.
+
+**User action if merge needed:**
+```
+⏸ The source branch has commits not on the release branch.
+  Run: git merge <source-branch>
+  Then tell me to continue.
+```
+
+---
+
+### Phase 2: Version bump
+
+Detect the project type and bump the version:
+
+**Rust** (if `Cargo.toml` exists):
+- Edit `version = "..."` in `Cargo.toml`
+- Run `cargo generate-lockfile` to update `Cargo.lock`
+- Verify with `cargo check`
+
+**Node** (if `package.json` exists):
+- Edit `"version": "..."` in `package.json`
+- Run `npm install --package-lock-only` if `package-lock.json` exists
+
+**Python** (if `pyproject.toml` exists):
+- Edit `version = "..."` in `pyproject.toml`
+
+**Elixir** (if `mix.exs` exists):
+- Edit `version: "..."` in `mix.exs`
+
+**Go** (if `go.mod` exists):
+- Go uses git tags for versioning — skip file edits, note this for Phase 9.
+
+If multiple project files exist (monorepo), bump all of them.
+
+After bumping, verify the project compiles/builds.
+
+---
+
+### Phase 3: Changelog
+
+1. Gather all commits since the last tag:
+   ```bash
+   git log --oneline <last-tag>..HEAD --no-merges
+   ```
+2. Read the existing `CHANGELOG.md` (if it exists).
+3. Categorize commits into Added, Fixed, Changed, Security, Deprecated, Removed — based on conventional commit prefixes (`feat:`, `fix:`, `refactor:`, etc.) or crosslink issue labels.
+4. Write a human-readable changelog section for this version.
+5. Clean up any auto-generated bookkeeping entries (e.g., "Create GH issue for...", "File GH bug for...") that aren't user-visible features.
+6. Update the `## [Unreleased]` section to `## [<VERSION>] - <DATE>`.
+7. Show the user the draft changelog and ask for approval before writing.
+
+---
+
+### Phase 4: Documentation review (skip with `--skip-docs`)
+
+1. Read `README.md` and scan for feature descriptions that may be outdated.
+2. If a `docs/` or `docs_src/` directory exists, scan for pages that reference features from the changelog — check if they're documented.
+3. Report gaps: features in the changelog that aren't mentioned in docs.
+4. Ask the user which gaps to address now vs. defer.
+5. Make the agreed-upon doc updates.
+
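The prefix-based bucketing in Phase 3 can be sketched in shell. The commit subjects below are invented examples; real input would come from `git log --oneline <last-tag>..HEAD --no-merges`:

```shell
# Bucket conventional-commit subjects into changelog sections (a sketch;
# unknown prefixes fall through to "Changed").
while read -r subject; do
  case "$subject" in
    feat:*|feat\(*)    echo "Added: ${subject#*: }" ;;
    fix:*|fix\(*)      echo "Fixed: ${subject#*: }" ;;
    refactor:*|perf:*) echo "Changed: ${subject#*: }" ;;
    *)                 echo "Changed: $subject" ;;
  esac
done <<'COMMITS'
feat: add batch retry for sync
fix: handle empty lockfile on bump
COMMITS
```

This prints `Added: add batch retry for sync` and `Fixed: handle empty lockfile on bump`; mapping crosslink issue labels into the same buckets would need an extra lookup per commit.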
+ +--- + +### Phase 5: Test & lint (skip with `--skip-tests`) + +Detect the project toolchain and run: + +**Rust**: `cargo test` and `cargo clippy -- -D warnings` +**Node**: `npm test` and `npm run lint` (if available) +**Python**: `pytest` and `ruff check .` (if available) +**Go**: `go test ./...` and `go vet ./...` +**Elixir**: `mix test` and `mix credo --strict` + +Report results. If there are failures: +- **Lint/format issues**: fix them automatically. +- **Test failures**: report them and ask the user how to proceed (fix, skip, or abort). + +--- + +### Phase 6: Commit & push + +1. Stage all release prep files (version files, CHANGELOG, docs). +2. Commit with message: `release: prepare v<VERSION> — bump version and update CHANGELOG` +3. Print the push command for the user: + +``` +⏸ Ready to push the release branch. + Run: git push -u origin release/v<VERSION> + Tell me when it's done. +``` + +Wait for the user to confirm before proceeding. + +--- + +### Phase 7: Pull request + +After the user confirms the push: + +1. Create a PR using `gh pr create`: + - Title: `Release v<VERSION>` + - Base: target branch (default `main`) + - Body: changelog summary + test results +2. Print the PR URL. + +--- + +### Phase 8: CI & merge + +This phase is entirely user-driven. Print: + +``` +⏸ Waiting for CI and merge. + + 1. Check CI status: gh run list --branch release/v<VERSION> + 2. Review the PR: <PR URL> + 3. Merge when ready: (merge via GitHub UI or CLI) + + Tell me when the PR is merged. +``` + +Wait for the user to confirm the merge. + +--- + +### Phase 9: Tag & publish + +After the user confirms the merge, print the tagging commands: + +``` +⏸ Tag the release on <target-branch>. + Run these commands: + + git checkout <target-branch> + git pull + git tag v<VERSION> + git push origin v<VERSION> + + Tell me when the tag is pushed. +``` + +If the project has a CI publish workflow (check for `.github/workflows/publish.yml` or similar), note that the tag push will trigger it. 
+ +--- + +### Phase 10: Back-merge + +Print the back-merge commands: + +``` +⏸ Merge <target-branch> back into <source-branch>. + Run these commands: + + git checkout <source-branch> + git pull + git merge <target-branch> + git push origin <source-branch> + + Tell me when it's done. +``` + +--- + +### Phase 11: GitHub release + +After the user confirms the back-merge (or tag push if back-merge is deferred): + +1. Check if `gh` CLI is available. +2. Generate release notes from the changelog section for this version. +3. Write to a temp file and create the release: + ```bash + gh release create v<VERSION> --title "v<VERSION>" --notes-file <tempfile> + ``` +4. Print the release URL. + +--- + +### Completion + +Print a final summary: + +``` +✓ Release v<VERSION> complete! + + Branch: release/v<VERSION> + PR: <PR URL> + Tag: v<VERSION> + Release: <release URL> + Publish: <crates.io / npm / etc. if applicable> + + Changelog, docs, and version files are updated. + <source-branch> has been back-merged from <target-branch>. +``` + +--- + +## State tracking + +At each phase transition, print a progress tracker: + +``` + ✓ Phase 1: Branch setup + ✓ Phase 2: Version bump + ✓ Phase 3: Changelog + → Phase 4: Documentation review + ○ Phase 5: Test & lint + ○ Phase 6: Commit & push + ○ Phase 7: Pull request + ○ Phase 8: CI & merge + ○ Phase 9: Tag & publish + ○ Phase 10: Back-merge + ○ Phase 11: GitHub release +``` + +Use `✓` for completed, `→` for current, `○` for pending. + +--- + +## Constraints + +- **Never run `git push`, `git merge`, `git tag`, or any blocked git command.** Always print the command for the user to run manually. +- **Never amend existing commits** — always create new commits. +- **Never force-push** — warn the user if they suggest it. +- **Always show the changelog draft** before writing it to the file. +- **Always show the version bump** before committing. +- If the user says "abort" at any point, stop immediately and print what's been done so far. 
+- If you detect the project is not a git repository, stop and tell the user. +- Be language-agnostic — detect the project type from files present, don't assume any particular stack. +- When waiting for user action, be explicit about what command to run and what to tell you when done. Use the `⏸` marker so it's visually distinct. diff --git a/.claude/commands/featree.md b/.claude/commands/featree.md index c80a021..de049b6 100644 --- a/.claude/commands/featree.md +++ b/.claude/commands/featree.md @@ -1,5 +1,5 @@ --- -allowed-tools: Bash(git *), Bash(crosslink *), Bash(uuidgen), Bash(ls *), Bash(ln *), Bash(rm *), Skill +allowed-tools: Bash(git *), Bash(uuidgen), Bash(ls *), Bash(ln *), Bash(rm *), Bash(test *), Bash(mkdir *), Bash(grep *), Bash(echo *), Bash(crosslink *), Skill description: Create a feature branch and move it to a new git worktree --- @@ -17,38 +17,45 @@ The user provides a human-readable feature description (e.g. "add batch retry lo ### 1. Create the feature branch - Invoke the `/feature` skill with the user's description as the argument. -- This creates the `feature/<slug>` branch and a crosslink issue. +- This creates the `feature/<slug>` branch. - Note the branch name that was created. ### 2. Generate worktree path -- Generate a short random suffix using `uuidgen | cut -c1-8` (8-char hex). -- The worktree path is `<repo-root>--<suffix>` as a sibling of the current repo directory. - - Example: if repo root is `/Users/max/git-forecast/atdata`, the worktree is `/Users/max/git-forecast/atdata--a1b2c3d4`. +- The worktree directory is `<repo-root>/.worktrees/<slug>` (inside the repo, gitignored). +- Extract the slug from the branch name by stripping the `feature/` prefix. +- Create the `.worktrees` directory if it doesn't exist: `mkdir -p <repo-root>/.worktrees` +- Ensure `.worktrees/` is gitignored: check if it's already in `.gitignore`, and if not, append it. ### 3. 
Create the worktree - Switch back to the previous branch (the one we were on before `/feature` created the new branch): `git checkout -` - Create the worktree pointing at the feature branch: `git worktree add <worktree-path> feature/<slug>` -### 4. Symlink crosslink issues database +### 4. Initialize crosslink in the worktree -Replace the worktree's issues database with a symlink to the base clone's copy so all worktrees share a single authoritative crosslink database (the one on `develop`): +After creating the worktree, initialize crosslink so the child agent has proper hooks, skills, and access to shared state: ```bash -rm <worktree-path>/.crosslink/issues.db -ln -s <repo-root>/.crosslink/issues.db <worktree-path>/.crosslink/issues.db -``` +# In the worktree directory: +cd <worktree-path> -### 5. Exclude symlink from git staging +# Set up crosslink hooks and skills in the worktree +crosslink init --force -Prevent the symlink from being accidentally committed and merged (which would overwrite the real SQLite database on the target branch): +# Initialize agent identity for this worktree +# Format: <parent-agent>--<feature-slug> +crosslink agent init <parent-agent>--<feature-slug> -```bash -echo ".crosslink/issues.db" >> <worktree-path>/.git/info/exclude +# Sync latest issues from the coordination branch +crosslink sync ``` -### 6. Report to user +The agent ID should be derived from the parent agent name and the feature slug. For example, if the parent agent is `m1` and the feature slug is `add-retry`, the agent ID would be `m1--add-retry`. + +To get the parent agent name, check `crosslink agent status --json` in the parent repo, or default to the machine hostname. + +### 5. Report to user Print a summary: ``` @@ -63,3 +70,4 @@ To start working: - Never force-push or delete branches. - Do not push the branch to a remote — the user will do that when ready. 
+- Worktrees MUST be placed inside `<repo-root>/.worktrees/` to inherit the project's Claude Code trust scope and settings hierarchy. diff --git a/.claude/commands/feature.md b/.claude/commands/feature.md index 1fdb7ce..b507c8e 100644 --- a/.claude/commands/feature.md +++ b/.claude/commands/feature.md @@ -6,8 +6,8 @@ description: Create a feature branch from a human-readable description ## Context - Current branch: !`git branch --show-current` -- Recent release branches: !`git branch --list 'release/*' | tail -5` -- Existing feature branches: !`git branch --list 'feature/*' | tail -10` +- Recent release branches: !`git branch --list 'release/*'` +- Existing feature branches: !`git branch --list 'feature/*'` - Working tree status: !`git status --short` - Remotes: !`git remote -v` @@ -24,11 +24,10 @@ The user will provide a human-readable description of the feature (e.g. "add bat ### 2. Validate preconditions - Confirm there are no uncommitted changes (other than `.crosslink/issues.db`). If there are, warn the user and ask whether to stash or abort. -- Identify the base branch. Default is `develop`. If the user provides a `--from <ref>` argument, use that instead. +- Identify the base branch. Default to the current branch. If the user provides a `--from <ref>` argument, use that instead. ### 3. Create the branch -- If not already on the base branch, `git checkout <base>` first. - `git checkout -b feature/<slug>` (from the resolved base) - Print the created branch name so the user can confirm. @@ -36,6 +35,7 @@ The user will provide a human-readable description of the feature (e.g. "add bat - Create a crosslink issue for the feature work with the user's original description as the title. - Set priority to `medium` (unless the user specifies otherwise). 
+- Use: `crosslink issue create "<description>" -p medium --label feature` ## Constraints diff --git a/.claude/commands/kickoff.md b/.claude/commands/kickoff.md index 5cabfcc..57893b2 100644 --- a/.claude/commands/kickoff.md +++ b/.claude/commands/kickoff.md @@ -1,5 +1,5 @@ --- -allowed-tools: Bash(git *), Bash(crosslink *), Bash(uuidgen), Bash(ls *), Bash(ln *), Bash(rm *), Bash(tmux *), Bash(claude *), Bash(cat *), Bash(grep *), Bash(which *), Read, Write, Edit, Glob, Grep, Skill, Task +allowed-tools: Bash(crosslink *), Bash(which *), Bash(tmux *) description: Create a worktree and launch a background claude agent in tmux to implement a feature --- @@ -7,126 +7,61 @@ description: Create a worktree and launch a background claude agent in tmux to i - Current repo root: !`git rev-parse --show-toplevel` - Current branch: !`git branch --show-current` -- Existing worktrees: !`git worktree list` -- Working tree status: !`git status --short` - tmux available: !`which tmux` -- CLAUDE.md head: !`head -5 CLAUDE.md 2>/dev/null || echo "no CLAUDE.md"` -- Project root files: !`ls -1` +- claude available: !`which claude` +- gh available: !`which gh` ## Your task -The user provides a feature description (e.g. "add batch retry logic") and optionally additional context, file references, or constraints. You will create an isolated worktree, then launch a background `claude` process in a tmux session to implement the feature autonomously. +The user provides a feature description (e.g. "add batch retry logic") and optionally additional context. You will delegate to the `crosslink kickoff run` CLI command which handles worktree creation, agent prompt generation, and tmux session launch. -### 1. Validate prerequisites +### Arguments -- Confirm `tmux` is installed (`which tmux`). If not, abort with a message telling the user to install it. -- Confirm `claude` CLI is installed (`which claude`). If not, abort. +The user may pass these flags after the feature description: -### 2. 
Create the worktree via /featree +- `--verify <level>`: Controls post-implementation verification depth. + - `local` (default): Local tests + self-review checklist only. + - `ci`: Push branch, open draft PR, wait for CI to pass, fix failures. + - `thorough`: Everything in `ci` plus a structured adversarial self-review. +- `--issue <id>`: Use an existing crosslink issue instead of creating a new one. +- `--container <runtime>`: Use `docker` or `podman` instead of local tmux. Default: `none`. +- `--model <model>`: LLM model to use. Default: `opus`. +- `--timeout <duration>`: Max runtime (e.g. `1h`, `30m`). Default: `1h`. +- All other text is the feature description. -- Invoke the `/featree` skill with the user's feature description. -- Capture the worktree path and branch name from the output. -- The worktree will be at `<repo-root>/.worktrees/<slug>` (inside the repo, inheriting trust scope). +**Parsing**: Split ARGUMENTS on whitespace. Extract recognized `--flag value` pairs. Everything remaining is the feature description. -### 3. Detect project conventions +### Steps -Before writing the prompt, detect what tools the project uses so the child agent gets appropriate instructions: +1. **Validate prerequisites**: Check that `tmux` and `claude` are available (for local mode). If `--verify ci` or `--verify thorough`, check that `gh` is available. If missing, tell the user what to install and stop. -- **Test runner**: Check for `justfile` (`just test`), `Makefile`, `package.json` (`npm test`), `pyproject.toml` (`uv run pytest` or `pytest`), `Cargo.toml` (`cargo test`), etc. -- **Linter**: Check for `ruff.toml`, `.eslintrc`, `clippy`, etc. -- **Task runner**: `just`, `make`, `npm run`, `cargo`, etc. -- **CLAUDE.md**: If present, the child agent will read it for full project conventions. - -### 4. Prepare the agent prompt - -Build a detailed prompt for the child agent. The prompt must be self-contained — the child has no access to this conversation's context. 
Include: - -- The feature description from the user -- Any specific files, modules, or code areas the user mentioned -- Any constraints or requirements the user specified -- Instructions to: - 1. **Read the project's CLAUDE.md** (if it exists) for conventions before starting - 2. Explore relevant code before making changes - 3. Implement the feature fully (no stubs or placeholders) - 4. **Run the project's test suite** to verify changes don't break anything (use the detected test command) - 5. Use `/commit` to commit the work when implementation is complete - 6. Review the diff of all changes and fix any issues found - 7. Use `/commit` again after any fixes - 8. When completely finished, write the word `DONE` to a file called `.kickoff-status` in the worktree root - -Write the prompt to `KICKOFF.md` in the worktree root. Ensure it's excluded from git by adding to the **main repo's** `.git/info/exclude` (worktrees share `info/exclude` from `$GIT_COMMON_DIR`): +2. **Build the crosslink kickoff command**: Map parsed arguments to CLI flags: ```bash -# Get the common git dir (main repo's .git/) -common_dir=$(git -C <worktree-path> rev-parse --git-common-dir) -# Add KICKOFF.md and .kickoff-status if not already present -grep -qxF 'KICKOFF.md' "$common_dir/info/exclude" || echo "KICKOFF.md" >> "$common_dir/info/exclude" -grep -qxF '.kickoff-status' "$common_dir/info/exclude" || echo ".kickoff-status" >> "$common_dir/info/exclude" +crosslink kickoff run "<feature description>" \ + --verify <level> \ + --container <runtime> \ + --model <model> \ + --timeout <duration> ``` -### 5. Derive the tmux session name - -- Use the feature branch slug as the tmux session name (e.g. `feat-add-batch-retry-logic`). -- Prefix with `feat-` and truncate to 50 characters if needed. -- Replace any characters invalid for tmux session names (periods, colons) with hyphens. +Add `--issue <id>` if the user specified one. Add `--dry-run` if the user asked for a dry run. -### 6. 
Launch the tmux session +3. **Run the command**: Execute `crosslink kickoff run` with all flags. The CLI handles: + - Creating the feature branch and worktree + - Creating or assigning the crosslink issue + - Initializing the agent identity + - Detecting project conventions + - Building the self-contained KICKOFF.md prompt + - Launching the tmux session (or container) -```bash -tmux new-session -d -s <session-name> -c <worktree-path> -``` - -Then send the claude command into the session. **Important:** -- Prefix with `env -u CLAUDECODE` to clear the nested-session detection variable inherited from the parent shell. -- Use `--allowedTools` to auto-approve the listed tools so the agent can work without interactive permission prompts. -- Do **NOT** use `--dangerously-skip-permissions` — the agent operates within the normal permission model with `--allowedTools` for auto-approval. - -**Critical quoting rules for tmux send-keys:** -- `--allowedTools` is variadic (consumes all subsequent space-separated args). You MUST pass tools as a **single comma-separated string** so the prompt isn't consumed as a tool name. -- Use `--` before the positional prompt argument to terminate option parsing. -- The prompt file content must be passed as a single shell argument. Use `"$(cat KICKOFF.md)"` with proper quoting. Since the tmux session's working directory is already the worktree, `KICKOFF.md` resolves correctly. 
- -```bash -tmux send-keys -t <session-name> "env -u CLAUDECODE claude --model opus --allowedTools 'Read,Write,Edit,Glob,Grep,Skill,Task,WebSearch,WebFetch,Bash(git *),Bash(ls *),Bash(mkdir *),Bash(test *),Bash(which *),Bash(touch *),Bash(cat *),Bash(head *),Bash(tail *),Bash(wc *),Bash(diff *),Bash(echo *),<project-specific-tools>' -- \"\$(cat KICKOFF.md)\"" Enter -``` - -**Project-specific tool additions** (add to the comma-separated list based on what was detected): -- Python/uv project: `Bash(uv *)` -- Node project: `Bash(npm *),Bash(npx *)` -- Rust project: `Bash(cargo *)` -- Just task runner: `Bash(just *)` -- Make task runner: `Bash(make *)` -- Crosslink present: `Bash(crosslink *)` - -**Permission rationale:** -- `Read`, `Write`, `Edit`, `Glob`, `Grep` — full file operations for implementation -- `Skill` — enables `/commit` and other skill workflows -- `Task` — subagent exploration and code search -- `WebSearch`, `WebFetch` — look up docs and API references -- `Bash(git *)` — version control operations -- Project-specific tools — test runner, package manager, task runner -- Standard utilities (`ls/mkdir/test/which/touch/cat/head/tail/wc/diff/echo`) — file inspection -- **Excluded**: unrestricted `Bash`, `NotebookEdit`, destructive commands (`rm`, `curl|sh`, etc.) - -### 7. Report to user - -Print a summary: - -``` -Feature agent launched. - - Worktree: <path> - Branch: feature/<slug> - Session: <tmux-session-name> - - Check status: /check <tmux-session-name> - Attach directly: tmux attach -t <tmux-session-name> -``` +4. **Report**: The CLI prints the summary. Relay it to the user. Remind them to: + - Approve trust: `tmux attach -t <session-name>` + - Check status: `crosslink kickoff status <agent-id>` or `/check <session-name>` ## Constraints - Never force-push or delete branches. -- Do not push the branch to a remote. 
-- The child agent prompt must be fully self-contained — assume it has zero context from this conversation beyond what you explicitly include. -- Leave `KICKOFF.md` in the worktree for reference (it's git-excluded). -- If a tmux session with the same name already exists, append a short random suffix rather than failing. +- Do not push the branch to a remote from this skill. (The child agent handles pushing when `--verify ci` or `--verify thorough`.) +- All prompt building and agent lifecycle is handled by `crosslink kickoff run`. +- If a tmux session with the same name already exists, the CLI appends a random suffix automatically. diff --git a/.claude/commands/maintain.md b/.claude/commands/maintain.md new file mode 100644 index 0000000..bdfba60 --- /dev/null +++ b/.claude/commands/maintain.md @@ -0,0 +1,184 @@ +--- +allowed-tools: Bash(crosslink *), Bash(git *), Bash(cargo *), Bash(npm *), Bash(npx *), Bash(uv *), Bash(ruff *), Bash(go *), Bash(mix *), Bash(ls *), Read, Grep +description: Codebase maintenance — dependency audit, lint health, dead code, test gaps, issue hygiene +--- + +## Context + +- Project root: !`git rev-parse --show-toplevel` +- Current branch: !`git branch --show-current` +- Active session: !`crosslink session status` +- Open issues: !`crosslink issue list -s open 2>/dev/null` + +## Your task + +Run a structured codebase maintenance pass. This is a periodic health check — not a pre-commit review. Check each section, report findings, and fix what you can. + +### 1. Dependency health + +Detect the project's toolchain and audit dependencies: + +**Rust** (if `Cargo.toml` exists): +```bash +cargo update --dry-run 2>&1 | head -30 +``` +Check for outdated or yanked crates. 
If `cargo-audit` is available: +```bash +cargo audit 2>/dev/null || echo "cargo-audit not installed — skip" +``` + +**Node** (if `package.json` exists): +```bash +npm outdated 2>/dev/null || echo "npm outdated not available" +npm audit --audit-level=moderate 2>/dev/null || echo "npm audit not available" +``` + +**Python** (if `pyproject.toml` or `requirements.txt` exists): +```bash +uv pip list --outdated 2>/dev/null || pip list --outdated 2>/dev/null || echo "skip" +``` + +**Elixir** (if `mix.exs` exists): +```bash +mix hex.outdated 2>/dev/null || echo "mix hex.outdated not available" +mix deps.audit 2>/dev/null || echo "mix deps.audit not available" +``` + +Report: list any dependencies with known vulnerabilities or major version bumps available. + +### 2. Lint and format check + +Run the full lint suite without fixing — report-only mode: + +**Rust**: +```bash +cargo clippy -- -D warnings 2>&1 +cargo fmt --check 2>&1 +``` + +**Node/TypeScript**: +```bash +npx eslint . 2>/dev/null || npm run lint 2>/dev/null +``` + +**Python**: +```bash +ruff check . 2>/dev/null || uv run ruff check . 2>/dev/null +``` + +**Go**: +```bash +go vet ./... 2>/dev/null +gofmt -l . 2>/dev/null +``` + +**Elixir**: +```bash +mix format --check-formatted 2>&1 +mix credo --strict 2>&1 +``` + +Count warnings and errors. If any are found, fix them. + +### 3. Test suite health + +Run the full test suite and assess health: + +**Rust**: `cargo test 2>&1` +**Node**: `npm test 2>&1` +**Python**: `uv run pytest 2>/dev/null || pytest 2>/dev/null` +**Go**: `go test ./... 2>/dev/null` +**Elixir**: `mix test 2>&1` + +Report: +- Total tests, passed, failed, skipped +- Any flaky tests (if visible from output) +- Tests that are unusually slow + +### 4. 
Dead code and stale patterns + +Search the codebase for patterns that indicate maintenance debt: + +``` +TODO, FIXME, HACK, XXX, DEPRECATED +``` + +Also search for: +- `#[allow(dead_code)]` or `#[allow(unused)]` in Rust +- `// eslint-disable` or `// @ts-ignore` in TypeScript/JavaScript +- Unused imports (from lint output in step 2) +- Empty `catch` blocks or swallowed errors + +For each finding, decide: +- **Fix now** if it's a quick cleanup (remove unused import, delete dead code) +- **File issue** if it requires more work: `crosslink issue create "<description>" -p low --label maintenance` + +### 5. Documentation freshness + +Check that key documentation files exist and aren't stale: + +- `README.md` — exists? +- `CHANGELOG.md` — has entries for recent work? +- `CLAUDE.md` — exists and reflects current project structure? + +Read the first 20 lines of each to assess whether they're current. Flag any that reference features or structures that no longer exist. + +### 6. Crosslink issue hygiene + +Audit the issue tracker: + +```bash +crosslink issue list -s open +``` + +Check for: +- **Stale issues**: open issues that haven't been updated recently and may be obsolete +- **Duplicate issues**: multiple issues describing the same work +- **Missing labels**: open issues without category labels +- **Orphaned subissues**: subissues whose parent is already closed + +For each finding, suggest an action (close, merge, label, etc.) but do not close issues without user confirmation. + +### 7. Build artifact cleanup + +Check for build artifacts or temp files that shouldn't be tracked: + +```bash +git status --ignored --short 2>/dev/null | head -20 +``` + +Verify `.gitignore` covers common patterns for the detected languages. + +### 8. 
Print maintenance report + +Print a summary using this format: + +``` +Maintenance Report +================== + +Dependencies: [OK | N outdated | N vulnerable] +Lint: [OK | N warnings | N errors] +Format: [OK | N files need formatting] +Tests: [N passed, N failed, N skipped] +Dead code/TODOs: [OK | N items found] +Documentation: [OK | N files stale] +Issue hygiene: [OK | N issues need attention] +Build artifacts: [OK | N items to clean] + +Actions taken: + - Fixed N lint warnings + - Removed N dead code items + - Created N maintenance issues + +Recommended follow-ups: + - <list of items that need human attention> +``` + +## Constraints + +- Do not make breaking changes. Maintenance is conservative — fix warnings, remove dead code, update docs. +- Do not update dependency versions without checking for breaking changes first. Report outdated deps; only update patch versions automatically. +- Do not close crosslink issues without user confirmation — only suggest closures. +- Do not modify test behavior — only fix test infrastructure issues (imports, configs). +- If a fix would touch more than 10 lines, create a crosslink issue for it instead of fixing inline. diff --git a/.claude/commands/preflight.md b/.claude/commands/preflight.md new file mode 100644 index 0000000..a6f33e8 --- /dev/null +++ b/.claude/commands/preflight.md @@ -0,0 +1,84 @@ +--- +allowed-tools: Bash(crosslink *), Bash(git status *), Bash(ls *), Read +description: Load project rules and grounding context before implementation +--- + +## Context + +- Project root: !`git rev-parse --show-toplevel` +- Active session: !`crosslink session status` +- Detected files: !`ls -1` + +## Your task + +You are preparing to implement code. Load the project's rules and grounding context so you have the right constraints in mind. This replaces the heavy context that would otherwise be injected on every prompt. + +### 1. 
Read core rules + +Read the project's global rules file: + +``` +Read .crosslink/rules/global.md +``` + +### 2. Detect active languages and load rules + +Check for project manifests and load the corresponding rule files: + +| Manifest | Language | Rule file | +|----------|----------|-----------| +| `Cargo.toml` | Rust | `.crosslink/rules/rust.md` | +| `package.json` | JavaScript | `.crosslink/rules/javascript.md` | +| `tsconfig.json` | TypeScript | `.crosslink/rules/typescript.md` | +| `pyproject.toml` / `requirements.txt` | Python | `.crosslink/rules/python.md` | +| `go.mod` | Go | `.crosslink/rules/go.md` | +| `pom.xml` / `build.gradle` | Java | `.crosslink/rules/java.md` | +| `Gemfile` | Ruby | `.crosslink/rules/ruby.md` | +| `composer.json` | PHP | `.crosslink/rules/php.md` | +| `Package.swift` | Swift | `.crosslink/rules/swift.md` | +| `mix.exs` | Elixir | `.crosslink/rules/elixir.md` | + +Also load `.crosslink/rules/elixir-phoenix.md` if `mix.exs` contains `:phoenix`. + +Only read rule files for languages actually detected in this project. + +### 3. Read project-specific and tracking rules + +``` +Read .crosslink/rules/project.md +``` + +Check the tracking mode from `hook-config.json` and read the matching tracking rules: + +```bash +crosslink config get tracking_mode +``` + +Then read `.crosslink/rules/tracking-<mode>.md` (e.g., `tracking-strict.md`). + +### 4. Scan project tree + +Build spatial awareness of the project (max depth 3, max 50 entries): + +```bash +find . -maxdepth 3 -not -path './.git*' | head -50 +``` + +### 5. Check dependency versions + +Read the primary manifest file (e.g., `Cargo.toml`, `package.json`) to ground yourself on actual dependency versions. Do not guess versions. + +### 6. Print summary + +Print a brief summary of what you loaded: + +``` +Preflight loaded: + - Global rules + - Language rules: Rust, TypeScript + - Tracking mode: strict + - Project tree scanned (N entries) + - Dependencies: Cargo.toml (N deps) +``` + +You are now grounded. 
Proceed with implementation. diff --git a/.claude/commands/qa.md b/.claude/commands/qa.md new file mode 100644 index 0000000..23b8fe5 --- /dev/null +++ b/.claude/commands/qa.md @@ -0,0 +1,90 @@ +# SKILL: Automated Architectural Code Review (QA Agent) + +## 1. Role Definition +You are an autonomous, paranoid, and highly rigorous Quality Assurance (QA) System Architect. Your primary function is to evaluate pull requests, code snippets, and architectural plans against strict, language-agnostic software engineering principles. You prioritize long-term system viability, structural modularity, and security over immediate functional output. + +## 2. Trigger Rules +**Activate this skill when:** +* A pull request or code diff is submitted for review. +* You are asked to refactor, evaluate, or optimize existing code. +* You are tasked with designing a new feature or microservice architecture. + +## 3. Core Architectural Directives (The Heuristics) + +### 3.1 Macroscopic Architecture (Decoupling) +* **Enforce Dependency Inversion:** The core domain must NEVER reference external libraries, databases, or UI frameworks. Flag any infrastructural concerns (e.g., ORM mappings, HTTP requests) inside core business logic. +* **Reject Anti-Patterns:** + * *Big Ball of Mud:* Reject dense, bidirectional dependency graphs. + * *God Objects:* Reject classes with excessive dependencies or lines of code. Force division into granular services. + * *Stovepipe Systems:* Identify duplicated logic across isolated modules and demand shared abstractions. + * *Diamond Inheritance:* Favor composition over inheritance. Reject deep inheritance trees. + +### 3.2 Microscopic Integrity (SOLID & Complexity) +* **SOLID Enforcement:** + * *SRP:* Modules must have one reason to change. Reject merged UI/DB/Logic blocks. + * *OCP:* Reject deeply nested `if-else` or massive `switch` statements. Demand polymorphism. + * *LSP:* Subclasses must not throw "Not Supported" exceptions for parent methods. 
+ * *ISP:* Reject monolithic interfaces. Demand role-based micro-interfaces. + * *DIP:* Mandate dependency injection; reject hardcoded internal instantiations. +* **Complexity Reduction:** + * *DRY:* Flag and consolidate cloned/duplicated logic. + * *KISS:* Reject obscure language features or overly "clever" algorithms. Favor readability. + * *YAGNI:* Strip out boilerplate, speculative abstractions, and dead code built for "future use." + +### 3.3 Semantic & Micro-Architecture Rules +* **Naming:** Use pronounceable, searchable names. Booleans must be binary noun-verb structures (e.g., `isReady`, `hasData`). Prohibit single-letter variables (except loop counters `i`, `j`, but never `l` or `O`). Constants must be `UPPER_SNAKE_CASE`. +* **Functions:** Limit to 0-2 parameters. **Reject boolean flag arguments immediately** (split into two distinct functions). Functions must be pure; do not mutate input objects or global state. +* **State Management:** + * *No Nulls:* Reject endless `if (obj == null)` chains. Demand the Null Object Pattern. + * *Tell, Don't Ask:* Logic that acts on an object's state must live *inside* that object. + * *Temporal Coupling:* Reject undocumented sequential execution requirements (e.g., `open()` -> `process()` -> `close()`). Demand closure blocks or command handlers. + +### 3.4 Defensive Programming vs. Contracts +* **Perimeter Defense:** Fail fast and uniformly at system boundaries (APIs, DBs, File Parsers). Reject invalid inputs and throw immediate exceptions. Do not return default/null values to mask errors. +* **Internal Contracts:** Do NOT clutter internal domain logic with excessive `try-catch` blocks or defensive null checks. Use Contract-Based Design (Preconditions, Postconditions, Invariants) and assertion libraries. Let fatal errors bubble up to a unified top-level handler. 
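The perimeter-versus-contract split in 3.4 can be illustrated with a small shell sketch (function names are illustrative, not from any project): the perimeter function validates external input and fails fast with a clear error, while the internal function states its precondition once instead of re-checking defensively on every call.

```bash
# Perimeter defense: reject invalid external input immediately.
parse_port() {
  case "$1" in
    ''|*[!0-9]*) echo "error: port must be a number, got '$1'" >&2; return 1 ;;
  esac
  echo "$1"
}

# Internal contract: a single precondition assertion, not scattered
# null checks. ${1:?...} aborts loudly if the caller violates it.
start_worker() {
  port="${1:?start_worker: port is required}"
  echo "worker listening on $port"
}

port=$(parse_port "8080") && start_worker "$port"     # → worker listening on 8080
parse_port "abc" || echo "rejected at the perimeter"  # → rejected at the perimeter
```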
+ +### 3.5 Quantitative Complexity Thresholds +Automatically reject code that violates the following mathematical thresholds: +* **Cyclomatic Complexity ($V(G) = E - N + 2P$):** Reject any function scoring > 15. Demand extraction of helper functions to match unit test path requirements. +* **Cognitive Complexity:** Penalize code that interrupts linear reading (e.g., deeply nested loops, chained un-parenthesized booleans). Priority trigger for "Extract Method" refactoring. +* **ABC Metric (Assignments, Branches, Conditions):** Flag high-volume functions that violate SRP, regardless of branching depth. + +### 3.6 Universal Security Framework (OWASP/NIST) +* **Input Handling:** Enforce absolute sanitization/allowlisting. Reject direct interpolation (SQLi, XSS risks). Mandate parameterized queries and encoding. +* **Least Privilege:** Flag over-permissioned containers (root), excessive file access, or broad IAM roles. +* **Cryptography:** Reject custom or deprecated algorithms (MD5, SHA-1). Mandate SHA-256, Argon2, or equivalent modern standards. +* **Secrets:** Aggressively scrub any hardcoded API keys, passwords, or DB URIs. + +--- + +## 4. Execution Workflow: The ADIHQ Framework + +For every review or generation task, you MUST process your response through the following Chain-of-Thought (CoT) sequence: + +1. **[Analyze]:** Extract core requirements. State the architectural constraints violated or required. +2. **[Design]:** Evaluate algorithmic approaches (Time/Space complexity). Select the optimal, decoupled structure. +3. **[Evaluate/Implement]:** Critique the specific lines of code or generate the replacement logic, strictly applying the Micro-Architecture rules. +4. **[Handle]:** Identify edge cases, perimeter defense needs, and necessary contract assertions. +5. **[Quality]:** Perform a SELF-REFINE check. Verify the output against SOLID, DRY, and Complexity metrics. + +## 5. 
The Review Matrix Checklist + +Structure your final output by evaluating the code against these 9 dimensions. Only include dimensions in your output where violations are found. + +1. **Functional Correctness:** Edge cases, off-by-one errors, illogical inputs. +2. **Architectural Alignment:** Layer boundary breaches, external DB/UI imports in domain. +3. **Structural Modularity:** God objects, function size, boolean flag parameters. +4. **Cognitive Readability:** Deep nesting, binary boolean naming, temporal coupling. +5. **Security and Access:** Hardcoded secrets, raw string queries, bad crypto, loose permissions. +6. **Performance & Complexity:** Big O efficiency, duplicated traversals. +7. **Error Orchestration:** Swallowed internal exceptions vs. top-level logging. +8. **Testability Coverage:** Cyclomatic complexity check. Can this be easily mocked/tested? +9. **Documentation & Intent:** Self-documenting semantics vs. redundant comments. YAGNI violations. + +## 6. Output Format +Provide your review using strictly formatted Markdown. Use severe, objective engineering language. Do not provide conversational filler. 
+ +**Format:** +* **Severity:** [CRITICAL | WARNING | NITPICK] +* **Dimension:** [From Matrix] +* **Location:** [File/Function] +* **Violation:** [Brief description of the anti-pattern] +* **Mandated Refactor:** [Actionable code suggestion or architectural shift] diff --git a/.claude/commands/review.md b/.claude/commands/review.md new file mode 100644 index 0000000..53b3565 --- /dev/null +++ b/.claude/commands/review.md @@ -0,0 +1,129 @@ +--- +allowed-tools: Bash(crosslink *), Bash(git *), Bash(cargo *), Bash(npm *), Bash(npx *), Bash(uv *), Bash(ruff *), Bash(go *), Bash(mix *), Read +description: Pre-commit quality gate — review changes before committing +--- + +## Context + +- Working tree status: !`git status --short` +- Current branch: !`git branch --show-current` +- Active session: !`crosslink session status` +- Files changed: !`git diff --stat HEAD` + +## Your task + +You are about to commit. Run through this structured review to catch issues before they land. + +### 1. Review full diff + +```bash +git diff HEAD +``` + +Read through every change. Look for: +- Correctness: does the logic do what it should? +- Edge cases: are boundary conditions handled? +- Security: any injection, hardcoded secrets, or unsafe patterns? + +### 2. Scan for stub patterns + +Search the diff for patterns that indicate incomplete work: + +- `TODO`, `FIXME`, `HACK`, `XXX` +- `pass` (Python), `unimplemented!()` / `todo!()` (Rust) +- `throw new Error("not implemented")` +- Empty function bodies or placeholder returns +- `...` as implementation + +If found: fix them now. Do not commit stubs. + +### 3. Check for debug leftovers + +Search for debug code that shouldn't be committed: + +- `dbg!()` (Rust) +- `console.log` used for debugging (not logging) +- `print(` / `println!` used for debugging +- Commented-out code blocks (more than 2 consecutive commented lines) + +If found: remove them. + +### 4. 
Run lint and format checks + +Detect the project's toolchain and run the appropriate checks: + +**Rust** (if `Cargo.toml` exists): +```bash +cargo clippy -- -D warnings +cargo fmt --check +``` + +**Node/TypeScript** (if `package.json` exists): +```bash +npx eslint . 2>/dev/null || npm run lint 2>/dev/null +``` + +**Python** (if `pyproject.toml` or `requirements.txt` exists): +```bash +ruff check . 2>/dev/null || uv run ruff check . 2>/dev/null +``` + +**Go** (if `go.mod` exists): +```bash +go vet ./... +gofmt -l . +``` + +**Elixir** (if `mix.exs` exists): +```bash +mix format --check-formatted +mix credo --strict +``` + +Fix any issues found before proceeding. + +### 5. Run test suite + +Run the project's test suite: + +- Rust: `cargo test` +- Node: `npm test` +- Python: `uv run pytest` or `pytest` +- Go: `go test ./...` +- Elixir: `mix test --seed 0` + +All tests must pass before committing. + +### 6. Verify crosslink issue documentation + +Check that the active crosslink issue has appropriate documentation: + +```bash +crosslink session status +``` + +If working on an issue, verify it has a plan comment and will get a result comment: + +```bash +crosslink issue show <issue-id> +``` + +### 7. Print checklist + +Print a pass/fail summary: + +``` +Review checklist: + [PASS] No stub patterns + [PASS] No debug leftovers + [PASS] Lint clean + [PASS] Format clean + [PASS] Tests pass + [PASS] Issue documented + +Ready to commit. Proceeding to /commit. +``` + +If any items fail, fix them first, then re-run the failed checks. + +Once all checks pass, proceed to `/commit`. diff --git a/.claude/commands/workflow.md b/.claude/commands/workflow.md new file mode 100644 index 0000000..63f5e1c --- /dev/null +++ b/.claude/commands/workflow.md @@ -0,0 +1,76 @@ +You are conducting a guided policy review of this project's crosslink configuration. Walk the user through each section, explain what's configured, and suggest improvements. 
+ +## Step 1: Gather Current State + +First, run these commands to understand the current configuration: + +```bash +crosslink workflow diff +``` + +Then read the key policy files: + +- `.crosslink/hook-config.json` — tracking mode and command restrictions +- `.crosslink/rules/global.md` — core behavioral rules +- `.crosslink/rules/project.md` — project-specific conventions + +## Step 2: Review Tracking Mode + +Read `.crosslink/hook-config.json` and the active tracking mode rule file (`.crosslink/rules/tracking-strict.md`, `tracking-normal.md`, or `tracking-relaxed.md`). + +Ask the user: +- Is the current tracking mode (`strict`/`normal`/`relaxed`) appropriate for your workflow? +- Are there git commands in `blocked_git_commands` that should be allowed, or allowed commands that should be blocked? +- Are the `allowed_bash_prefixes` complete for your toolchain? + +## Step 3: Review Security Policies + +Read `.crosslink/rules/global.md` (the security section), `.crosslink/rules/web.md`, and `.crosslink/rules/sanitize-patterns.txt`. + +Ask the user: +- Are the OWASP/injection prevention rules current for your stack? +- Does `sanitize-patterns.txt` cover your application's sensitive patterns? +- For web projects: are the RFIP (Recursive Framing Interdiction Protocol) rules in `web.md` appropriate? + +## Step 4: Review Language Rules + +List all `.md` files in `.crosslink/rules/` and identify which languages are relevant to this project (check for source files in the repo). + +Ask the user: +- Are rules deployed for all languages used in this project? +- Are there language rules deployed that aren't relevant (unnecessary noise)? +- Do any language-specific rules need updates for newer framework versions? 
+ +## Step 5: Review Hook Implementations + +Read each hook file in `.claude/hooks/`: +- `work-check.py` — enforces issue tracking before code changes +- `session-start.py` — loads context on session start +- `prompt-guard.py` — guards against prompt injection +- `post-edit-check.py` — validates edits +- `pre-web-check.py` — validates web requests + +For any files that `crosslink workflow diff` flagged as customized, highlight the differences and ask if they're still needed. + +## Step 6: Review Workflow Conventions + +Read `.crosslink/rules/global.md` (workflow sections) and `.crosslink/rules/project.md`. + +Ask the user: +- Are the commit message conventions right? +- Are the code review and testing expectations appropriate? +- Should any workflow rules be added or relaxed? + +## Step 7: Summary and Recommendations + +Summarize findings: +1. List any files that have drifted from defaults (from `crosslink workflow diff`) +2. Recommend specific changes based on the discussion +3. Offer to apply approved changes using `crosslink init --force` (resets to defaults) or targeted edits + +If the user wants to reset customized files to defaults: +```bash +crosslink init --force +``` + +If they want targeted edits, make the specific changes they approve. diff --git a/.claude/hooks/post-edit-check.py b/.claude/hooks/post-edit-check.py deleted file mode 100644 index b821690..0000000 --- a/.claude/hooks/post-edit-check.py +++ /dev/null @@ -1,380 +0,0 @@ -#!/usr/bin/env python3 -""" -Post-edit hook that detects stub patterns, runs linters, and reminds about tests. -Runs after Write/Edit tool usage. 
-""" - -import json -import sys -import os -import re -import subprocess -import glob -import time - -# Stub patterns to detect (compiled regex for performance) -STUB_PATTERNS = [ - (r'\bTODO\b', 'TODO comment'), - (r'\bFIXME\b', 'FIXME comment'), - (r'\bXXX\b', 'XXX marker'), - (r'\bHACK\b', 'HACK marker'), - (r'^\s*pass\s*$', 'bare pass statement'), - (r'^\s*\.\.\.\s*$', 'ellipsis placeholder'), - (r'\bunimplemented!\s*\(\s*\)', 'unimplemented!() macro'), - (r'\btodo!\s*\(\s*\)', 'todo!() macro'), - (r'\bpanic!\s*\(\s*"not implemented', 'panic not implemented'), - (r'raise\s+NotImplementedError\s*\(\s*\)', 'bare NotImplementedError'), - (r'#\s*implement\s*(later|this|here)', 'implement later comment'), - (r'//\s*implement\s*(later|this|here)', 'implement later comment'), - (r'def\s+\w+\s*\([^)]*\)\s*:\s*(pass|\.\.\.)\s*$', 'empty function'), - (r'fn\s+\w+\s*\([^)]*\)\s*\{\s*\}', 'empty function body'), - (r'return\s+None\s*#.*stub', 'stub return'), -] - -COMPILED_PATTERNS = [(re.compile(p, re.IGNORECASE | re.MULTILINE), desc) for p, desc in STUB_PATTERNS] - - -def check_for_stubs(file_path): - """Check file for stub patterns. 
Returns list of (line_num, pattern_desc, line_content).""" - if not os.path.exists(file_path): - return [] - - try: - with open(file_path, 'r', encoding='utf-8', errors='ignore') as f: - content = f.read() - lines = content.split('\n') - except (OSError, Exception): - return [] - - findings = [] - for line_num, line in enumerate(lines, 1): - for pattern, desc in COMPILED_PATTERNS: - if pattern.search(line): - if 'NotImplementedError' in line and re.search(r'NotImplementedError\s*\(\s*["\'][^"\']+["\']', line): - continue - findings.append((line_num, desc, line.strip()[:60])) - - return findings - - -def find_project_root(file_path, marker_files): - """Walk up from file_path looking for project root markers.""" - current = os.path.dirname(os.path.abspath(file_path)) - for _ in range(10): # Max 10 levels up - for marker in marker_files: - if os.path.exists(os.path.join(current, marker)): - return current - parent = os.path.dirname(current) - if parent == current: - break - current = parent - return None - - -def run_linter(file_path, max_errors=10): - """Run appropriate linter and return first N errors.""" - ext = os.path.splitext(file_path)[1].lower() - errors = [] - - try: - if ext == '.rs': - # Rust: run cargo clippy from project root - project_root = find_project_root(file_path, ['Cargo.toml']) - if project_root: - result = subprocess.run( - ['cargo', 'clippy', '--message-format=short', '--quiet'], - cwd=project_root, - capture_output=True, - text=True, - timeout=30 - ) - if result.stderr: - for line in result.stderr.split('\n'): - if line.strip() and ('error' in line.lower() or 'warning' in line.lower()): - errors.append(line.strip()[:100]) - if len(errors) >= max_errors: - break - - elif ext == '.py': - # Python: try flake8, fall back to py_compile - try: - result = subprocess.run( - ['flake8', '--max-line-length=120', file_path], - capture_output=True, - text=True, - timeout=10 - ) - for line in result.stdout.split('\n'): - if line.strip(): - 
errors.append(line.strip()[:100]) - if len(errors) >= max_errors: - break - except FileNotFoundError: - # flake8 not installed, try py_compile - result = subprocess.run( - ['python', '-m', 'py_compile', file_path], - capture_output=True, - text=True, - timeout=10 - ) - if result.stderr: - errors.append(result.stderr.strip()[:200]) - - elif ext in ('.js', '.ts', '.tsx', '.jsx'): - # JavaScript/TypeScript: try eslint - project_root = find_project_root(file_path, ['package.json', '.eslintrc', '.eslintrc.js', '.eslintrc.json']) - if project_root: - try: - result = subprocess.run( - ['npx', 'eslint', '--format=compact', file_path], - cwd=project_root, - capture_output=True, - text=True, - timeout=30 - ) - for line in result.stdout.split('\n'): - if line.strip() and (':' in line): - errors.append(line.strip()[:100]) - if len(errors) >= max_errors: - break - except FileNotFoundError: - pass - - elif ext == '.go': - # Go: run go vet - project_root = find_project_root(file_path, ['go.mod']) - if project_root: - result = subprocess.run( - ['go', 'vet', './...'], - cwd=project_root, - capture_output=True, - text=True, - timeout=30 - ) - if result.stderr: - for line in result.stderr.split('\n'): - if line.strip(): - errors.append(line.strip()[:100]) - if len(errors) >= max_errors: - break - - except subprocess.TimeoutExpired: - errors.append("(linter timed out)") - except (OSError, Exception) as e: - pass # Linter not available, skip silently - - return errors - - -def is_test_file(file_path): - """Check if file is a test file.""" - basename = os.path.basename(file_path).lower() - dirname = os.path.dirname(file_path).lower() - - # Common test file patterns - test_patterns = [ - 'test_', '_test.', '.test.', 'spec.', '_spec.', - 'tests.', 'testing.', 'mock.', '_mock.' 
- ] - # Common test directories - test_dirs = ['test', 'tests', '__tests__', 'spec', 'specs', 'testing'] - - for pattern in test_patterns: - if pattern in basename: - return True - - for test_dir in test_dirs: - if test_dir in dirname.split(os.sep): - return True - - return False - - -def find_test_files(file_path, project_root): - """Find test files related to source file.""" - if not project_root: - return [] - - ext = os.path.splitext(file_path)[1] - basename = os.path.basename(file_path) - name_without_ext = os.path.splitext(basename)[0] - - # Patterns to look for - test_patterns = [] - - if ext == '.rs': - # Rust: look for mod tests in same file, or tests/ directory - test_patterns = [ - os.path.join(project_root, 'tests', '**', f'*{name_without_ext}*'), - os.path.join(project_root, '**', 'tests', f'*{name_without_ext}*'), - ] - elif ext == '.py': - test_patterns = [ - os.path.join(project_root, '**', f'test_{name_without_ext}.py'), - os.path.join(project_root, '**', f'{name_without_ext}_test.py'), - os.path.join(project_root, 'tests', '**', f'*{name_without_ext}*.py'), - ] - elif ext in ('.js', '.ts', '.tsx', '.jsx'): - base = name_without_ext.replace('.test', '').replace('.spec', '') - test_patterns = [ - os.path.join(project_root, '**', f'{base}.test{ext}'), - os.path.join(project_root, '**', f'{base}.spec{ext}'), - os.path.join(project_root, '**', '__tests__', f'{base}*'), - ] - elif ext == '.go': - test_patterns = [ - os.path.join(os.path.dirname(file_path), f'{name_without_ext}_test.go'), - ] - - found = [] - for pattern in test_patterns: - found.extend(glob.glob(pattern, recursive=True)) - - return list(set(found))[:5] # Limit to 5 - - -def get_test_reminder(file_path, project_root): - """Check if tests should be run and return reminder message.""" - if is_test_file(file_path): - return None # Editing a test file, no reminder needed - - ext = os.path.splitext(file_path)[1] - code_extensions = ('.rs', '.py', '.js', '.ts', '.tsx', '.jsx', '.go') - - if 
ext not in code_extensions: - return None - - # Check for marker file - marker_dir = project_root or os.path.dirname(file_path) - marker_file = os.path.join(marker_dir, '.crosslink', 'last_test_run') - - code_modified_after_tests = False - - if os.path.exists(marker_file): - try: - marker_mtime = os.path.getmtime(marker_file) - file_mtime = os.path.getmtime(file_path) - code_modified_after_tests = file_mtime > marker_mtime - except OSError: - code_modified_after_tests = True - else: - # No marker = tests haven't been run - code_modified_after_tests = True - - if not code_modified_after_tests: - return None - - # Find test files - test_files = find_test_files(file_path, project_root) - - # Generate test command based on project type - test_cmd = None - if ext == '.rs' and project_root: - if os.path.exists(os.path.join(project_root, 'Cargo.toml')): - test_cmd = 'cargo test' - elif ext == '.py': - if project_root and os.path.exists(os.path.join(project_root, 'pytest.ini')): - test_cmd = 'pytest' - elif project_root and os.path.exists(os.path.join(project_root, 'setup.py')): - test_cmd = 'python -m pytest' - elif ext in ('.js', '.ts', '.tsx', '.jsx') and project_root: - if os.path.exists(os.path.join(project_root, 'package.json')): - test_cmd = 'npm test' - elif ext == '.go' and project_root: - test_cmd = 'go test ./...' - - if test_files or test_cmd: - msg = "🧪 TEST REMINDER: Code modified since last test run." 
- if test_cmd: - msg += f"\n Run: {test_cmd}" - if test_files: - msg += f"\n Related tests: {', '.join(os.path.basename(t) for t in test_files[:3])}" - return msg - - return None - - -def main(): - try: - input_data = json.load(sys.stdin) - except (json.JSONDecodeError, Exception): - sys.exit(0) - - tool_name = input_data.get("tool_name", "") - tool_input = input_data.get("tool_input", {}) - - if tool_name not in ("Write", "Edit"): - sys.exit(0) - - file_path = tool_input.get("file_path", "") - - code_extensions = ( - '.rs', '.py', '.js', '.ts', '.tsx', '.jsx', '.go', '.java', - '.c', '.cpp', '.h', '.hpp', '.cs', '.rb', '.php', '.swift', - '.kt', '.scala', '.zig', '.odin' - ) - - if not any(file_path.endswith(ext) for ext in code_extensions): - sys.exit(0) - - if '.claude' in file_path and 'hooks' in file_path: - sys.exit(0) - - # Find project root for linter and test detection - project_root = find_project_root(file_path, [ - 'Cargo.toml', 'package.json', 'go.mod', 'setup.py', - 'pyproject.toml', '.git' - ]) - - # Check for stubs - stub_findings = check_for_stubs(file_path) - - # Run linter - linter_errors = run_linter(file_path) - - # Check for test reminder - test_reminder = get_test_reminder(file_path, project_root) - - # Build output - messages = [] - - if stub_findings: - stub_list = "\n".join([f" Line {ln}: {desc} - `{content}`" for ln, desc, content in stub_findings[:5]]) - if len(stub_findings) > 5: - stub_list += f"\n ... and {len(stub_findings) - 5} more" - messages.append(f"""⚠️ STUB PATTERNS DETECTED in {file_path}: -{stub_list} - -Fix these NOW - replace with real implementation.""") - - if linter_errors: - error_list = "\n".join([f" {e}" for e in linter_errors[:10]]) - if len(linter_errors) > 10: - error_list += f"\n ... 
and more" - messages.append(f"""🔍 LINTER ISSUES: -{error_list}""") - - if test_reminder: - messages.append(test_reminder) - - if messages: - output = { - "hookSpecificOutput": { - "hookEventName": "PostToolUse", - "additionalContext": "\n\n".join(messages) - } - } - else: - output = { - "hookSpecificOutput": { - "hookEventName": "PostToolUse", - "additionalContext": f"✓ {os.path.basename(file_path)} - no issues detected" - } - } - - print(json.dumps(output)) - sys.exit(0) - - -if __name__ == "__main__": - main() diff --git a/.claude/hooks/prompt-guard.py b/.claude/hooks/prompt-guard.py deleted file mode 100644 index 91dc031..0000000 --- a/.claude/hooks/prompt-guard.py +++ /dev/null @@ -1,513 +0,0 @@ -#!/usr/bin/env python3 -""" -Crosslink behavioral hook for Claude Code. -Injects best practice reminders on every prompt submission. -Loads rules from .crosslink/rules/ markdown files. -""" - -import json -import sys -import os -import io -import subprocess -import hashlib -from datetime import datetime - -# Fix Windows encoding issues with Unicode characters -sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8') - - -def find_crosslink_dir(): - """Find the .crosslink directory by walking up from cwd.""" - current = os.getcwd() - for _ in range(10): - candidate = os.path.join(current, '.crosslink') - if os.path.isdir(candidate): - return candidate - parent = os.path.dirname(current) - if parent == current: - break - current = parent - return None - - -def load_rule_file(rules_dir, filename): - """Load a rule file and return its content, or empty string if not found.""" - if not rules_dir: - return "" - path = os.path.join(rules_dir, filename) - try: - with open(path, 'r', encoding='utf-8') as f: - return f.read().strip() - except (OSError, IOError): - return "" - - -def load_all_rules(crosslink_dir): - """Load all rule files from .crosslink/rules/.""" - if not crosslink_dir: - return {}, "", "" - - rules_dir = os.path.join(crosslink_dir, 'rules') - if not 
os.path.isdir(rules_dir): - return {}, "", "" - - # Load global rules - global_rules = load_rule_file(rules_dir, 'global.md') - - # Load project rules - project_rules = load_rule_file(rules_dir, 'project.md') - - # Load language-specific rules - language_rules = {} - language_files = [ - ('rust.md', 'Rust'), - ('python.md', 'Python'), - ('javascript.md', 'JavaScript'), - ('typescript.md', 'TypeScript'), - ('typescript-react.md', 'TypeScript/React'), - ('javascript-react.md', 'JavaScript/React'), - ('go.md', 'Go'), - ('java.md', 'Java'), - ('c.md', 'C'), - ('cpp.md', 'C++'), - ('csharp.md', 'C#'), - ('ruby.md', 'Ruby'), - ('php.md', 'PHP'), - ('swift.md', 'Swift'), - ('kotlin.md', 'Kotlin'), - ('scala.md', 'Scala'), - ('zig.md', 'Zig'), - ('odin.md', 'Odin'), - ] - - for filename, lang_name in language_files: - content = load_rule_file(rules_dir, filename) - if content: - language_rules[lang_name] = content - - return language_rules, global_rules, project_rules - - -# Detect language from common file extensions in the working directory -def detect_languages(): - """Scan for common source files to determine active languages.""" - extensions = { - '.rs': 'Rust', - '.py': 'Python', - '.js': 'JavaScript', - '.ts': 'TypeScript', - '.tsx': 'TypeScript/React', - '.jsx': 'JavaScript/React', - '.go': 'Go', - '.java': 'Java', - '.c': 'C', - '.cpp': 'C++', - '.cs': 'C#', - '.rb': 'Ruby', - '.php': 'PHP', - '.swift': 'Swift', - '.kt': 'Kotlin', - '.scala': 'Scala', - '.zig': 'Zig', - '.odin': 'Odin', - } - - found = set() - cwd = os.getcwd() - - # Check for project config files first (more reliable than scanning) - config_indicators = { - 'Cargo.toml': 'Rust', - 'package.json': 'JavaScript', - 'tsconfig.json': 'TypeScript', - 'pyproject.toml': 'Python', - 'requirements.txt': 'Python', - 'go.mod': 'Go', - 'pom.xml': 'Java', - 'build.gradle': 'Java', - 'Gemfile': 'Ruby', - 'composer.json': 'PHP', - 'Package.swift': 'Swift', - } - - # Check cwd and immediate subdirs for config 
files - check_dirs = [cwd] - try: - for entry in os.listdir(cwd): - subdir = os.path.join(cwd, entry) - if os.path.isdir(subdir) and not entry.startswith('.'): - check_dirs.append(subdir) - except (PermissionError, OSError): - pass - - for check_dir in check_dirs: - for config_file, lang in config_indicators.items(): - if os.path.exists(os.path.join(check_dir, config_file)): - found.add(lang) - - # Also scan for source files in src/ directories - scan_dirs = [cwd] - src_dir = os.path.join(cwd, 'src') - if os.path.isdir(src_dir): - scan_dirs.append(src_dir) - # Check nested project src dirs too - for check_dir in check_dirs: - nested_src = os.path.join(check_dir, 'src') - if os.path.isdir(nested_src): - scan_dirs.append(nested_src) - - for scan_dir in scan_dirs: - try: - for entry in os.listdir(scan_dir): - ext = os.path.splitext(entry)[1].lower() - if ext in extensions: - found.add(extensions[ext]) - except (PermissionError, OSError): - pass - - return list(found) if found else ['the project'] - - -def get_language_section(languages, language_rules): - """Build language-specific best practices section from loaded rules.""" - sections = [] - for lang in languages: - if lang in language_rules: - content = language_rules[lang] - # If the file doesn't start with a header, add one - if not content.startswith('#'): - sections.append(f"### {lang} Best Practices\n{content}") - else: - sections.append(content) - - if not sections: - return "" - - return "\n\n".join(sections) - - -# Directories to skip when building project tree -SKIP_DIRS = { - '.git', 'node_modules', 'target', 'venv', '.venv', 'env', '.env', - '__pycache__', '.crosslink', '.claude', 'dist', 'build', '.next', - '.nuxt', 'vendor', '.idea', '.vscode', 'coverage', '.pytest_cache', - '.mypy_cache', '.tox', 'eggs', '*.egg-info', '.sass-cache' -} - - -def get_project_tree(max_depth=3, max_entries=50): - """Generate a compact project tree to prevent path hallucinations.""" - cwd = os.getcwd() - entries = [] - - 
def should_skip(name): - if name.startswith('.') and name not in ('.github', '.claude'): - return True - return name in SKIP_DIRS or name.endswith('.egg-info') - - def walk_dir(path, prefix="", depth=0): - if depth > max_depth or len(entries) >= max_entries: - return - - try: - items = sorted(os.listdir(path)) - except (PermissionError, OSError): - return - - # Separate dirs and files - dirs = [i for i in items if os.path.isdir(os.path.join(path, i)) and not should_skip(i)] - files = [i for i in items if os.path.isfile(os.path.join(path, i)) and not i.startswith('.')] - - # Add files first (limit per directory) - for f in files[:10]: # Max 10 files per dir shown - if len(entries) >= max_entries: - return - entries.append(f"{prefix}{f}") - - if len(files) > 10: - entries.append(f"{prefix}... ({len(files) - 10} more files)") - - # Then recurse into directories - for d in dirs: - if len(entries) >= max_entries: - return - entries.append(f"{prefix}{d}/") - walk_dir(os.path.join(path, d), prefix + " ", depth + 1) - - walk_dir(cwd) - - if not entries: - return "" - - if len(entries) >= max_entries: - entries.append(f"... 
(tree truncated at {max_entries} entries)") - - return "\n".join(entries) - - -# Cache directory for dependency snapshots -CACHE_DIR = os.path.join(os.getcwd(), '.crosslink', '.cache') - - -def get_lock_file_hash(lock_path): - """Get a hash of the lock file for cache invalidation.""" - try: - mtime = os.path.getmtime(lock_path) - return hashlib.md5(f"{lock_path}:{mtime}".encode()).hexdigest()[:12] - except OSError: - return None - - -def run_command(cmd, timeout=5): - """Run a command and return output, or None on failure.""" - try: - result = subprocess.run( - cmd, - capture_output=True, - text=True, - timeout=timeout, - shell=True - ) - if result.returncode == 0: - return result.stdout.strip() - except (subprocess.TimeoutExpired, OSError, Exception): - pass - return None - - -def get_dependencies(max_deps=30): - """Get installed dependencies with versions. Uses caching based on lock file mtime.""" - cwd = os.getcwd() - deps = [] - - # Check for Rust (Cargo.toml) - cargo_toml = os.path.join(cwd, 'Cargo.toml') - if os.path.exists(cargo_toml): - # Parse Cargo.toml for direct dependencies (faster than cargo tree) - try: - with open(cargo_toml, 'r') as f: - content = f.read() - in_deps = False - for line in content.split('\n'): - if line.strip().startswith('[dependencies]'): - in_deps = True - continue - if line.strip().startswith('[') and in_deps: - break - if in_deps and '=' in line and not line.strip().startswith('#'): - parts = line.split('=', 1) - name = parts[0].strip() - rest = parts[1].strip() if len(parts) > 1 else '' - if rest.startswith('{'): - # Handle { version = "x.y", features = [...] 
} format - import re - match = re.search(r'version\s*=\s*"([^"]+)"', rest) - if match: - deps.append(f" {name} = \"{match.group(1)}\"") - elif rest.startswith('"') or rest.startswith("'"): - version = rest.strip('"').strip("'") - deps.append(f" {name} = \"{version}\"") - if len(deps) >= max_deps: - break - except (OSError, Exception): - pass - if deps: - return "Rust (Cargo.toml):\n" + "\n".join(deps[:max_deps]) - - # Check for Node.js (package.json) - package_json = os.path.join(cwd, 'package.json') - if os.path.exists(package_json): - try: - with open(package_json, 'r') as f: - pkg = json.load(f) - for dep_type in ['dependencies', 'devDependencies']: - if dep_type in pkg: - for name, version in list(pkg[dep_type].items())[:max_deps]: - deps.append(f" {name}: {version}") - if len(deps) >= max_deps: - break - except (OSError, json.JSONDecodeError, Exception): - pass - if deps: - return "Node.js (package.json):\n" + "\n".join(deps[:max_deps]) - - # Check for Python (requirements.txt or pyproject.toml) - requirements = os.path.join(cwd, 'requirements.txt') - if os.path.exists(requirements): - try: - with open(requirements, 'r') as f: - for line in f: - line = line.strip() - if line and not line.startswith('#') and not line.startswith('-'): - deps.append(f" {line}") - if len(deps) >= max_deps: - break - except (OSError, Exception): - pass - if deps: - return "Python (requirements.txt):\n" + "\n".join(deps[:max_deps]) - - # Check for Go (go.mod) - go_mod = os.path.join(cwd, 'go.mod') - if os.path.exists(go_mod): - try: - with open(go_mod, 'r') as f: - in_require = False - for line in f: - line = line.strip() - if line.startswith('require ('): - in_require = True - continue - if line == ')' and in_require: - break - if in_require and line: - deps.append(f" {line}") - if len(deps) >= max_deps: - break - except (OSError, Exception): - pass - if deps: - return "Go (go.mod):\n" + "\n".join(deps[:max_deps]) - - return "" - - -def build_reminder(languages, project_tree, 
dependencies, language_rules, global_rules, project_rules): - """Build the full reminder context.""" - lang_section = get_language_section(languages, language_rules) - lang_list = ", ".join(languages) if languages else "this project" - current_year = datetime.now().year - - # Build tree section if available - tree_section = "" - if project_tree: - tree_section = f""" -### Project Structure (use these exact paths) -``` -{project_tree} -``` -""" - - # Build dependencies section if available - deps_section = "" - if dependencies: - deps_section = f""" -### Installed Dependencies (use these exact versions) -``` -{dependencies} -``` -""" - - # Build global rules section (from .crosslink/rules/global.md) - global_section = "" - if global_rules: - global_section = f"\n{global_rules}\n" - else: - # Fallback to hardcoded defaults if no rules file - global_section = f""" -### Pre-Coding Grounding (PREVENT HALLUCINATIONS) -Before writing code that uses external libraries, APIs, or unfamiliar patterns: -1. **VERIFY IT EXISTS**: Use WebSearch to confirm the crate/package/module exists and check its actual API -2. **CHECK THE DOCS**: Fetch documentation to see real function signatures, not imagined ones -3. **CONFIRM SYNTAX**: If unsure about language features or library usage, search first -4. **USE LATEST VERSIONS**: Always check for and use the latest stable version of dependencies (security + features) -5. **NO GUESSING**: If you can't verify it, tell the user you need to research it - -Examples of when to search: -- Using a crate/package you haven't used recently → search "[package] [language] docs {current_year}" -- Uncertain about function parameters → search for actual API reference -- New language feature or syntax → verify it exists in the version being used -- System calls or platform-specific code → confirm the correct API -- Adding a dependency → search "[package] latest version {current_year}" to get current release - -### General Requirements -1. 
**NO STUBS - ABSOLUTE RULE**: - - NEVER write `TODO`, `FIXME`, `pass`, `...`, `unimplemented!()` as implementation - - NEVER write empty function bodies or placeholder returns - - NEVER say "implement later" or "add logic here" - - If logic is genuinely too complex for one turn, use `raise NotImplementedError("Descriptive reason: what needs to be done")` and create a crosslink issue - - The PostToolUse hook WILL detect and flag stub patterns - write real code the first time -2. **NO DEAD CODE**: Discover if dead code is truly dead or if it's an incomplete feature. If incomplete, complete it. If truly dead, remove it. -3. **FULL FEATURES**: Implement the complete feature as requested. Don't stop partway or suggest "you could add X later." -4. **ERROR HANDLING**: Proper error handling everywhere. No panics/crashes on bad input. -5. **SECURITY**: Validate input, use parameterized queries, no command injection, no hardcoded secrets. -6. **READ BEFORE WRITE**: Always read a file before editing it. Never guess at contents. - -### Conciseness Protocol -Minimize chattiness. Your output should be: -- **Code blocks** with implementation -- **Tool calls** to accomplish tasks -- **Brief explanations** only when the code isn't self-explanatory - -NEVER output: -- "Here is the code" / "Here's how to do it" (just show the code) -- "Let me know if you need anything else" / "Feel free to ask" -- "I'll now..." / "Let me..." (just do it) -- Restating what the user asked -- Explaining obvious code -- Multiple paragraphs when one sentence suffices - -When writing code: write it. When making changes: make them. Skip the narration. - -### Large File Management (500+ lines) -If you need to write or modify code that will exceed 500 lines: -1. Create a parent issue for the overall feature: `crosslink issue create "<feature name>" -p high` -2. Break down into subissues: `crosslink issue subissue <parent_id> "<component 1>"`, etc. -3. 
Inform the user: "This implementation will require multiple files/components. I've created issue #X with Y subissues to track progress." -4. Work on one subissue at a time, marking each complete before moving on. - -### Context Window Management -If the conversation is getting long OR the task requires many more steps: -1. Create a crosslink issue to track remaining work: `crosslink issue create "Continue: <task summary>" -p high` -2. Add detailed notes as a comment: `crosslink issue comment <id> "<what's done, what's next>"` -3. Inform the user: "This task will require additional turns. I've created issue #X to track progress." - -Use `crosslink session work <id>` to mark what you're working on. -""" - - # Build project rules section (from .crosslink/rules/project.md) - project_section = "" - if project_rules: - project_section = f"\n### Project-Specific Rules\n{project_rules}\n" - - reminder = f"""<chainlink-behavioral-guard> -## Code Quality Requirements - -You are working on a {lang_list} project. 
Follow these requirements strictly: -{tree_section}{deps_section}{global_section}{lang_section}{project_section} -</chainlink-behavioral-guard>""" # XML tag names kept for backward compat with existing system prompts - - return reminder - - -def main(): - try: - # Read input from stdin (Claude Code passes prompt info) - input_data = json.load(sys.stdin) - except json.JSONDecodeError: - # If no valid JSON, still inject reminder - pass - except Exception: - pass - - # Find crosslink directory and load rules - crosslink_dir = find_crosslink_dir() - language_rules, global_rules, project_rules = load_all_rules(crosslink_dir) - - # Detect languages in the project - languages = detect_languages() - - # Generate project tree to prevent path hallucinations - project_tree = get_project_tree() - - # Get installed dependencies to prevent version hallucinations - dependencies = get_dependencies() - - # Output the reminder as plain text (gets injected as context) - print(build_reminder(languages, project_tree, dependencies, language_rules, global_rules, project_rules)) - sys.exit(0) - - -if __name__ == "__main__": - main() diff --git a/.claude/hooks/session-start.py b/.claude/hooks/session-start.py deleted file mode 100644 index 63daa78..0000000 --- a/.claude/hooks/session-start.py +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env python3 -""" -Session start hook that loads crosslink context and reminds about session workflow. 
-""" - -import json -import subprocess -import sys -import os - - -def run_crosslink(args): - """Run a crosslink command and return output.""" - try: - result = subprocess.run( - ["crosslink"] + args, - capture_output=True, - text=True, - timeout=5 - ) - return result.stdout.strip() if result.returncode == 0 else None - except (subprocess.TimeoutExpired, FileNotFoundError, Exception): - return None - - -def check_crosslink_initialized(): - """Check if .crosslink directory exists.""" - cwd = os.getcwd() - current = cwd - - while True: - candidate = os.path.join(current, ".crosslink") - if os.path.isdir(candidate): - return True - parent = os.path.dirname(current) - if parent == current: - break - current = parent - - return False - - -def main(): - if not check_crosslink_initialized(): - # No crosslink repo, skip - sys.exit(0) - - context_parts = ["<chainlink-session-context>"] - - # Try to get session status - session_status = run_crosslink(["session", "status"]) - if session_status: - context_parts.append(f"## Current Session\n{session_status}") - - # Get ready issues (unblocked work) - ready_issues = run_crosslink(["issue", "ready"]) - if ready_issues: - context_parts.append(f"## Ready Issues (unblocked)\n{ready_issues}") - - # Get open issues summary - open_issues = run_crosslink(["issue", "list", "-s", "open"]) - if open_issues: - context_parts.append(f"## Open Issues\n{open_issues}") - - context_parts.append(""" -## Chainlink Workflow Reminder -- Use `crosslink session start` at the beginning of work -- Use `crosslink session work <id>` to mark current focus -- Add comments as you discover things: `crosslink issue comment <id> "..."` -- End with handoff notes: `crosslink session end --notes "..."` -</chainlink-session-context>""") - - print("\n\n".join(context_parts)) - sys.exit(0) - - -if __name__ == "__main__": - main() diff --git a/.claude/settings.json b/.claude/settings.json index 5f9a49b..140c984 100644 --- a/.claude/settings.json +++ 
b/.claude/settings.json @@ -1,36 +1,72 @@ { + "allowedTools": [ + "Bash(tmux *)", + "Bash(git worktree *)" + ], + "enableAllProjectMcpServers": true, "hooks": { - "UserPromptSubmit": [ + "PostToolUse": [ + { + "hooks": [ + { + "command": "HOOK=\"$(git rev-parse --show-toplevel 2>/dev/null)/.claude/hooks/post-edit-check.py\"; if [ -f \"$HOOK\" ]; then uv run python3 \"$HOOK\"; else exit 0; fi", + "timeout": 5, + "type": "command" + } + ], + "matcher": "Write|Edit" + }, { "hooks": [ { - "type": "command", - "command": "uv run -- python .claude/hooks/prompt-guard.py", - "timeout": 5 + "command": "HOOK=\"$(git rev-parse --show-toplevel 2>/dev/null)/.claude/hooks/heartbeat.py\"; if [ -f \"$HOOK\" ]; then uv run python3 \"$HOOK\"; else exit 0; fi", + "timeout": 3, + "type": "command" } ] } ], - "PostToolUse": [ + "PreToolUse": [ { - "matcher": "Write|Edit", "hooks": [ { - "type": "command", - "command": "uv run -- python .claude/hooks/post-edit-check.py", - "timeout": 5 + "command": "HOOK=\"$(git rev-parse --show-toplevel 2>/dev/null)/.claude/hooks/pre-web-check.py\"; if [ -f \"$HOOK\" ]; then uv run python3 \"$HOOK\"; else exit 0; fi", + "timeout": 5, + "type": "command" } - ] + ], + "matcher": "WebFetch|WebSearch" + }, + { + "hooks": [ + { + "command": "HOOK=\"$(git rev-parse --show-toplevel 2>/dev/null)/.claude/hooks/work-check.py\"; if [ -f \"$HOOK\" ]; then uv run python3 \"$HOOK\"; else exit 0; fi", + "timeout": 3, + "type": "command" + } + ], + "matcher": "Write|Edit|Bash" } ], "SessionStart": [ { - "matcher": "startup|resume", "hooks": [ { - "type": "command", - "command": "uv run -- python .claude/hooks/session-start.py", - "timeout": 10 + "command": "HOOK=\"$(git rev-parse --show-toplevel 2>/dev/null)/.claude/hooks/session-start.py\"; if [ -f \"$HOOK\" ]; then uv run python3 \"$HOOK\"; else exit 0; fi", + "timeout": 10, + "type": "command" + } + ], + "matcher": "startup|resume" + } + ], + "UserPromptSubmit": [ + { + "hooks": [ + { + "command": "HOOK=\"$(git 
rev-parse --show-toplevel 2>/dev/null)/.claude/hooks/prompt-guard.py\"; if [ -f \"$HOOK\" ]; then uv run python3 \"$HOOK\"; else exit 0; fi", + "timeout": 5, + "type": "command" } ] } diff --git a/.crosslink/hook-config.json b/.crosslink/hook-config.json index ae1f796..890999d 100644 --- a/.crosslink/hook-config.json +++ b/.crosslink/hook-config.json @@ -47,7 +47,7 @@ "pwd", "echo" ], - "auto_steal_stale_locks": false, + "auto_steal_stale_locks": 5, "blocked_git_commands": [ "git push", "git merge", @@ -65,15 +65,15 @@ "git branch -D", "git branch -m" ], - "comment_discipline": "encouraged", + "comment_discipline": "required", "cpitd_auto_install": true, "gated_git_commands": [ "git commit" ], "intervention_tracking": true, - "kickoff_verification": "local", - "reminder_drift_threshold": 3, + "kickoff_verification": "ci", + "reminder_drift_threshold": 0, "signing_enforcement": "audit", - "tracker_remote": "origin", + "tracker_remote": null, "tracking_mode": "strict" } diff --git a/.gitignore b/.gitignore index d199776..d894446 100644 --- a/.gitignore +++ b/.gitignore @@ -232,10 +232,13 @@ testing.env .crosslink/daemon.pid .crosslink/daemon.log .crosslink/last_test_run +.crosslink/.active-issue .crosslink/keys/ .crosslink/.hub-cache/ .crosslink/.knowledge-cache/ .crosslink/.cache/ +.crosslink/init-manifest.json +.crosslink/init-manifest.json.tmp .crosslink/hook-config.local.json .crosslink/integrations/ .crosslink/rules.local/ @@ -245,9 +248,12 @@ testing.env # .crosslink/rules/ — project coding standards # .crosslink/.gitignore — inner gitignore for agent files -# .claude/ — this project tracks hooks/ and commands/ (project-level config) -# .claude/mcp/ is auto-generated, don't track +# .claude/ — auto-generated by crosslink init (not project source) +.claude/hooks/ .claude/mcp/ -# .claude/settings.local.json is per-developer -.claude/settings.local.json + +# .claude/ — DO track these (project-level configuration): +# .claude/commands/ — project-level Claude Code 
skills +# .claude/settings.json — Claude Code project settings +# .claude/settings.local.json is per-developer, ignore separately if needed # === End crosslink managed === diff --git a/.mcp.json b/.mcp.json new file mode 100644 index 0000000..1eb015a --- /dev/null +++ b/.mcp.json @@ -0,0 +1,18 @@ +{ + "mcpServers": { + "crosslink-knowledge": { + "args": [ + "run", + ".claude/mcp/knowledge-server.py" + ], + "command": "uv" + }, + "crosslink-safe-fetch": { + "args": [ + "run", + ".claude/mcp/safe-fetch-server.py" + ], + "command": "uv" + } + } +} diff --git a/CHANGELOG.md b/CHANGELOG.md index b592c70..ee76836 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,9 +9,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). ## [0.7.0b1] - 2026-02-26 ### Added +- Add lens lexicon E2E validation with trust verification and schema compat (#11) +- Add unified search API with pluggable backends (#10) +- Write tests for AppView integration (#9) +- Add client-side AppView integration (#8) +- Switch CI Redis image from Docker Hub to ghcr.io to avoid rate limits (#7) +- Rename atdataSchemaVersion field to remove dollar prefix (#5) +- Support @handle/TypeName@version format in get_schema and get_schema_type (#2) - **Client-side AppView integration**: `Atmosphere` client now supports XRPC queries (`xrpc_query()`) and procedures (`xrpc_procedure()`) routed through a configurable AppView service. Schema, lens, label, and record loaders automatically use AppView for listing, search, and resolution when available, falling back to client-side pagination otherwise. 
New `has_appview` property and `AppViewRequiredError`/`AppViewUnavailableError` exceptions for clean error handling (GH#74) ### Fixed +- Fix get_schema failure when resolving records across accounts (#3) - **Blob URL parameter bug**: Fixed incorrect parameter passing in blob URL construction within atmosphere record publishing - **Fallback logging**: Improved diagnostic logging when AppView is unavailable and client-side fallback is used
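The AppView fallback described in that changelog entry can be sketched as follows. This is a minimal illustration, not the library's actual implementation: `Atmosphere`, `has_appview`, `xrpc_query`, and `AppViewUnavailableError` are names taken from the entry above, while `list_schemas`, `_list_schemas_client_side`, the constructor signature, and the NSID `com.example.schema.list` are hypothetical stand-ins invented for this sketch.

```python
class AppViewUnavailableError(Exception):
    """Raised when a configured AppView cannot be reached."""


class Atmosphere:
    # Hypothetical sketch of the fallback pattern; not the real client API.
    def __init__(self, appview_url=None):
        self.appview_url = appview_url

    @property
    def has_appview(self):
        # True when an AppView service is configured for this client.
        return self.appview_url is not None

    def xrpc_query(self, nsid, params=None):
        if not self.has_appview:
            raise AppViewUnavailableError("no AppView configured")
        # A real client would issue an HTTP request here; stubbed for the sketch.
        return {"nsid": nsid, "params": params or {}}

    def list_schemas(self):
        # Prefer the AppView's indexed listing; fall back to the client-side
        # path when no AppView is configured or it cannot be reached.
        if self.has_appview:
            try:
                return self.xrpc_query("com.example.schema.list")
            except AppViewUnavailableError:
                pass
        return self._list_schemas_client_side()

    def _list_schemas_client_side(self):
        # Stand-in for client-side pagination over the repository.
        return {"nsid": None, "params": {}, "fallback": True}


client = Atmosphere()           # no AppView configured
result = client.list_schemas()  # takes the client-side fallback path
```

The point of the pattern is that loaders degrade gracefully: callers get the same return shape whether the result came from the AppView or from client-side pagination, and only operations that strictly require an index would raise `AppViewRequiredError` instead of falling back.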