Conversation
Register the git-ai-search skill alongside the existing prompt-analysis skill so it deploys automatically when git-ai installs into Claude Code environments. The SKILL.md provides workflow-oriented documentation with decision tables, seven usage patterns, command references, and integration examples for CI/CD and shell scripting. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
```typescript
// Cap concurrent fetches at 3

for (const { promptId, record } of toFetch) {
```
🔴 Undefined variable toFetch and missing concurrency-cap logic in triggerCASFetches
The triggerCASFetches method collects prompts into promptsToFetch (line 1525) but the for loop on line 1535 iterates over toFetch, which is never declared. The comment on line 1533 says "Cap concurrent fetches at 3" but the line that should slice the array and assign it to toFetch (e.g. const toFetch = promptsToFetch.slice(0, 3)) is entirely missing — only a blank line remains.
Root Cause and Impact
The variable `promptsToFetch` is built up correctly at agent-support/vscode/src/blame-lens-manager.ts:1525-1531, but the code that was supposed to cap the batch and assign it to `toFetch` was never written. As a result:
- The code references `toFetch` on line 1535, which does not exist, preventing compilation.
- Even if the variable name were corrected to `promptsToFetch`, the intended cap of 3 concurrent CAS fetches would still be missing: all prompts would be fetched simultaneously, which could spawn many parallel `git-ai show-prompt` processes.
This breaks the entire CAS prompt-fetching feature that is a core part of this PR. The VSCode extension will never asynchronously load prompt messages from the cloud for the hover content.
```diff
  // Cap concurrent fetches at 3
+ const toFetch = promptsToFetch.slice(0, 3);
  for (const { promptId, record } of toFetch) {
```
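The suggested fix can be sketched in isolation. Everything below is illustrative (the `PromptEntry` type and `capFetchBatch` helper are not names from the PR); the point is that `slice(0, 3)` takes at most the first three pending prompts without mutating the source array, leaving any remainder queued for a later trigger:

```typescript
// Illustrative types; the real record shape lives in blame-lens-manager.ts.
type PromptEntry = { promptId: string; record: { message?: string } };

const MAX_CONCURRENT_FETCHES = 3;

// Take at most the first three pending prompts; slice() copies rather
// than mutates, so the untaken entries remain in promptsToFetch.
function capFetchBatch(promptsToFetch: PromptEntry[]): PromptEntry[] {
  return promptsToFetch.slice(0, MAX_CONCURRENT_FETCHES);
}
```

Whether the dropped entries are retried on the next trigger depends on how `triggerCASFetches` is scheduled, which this sketch does not model.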
Extending on @jwiegley's search + continue work, this PR introduces a skill that lets any agent read previous prompts for any LoC it's looking at.
It's useful when trying to figure out why AI code is the way it is, and I've found it very useful during planning. I thought there'd be some crazy work required to make agents call it, but this seems to do the trick:
`Claude.md`|`AGENTS.md`

Also included:
[ ] updates to VSCode plugin empty prompt state
[ ] reading prompts for logged in teams from CaS Cache, then CaS, then Local DB, then notes
[ ] CaS read cache
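The read order in the checklist above (CaS cache, then CaS, then local DB, then notes) amounts to a first-hit fallback chain. A minimal sketch, with every loader name assumed rather than taken from the PR:

```typescript
// A loader returns the prompt text, or undefined on a miss.
type PromptLoader = (promptId: string) => string | undefined;

// Try each source in priority order and return the first hit.
function readPrompt(
  promptId: string,
  loaders: PromptLoader[]
): string | undefined {
  for (const load of loaders) {
    const result = load(promptId);
    if (result !== undefined) return result;
  }
  return undefined; // every source missed
}
```

The real implementation presumably fetches asynchronously and populates the CaS read cache on a hit; this sketch only shows the priority ordering.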
Note: This should ship with a VSCode update, but they are technically independent and won't break if out of sync
A separate PR with big doc updates + readme updates is in the works.