
ship the /ask skill #532

Open

acunniffe wants to merge 14 commits into main from feat/aidan-ask-skill

Conversation

acunniffe (Collaborator) commented Feb 16, 2026

Building on @jwiegley's search + continue work, this PR introduces a skill that lets any agent read the previous prompts for any LoC it's looking at.

It's useful when trying to figure out why AI code is the way it is, and I've found it especially valuable during planning. I thought there'd be some crazy work required to make agents call it, but this seems to do the trick:

CLAUDE.md | AGENTS.md

- In plan mode, always use the /ask skill so you can read the code and the original prompts that generated it. Intent will help you write a better plan.

Also included:

- [ ] updates to the VSCode plugin's empty prompt state
- [ ] reading prompts for logged-in teams from the CaS cache, then CaS, then the local DB, then notes (see the sketch after this list)
- [ ] a CaS read cache
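
For context, the lookup order in the second item is a straightforward tiered fallback. A minimal sketch of the idea; the interface and all names here are hypothetical, not from this PR:

```ts
interface PromptStore {
  get(promptId: string): Promise<string | undefined>;
}

// Try each tier in order and return the first hit, e.g. pass
// [casCache, cas, localDb, gitNotes] for the order described above.
async function readPrompt(
  promptId: string,
  tiers: PromptStore[],
): Promise<string | undefined> {
  for (const tier of tiers) {
    const prompt = await tier.get(promptId);
    if (prompt !== undefined) return prompt;
  }
  return undefined; // not found in any tier
}
```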

Note: This should ship with a VSCode update, but they are technically independent and won't break if out of sync

A separate PR with big doc + README updates is in the works.



jwiegley and others added 10 commits February 12, 2026 11:16
Register the git-ai-search skill alongside the existing prompt-analysis
skill so it deploys automatically when git-ai installs into Claude Code
environments. The SKILL.md provides workflow-oriented documentation with
decision tables, seven usage patterns, command references, and integration
examples for CI/CD and shell scripting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
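
For reference, Claude Code skills are declared with a SKILL.md file whose YAML frontmatter names and describes the skill. A hypothetical sketch of what the git-ai-search skill's header might look like; the actual file in this PR may differ:

```md
---
name: git-ai-search
description: Search AI authorship history and read the prompts behind any line of code.
---

# git-ai-search

Use this skill to find out why a piece of AI-generated code looks the way it does.
```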

git-ai-cloud-dev bot commented Feb 16, 2026

Stats powered by Git AI

🧠 you    ███░░░░░░░░░░░░░░░░░  17%
🤖 ai     ░░░█████████████████  83%
More stats
  • 1.2 lines generated for every 1 accepted
  • 0 seconds waiting for AI
  • Top model: claude::claude-opus-4-6 (802 accepted lines, 933 generated lines)

AI code tracked with git-ai


acunniffe and others added 2 commits February 16, 2026 08:57
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
devin-ai-integration bot (Contributor) left a comment

Devin Review found 1 new potential issue.

View 11 additional findings in Devin Review.


Comment on lines +1533 to +1535 of agent-support/vscode/src/blame-lens-manager.ts:

```ts
// Cap concurrent fetches at 3

for (const { promptId, record } of toFetch) {
```

🔴 Undefined variable `toFetch` and missing concurrency-cap logic in `triggerCASFetches`

The `triggerCASFetches` method collects prompts into `promptsToFetch` (line 1525), but the `for` loop on line 1535 iterates over `toFetch`, which is never declared. The comment on line 1533 says "Cap concurrent fetches at 3", but the line that should slice the array and assign it to `toFetch` (e.g. `const toFetch = promptsToFetch.slice(0, 3)`) is entirely missing; only a blank line remains.

Root Cause and Impact

The variable `promptsToFetch` is built up correctly at agent-support/vscode/src/blame-lens-manager.ts:1525-1531, but the code that was supposed to cap the batch and assign it to `toFetch` was never written. As a result:

  1. The code references `toFetch` on line 1535, which does not exist, preventing compilation.
  2. Even if the variable name were corrected to `promptsToFetch`, the intended cap of 3 concurrent CAS fetches would be missing: all prompts would be fetched simultaneously, which could spawn many parallel `git-ai show-prompt` processes.

This breaks the entire CAS prompt-fetching feature that is a core part of this PR. The VSCode extension will never asynchronously load prompt messages from the cloud for the hover content.

Suggested change:

```diff
 // Cap concurrent fetches at 3
+const toFetch = promptsToFetch.slice(0, 3);
 for (const { promptId, record } of toFetch) {
```
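
One caveat worth flagging (not part of the review itself): `promptsToFetch.slice(0, 3)` fetches only the first three prompts per trigger rather than limiting how many fetches are in flight at once. A chunked loop is one hypothetical way to cap concurrency while still covering every prompt; the names below are illustrative, not the PR's actual code:

```ts
type PromptRef = { promptId: string; record: unknown };

// Hypothetical sketch: process the queue in chunks of 3 so that at most
// 3 CAS fetches (e.g. git-ai show-prompt subprocesses) run at a time.
async function fetchWithCap(
  promptsToFetch: PromptRef[],
  fetchOne: (p: PromptRef) => Promise<void>,
  maxConcurrent = 3,
): Promise<void> {
  for (let i = 0; i < promptsToFetch.length; i += maxConcurrent) {
    const batch = promptsToFetch.slice(i, i + maxConcurrent);
    // Wait for the whole chunk before starting the next one.
    await Promise.all(batch.map((p) => fetchOne(p)));
  }
}
```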

