- MCP Configuration Path Issue
  - Problem: Used `"command": "knowcode"` (not in PATH)
  - Solution: Changed to `"/home/deeog/Desktop/KnowCode/.venv/bin/knowcode"`
  - File: `/home/deeog/.gemini/antigravity/mcp_config.json`
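A sketch of the corrected config entry is below. Only the `command` value is confirmed above; the `mcpServers` wrapper and `args` field are assumptions based on common MCP client configs (the server is started as `knowcode mcp-server`), not verified against Antigravity's schema:

```json
{
  "mcpServers": {
    "knowcode": {
      "command": "/home/deeog/Desktop/KnowCode/.venv/bin/knowcode",
      "args": ["mcp-server"]
    }
  }
}
```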
- MCP configuration file updated with absolute path
- Knowledge store exists (1.1 MB, 1 day old)
- KnowCode CLI working (v0.2.1)
- Virtual environment configured
- Agent rules defined in `.agent/context.md`
- Semantic index missing: lexical search will be used as a fallback
- Knowledge store is 1 day old: consider re-analyzing
- Stop the manual MCP server (Ctrl+C in terminal)
- Restart Antigravity IDE
- Test the workflow (see test_mcp_workflow.md)
```
User asks: "How does search work in KnowCode?"
        ↓
Agent calls: retrieve_context_for_query(
    query="How does search work in KnowCode?",
    task_type="auto",
    max_tokens=3000,
    limit_entities=3,
    expand_deps=true
)
        ↓
KnowCode MCP Server returns:
{
    "context_text": "...",
    "sufficiency_score": 0.92,
    "evidence": [...],
    ...
}
        ↓
Agent checks: sufficiency_score >= 0.8?
        ↓
YES → Answer from context_text only (no external LLM)
NO  → Use external LLM (Claude Sonnet 4.5)
```
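The decision step in this flow can be sketched in Python. `call_mcp_tool` and `ask_external_llm` are hypothetical stand-ins for the IDE's MCP client and LLM interface; only the tool name and parameters come from the workflow above:

```python
# Sketch of the agent-side sufficiency check; not KnowCode's own code.
THRESHOLD = 0.8  # sufficiency_score cutoff from the workflow above


def answer_query(query, call_mcp_tool, ask_external_llm):
    """Answer locally when retrieved context is sufficient, else fall back."""
    result = call_mcp_tool(
        "retrieve_context_for_query",
        query=query,
        task_type="auto",
        max_tokens=3000,
        limit_entities=3,
        expand_deps=True,
    )
    if result["sufficiency_score"] >= THRESHOLD:
        # Enough local context: answer without the external LLM.
        return result["context_text"]
    # Below threshold: hand off to the external LLM, seeded with what we have.
    return ask_external_llm(query, context=result["context_text"])
```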
- `verify_mcp_connection.sh`: Check MCP setup status
- `test_mcp_workflow.md`: Test questions after restart
- `docs/MCP_SETUP.md`: Complete setup documentation
- `README_MCP.md`: This quick reference (you are here)
```bash
# Verify the MCP setup
./verify_mcp_connection.sh
ps aux | grep "knowcode mcp-server"
cat ~/.gemini/antigravity/mcp_config.json
```

```bash
# Re-analyze the codebase (rebuilds the knowledge store)
source .venv/bin/activate
knowcode analyze . -o .
```

Note: This rebuilds the knowledge store and attempts semantic indexing. If indexing is skipped, run `knowcode index .` after configuring embeddings.
- Check the server is running: `ps aux | grep "knowcode mcp-server"`
- Check the configuration: `cat ~/.gemini/antigravity/mcp_config.json`
- Restart the IDE again
- Verify the index exists (it should be created by analyze): `ls -la knowcode_index/`
- If it is missing, run a dedicated index build: `knowcode index . --output knowcode_index`
- Increase the token budget in `.agent/context.md`: use `max_tokens=6000, limit_entities=5`
After setup, you should see:
- ✅ 70%+ of queries with `sufficiency_score >= 0.8`
- ✅ Faster responses for codebase questions
- ✅ 50%+ reduction in external LLM token usage
- ✅ Accurate answers from local context
- Full Setup Guide: `docs/MCP_SETUP.md`
- Test Plan: `test_mcp_workflow.md`
- KnowCode Docs: `README.md`
Sufficiency Score: Confidence that the retrieved context is enough to answer the query.
- `>= 0.8` → Answer locally
- `< 0.8` → Use the external LLM
Retrieval Modes:
- Semantic: Uses embeddings + vector search (preferred, more accurate)
- Lexical: Uses keyword matching (fallback when no semantic index exists)
Dependency Expansion: Includes related code (callees, callers) for complete context
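To make the lexical fallback concrete, here is a toy keyword-overlap scorer. This is an illustration of the idea only, not KnowCode's actual implementation:

```python
# Toy lexical (keyword) matching: score a candidate text by the
# fraction of query terms it contains. Real lexical search engines
# add tokenization, stemming, and TF-IDF-style weighting on top.
def lexical_score(query: str, text: str) -> float:
    terms = set(query.lower().split())
    words = set(text.lower().split())
    if not terms:
        return 0.0
    return len(terms & words) / len(terms)


# Rank candidate snippets for a query, best match first.
snippets = [
    "def search(query): return index.lookup(query)",
    "class Config: pass",
]
ranked = sorted(snippets, key=lambda s: lexical_score("search index", s), reverse=True)
```

Semantic retrieval replaces the word-overlap score with cosine similarity between embedding vectors, which is why it catches paraphrases that keyword matching misses.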
- Build the semantic index: much better results than lexical matching
- Keep the knowledge store updated: re-analyze after major changes
- Tune parameters: adjust `max_tokens` and `limit_entities` for your workload
- Monitor scores: track the `sufficiency_score` distribution over time
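For the last point, a minimal sketch of tracking the share of queries that clear the threshold. It assumes you collect the scores yourself; any built-in KnowCode telemetry is not documented here:

```python
# Fraction of queries answered locally (sufficiency_score >= threshold).
# How you collect the scores (log file, in-memory list) is up to you.
def sufficiency_stats(scores, threshold=0.8):
    if not scores:
        return 0.0
    return sum(s >= threshold for s in scores) / len(scores)
```

If this fraction drops well below the 70% target above, rebuild the semantic index or raise the token budget.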
- MCP server runs locally (no external data transmission)
- Knowledge store contains your source code (keep secure)
- Embeddings may be sent to external providers (VoyageAI, OpenAI)
- Store API keys in `.env` (never commit it)
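For example, a `.env` file (listed in `.gitignore`) might hold the embedding provider key; the variable names below are illustrative, so check your provider's docs for the exact names KnowCode expects:

```
VOYAGE_API_KEY=your-key-here
OPENAI_API_KEY=your-key-here
```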
Just need to:
- Stop the manual MCP server (Ctrl+C)
- Restart Antigravity IDE
- Ask a test question
Good luck! 🚀
Last updated: 2026-01-13