A powerful VS Code extension that provides AI chat functionality in the sidebar. The AI features are implemented through Python scripts, supporting RAG (Retrieval-Augmented Generation) technology, a tool calling system, automatic codebase indexing for AI context, and direct code file modification capabilities.
- Sidebar Chat Interface - Clean, polished conversation UI
- Chat with AI - Supports multi-turn conversations and history
- RAG Code Indexing - Automatically indexes workspace code to provide context for the AI
- Auto-update Index - Detects file changes through a snapshot system and automatically updates the RAG index
- Markdown Rendering - Renders messages in Markdown format
- Clear Chat History - Clear conversation records with one click
- Configurable Python Path and Script Path
- Output Logging - Detailed log output for debugging and monitoring
AI can call various tools to complete complex tasks:
- Code Patch Application (`apply_patch`) - AI can generate and apply code patches
- Command Execution (`command`) - Execute system commands (such as git, npm, etc.)
- Web Page Fetching (`fetch_url`) - Fetch web page content
- Code Linting (`lint`) - Perform syntax checking on code
- Web Search (`web_search`) - Search for information on the web
- Workspace RAG (`workspace_rag`) - Use RAG to retrieve code context
- Workspace Structure (`workspace_structure`) - Get the workspace file structure
- Send Report (`send_report`) - Send a final report to the user
- Patch Preview - After the AI generates a code patch, a diff preview is automatically displayed in VS Code
- Accept/Reject Buttons - Buttons in the chat panel let users choose whether to accept changes
- Auto-apply Preview - Patches are automatically applied to the code after generation (preview mode)
- Revert Changes - If rejected, applied changes can be reverted
- Intelligent Agent - Uses the Flow Agent for multi-turn iterative tool calling
- Auto Iteration - Supports up to 10 iterations, automatically completing complex tasks
- Memory System - Saves tool call history to provide context for subsequent decisions
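The agent behavior described above can be sketched as a simple loop: each iteration the LLM picks a tool, the tool runs, and the result is appended to memory for the next decision. This is a minimal illustrative sketch, not the extension's actual `flow.py` API; `pick_action` and `run_tool` are hypothetical callables.

```python
# Hypothetical sketch of the Flow Agent's iteration loop; the names
# pick_action and run_tool are illustrative, not the real API.

MAX_ITERATIONS = 10  # mirrors the extension's iteration cap


def run_agent(task, pick_action, run_tool):
    """Iterate: ask the LLM for an action, run the tool, feed the result back."""
    memory = []  # tool call history passed back as context each turn
    for _ in range(MAX_ITERATIONS):
        action = pick_action(task, memory)       # LLM decides the next step
        if action["tool"] == "send_report":      # final report ends the loop
            return action["args"]["report"]
        result = run_tool(action["tool"], action["args"])
        memory.append({"call": action, "result": result})
    return "Reached iteration limit without a final report."
```

The key design point is that the loop terminates either when the agent calls `send_report` or when the iteration cap is hit, so a confused model cannot spin forever.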
- Node.js and npm
- Python 3.9 or higher
- uv (Recommended Python package manager)
- Install Node.js dependencies:

  ```bash
  npm install
  ```

- Set up the Python environment (using uv):

  ```bash
  # Install uv (if not already installed)
  # macOS/Linux:
  curl -LsSf https://astral.sh/uv/install.sh | sh
  # Or using pip:
  # pip install uv

  # Create a virtual environment
  uv venv

  # Activate the virtual environment
  # macOS/Linux:
  source .venv/bin/activate
  # Windows:
  # .venv\Scripts\activate

  # Install Python dependencies
  uv sync
  ```

- Configure the VS Code extension:

  In VS Code settings, set `aiChat.pythonPath` to the Python interpreter inside the uv virtual environment:

  - macOS/Linux: `.venv/bin/python` (relative to the project root)
  - Windows: `.venv\Scripts\python.exe` (relative to the project root)

  Or use an absolute path.

- Compile TypeScript:

  ```bash
  npm run compile
  ```

- Press `F5` to run in the Extension Development Host
Configure the following options in VS Code settings:

- `aiChat.pythonPath`: Python interpreter path
  - Default: `python3`
  - When using the uv environment: `.venv/bin/python` (Linux/macOS) or `.venv\Scripts\python.exe` (Windows)
  - Can be an absolute path or a path relative to the project root
- `aiChat.aiScriptPath`: AI service script path (default: `python/ai_service.py`)
Make sure to create a `.env` file in the `python/` directory and configure the necessary environment variables:

```bash
# OpenAI Configuration
OPENAI_API_KEY=your_api_key_here
OPENAI_MODEL=your_model_name     # e.g., gpt-4, gpt-3.5-turbo, etc.
OPENAI_BASE_URL=your_base_url    # Optional, custom API endpoint
OPENAI_PROXY=your_proxy_url      # Optional, proxy configuration

# RAG Configuration
RAG_ENABLED=true                 # Enable RAG index building and updating (default: true; set to false to disable RAG)
RAG_UPDATE_INTERVAL_SECONDS=60   # Minimum update interval for the RAG update service, in seconds (default: 60)
RAG_DESCRIPTION_CONCURRENCY=2    # Concurrency for description generation (default: 2)
RAG_INDEXING_CONCURRENCY=2       # Concurrency for index building (default: 2)
```

Note: The `.env` file should be placed in the `python/` directory, not the project root.
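As a rough illustration of how these variables might be consumed with the documented defaults, here is a small stdlib-only sketch; the function name `rag_settings` is hypothetical and the real service may read its configuration differently.

```python
import os


def rag_settings(env=os.environ):
    """Read RAG settings with the documented defaults (illustrative sketch)."""
    return {
        "enabled": env.get("RAG_ENABLED", "true").lower() == "true",
        "update_interval": int(env.get("RAG_UPDATE_INTERVAL_SECONDS", "60")),
        "description_concurrency": int(env.get("RAG_DESCRIPTION_CONCURRENCY", "2")),
        "indexing_concurrency": int(env.get("RAG_INDEXING_CONCURRENCY", "2")),
    }
```

Note that every unset variable falls back to the default listed in the `.env` template above.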
Run the extension in Run and Debug mode, then open the chat:

- Via Activity Bar Icon (Recommended):
  - Find the chat icon in the VS Code activity bar on the left
  - Click the icon to open the "AI Chat" view (see image below)
- Via Command Palette:
  - Press `Ctrl+Shift+P` (Windows/Linux) or `Cmd+Shift+P` (Mac)
  - Type `AI Chat: Open AI Chat` and select it

To chat:

- Type a message in the input box
- Press `Enter` or click "Send" to send the message
- The AI will process the message and return a response (supports Markdown format)
When the AI generates code patches:

- Auto Preview: Patches are automatically applied to the code and a diff preview window is displayed in VS Code
- View Changes: Inspect the differences before and after modification in the preview window
- Make a Decision:
  - Click the Accept button: keep the changes; the patch is officially applied
  - Click the Reject button: revert the changes; the code returns to its pre-modification state
- Button Location: The Accept/Reject buttons are displayed above the input box in the chat panel

Note: If the user doesn't click a button, the buttons remain displayed until the user makes a choice or a new patch is generated.
- First Workspace Open: The extension automatically initializes the RAG index, indexing the entire codebase
- Auto Update: The extension periodically detects file changes and automatically updates the index (interval configured by `RAG_UPDATE_INTERVAL_SECONDS`)
- Offline Change Detection: Even if files change while VS Code is closed, the changes are automatically detected and indexed on the next launch
- View Logs: In VS Code's "Output" panel, select the "AI Service" channel to view detailed logs
Example prompt: Please complete all the TODOs in the project.

The code under `sample_ws` has some incomplete parts, marked with TODO comments that include implementation hints. The sample creates a weighted directed graph, selects different destination points on the graph based on user choices, runs different shortest path algorithms, reports paths and lengths, and displays them in ASCII format on the command line.
The extension uses Python scripts to handle AI requests. The main components are:

- Frontend (`src/extension.ts`, `src/ChatPanel.ts`)
  - VS Code extension main program
  - Manages the Webview chat interface
  - Handles user input and message display
  - Handles patch preview and application
- AI Service (`python/ai_service.py`)
  - Receives chat messages and history
  - Calls the Flow Agent to process requests
  - Returns formatted AI responses
- Flow Agent (`python/agents/flow.py`)
  - Intelligent agent system supporting multi-turn iteration
  - Automatically calls tools to complete tasks
  - Manages tool call history and context
- Tool System (`python/tools/`)
  - Extensible tool framework
  - Supports various tools (patch application, command execution, code linting, etc.)
  - Automatic tool discovery and registration
- LLM Client (`python/llm/`)
  - Encapsulates OpenAI API calls
  - Supports asynchronous processing
  - Manages API configuration and error handling
- RAG Service (`python/rag/`)
  - Codebase indexing and retrieval
  - Automatic code description generation
  - Incremental index updates
The extension uses an event stream system to pass information:

- ToolCallEvent: Tool call event
- ToolResultEvent: Tool execution result event
- ReportEvent: Final report event (via the `send_report` tool)
- MessageEvent: Normal message event

All events are displayed in the chat interface, preserving the complete conversation history.
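To make the event stream concrete, here is a minimal sketch of what the four event kinds might look like and how a chat panel could render them. The field names and the `render` helper are illustrative assumptions; the extension's actual event classes may carry different fields.

```python
from dataclasses import dataclass

# Illustrative shapes for the four event kinds; the real classes in the
# extension may differ.

@dataclass
class ToolCallEvent:
    tool: str
    args: dict

@dataclass
class ToolResultEvent:
    tool: str
    result: str

@dataclass
class ReportEvent:
    report: str  # produced via the send_report tool

@dataclass
class MessageEvent:
    text: str


def render(event) -> str:
    """Map each event to a chat-panel line (sketch)."""
    if isinstance(event, ToolCallEvent):
        return f"calling {event.tool}({event.args})"
    if isinstance(event, ToolResultEvent):
        return f"{event.tool} -> {event.result}"
    if isinstance(event, ReportEvent):
        return event.report
    return event.text
```

Because every event is rendered rather than discarded, the chat transcript doubles as a full trace of what the agent did.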
- Initialization: When a workspace is opened for the first time, all code files are scanned and an index is created
- Snapshot System: File hash snapshots are used to detect changes
- Incremental Update: Only changed files are re-indexed, for efficiency
- Context Retrieval: When the AI answers questions, the RAG system retrieves relevant code context
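The snapshot-based change detection above boils down to comparing stored content hashes against fresh ones. Here is a simplified, self-contained sketch; the real snapshot system (in `snapshot.ts` and `python/rag/hash.py`) also handles deletions and persistence, and the function names here are hypothetical.

```python
import hashlib


def file_hash(content: bytes) -> str:
    """Hash file content; SHA-256 is assumed here for illustration."""
    return hashlib.sha256(content).hexdigest()


def changed_files(old_snapshot: dict, files: dict) -> list:
    """Return paths whose content hash differs from the stored snapshot.

    old_snapshot maps path -> hash; files maps path -> bytes content.
    New files (absent from the snapshot) are reported as changed too.
    """
    return [
        path for path, content in files.items()
        if old_snapshot.get(path) != file_hash(content)
    ]
```

Only the paths returned by such a comparison need re-indexing, which is what makes the incremental update cheap.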
- Generate Patch: The AI generates code patches through the `apply_patch` tool
- Auto Apply: Patches are automatically applied to code files (preview mode)
- Show Preview: A diff preview window opens in VS Code
- User Choice: The user decides whether to keep the changes via the Accept/Reject buttons
- Execute Action:
  - Accept: Keep the changes (the patch is already applied, no additional action needed)
  - Reject: Revert the changes (restore to the pre-modification state)
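The accept/reject flow implies some revert bookkeeping: before a patch is auto-applied, the original content must be stashed so Reject can restore it. The following is a hypothetical in-memory sketch of that idea, not the extension's actual implementation (which works on real files via `patchUtils.ts`).

```python
# Sketch of accept/reject bookkeeping; class and method names are illustrative.

class PendingPatch:
    def __init__(self, files: dict):
        self.files = files       # live {path: content} store
        self.originals = {}      # pre-patch content, kept for revert

    def apply(self, path: str, new_content: str):
        """Preview mode: apply immediately, but remember the original."""
        self.originals[path] = self.files.get(path)
        self.files[path] = new_content

    def accept(self):
        self.originals.clear()   # changes already applied; nothing more to do

    def reject(self):
        for path, old in self.originals.items():
            if old is None:
                self.files.pop(path, None)   # file didn't exist before the patch
            else:
                self.files[path] = old
        self.originals.clear()
```

Accept is deliberately a no-op on the content itself, matching the "patch already applied" behavior described above.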
Edit `python/ai_service.py` to customize the AI response logic.

Edit `python/agents/flow.py` (ReAct Flow) or `python/agents/planact_flow.py` (PlanAct Flow) to:

- Adjust the maximum iteration count
- Modify the tool calling logic
- Customize prompts
- Create a new tool file in the `python/tools/` directory
- Inherit from the `MCPTool` base class
- Implement the necessary properties and methods
- Tools are automatically discovered and registered
Example tool structure:

```python
from typing import Any, Dict

from tools.base_tool import MCPTool


class MyCustomTool(MCPTool):
    @property
    def name(self) -> str:
        return "my_tool"

    @property
    def agent_tool(self) -> bool:
        return True  # Whether to expose this tool to the AI

    def get_tool_definition(self) -> Dict[str, Any]:
        # Return the tool definition
        ...

    async def execute(self, **kwargs: Any) -> Dict[str, Any]:
        # Execute the tool logic
        ...
```

Edit `python/llm/chat_llm.py` to:
- Switch to different LLM providers
- Adjust model parameters
- Add custom prompts
Edit the relevant files in the `python/rag/` directory to:
- Modify code slicing strategy
- Adjust description generation method
- Customize retrieval algorithm
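As a starting point for experimenting with the slicing strategy, here is a simplified sketch of function-level slicing using Python's `ast` module. It is only illustrative of the idea behind `function_slicer.py`; the real slicer may produce different slice shapes.

```python
import ast


def slice_functions(source: str):
    """Split Python source into per-function slices (simplified sketch)."""
    tree = ast.parse(source)
    slices = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            slices.append({
                "name": node.name,
                # get_source_segment recovers the exact source text of the node
                "code": ast.get_source_segment(source, node),
            })
    return slices
```

Each slice can then be fed to the description generator and indexed independently, which is what allows retrieval at function granularity.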
```bash
# Compile
npm run compile

# Watch mode (auto recompile)
npm run watch
```

The project uses uv as the Python package manager:

```bash
# Install dependencies (if needed)
uv sync

# Install dev dependencies (required for running tests)
uv sync --extra dev

# Activate the virtual environment
source .venv/bin/activate   # Linux/macOS
# or
.venv\Scripts\activate      # Windows
```

Running tests requires the dev dependencies (see above).
```bash
# Run Python tool tests
cd python
python -m pytest tests/ -v

# Or run a single test file
python tests/test_apply_patch_tool.py
```

- In VS Code or editors based on the VS Code architecture (such as Cursor), press the Run and Debug button to start the Extension Development Host (see image below)
- Test the extension functionality in the development host
- View the debug console and output panel ("AI Service" channel) for logs
```
.
├── src/                      # TypeScript source code
│   ├── extension.ts          # Extension main entry
│   ├── ChatPanel.ts          # Chat panel logic
│   ├── patchPreview.ts       # Patch preview provider
│   ├── patchUtils.ts         # Patch utility functions
│   └── snapshot.ts           # Snapshot system
├── python/                   # Python service
│   ├── ai_service.py         # AI service main script
│   ├── agents/               # Agent system
│   │   ├── flow.py           # ReAct Flow Agent
│   │   ├── memory.py         # Memory system
│   │   └── planact_flow.py   # PlanAct Flow Agent
│   ├── tools/                # Tool system
│   │   ├── base_tool.py      # Tool base class
│   │   ├── tool_factory.py   # Tool factory
│   │   ├── apply_patch_tool.py
│   │   ├── command_tool.py
│   │   ├── fetch_url_tool.py
│   │   ├── lint_tool.py
│   │   ├── message_tool.py
│   │   ├── parallel_task_executor.py
│   │   ├── search_replace_tool.py
│   │   ├── send_report_tool.py
│   │   ├── web_search_tool.py
│   │   ├── workspace_rag_tool.py
│   │   └── workspace_structure_tool.py
│   ├── llm/                  # LLM client
│   │   ├── chat_llm.py
│   │   └── rag_llm.py
│   ├── rag/                  # RAG indexing service
│   │   ├── class_slicer.py
│   │   ├── description_generator.py
│   │   ├── function_slicer.py
│   │   ├── hash.py
│   │   ├── incremental_updater.py
│   │   ├── indexing.py
│   │   └── rag_service.py
│   ├── rag_init_service.py   # RAG initialization service
│   ├── rag_update_service.py # RAG update service
│   ├── models/               # Data models
│   ├── prompts/              # Prompts
│   │   └── flow_prompt.py
│   ├── utils/                # Utility functions
│   │   ├── logger.py
│   │   └── patch_parser.py
│   └── tests/                # Tests
├── media/                    # Static resources (CSS, etc.)
│   └── chat.css              # Chat interface styles
├── out/                      # TypeScript compilation output
├── doc/                      # Documentation
├── logs/                     # Log files
├── sample_ws/                # Sample workspace
├── package.json              # Node.js configuration
├── pyproject.toml            # Python project configuration
├── tsconfig.json             # TypeScript configuration
└── uv.lock                   # uv lock file
```
MIT

