This project provides a fully automated setup for coding assistance in Visual Studio Code using local AI models, integrated with GitHub for version control and collaboration.
Features:
- Local AI Integration: Uses Ollama with models like CodeLlama for code generation
- VS Code Extensions: Continue.dev and Tabnine for AI-powered coding assistance
- Aider Integration: Configured to use Ollama models for automated code editing
- MCP Servers: Model Context Protocol servers for AI agent communication
- GitHub Automation: Automated commit and push workflows
- Agent Communication: AI agents can communicate via shared message files
Prerequisites:
- Python 3.x with transformers, torch, accelerate
- Node.js
- Ollama installed
- GitHub CLI (gh) authenticated
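A quick way to verify these tools before starting is to check that each binary is on PATH. This helper is a hypothetical convenience, not part of the repo:

```python
import shutil

def check_prerequisites(tools=("python3", "node", "ollama", "gh")):
    """Return the required tools that are missing from PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

if __name__ == "__main__":
    missing = check_prerequisites()
    print("Missing prerequisites:", ", ".join(missing) if missing else "none")
```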
Clone or Download:
```
git clone https://github.com/Obayne/CodeSet.git
cd CodeSet
```
Install Dependencies:
```
# For MCP servers
cd mcp-servers/backend-developer
npm install
npm run build
```
Configure VS Code:
- Install extensions: Continue, Tabnine
- Copy .continue/config.json to your VS Code Continue config
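For reference, a minimal Continue config.json that points the extension at a local Ollama model might look like the following. The model name and titles are illustrative; adjust them to whatever you have pulled in Ollama:

```json
{
  "models": [
    {
      "title": "CodeLlama (local)",
      "provider": "ollama",
      "model": "codellama"
    }
  ],
  "tabAutocompleteModel": {
    "title": "CodeLlama autocomplete",
    "provider": "ollama",
    "model": "codellama"
  }
}
```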
Set up MCP in Cline:
- The MCP server is configured in your Cline settings
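As an example, a Cline MCP settings entry for the bundled server might look like this. The server name and path are assumptions based on the project layout; replace the placeholder with wherever you cloned the repo:

```json
{
  "mcpServers": {
    "backend-developer": {
      "command": "node",
      "args": ["<path-to-CodeSet>/mcp-servers/backend-developer/build/index.js"],
      "disabled": false
    }
  }
}
```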
Run the automated coding workflow:
```
python automate_workflow.py "Add a new API endpoint for user authentication"
```
This will:
- Use Aider to generate/modify code based on the prompt
- Commit changes
- Push to GitHub
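The three steps above can be sketched as a thin subprocess wrapper. This is an assumed shape for automate_workflow.py, and the aider flags shown are illustrative, not copied from the script:

```python
import subprocess
import sys

def run_workflow(prompt: str) -> None:
    """Sketch of the prompt -> edit -> commit -> push loop."""
    # 1. Let Aider apply the prompt using the configured Ollama model.
    subprocess.run(["aider", "--yes", "--message", prompt], check=True)
    # 2. Stage and commit whatever Aider changed.
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", f"Automated: {prompt}"], check=True)
    # 3. Push the commit to GitHub.
    subprocess.run(["git", "push"], check=True)

if __name__ == "__main__" and len(sys.argv) > 1:
    run_workflow(sys.argv[1])
```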
Use the MCP tools in Cline:
- generate_backend_code: Generate code using local AI models
- send_message: Send messages between AI agents
Messages are stored in messages.json for inter-agent communication.
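The shared-file protocol can be sketched like this. The field names (sender, recipient, content, timestamp) are assumptions for illustration, not the repo's actual schema:

```python
import json
import time
from pathlib import Path

MESSAGES = Path("messages.json")

def send_message(sender: str, recipient: str, content: str) -> None:
    """Append one message to the shared log that other agents poll."""
    log = json.loads(MESSAGES.read_text()) if MESSAGES.exists() else []
    log.append({
        "sender": sender,
        "recipient": recipient,
        "content": content,
        "timestamp": time.time(),
    })
    MESSAGES.write_text(json.dumps(log, indent=2))

def read_messages(recipient: str) -> list:
    """Return every message addressed to the given agent."""
    if not MESSAGES.exists():
        return []
    return [m for m in json.loads(MESSAGES.read_text())
            if m["recipient"] == recipient]
```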
Project Structure:
```
CodeSet/
├── .aider.conf.yml          # Aider configuration
├── .continue/config.json    # Continue extension config
├── .gitignore
├── README.md
├── automate_workflow.py     # Automated coding script
├── messages.json            # Agent communication log
├── mcp-servers/             # MCP servers for agents
│   └── backend-developer/
│       ├── inference.py     # Python script for AI inference
│       ├── package.json
│       ├── src/
│       └── build/
└── AI Local/                # Local AI models (external)
```
The setup uses CodeLlama models from HuggingFace, loaded locally for privacy and performance.
Supported models:
- CodeLlama-7b-Instruct
- CodeLlama-13b
- And others in D:\DEV\AI Local
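Loading one of these checkpoints with transformers follows the standard from_pretrained pattern. The directory path below mirrors the layout above and is an assumption; device_map="auto" relies on accelerate, which is listed in the prerequisites:

```python
def generate(prompt, model_dir=r"D:\DEV\AI Local\CodeLlama-7b-Instruct",
             max_new_tokens=128):
    """Generate a completion from a locally stored CodeLlama checkpoint."""
    # Imported lazily so transformers is only needed when generating.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```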
GitHub Integration:
- Repository: https://github.com/Obayne/CodeSet
- Automated commits and releases
- Use GitHub CLI for additional automation
To add more AI agents:
- Create a new folder under mcp-servers/
- Implement inference.py for the specific model
- Build and add to MCP settings
- Add communication tools as needed
This project is open source. Check individual model licenses for usage restrictions.