inklife/CopilotMask
CopilotMask - GitHub Copilot Ollama Proxy Service

Language: English | δΈ­ζ–‡

πŸ“– Project Overview

CopilotMask lets you customize GitHub Copilot's BASE_URL by acting as an Ollama-compatible proxy, designed specifically for GitHub Copilot in VS Code. By running this project locally or remotely, you can plug custom AI API providers into GitHub Copilot, bypassing official restrictions and using your own API keys.

🎯 Core Features

  • πŸ”— API Proxy Forwarding: Forward GitHub Copilot requests to custom AI service providers
  • πŸ€– Multi-Model Support: Support multiple AI models including OpenAI, Claude, DeepSeek, etc.
  • 🧠 Reasoning Process Display: Special optimization for DeepSeek to show complete reasoning processes
  • πŸ“Š Powerful Logging System: Provide detailed request and response logs for easy debugging
  • πŸ”„ Smart Configuration Management: Support intelligent priority configuration with .env files and environment variables

πŸ’‘ Project Significance

  • πŸ†“ Break Free from Subscription Limits: Use powerful AI models without GitHub Copilot paid membership
  • βœ… Payment Alternative: Perfect solution for users who cannot easily purchase or add official payment methods
  • πŸ”§ Easy Debugging: Deep understanding of Copilot's working mechanism through detailed logging system
  • 🌐 Third-Party Relay Support: Perfect support for various third-party API relay stations
  • ⚑ High Performance: Built on FastAPI with excellent performance

πŸš€ Quick Start

1. Requirements

  • Python 3.8+
  • pip package manager

2. Install Dependencies

pip install -r requirements.txt

3. Configure Environment Variables

Copy .env.example to .env:

cp .env.example .env

Edit the .env file to configure your API keys and service addresses (see Configuration section for details).

4. Start Service

python main.py

The service will start at http://localhost:11434.

5. Configure GitHub Copilot

  1. Open GitHub Copilot settings
  2. Select "Manage Models"
  3. Choose "Ollama"
  4. Select models starting with virtual-
  5. Start using your custom API!

πŸ”§ Troubleshooting

Model Addition Failure: If adding models in GitHub Copilot fails and the backend logs show no incoming registration requests, then VS Code Copilot never attempted to contact the service. The problem lies in Copilot's internal state, not in the proxy.

Solution:

  1. First register all available Ollama models in Copilot settings
  2. Then unregister all models
  3. Restart VS Code or refresh Copilot status
  4. Retry adding the desired virtual models

This operation will reset Copilot's internal state and typically resolves model addition failures.

βš™οΈ Configuration Guide

πŸ”§ Core Configuration Items

The configuration loader follows an intelligent priority system:

  β€’ When a .env file exists, its values are used preferentially
  β€’ When .env does not exist, .env.example serves as the configuration template
  β€’ System environment variables can supplement the file-based configuration

Basic Configuration

# Environment mode
ENVIRONMENT=development

# Server configuration
SERVER_HOST=0.0.0.0          # Service binding address
SERVER_PORT=11434            # Service port (recommended to keep 11434)

# Ollama configuration
REAL_OLLAMA_BASE_URL=http://1.2.3.4:11434  # Real Ollama service address
OLLAMA_TIMEOUT=10.0          # Ollama source request timeout

Logging Configuration

LOG_LEVEL=INFO               # Log level: DEBUG, INFO, WARNING, ERROR
LOG_FILE_NAME=ollama_proxy.log  # Log file name

API Provider Configuration

OpenAI Configuration

OPENAI_API_KEY=your_openai_api_key_here
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MAX_TOKENS=4096
OPENAI_MODELS=gpt-4o-mini,gpt-4o,gpt-3.5-turbo
OPENAI_CAPABILITIES=completion,chat,tools,vision

Claude Configuration

CLAUDE_API_KEY=your_claude_api_key_here
CLAUDE_BASE_URL=https://api.anthropic.com
CLAUDE_MAX_TOKENS=25600
CLAUDE_MODELS=claude-sonnet-4-20250514,claude-opus-4-20250514
CLAUDE_CAPABILITIES=completion,chat,tools,vision

DeepSeek Configuration

DEEPSEEK_API_KEY=your_deepseek_api_key_here
DEEPSEEK_BASE_URL=https://api.deepseek.com
DEEPSEEK_MAX_TOKENS=30000
DEEPSEEK_MODELS=deepseek-chat,deepseek-reasoner
DEEPSEEK_CAPABILITIES=completion,chat,tools
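Each provider block above shares the same `PREFIX_API_KEY` / `PREFIX_BASE_URL` / `PREFIX_MAX_TOKENS` / `PREFIX_MODELS` / `PREFIX_CAPABILITIES` shape. A hedged sketch of how such variables might be grouped (the `ProviderConfig` class and `provider_from_env` helper are illustrative names, not part of the project):

```python
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    """Illustrative grouping of one provider's environment variables."""
    api_key: str
    base_url: str
    max_tokens: int
    models: list
    capabilities: str  # raw string; parsed separately (see below)

def provider_from_env(prefix, env):
    """Build a ProviderConfig from variables like OPENAI_API_KEY, OPENAI_MODELS, etc."""
    return ProviderConfig(
        api_key=env[f"{prefix}_API_KEY"],
        base_url=env[f"{prefix}_BASE_URL"],
        max_tokens=int(env[f"{prefix}_MAX_TOKENS"]),
        models=env[f"{prefix}_MODELS"].split(","),
        capabilities=env[f"{prefix}_CAPABILITIES"],
    )
```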

πŸŽ›οΈ Capabilities Configuration (Usually No Need to Modify)

CAPABILITIES configuration supports two formats:

  1. One-dimensional array (all models have same capabilities):

    OPENAI_CAPABILITIES=completion,chat,tools,vision
  2. Two-dimensional array (each model has different capabilities):

    OPENAI_CAPABILITIES=completion,chat,tools,vision|chat,tools|completion,chat

Supported capability types:

  • completion: Text completion
  • chat: Conversational chat
  • tools: Tool calling
  • vision: Visual understanding

πŸ—οΈ Project Architecture

Core Modules

  • main.py: Main application entry point, FastAPI application creation and startup
  • config.py: Configuration management module, intelligent loading of environment variables
  • model_manager.py: Model manager, handles virtual model registration and client initialization
  • routes.py: Route handler, defines API routes and request forwarding logic
  • api_handlers.py: API handlers, processes API calls from different service providers
  • middleware.py: Middleware, handles request logging and response processing
  • logger.py: Logging system, provides structured logging
  • utils.py: Utility functions, contains various helper functions

πŸ”„ Workflow

  1. Request Reception: GitHub Copilot sends OpenAI format requests
  2. Model Recognition: Detect if the model name in the request is a virtual model
  3. Request Forwarding:
    • Virtual model β†’ Forward to corresponding remote API service provider
    • Non-virtual model β†’ Forward to Ollama source
  4. Format Conversion: Handle conversion between different API formats (e.g., Claude)
  5. Response Return: Convert response to OpenAI format and return to Copilot
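The routing decision in steps 2 and 3 can be sketched as a small function. The `virtual-` prefix comes from the setup instructions above; stripping it to recover the remote model name is an assumption about how `routes.py` works, and the function name is illustrative.

```python
VIRTUAL_PREFIX = "virtual-"

def route_request(model_name):
    """Decide where a Copilot request is forwarded, per the workflow above.

    Returns a (destination, model) tuple. Illustrative sketch only.
    """
    if model_name.startswith(VIRTUAL_PREFIX):
        # Virtual model: forward to the remote API provider,
        # assumed here to use the name with the prefix stripped.
        return ("remote", model_name[len(VIRTUAL_PREFIX):])
    # Non-virtual model: pass through to the real Ollama source
    return ("ollama", model_name)
```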

🎨 Special Features

DeepSeek Reasoning Process Display

The project includes special handling for DeepSeek, streaming the model's complete reasoning process so you can see how it arrives at an answer.

Smart Configuration System

Supports complex configuration priority management, allowing flexible use of different configurations in different environments.

πŸ“Š Logging System

The project provides a powerful logging system:

  • Structured logging
  • Detailed tracking of requests and responses
  • Support for different log levels
  • Dual output to file and console

🚧 Future Development Plans

  • πŸ–₯️ GUI Interface: Graphical configuration and management interface
  • πŸ”§ System Service: Support running as system service
  • ☁️ Serverless Adaptation: Adapt to serverless platforms like function computing
  • 🌐 Forwarding Configuration Station: Create server for user_id-based configuration mapping

🀝 Contributing

Welcome to submit Issues and Pull Requests to help improve the project!

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

ℹ️ Support

If you encounter problems during use, please:

  1. Check if the configuration file is correct
  2. Check the log file for error information
  3. Submit an Issue describing the problem

Enjoy the freedom and convenience brought by CopilotMask! πŸŽ‰

About

Freely define your Copilot's BASE_URL from now on.
