An OpenAI-compatible proxy server that forwards requests to GitHub Copilot via the Copilot SDK.
This proxy lets local applications that expect an OpenAI-compatible API use GitHub Copilot as their backend. Applications connect to the proxy without needing API keys; the proxy handles authentication with GitHub Copilot.
Requirements:

- Node.js >= 18.0.0
- GitHub Copilot CLI installed and on your PATH (or set `COPILOT_CLI_PATH` to point at the executable)
- An active GitHub Copilot subscription
Install dependencies:

```bash
npm install
```

Copy the example environment file and customize as needed:
```bash
cp .env.example .env
```

| Variable | Default | Description |
|---|---|---|
| `COPILOT_PROXY_PORT` | `3001` | Port the proxy listens on |
| `COPILOT_PROXY_DEFAULT_MODEL` | `gpt-5.2` | Default model when not specified in the request |
| `COPILOT_CLI_PATH` | (system PATH) | Custom path to the Copilot CLI executable |
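For example, a minimal `.env` could look like the following (the CLI path shown is purely illustrative; adjust it for your system):

```ini
COPILOT_PROXY_PORT=3001
COPILOT_PROXY_DEFAULT_MODEL=gpt-5.2
# Only needed when the Copilot CLI is not on your PATH (example path):
# COPILOT_CLI_PATH=/usr/local/bin/copilot
```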
Start the server:
```bash
npm start
```

The server will be available at http://localhost:3001 (or your configured port).
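To confirm the proxy is up, you can hit the model listing endpoint. A quick sketch using only the Python standard library, assuming the default port and the OpenAI-style `/v1/models` path:

```python
import urllib.request

# Query the proxy's model listing to confirm it is running.
# Adjust the port if you changed COPILOT_PROXY_PORT.
with urllib.request.urlopen("http://localhost:3001/v1/models") as resp:
    print(resp.status, resp.read().decode())
```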
`POST /v1/chat/completions` is the OpenAI-compatible chat completions endpoint.
Request:

```json
{
  "model": "gpt-5.2",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false
}
```

Response:
```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1706300000,
  "model": "gpt-5.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
```

The proxy also exposes:

- `GET /v1/models`: list available models.
- `GET /v1/models/{model}`: get information about a specific model.
- A health check endpoint.
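Assuming the proxy follows the OpenAI paths listed above, the model endpoints can be queried with the OpenAI Python client. A minimal sketch:

```python
from openai import OpenAI

# Point the client at the proxy; any api_key value works.
client = OpenAI(base_url="http://localhost:3001/v1", api_key="not-needed")

# GET /v1/models: list every model the proxy advertises
for model in client.models.list():
    print(model.id)

# GET /v1/models/{model}: fetch metadata for one model
print(client.models.retrieve("gpt-5.2"))
```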
Set "stream": true in your request to receive Server-Sent Events (SSE) streaming responses, compatible with OpenAI's streaming format.
At the time of writing, there doesn't appear to be a documented programmatic way to list all available models via the Copilot SDK. Set the list of models supported in your organization by modifying the `AVAILABLE_MODELS` array in `index.js`.
Example requests with `curl`:

```bash
# Non-streaming
curl http://localhost:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Say hello!"}]
  }'

# Streaming
curl http://localhost:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Tell me a short story"}],
    "stream": true
  }'
```

And with the OpenAI Python client:

```python
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:3001/v1",
    api_key="not-needed"  # Any value works
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

License: MIT