High-performance proxy service that converts OpenAI API to Anthropic API compatible format. Allows developers to seamlessly call OpenAI models using existing Anthropic client code.
- ✅ Seamless Compatibility: Call OpenAI models using standard Anthropic clients
- ✅ Full Functionality: Supports text, tool calls, streaming responses, and more
- ✅ Intelligent Routing: Automatically selects the most suitable OpenAI model based on request content
- ✅ Hot Reload: Automatically reloads configuration file changes without restarting the service
- ✅ Structured Logging: Detailed request/response logs for debugging and monitoring
- ✅ Error Mapping: Comprehensive error handling and mapping mechanisms
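The actual routing policy lives inside the service, but as an illustration only, a selector over the model keys from the `models` section of the configuration (names here match the example config; the `select_model` helper and its thresholds are hypothetical, not the service's real code) might look like:

```python
# Hypothetical sketch of the "intelligent routing" idea: pick a configured
# model name based on simple request traits. Thresholds are illustrative.
def select_model(models: dict, *, token_count: int, thinking: bool = False) -> str:
    if thinking:
        return models["think"]         # complex reasoning tasks
    if token_count > 32_000:
        return models["long_context"]  # very long prompts
    if token_count < 500:
        return models["small"]         # lightweight requests
    return models["default"]

# Keys mirror the "models" section of config/settings.json
models = {
    "default": "Qwen/Qwen3-Coder",
    "small": "deepseek-ai/DeepSeek-V3-0324",
    "think": "deepseek-ai/DeepSeek-R1-0528",
    "long_context": "gemini-2.5-pro",
}

print(select_model(models, token_count=120))     # routes to the small model
print(select_model(models, token_count=50_000))  # routes to the long-context model
```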
- Python 3.11+
- uv (recommended package manager)
```bash
# Install dependencies using uv (recommended)
uv sync
```
- Copy the example configuration file:
```bash
cp config/example.json config/settings.json
```
- Edit `config/settings.json`:
```jsonc
{
  "openai": {
    "api_key": "your-openai-api-key-here", // Replace with your OpenAI API key
    "base_url": "https://api.openai.com/v1" // OpenAI API address
  },
  "api_key": "your-proxy-api-key-here", // API key for the proxy service
  // Other configurations...
}
```

```bash
# Development mode
uv run main.py --config config/settings.json

# Production mode
uv run main.py
```

```bash
# Build and start the service
docker-compose up --build

# Run in background
docker-compose up --build -d

# Stop the service
docker-compose down
```

The service will start at http://localhost:8000.
This project can be used with Claude Code for development and testing. To configure Claude Code to work with this proxy service, create a `.claude/settings.json` file with the following configuration:
```json
{
  "env": {
    "ANTHROPIC_API_KEY": "your-api-key",
    "ANTHROPIC_BASE_URL": "http://127.0.0.1:8000",
    "DISABLE_TELEMETRY": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  },
  "apiKeyHelper": "echo 'your-api-key'",
  "permissions": {
    "allow": [],
    "deny": []
  }
}
```

Configuration Notes:
- Replace `ANTHROPIC_API_KEY` with the proxy API key set in `config/settings.json`
- Replace `ANTHROPIC_BASE_URL` with the actual URL where this proxy service is running
- Update `apiKeyHelper` with the same API key from `config/settings.json`
```python
from anthropic import Anthropic

# Initialize client pointing to the proxy service
client = Anthropic(
    base_url="http://localhost:8000/v1",
    api_key="your-proxy-api-key-here"  # Use the api_key from the configuration file
)

# Send a message request
response = client.messages.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, GPT!"}
    ],
    max_tokens=1024
)

print(response.content[0].text)
```
```python
# Streaming response
stream = client.messages.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Tell me a story about AI"}
    ],
    max_tokens=1024,
    stream=True
)

for chunk in stream:
    if chunk.type == "content_block_delta":
        print(chunk.delta.text, end="", flush=True)
```
```python
# Tool calls
tools = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a specified city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["city"]
        }
    }
]

response = client.messages.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What's the weather like in Beijing now?"}
    ],
    tools=tools,
    tool_choice={"type": "auto"}
)
```

```
openai-to-claude/
├── src/
│   ├── api/            # API endpoints and middleware
│   ├── config/         # Configuration management
│   ├── core/           # Core business logic
│   │   ├── clients/    # HTTP clients
│   │   └── converters/ # Data format converters
│   ├── models/         # Pydantic data models
│   └── common/         # Common utilities (logging, token counting, etc.)
├── config/             # Configuration files
├── tests/              # Test suite
├── CLAUDE.md           # Claude Code project guide
└── pyproject.toml      # Project dependencies and configuration
```
- `CONFIG_PATH`: Configuration file path (default: `config/settings.json`)
- `LOG_LEVEL`: Log level (default: `INFO`)
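For illustration, the service could resolve these environment variables roughly as follows (the defaults match the documented ones; the `resolve_runtime_settings` helper name is hypothetical):

```python
import os

# Read the documented environment variables, falling back to the
# documented defaults when they are unset.
def resolve_runtime_settings() -> dict:
    return {
        "config_path": os.environ.get("CONFIG_PATH", "config/settings.json"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

print(resolve_runtime_settings())
```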
```json
{
  "openai": {
    "api_key": "your-openai-api-key-here",
    "base_url": "https://api.openai.com/v1"
  },
  "server": {
    "host": "0.0.0.0",
    "port": 8000
  },
  "api_key": "your-proxy-api-key-here",
  "logging": {
    "level": "INFO"
  },
  "models": {
    "default": "Qwen/Qwen3-Coder",
    "small": "deepseek-ai/DeepSeek-V3-0324",
    "think": "deepseek-ai/DeepSeek-R1-0528",
    "long_context": "gemini-2.5-pro",
    "web_search": "gemini-2.5-flash"
  },
  "parameter_overrides": {
    "max_tokens": null,
    "temperature": null,
    "top_p": null,
    "top_k": null
  }
}
```

- `openai`: OpenAI API configuration
  - `api_key`: OpenAI API key for accessing OpenAI services
  - `base_url`: OpenAI API base URL, default is `https://api.openai.com/v1`
- `server`: Server configuration
  - `host`: Service listening host address, default is `0.0.0.0` (listen on all network interfaces)
  - `port`: Service listening port, default is `8000`
- `api_key`: API key for the proxy service, used to verify requests to the `/v1/messages` endpoint
- `logging`: Logging configuration
  - `level`: Log level, one of `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`; default is `INFO`
- `models`: Model configuration, defines model selection for different usage scenarios
  - `default`: Default model for general requests
  - `small`: Lightweight model for simple tasks
  - `think`: Deep-thinking model for complex reasoning tasks
  - `long_context`: Long-context model for handling long text
  - `web_search`: Web search model; currently supports Gemini
- `parameter_overrides`: Parameter override configuration, allows administrators to set model parameter values in the configuration file that override client requests
  - `max_tokens`: Maximum token count override; when set, overrides the `max_tokens` parameter in client requests
  - `temperature`: Temperature override, controls the randomness of output, range 0.0-2.0
  - `top_p`: top_p sampling override, controls the probability threshold of candidate tokens, range 0.0-1.0
  - `top_k`: top_k sampling override, controls the number of candidate tokens, range >= 0
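The override semantics above can be sketched in a few lines: any non-null override replaces the corresponding value from the client request, while null overrides leave the request untouched. (The `apply_overrides` helper is illustrative, not the service's actual code.)

```python
# Merge parameter_overrides into a client request's parameters:
# non-null override values win; null (None) values are ignored.
def apply_overrides(request_params: dict, overrides: dict) -> dict:
    merged = dict(request_params)
    for key, value in overrides.items():
        if value is not None:
            merged[key] = value
    return merged

overrides = {"max_tokens": 2048, "temperature": None, "top_p": None, "top_k": None}
params = {"max_tokens": 1024, "temperature": 0.7}

# max_tokens becomes 2048; temperature stays 0.7 because its override is null
print(apply_overrides(params, overrides))
```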
```bash
# Run all tests
pytest

# Run unit tests
pytest tests/unit

# Run integration tests
pytest tests/integration

# Generate coverage report
pytest --cov=src --cov-report=html
```

- `POST /v1/messages` - Anthropic Messages API
- `GET /health` - Health check endpoint
- `GET /` - Welcome page
- API key authentication
- Request rate limiting (planned)
- Input validation and sanitization
- Structured logging
- Request/response time monitoring
- Memory usage tracking
- Error rate statistics
Issues and Pull Requests are welcome!
This project is licensed under the MIT License - see the LICENSE file for details.
- claude-code-router - An excellent project; much of this project draws on it
- FastAPI - Modern high-performance web framework
- Anthropic - Claude AI models
- OpenAI - OpenAI API specification