This guide walks you through adding a new LLM provider to AI Paddle Battle. Most providers that use an OpenAI-compatible chat completions API can be added in under 10 minutes.
There are six places you need to touch:

- `server/src/models.ts` -- register the provider and its models
- `server/src/adapters/` -- create an adapter class
- `server/src/adapters/index.ts` -- wire the adapter into the factory
- `server/src/pricing.ts` -- add per-token pricing data
- `.env.example` -- document the API key environment variable
- `client/src/components/SetupScreen.tsx` -- make the provider selectable in the UI
Open `server/src/models.ts` and add a new entry to the `PROVIDERS` array:

```ts
{
  id: 'myprovider',
  displayName: 'My Provider',
  baseUrl: 'https://api.myprovider.com/v1',
  requiresApiKey: true,
  isOpenAICompatible: true, // set to false if it uses a custom API format
  models: [
    { id: 'my-model-large', displayName: 'My Model Large' },
    { id: 'my-model-small', displayName: 'My Model Small' },
  ],
},
```

If the provider supports the standard `/v1/chat/completions` endpoint with Bearer token auth, create a thin wrapper. This is the entire file:
```ts
// server/src/adapters/myprovider.ts
import { OpenAICompatibleAdapter } from './openai-compatible.js';

export class MyProviderAdapter extends OpenAICompatibleAdapter {
  constructor(model: string, apiKey: string) {
    super('myprovider', model, apiKey, 'https://api.myprovider.com/v1');
  }
}
```

That is it -- six lines. The `OpenAICompatibleAdapter` base class handles `callLLM()`, `testConnection()`, request formatting, response parsing, and token usage tracking.

For a real example, see `server/src/adapters/deepseek.ts`.
If the provider has a non-standard API (like Anthropic or Google), extend `BaseAdapter` directly and implement two methods:

```ts
// server/src/adapters/myprovider.ts
import { BaseAdapter } from './base.js';

export class MyProviderAdapter extends BaseAdapter {
  provider = 'myprovider';

  constructor(model: string, apiKey: string) {
    super(model, apiKey, 'https://api.myprovider.com/v1');
  }

  async callLLM(
    system: string,
    user: string
  ): Promise<{ content: string; usage?: { input: number; output: number } }> {
    // Make the API call using fetch()
    // Parse the response and return { content, usage }
  }

  async testConnection(
    apiKey: string
  ): Promise<{ success: boolean; error?: string }> {
    // Send a minimal request to verify the key works
    // Return { success: true } or { success: false, error: '...' }
  }
}
```

For real examples, see `server/src/adapters/anthropic.ts` or `server/src/adapters/google.ts`.
Open `server/src/adapters/index.ts` and:

1. Import your adapter:

   ```ts
   import { MyProviderAdapter } from './myprovider.js';
   ```

2. Add a case to the `createAdapter` switch statement:

   ```ts
   case 'myprovider':
     return new MyProviderAdapter(model, apiKey);
   ```
Open `server/src/pricing.ts` and add cost-per-token entries for each model so the stats panel can estimate match cost.
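The exact structure of `server/src/pricing.ts` isn't shown here, so the shape below is a plausible sketch, not the project's actual schema: per-million-token prices keyed by model ID, plus the arithmetic the stats panel would use to turn token counts into dollars.

```typescript
// Hypothetical pricing entry shape -- match whatever the existing
// entries in server/src/pricing.ts actually use.
interface ModelPricing {
  inputPerMillion: number;  // USD per 1M input tokens
  outputPerMillion: number; // USD per 1M output tokens
}

// Prices here are placeholders, not real rates for any provider.
const MYPROVIDER_PRICING: Record<string, ModelPricing> = {
  'my-model-large': { inputPerMillion: 3.0, outputPerMillion: 15.0 },
  'my-model-small': { inputPerMillion: 0.25, outputPerMillion: 1.25 },
};

// Estimated match cost from total token counts.
function estimateCostUSD(
  pricing: ModelPricing,
  inputTokens: number,
  outputTokens: number
): number {
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMillion +
    (outputTokens / 1_000_000) * pricing.outputPerMillion
  );
}
```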
Add the environment variable to `.env.example`:

```
MYPROVIDER_API_KEY=
```
The provider list in the client is currently hardcoded in `client/src/components/SetupScreen.tsx`. Add your new provider there so it appears in the setup dropdown.
Alternatively, if the client fetches the provider list from the server's `/api/providers` endpoint, it will pick up the new provider automatically and no client change is needed.
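The hardcoded list's exact shape isn't reproduced here, so this is an illustrative guess at what an entry might look like; mirror the existing entries in `SetupScreen.tsx` rather than this sketch.

```typescript
// Hypothetical client-side provider entry -- field names are assumptions;
// copy the shape of the entries already present in SetupScreen.tsx.
const CLIENT_PROVIDERS = [
  // ...existing providers...
  {
    id: 'myprovider',
    displayName: 'My Provider',
    models: [
      { id: 'my-model-large', displayName: 'My Model Large' },
      { id: 'my-model-small', displayName: 'My Model Small' },
    ],
  },
];
```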
- Start the dev server:

  ```
  npm run dev
  ```

- Select your provider in the game UI and enter an API key.
- Start a match and verify:
  - The paddle moves (LLM move calls are working).
  - Trash talk appears (LLM trash talk calls are working).
  - Token counts show up in the stats panel.
  - The "Test Connection" button in the setup screen works.
- Keep `max_tokens` low (the base adapter uses 60) to minimize latency.
- The game loop has a 2-second timeout per LLM call. If a provider is consistently slow, you may need to increase `this.timeoutMs` in your adapter constructor.
- If the provider requires custom headers beyond `Authorization: Bearer`, override `callLLM()` even for OpenAI-compatible providers.
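The last two tips above can be sketched together. `BaseAdapter`'s real fields may differ, so a stand-in base class is defined here purely for illustration; the pattern to copy is raising `timeoutMs` in the constructor and layering a provider-specific header on top of the standard ones.

```typescript
// Stand-in for BaseAdapter, just enough to show the two overrides.
class IllustrativeBaseAdapter {
  timeoutMs = 2000; // matches the game loop's 2-second default

  constructor(
    protected model: string,
    protected apiKey: string,
    protected baseUrl: string
  ) {}

  headers(): Record<string, string> {
    return {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${this.apiKey}`,
    };
  }
}

class SlowProviderAdapter extends IllustrativeBaseAdapter {
  constructor(model: string, apiKey: string) {
    // URL is a placeholder.
    super(model, apiKey, 'https://api.slowprovider.example/v1');
    this.timeoutMs = 5000; // give a consistently slow provider more headroom
  }

  // Add a hypothetical provider-specific header on top of the standard ones.
  headers(): Record<string, string> {
    return { ...super.headers(), 'X-Slow-Provider-Version': '2024-01-01' };
  }
}
```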