Summary
Add native support for the top LangChain-supported LLM providers beyond OpenAI. Currently Chatfield requires the `openai:` prefix and only uses `ChatOpenAI`, forcing users onto proxy solutions (LiteLLM, OpenRouter) to reach other providers.
Motivation
- LangChain already provides official integrations for major providers
- Users should be able to use Anthropic Claude, Google Gemini, etc. directly
- Simpler developer experience than setting up proxy infrastructure
- Better alignment with LangChain's multi-provider philosophy
Proposed LLM Support
Support the top 5 LangChain-supported providers:
- OpenAI - `openai:gpt-4o` (existing)
- Anthropic - `anthropic:claude-3-5-sonnet-20241022`
- Google - `google-genai:gemini-2.0-flash-exp`
- Azure OpenAI - `azure-openai:gpt-4o`
- Groq - `groq:llama-3.3-70b-versatile`
Implementation Plan
1. Provider Detection and Instantiation
Update Interviewer.__init__() to detect provider prefix and instantiate appropriate chat model:
Python (interviewer.py):
```python
from langchain_openai import ChatOpenAI, AzureChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_groq import ChatGroq

# Parse provider and model from llm_id
if ':' not in llm_id:
    raise ValueError(f'LLM ID must have format "provider:model", got {llm_id!r}')
provider, model_name = llm_id.split(':', 1)

# Instantiate the appropriate chat model
if provider == 'openai':
    self.llm = ChatOpenAI(
        model=model_name,
        temperature=temperature,
        base_url=base_url,
        api_key=api_key,
    )
elif provider == 'anthropic':
    self.llm = ChatAnthropic(
        model=model_name,
        temperature=temperature,
        base_url=base_url,
        api_key=api_key,
    )
elif provider == 'google-genai':
    self.llm = ChatGoogleGenerativeAI(
        model=model_name,
        temperature=temperature,
        api_key=api_key,
    )
elif provider == 'azure-openai':
    self.llm = AzureChatOpenAI(
        model=model_name,
        temperature=temperature,
        azure_endpoint=base_url,
        api_key=api_key,
    )
elif provider == 'groq':
    self.llm = ChatGroq(
        model=model_name,
        temperature=temperature,
        base_url=base_url,
        api_key=api_key,
    )
else:
    raise ValueError(
        f'Unsupported LLM provider: {provider!r}. '
        f'Supported providers: openai, anthropic, google-genai, azure-openai, groq'
    )
```
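As a design note (a sketch only, not part of the proposal): the if/elif chain could be collapsed into a dispatch table, which keeps the supported-provider list and the error message from drifting apart as providers are added. The names reuse the snippet above.

```python
from typing import Any, Callable

from langchain_openai import ChatOpenAI, AzureChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_groq import ChatGroq

# Sketch: table-driven dispatch. Each factory maps the shared arguments
# onto that provider's keyword names (e.g. Azure takes azure_endpoint).
PROVIDER_FACTORIES: dict[str, Callable[..., Any]] = {
    'openai':       lambda m, t, key, url: ChatOpenAI(model=m, temperature=t, api_key=key, base_url=url),
    'anthropic':    lambda m, t, key, url: ChatAnthropic(model=m, temperature=t, api_key=key, base_url=url),
    'google-genai': lambda m, t, key, url: ChatGoogleGenerativeAI(model=m, temperature=t, api_key=key),
    'azure-openai': lambda m, t, key, url: AzureChatOpenAI(model=m, temperature=t, api_key=key, azure_endpoint=url),
    'groq':         lambda m, t, key, url: ChatGroq(model=m, temperature=t, api_key=key, base_url=url),
}

factory = PROVIDER_FACTORIES.get(provider)
if factory is None:
    supported = ', '.join(sorted(PROVIDER_FACTORIES))
    raise ValueError(f'Unsupported LLM provider: {provider!r}. Supported providers: {supported}')
self.llm = factory(model_name, temperature, api_key, base_url)
```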
TypeScript (interviewer.ts):
```typescript
import { ChatOpenAI, AzureChatOpenAI } from '@langchain/openai'
import { ChatAnthropic } from '@langchain/anthropic'
import { ChatGoogleGenerativeAI } from '@langchain/google-genai'
import { ChatGroq } from '@langchain/groq'

// Parse provider and model from llmId, splitting on the first ':' only.
// (String.prototype.split's limit argument would drop the remainder,
// so a model name containing ':' would be silently truncated.)
const sep = llmId.indexOf(':')
if (sep === -1) {
  throw new Error(`LLM ID must have format "provider:model", got ${llmId}`)
}
const provider = llmId.slice(0, sep)
const modelName = llmId.slice(sep + 1)

// Instantiate the appropriate chat model
if (provider === 'openai') {
  this.llm = new ChatOpenAI({
    model: modelName,
    temperature: temperature,
    apiKey: apiKey,
    configuration: { baseURL: baseUrl }
  })
} else if (provider === 'anthropic') {
  this.llm = new ChatAnthropic({
    model: modelName,
    temperature: temperature,
    apiKey: apiKey,
    clientOptions: { baseURL: baseUrl }
  })
} else if (provider === 'google-genai') {
  this.llm = new ChatGoogleGenerativeAI({
    model: modelName,
    temperature: temperature,
    apiKey: apiKey,
  })
} else if (provider === 'azure-openai') {
  this.llm = new AzureChatOpenAI({
    model: modelName,
    temperature: temperature,
    azureOpenAIApiKey: apiKey,
    azureOpenAIEndpoint: baseUrl,
  })
} else if (provider === 'groq') {
  this.llm = new ChatGroq({
    model: modelName,
    temperature: temperature,
    apiKey: apiKey,
  })
} else {
  throw new Error(
    `Unsupported LLM provider: ${provider}. ` +
    `Supported providers: openai, anthropic, google-genai, azure-openai, groq`
  )
}
```
2. Environment Variable Support
Add automatic API key detection per provider:
```python
# Python
import os
from typing import Optional

def get_api_key_for_provider(provider: str, explicit_key: Optional[str]) -> Optional[str]:
    # An explicitly passed key always wins; otherwise fall back to the
    # provider's conventional environment variable.
    if explicit_key:
        return explicit_key
    env_vars = {
        'openai': 'OPENAI_API_KEY',
        'anthropic': 'ANTHROPIC_API_KEY',
        'google-genai': 'GOOGLE_API_KEY',
        'azure-openai': 'AZURE_OPENAI_API_KEY',
        'groq': 'GROQ_API_KEY',
    }
    env_var = env_vars.get(provider)
    return os.environ.get(env_var) if env_var else None
```
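To make the intended precedence concrete (explicit key first, then the environment variable, then `None`), a quick usage check; the key values here are made up:

```python
import os

os.environ['GROQ_API_KEY'] = 'gsk-example'  # simulated environment
assert get_api_key_for_provider('groq', 'explicit-key') == 'explicit-key'  # explicit key wins
assert get_api_key_for_provider('groq', None) == 'gsk-example'             # env var fallback
assert get_api_key_for_provider('mystery', None) is None                   # unmapped provider
```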
3. Update Security Detection
Extend the `DANGEROUS_ENDPOINTS` list:
```python
DANGEROUS_ENDPOINTS = [
    'api.openai.com',
    'api.anthropic.com',
    'generativelanguage.googleapis.com',
    'groq.com',
]
```
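One caveat worth flagging: Azure OpenAI endpoints are per-resource (`<name>.openai.azure.com`), and `groq.com` above is a bare domain, so an exact-string comparison would miss both. A hostname-suffix check is a safer fit. This is a sketch only; the `openai.azure.com` entry and the function name are my additions:

```python
from urllib.parse import urlparse

# Sketch: suffix matching. 'openai.azure.com' is included because Azure
# endpoints are per-resource and would never match an exact-string list.
DANGEROUS_ENDPOINT_SUFFIXES = [
    'api.openai.com',
    'api.anthropic.com',
    'generativelanguage.googleapis.com',
    'groq.com',
    'openai.azure.com',
]

def is_dangerous_endpoint(base_url: str) -> bool:
    host = urlparse(base_url).hostname or ''
    return any(host == s or host.endswith('.' + s) for s in DANGEROUS_ENDPOINT_SUFFIXES)
```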
4. Dependencies
Python (pyproject.toml):
```toml
dependencies = [
    "langchain-openai>=0.3.29",
    "langchain-anthropic>=0.3.4",
    "langchain-google-genai>=2.1.10",
    "langchain-groq>=0.2.5",
]
```
TypeScript (package.json):
```json
{
  "dependencies": {
    "@langchain/openai": "^0.6.9",
    "@langchain/anthropic": "^0.4.7",
    "@langchain/google-genai": "^0.2.13",
    "@langchain/groq": "^0.2.11"
  }
}
```
Usage Examples
```python
# Python
from chatfield import Interviewer, chatfield

interview = chatfield().field("name").build()

# Anthropic Claude
interviewer = Interviewer(interview, llm_id='anthropic:claude-3-5-sonnet-20241022')

# Google Gemini
interviewer = Interviewer(interview, llm_id='google-genai:gemini-2.0-flash-exp')

# Groq
interviewer = Interviewer(interview, llm_id='groq:llama-3.3-70b-versatile')
```
```typescript
// TypeScript
import { Interviewer, chatfield } from '@chatfield/core'

const interview = chatfield().field('name').build()

// Anthropic Claude
const claude = new Interviewer(interview, {
  llmId: 'anthropic:claude-3-5-sonnet-20241022'
})

// Google Gemini
const gemini = new Interviewer(interview, {
  llmId: 'google-genai:gemini-2.0-flash-exp'
})

// Groq
const groq = new Interviewer(interview, {
  llmId: 'groq:llama-3.3-70b-versatile'
})
```
Testing Strategy
- Add provider-specific test fixtures with mock LLMs
- Test provider detection and instantiation logic (see the sketch after this list)
- Test environment variable fallback for each provider
- Test error messages for unsupported providers
- Add integration tests behind `requires_api_key` markers for each provider
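A minimal sketch of the unit-test side, assuming pytest and that provider validation happens in `__init__` as proposed; the import path for `get_api_key_for_provider` is a guess:

```python
import pytest
from chatfield import Interviewer, chatfield
from chatfield.interviewer import get_api_key_for_provider  # assumed module path

def test_unsupported_provider_raises():
    interview = chatfield().field('name').build()
    # Validation is proposed to run in __init__, before any network call.
    with pytest.raises(ValueError, match='Unsupported LLM provider'):
        Interviewer(interview, llm_id='not-a-provider:some-model')

def test_env_var_fallback_for_groq(monkeypatch):
    monkeypatch.setenv('GROQ_API_KEY', 'gsk-test')
    assert get_api_key_for_provider('groq', None) == 'gsk-test'
    assert get_api_key_for_provider('groq', 'explicit') == 'explicit'  # explicit key wins
```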
Documentation Updates
- Update `Api_Configuration.md` with all provider examples
- Update README with multi-provider support
- Add troubleshooting guide for provider-specific issues
- Document model name formats per provider
Related Issues
- Add OpenRouter support for flexible LLM backend selection #40 (OpenRouter support - complementary gateway approach)
- Add open-source model support via LiteLLM and local hosting #58 (LiteLLM support - complementary proxy approach)