llm-kit-pro is a unified, async-first Python toolkit for interacting with multiple Large Language Model (LLM) providers through a consistent, provider-agnostic API.
It is designed for developers who need to switch between providers (OpenAI, Gemini, Anthropic/Bedrock) without rewriting their core application logic, with a heavy emphasis on structured data and multimodal inputs.
- Unified API: One interface for OpenAI, Gemini, Anthropic, and AWS Bedrock.
- Pydantic-First Structured Output: Pass Pydantic models directly to get validated, type-safe dictionaries back.
- Native "Strict Mode": Automatically handles OpenAI's Structured Outputs requirements.
- Multimodal Inputs: First-class support for PDF, PNG, JPEG, and Text files across all supported providers.
- Async-First: Built from the ground up for high-performance asynchronous Python applications.
- Provider-Agnostic Inputs: Use `LLMFile` to handle different file types without worrying about provider-specific formatting.
- Universal File Loader: Load files from local paths or URLs with automatic MIME type detection.
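Since the toolkit is async-first, every `generate_*` call is a coroutine and must run inside an event loop. A minimal driver looks like the sketch below (client construction is elided; use any provider client from the quickstart):

```python
import asyncio

async def main() -> None:
    # Construct any provider client here, e.g. OpenAIClient(...) as shown
    # in the quickstart, then await its methods:
    #     text = await client.generate_text(prompt="Hello!")
    pass

if __name__ == "__main__":
    asyncio.run(main())  # bridge from synchronous code into the async API
```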
Install the base package:

```bash
pip install llm-kit-pro
```

Install with specific provider support:

```bash
# For OpenAI
pip install "llm-kit-pro[openai]"

# For Google Gemini
pip install "llm-kit-pro[gemini]"

# For Anthropic
pip install "llm-kit-pro[anthropic]"

# For AWS Bedrock (Anthropic/Llama/etc.)
pip install "llm-kit-pro[bedrock]"
```

```python
from llm_kit_pro.providers.openai import OpenAIClient
from llm_kit_pro.providers.openai.config import OpenAIConfig

client = OpenAIClient(OpenAIConfig(
    api_key="your-key",
    model="gpt-4o-mini"
))

# Inside an async function:
text = await client.generate_text(
    prompt="Explain quantum entanglement like I'm five."
)
print(text)
```

Instead of messy regex or manual JSON parsing, define your schema as a Pydantic model. llm-kit-pro handles the schema injection, strict mode enforcement, and final validation.
```python
from pydantic import BaseModel

from llm_kit_pro.providers.gemini import GeminiClient
from llm_kit_pro.providers.gemini.config import GeminiConfig

class MovieReview(BaseModel):
    title: str
    rating: int
    summary: str
    sentiment: str

client = GeminiClient(GeminiConfig(
    api_key="your-key",
    model="gemini-2.5-flash"
))

# Pass the class directly!
data = await client.generate_json(
    prompt="Review the movie 'Inception'",
    schema=MovieReview
)
print(data["rating"])
```

llm-kit-pro treats files as first-class citizens: you can pass images or PDFs directly to the model.
```python
from pydantic import BaseModel

from llm_kit_pro.core.helpers import load_file

class Invoice(BaseModel):
    vendor: str
    amount: float
    due_date: str

# Load your file (from a local path or URL)
pdf = load_file("invoice.pdf")
# Or from a URL: pdf = await load_file_async("https://example.com/invoice.pdf")

# Extract structured data from the document
data = await client.generate_json(
    prompt="Extract the invoice details",
    schema=Invoice,
    files=[pdf]
)
```

Every provider implements this interface, ensuring your code remains portable.
- `generate_text(prompt, files=None, **kwargs) -> str`
- `generate_json(prompt, schema, files=None, **kwargs) -> dict`
A simple container for file-based inputs:

- `content`: raw bytes.
- `mime_type`: e.g. `application/pdf`, `image/jpeg`.
- `filename`: optional metadata.
Utilities to load files from local paths or URLs:
- `load_file(source) -> LLMFile`: universal loader (auto-detects local path vs. URL)
- `load_file_async(source) -> LLMFile`: async version
- `load_file_from_path(path) -> LLMFile`: load from the local filesystem
- `load_file_from_url(url) -> LLMFile`: download from a URL
See File Loader Guide for detailed documentation.
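The local-vs-URL and MIME auto-detection described above can be approximated with the standard library. This is a sketch of the idea, not llm-kit-pro's actual implementation:

```python
import mimetypes
from urllib.parse import urlparse

def is_url(source: str) -> bool:
    """Treat anything with an http(s) scheme as a remote source."""
    return urlparse(source).scheme in ("http", "https")

def guess_mime(source: str) -> str:
    """Guess from the file extension; fall back to a generic binary type."""
    mime, _ = mimetypes.guess_type(source)
    return mime or "application/octet-stream"

print(is_url("https://example.com/invoice.pdf"))  # True
print(guess_mime("invoice.pdf"))                  # application/pdf
```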
| Provider | Installation Extra | Status | Structured Output | Multimodal |
|---|---|---|---|---|
| OpenAI | `[openai]` | ✅ Stable | Native Strict Mode | Images |
| Google Gemini | `[gemini]` | ✅ Stable | Native JSON Schema | Images, PDF |
| Anthropic | `[anthropic]` | ✅ Stable | Native Tool Use | Images, PDF |
| AWS Bedrock | `[bedrock]` | ✅ Stable | Schema Injection | Images, PDF (Claude) |
🚧 Under active development
The public API is stabilizing. We are currently focusing on adding more Bedrock adapters (Llama 3, Titan).
MIT License