Commit 4c40049

Merge pull request #32 from AgentOptimizer/release-prep
Prepare package for PyPI release
2 parents: 38a189c + dc88f83

14 files changed: 1060 additions, 24 deletions

README.md (27 additions, 7 deletions)
````diff
@@ -1,6 +1,22 @@
-# AgentOpt
+<p align="center">
+  <img src="logo.png" alt="AgentOpt Logo" width="200">
+</p>
 
-**Find the right LLM models for your AI agents.**
+<h1 align="center">AgentOpt</h1>
+
+<p align="center">
+  <strong>Find the right LLM models for your AI agents.</strong>
+</p>
+
+<p align="center">
+  <a href="https://pypi.org/project/agentopt/"><img src="https://img.shields.io/pypi/v/agentopt?logo=python&logoColor=white&color=3776ab" alt="PyPI"></a>
+  <a href="https://pepy.tech/projects/agentopt"><img src="https://static.pepy.tech/badge/agentopt" alt="Downloads"></a>
+  <a href="https://github.com/AgentOptimizer/agentopt"><img src="https://img.shields.io/github/stars/AgentOptimizer/agentopt?style=flat&logo=github&color=181717" alt="GitHub stars"></a>
+  <a href="https://github.com/AgentOptimizer/agentopt/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-green?style=flat" alt="License"></a>
+  <a href="https://agentoptimizer.github.io/agentopt/"><img src="https://img.shields.io/badge/docs-website-blue?style=flat&logo=materialformkdocs&logoColor=white" alt="Docs"></a>
+</p>
+
+---
 
 Choosing the right LLM model is hard. Different models have different cost, performance, and latency tradeoffs. Should you use a thinking model? What effort level? What about different models for different steps of your agent pipeline? The combinatorial space explodes quickly — if your agent has 3 steps and you're considering 5 models per step, that's 125 combinations to evaluate.
 
@@ -10,7 +26,7 @@ AgentOpt solves this automatically. Give it your agent and a small evaluation da
 
 - **Non-intrusive**: Wrap your agent in a simple factory function — we take care of the rest. No framework adapters, no code changes to your agent internals.
 - **Framework-agnostic**: Works with OpenAI SDK, LangChain, LangGraph, CrewAI, LlamaIndex, AG2, or any framework that uses `httpx` for LLM calls.
-- **Smart search algorithms**: selection algorithms from brute force to advanced methods like Bayesian optimization, so you don't have to evaluate every combination.
+- **Smart search algorithms**: Selection algorithms from brute force to advanced methods like Bayesian optimization, so you don't have to evaluate every combination.
 - **Automatic tracking**: Transparently intercepts all LLM calls to measure token usage, latency, and cost — no manual instrumentation.
 - **Response caching**: Identical LLM calls are cached (in-memory + SQLite on disk), so re-running experiments is instant and free.
 
@@ -139,9 +155,9 @@ AgentOpt intercepts LLM calls at the `httpx` transport layer — the one chokepo
 
 ```
 your_agent(input)
-  └── framework internals (LangChain, CrewAI, etc.)
-      └── httpx.Client.send()  intercepted here
-          └── LLM API (OpenAI, Anthropic, etc.)
+  +-- framework internals (LangChain, CrewAI, etc.)
+      +-- httpx.Client.send()  <-- intercepted here
+          +-- LLM API (OpenAI, Anthropic, etc.)
 ```
 
 For each model combination, AgentOpt:
@@ -208,6 +224,10 @@ selector = BruteForceModelSelector(
 )
 ```
 
+## Documentation
+
+Full documentation is available at **[agentoptimizer.github.io/agentopt](https://agentoptimizer.github.io/agentopt/)**.
+
 ## Development
 
 ```bash
@@ -219,4 +239,4 @@ uv run pytest
 
 ## License
 
-MIT
+Apache 2.0
````

logo.png (binary, 1.15 MB)

pyproject.toml (19 additions, 2 deletions)
```diff
@@ -4,11 +4,28 @@ build-backend = "hatchling.build"
 
 [project]
 name = "agentopt"
-version = "0.3.0"
+version = "0.1.0"
 description = "Find the right LLM models for your AI agents — automatic model selection with accuracy/cost/latency tradeoffs"
 requires-python = ">=3.10"
-license = "MIT"
+license = "Apache-2.0"
 readme = "README.md"
+authors = [
+    { name = "AgentOptimizer Team" },
+]
+keywords = ["llm", "model-selection", "agents", "optimization", "pareto"]
+classifiers = [
+    "Development Status :: 3 - Alpha",
+    "Intended Audience :: Developers",
+    "Intended Audience :: Science/Research",
+    "License :: OSI Approved :: Apache Software License",
+    "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
+    "Programming Language :: Python :: 3.13",
+    "Topic :: Scientific/Engineering :: Artificial Intelligence",
+    "Typing :: Typed",
+]
 dependencies = [
     "httpx>=0.24.0",
     "pydantic>=2.0.0",
```

src/agentopt/__init__.py (4 additions, 0 deletions)
```diff
@@ -6,6 +6,8 @@
 LLM instances), then let a ModelSelector find the best combination.
 """
 
+__version__ = "0.1.0"
+
 from agentopt.proxy import CallRecord, LLMTracker
 
 from .base_models import AgentFn, Dataset, EvalFn, ModelsConfig
@@ -33,6 +35,8 @@
     pass
 
 __all__ = [
+    # Metadata
+    "__version__",
     # Core API
     "ModelSelector",
     "BaseModelSelector",
```
