Choosing the right LLM is hard. Different models have different cost, performance, and latency tradeoffs. Should you use a thinking model? What effort level? What about different models for different steps of your agent pipeline? The combinatorial space explodes quickly — if your agent has 3 steps and you're considering 5 models per step, that's 125 combinations to evaluate.
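That explosion is easy to see with a few lines of Python. The step and model names below are hypothetical, purely to illustrate the arithmetic:

```python
from itertools import product

# Hypothetical pipeline: three steps, five candidate models per step.
steps = ["plan", "retrieve", "answer"]
models = ["model-a", "model-b", "model-c", "model-d", "model-e"]

# Every assignment of one model per step is a distinct configuration.
configs = list(product(models, repeat=len(steps)))
print(len(configs))  # 5 ** 3 = 125
```

Each additional step multiplies the count by another factor of five, which is why exhaustive evaluation stops being practical almost immediately.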
AgentOpt solves this automatically. Give it your agent and a small evaluation dataset.
- **Non-intrusive**: Wrap your agent in a simple factory function — we take care of the rest. No framework adapters, no changes to your agent's internals.
- **Framework-agnostic**: Works with the OpenAI SDK, LangChain, LangGraph, CrewAI, LlamaIndex, AG2, or any framework that uses `httpx` for LLM calls.
- **Smart search algorithms**: Selection algorithms from brute force to advanced methods like Bayesian optimization, so you don't have to evaluate every combination.
- **Automatic tracking**: Transparently intercepts all LLM calls to measure token usage, latency, and cost — no manual instrumentation.
- **Response caching**: Identical LLM calls are cached (in-memory + SQLite on disk), so re-running experiments is instant and free.
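The caching bullet above can be sketched with the standard library. This is a minimal illustration of a two-tier (memory + SQLite) cache keyed on the request payload — the class and schema here are hypothetical, not AgentOpt's actual implementation:

```python
import hashlib
import json
import sqlite3


class ResponseCache:
    """Illustrative two-tier cache: a dict in memory, SQLite on disk."""

    def __init__(self, path=":memory:"):
        self._mem = {}
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)"
        )

    @staticmethod
    def _key(payload: dict) -> str:
        # Identical payloads (model, messages, params) hash to the same key,
        # regardless of dict ordering.
        blob = json.dumps(payload, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def get(self, payload: dict):
        key = self._key(payload)
        if key in self._mem:
            return self._mem[key]
        row = self._db.execute(
            "SELECT response FROM cache WHERE key = ?", (key,)
        ).fetchone()
        if row is not None:
            self._mem[key] = row[0]  # promote disk hit into memory
            return row[0]
        return None  # cache miss: caller performs the real LLM call

    def put(self, payload: dict, response: str) -> None:
        key = self._key(payload)
        self._mem[key] = response
        self._db.execute(
            "INSERT OR REPLACE INTO cache (key, response) VALUES (?, ?)",
            (key, response),
        )
        self._db.commit()
```

Because the key covers the full request payload, re-running an experiment with unchanged prompts and parameters hits the cache on every call.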
AgentOpt intercepts LLM calls at the `httpx` transport layer — the one chokepoint shared by the frameworks listed above.