diff --git a/README.md b/README.md
index 6c59aa86..4e223960 100644
--- a/README.md
+++ b/README.md
@@ -102,7 +102,7 @@ const finalCheckpoint = await run(
 ## Why Perstack?
 
-Agentic AI apps are tangled. Agent loop, prompts, tools, and orchestration are buried in application logic. Untestable until production. Opaque when things break. Impossible to hand off to the people who understand the domain.
+Building an agentic AI app — something like [Claude Code](https://github.com/anthropics/claude-code), [OpenHands](https://github.com/OpenHands/OpenHands), or [OpenClaw](https://github.com/openclaw/openclaw)? The agent loop, prompts, tools, and orchestration end up buried in your application code. You can't test the agent without deploying the app. You can't hand prompt tuning to domain experts without giving them access to the codebase. And when something breaks in production, you're reading raw LLM logs.
 
 | Problem | Perstack's Approach |
 | :--- | :--- |
@@ -112,7 +112,7 @@ Agentic AI apps are tangled. Agent loop, prompts, tools, and orchestration are b
 | **No sustained behavior** | Event-sourced execution with step-level checkpoints. Resume, replay, and diff across model or provider changes. |
 | **No real isolation** | Each Expert runs in its own context — workspace boundaries, environment sandboxing, tool whitelisting. Your platform enforces security at the infrastructure level. |
 
-8 LLM providers supported. Switch with a single config change. No vendor lock-in.
+8 LLM providers. Switch with one config change. No vendor lock-in.
 
 ## What to Read Next