Merged
4 changes: 2 additions & 2 deletions README.md
@@ -102,7 +102,7 @@ const finalCheckpoint = await run(

## Why Perstack?

-Agentic AI apps are tangled. Agent loop, prompts, tools, and orchestration are buried in application logic. Untestable until production. Opaque when things break. Impossible to hand off to the people who understand the domain.
+Building an agentic AI app like [Claude Code](https://github.com/anthropics/claude-code), [OpenHands](https://github.com/OpenHands/OpenHands), or [OpenClaw](https://github.com/openclaw/openclaw)? The agent loop, prompts, tools, and orchestration end up buried in your application code. You can't test the agent without deploying the app. You can't hand prompt tuning to domain experts without giving them access to the codebase. And when something breaks in production, you're reading raw LLM logs.

| Problem | Perstack's Approach |
| :--- | :--- |
@@ -112,7 +112,7 @@ Agentic AI apps are tangled. Agent loop, prompts, tools, and orchestration are b
| **No sustained behavior** | Event-sourced execution with step-level checkpoints. Resume, replay, and diff across model or provider changes. |
| **No real isolation** | Each Expert runs in its own context — workspace boundaries, environment sandboxing, tool whitelisting. Your platform enforces security at the infrastructure level. |
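The event-sourced row above describes a general pattern rather than anything Perstack-specific. As a minimal TypeScript sketch of step-level checkpoints and replay — where every name (`EventLog`, `StepEvent`, and so on) is hypothetical and not drawn from Perstack's API:

```typescript
// Illustrative sketch of event-sourced execution with step-level
// checkpoints. All names here are hypothetical, not Perstack's API.

type StepEvent = { step: number; output: string };

class EventLog {
  private events: StepEvent[] = [];

  append(e: StepEvent): void {
    this.events.push(e);
  }

  // A checkpoint is simply the log prefix up to a given step.
  checkpoint(step: number): StepEvent[] {
    return this.events.filter((e) => e.step <= step);
  }

  // Replay folds the recorded events back into a final state.
  replay(events: StepEvent[]): string {
    return events.map((e) => e.output).join(" -> ");
  }
}

const log = new EventLog();
["plan", "call-tool", "summarize"].forEach((out, i) =>
  log.append({ step: i, output: out })
);

// Resume from the checkpoint taken after step 1.
console.log(log.replay(log.checkpoint(1))); // plan -> call-tool
```

Under this model, resuming is just replaying a log prefix, and diffing two runs (say, across a model change) reduces to diffing their event logs.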

-8 LLM providers supported. Switch with a single config change. No vendor lock-in.
+8 LLM providers. Switch with one config change. No vendor lock-in.

## What to Read Next
