Define agent skills in YAML. Generate a production-ready agent service.
Anthropic · OpenAI · Gemini · Ollama · OpenRouter — E2B · Kubernetes sandboxes — AWS ECS Fargate deploy (beta)
1. Define — one YAML file describes your entire agent:

```yaml
# agent-bundle.yaml
name: personalized-recommend
model:
  provider: openrouter
  model: qwen/qwen3.5-397b-a17b
prompt:
  system: |
    You are a personalization assistant.
    Use the filesystem tools to read/write user profile memory.
    Recommend relevant products with concise reasons.
sandbox:
  provider: e2b
  e2b:
    template: base
  build:
    dockerfile: ./Dockerfile
mcp:
  servers:
    - name: fs
      transport: stdio
      command: npx
      args: ["@modelcontextprotocol/server-filesystem", "/data", "/memory"]
skills:
  - path: ./skills/update-memory
  - path: ./skills/recommend
```

2. Generate — Prisma-style typed codegen:
```bash
npx agent-bundle generate
```

```
node_modules/@agent-bundle/personalized-recommend/
├── index.ts      ← typed agent factory
├── types.ts      ← type definitions
├── bundle.json   ← resolved config snapshot
└── package.json  ← scoped package metadata
```
3. Use — import the generated package and embed it in your own service:

```ts
// server.ts — your own Hono / Express / Fastify app
import { PersonalizedRecommend } from "@agent-bundle/personalized-recommend";

const agent = await PersonalizedRecommend.init();

app.post("/api/events", async (c) => {
  const { userId, event } = await c.req.json();
  const result = await agent.respond([
    { role: "user", content: `Update profile for ${userId}: ${event}` },
  ]);
  return c.json({ userId, response: result.output });
});

app.get("/api/recommendations/:userId", async (c) => {
  const result = await agent.respond([
    { role: "user", content: `Recommend products for ${c.req.param("userId")}` },
  ]);
  return c.json(result.output);
});
```

No special runtime, no sidecar — it's a regular TypeScript import. Deploy it however you deploy your service.
Agent skills work great inside local coding agents. Deploying them to production means rewriting everything.
| | Without agent-bundle | With agent-bundle |
|---|---|---|
| Define | Scattered scripts and prompts | Single YAML config — version it, diff it, review it |
| Generate | Hand-wire LLM calls, no type safety | npx agent-bundle generate — typed factory you can import like any package |
| Ship | Rewrite into a service from scratch | Import the generated package into your own service — zero rewrite |
| Behave | Dev and prod diverge silently | Same sandbox runtime in dev, serve, and build |
```bash
git clone https://github.com/yujiachen-y/agent-bundle.git
cd agent-bundle/demo/personalized-recommend
npm run setup
```

```bash
npm install agent-bundle     # add to your project
npx agent-bundle generate    # generate typed client from agent-bundle.yaml
```

Then import and use in your code:

```ts
import { PersonalizedRecommend } from "@agent-bundle/personalized-recommend";
```

See Configuration Guide for all YAML options and Agent Skills for the skill format.
A Prisma-style `generate` command: your YAML config becomes a typed TypeScript package in `node_modules/@agent-bundle/<name>` — import it like any other dependency. When you use prompt variables, their names are checked at compile time.
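To make the compile-time check concrete, here is a self-contained sketch of how typed codegen can enforce prompt variable names, using TypeScript template literal types. This is an illustration of the technique only — it is not agent-bundle's actual generated code, and the `{{name}}` placeholder syntax is assumed for the example:

```typescript
// Extract every "{{name}}" placeholder from a prompt string at the type level.
type Vars<S extends string> =
  S extends `${string}{{${infer V}}}${infer Rest}` ? V | Vars<Rest> : never;

const template = "Recommend products for {{userId}} in {{locale}}" as const;

// render() only accepts an object whose keys exactly match the placeholders.
function render<S extends string>(tpl: S, vars: Record<Vars<S>, string>): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, k) => (vars as Record<string, string>)[k]);
}

const out = render(template, { userId: "u42", locale: "en-US" });
// render(template, { userId: "u42" });  // ← compile error: "locale" is missing
```

A generated package can bake its prompts in as `const` literals, so a renamed or deleted variable surfaces as a type error at the call site rather than a malformed prompt at runtime.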
dev, serve, and build share the same sandbox abstraction. What passes locally ships as-is. No environment-specific surprises.
Swap model providers or sandbox backends with one line of YAML. Run locally with Ollama — no API key needed:

```yaml
model:
  provider: ollama
  model: llama3
```

Or use any cloud provider:

```yaml
model:
  provider: anthropic   # or: openai, gemini, openrouter
  model: claude-sonnet-4-20250514
```

Pull skills and commands from a plugin marketplace and compose them into a single agent:
```yaml
skills:
  - path: ./skills/report-formatter
commands:
  - path: ./commands/quick-analysis
plugins:
  - marketplace: anthropics/knowledge-work-plugins
    name: finance
    skills:
      - variance-analysis
      - close-management
```

Local skills, local commands, and marketplace plugins — all merged into one typed bundle. See the financial-plugin demo.
npx agent-bundle dev opens a WebUI at localhost:3000 — watch the agent's file tree, preview generated files, inspect the full LLM transcript, and monitor token usage and tool call metrics. No more black boxes.
OpenTelemetry tracing and metrics are also supported for production deployments.
Agent crashes mid-run? Resume from its last conversation state.
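The underlying pattern is checkpoint-and-replay: persist the conversation after each turn, then reload it on restart instead of starting over. A minimal sketch of that idea (illustrative only — agent-bundle's actual checkpoint API may differ):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

type Message = { role: "user" | "assistant"; content: string };

const STATE_FILE = path.join(os.tmpdir(), "agent-state.json");

// Write the full transcript after every turn.
function saveState(messages: Message[]): void {
  fs.writeFileSync(STATE_FILE, JSON.stringify(messages));
}

// On startup, load the last persisted transcript (empty on a fresh run).
function loadState(): Message[] {
  return fs.existsSync(STATE_FILE)
    ? (JSON.parse(fs.readFileSync(STATE_FILE, "utf8")) as Message[])
    : [];
}

saveState([{ role: "user", content: "Recommend products for u42" }]);
const resumed = loadState(); // after a crash, replay this instead of restarting
```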
Connect to internal services via MCP servers. Even under prompt injection, the agent cannot exceed what the MCP server permits for that token.
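Least-privilege scoping happens at the MCP server boundary. For example, narrowing the filesystem server from the earlier config to a single allowed directory caps what even a fully hijacked agent can touch (sketch; reuses the `@modelcontextprotocol/server-filesystem` convention that its arguments list the allowed roots):

```yaml
mcp:
  servers:
    - name: fs
      transport: stdio
      command: npx
      # Only /memory is exposed — reads and writes outside it are
      # rejected by the MCP server, not by the prompt.
      args: ["@modelcontextprotocol/server-filesystem", "/memory"]
```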
```bash
npx agent-bundle deploy --target aws --secret API_KEY
```

Pushes to ECR and deploys to ECS Fargate — no Terraform or CloudFormation required. See Deploy docs for details.
The agent orchestrator routes between the LLM provider, sandbox, and MCP servers. All three interfaces share the same abstraction across dev, serve, and build modes.
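One way to picture that shared abstraction is three narrow interfaces behind one orchestrator. The interfaces below are hypothetical — a sketch of the architecture described above, not agent-bundle's actual source:

```typescript
// Hypothetical shapes of the three things the orchestrator routes between.
interface ModelProvider {
  complete(messages: { role: string; content: string }[]): Promise<string>;
}
interface Sandbox {
  exec(cmd: string): Promise<{ stdout: string; exitCode: number }>;
}
interface McpServer {
  callTool(name: string, args: unknown): Promise<unknown>;
}

class Orchestrator {
  constructor(
    readonly model: ModelProvider,
    readonly sandbox: Sandbox,
    readonly mcp: McpServer[],
  ) {}
}

// dev, serve, and build would each construct the same Orchestrator,
// swapping only the concrete Sandbox implementation (E2B, Kubernetes, …).
const orchestrator = new Orchestrator(
  { complete: async () => "ok" },                        // stub model
  { exec: async () => ({ stdout: "", exitCode: 0 }) },   // stub sandbox
  [],
);
```

Because the agent loop only sees these interfaces, a run that passes against the dev sandbox exercises the same code path as serve and build.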
| Demo | What It Shows |
|---|---|
| `code-formatter/e2b` | Config-only agent with E2B sandbox |
| `code-formatter/k8s` | Config-only agent with Kubernetes sandbox |
| `financial-plugin` | Plugins + custom commands |
| `data-analyst-e2b` | WebUI dev mode with data analysis |
| `pdf-to-deck` | PDF to presentation deck (skills.sh skills) |
| `personalized-recommend` | generate + custom server + MCP integration |
| `observability-demo` | OpenTelemetry tracing and metrics |
If agent-bundle is useful to you, consider giving it a star. It helps others discover the project.
- `deploy --target aws` GA — currently beta, stability not guaranteed
- GCP Cloud Run deploy target
- Pluggable agent loop engines — Claude Code, Codex via process bridge
- Fine-grained Docker sandbox isolation
Contributions welcome! See CONTRIBUTING.md for guidelines.




