Caution
Alpha software. This package is part of a broader effort by Ian Flores Siaca to develop proper AI infrastructure for the R ecosystem. It is under active development and should not be used in production until an official release is published. APIs may change without notice.
Graph-based multi-agent workflow orchestration for R. Built on ellmer for LLM chat and optionally securer for sandboxed code execution.
Use orchestr when a single ellmer chat isn't enough -- when you need multi-step reasoning (ReAct loops), parallel tool execution, supervisor-routed agent teams, or persistent memory across turns. If your workflow fits in one LLM call, use ellmer directly. If it needs orchestration, use orchestr.
orchestr is part of a 7-package ecosystem for building governed AI agents in R:
```
                 ┌─────────────┐
                 │   securer   │
                 └──────┬──────┘
        ┌───────────────┼───────────────┐
        │               │               │
 ┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼────────┐
 │ securetools │ │ secureguard │ │ securecontext │
 └──────┬──────┘ └──────┬──────┘ └──────┬────────┘
        └───────────────┼───────────────┘
               ┌────────▼─────────┐
               │ >>> orchestr <<< │
               └────────┬─────────┘
         ┌──────────────┴──────────────┐
         │                             │
  ┌──────▼──────┐               ┌──────▼──────┐
  │ securetrace │               │ securebench │
  └─────────────┘               └─────────────┘
```
orchestr is the orchestration hub that wires agents into workflows. It sits below the tool/guardrail/context layer and above the observability and benchmarking layers, coordinating agents that use securer for execution, secureguard for safety, and securecontext for memory.
| Package | Role |
|---|---|
| securer | Sandboxed R execution with tool-call IPC |
| securetools | Pre-built security-hardened tool definitions |
| secureguard | Input/code/output guardrails (injection, PII, secrets) |
| orchestr | Graph-based agent orchestration |
| securecontext | Document chunking, embeddings, RAG retrieval |
| securetrace | Structured tracing, token/cost accounting, JSONL export |
| securebench | Guardrail benchmarking with precision/recall/F1 metrics |
```r
# install.packages("pak")
pak::pak("ian-flores/orchestr")
```

orchestr uses ellmer for LLM access. You'll need an API key for your chosen provider:

```r
# For Anthropic (Claude)
Sys.setenv(ANTHROPIC_API_KEY = "your-key-here")

# For OpenAI
Sys.setenv(OPENAI_API_KEY = "your-key-here")
```

See ellmer's documentation for all supported providers.
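A missing key only surfaces at the first LLM call, so it can help to check up front. This is plain base R, not part of orchestr; the variable name matches the Anthropic example above:

```r
# Sys.getenv() returns "" for unset variables, and nzchar() is FALSE
# for the empty string, so this detects a key that was never set.
have_key <- nzchar(Sys.getenv("ANTHROPIC_API_KEY"))
if (!have_key) {
  message("ANTHROPIC_API_KEY is not set for this R session.")
}
```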
- Agent -- wraps an ellmer Chat with tools and optional secure execution
- GraphBuilder -- fluent API for constructing agent graphs with typed state
- Conditional routing -- route between agents based on state
- Human-in-the-loop -- interrupt graph execution for human approval
- State management -- typed state schemas with reducers, snapshots
- Memory & checkpointing -- persist state across invocations
- Visualization -- render graphs as Mermaid diagrams
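To show how these pieces might combine, here is a hypothetical sketch of conditional routing with GraphBuilder. The builder method names (new(), add_node(), add_conditional_edge()) are assumptions in a LangGraph style, not confirmed orchestr API; treat this as pseudocode until the builder interface is documented. agent(), chat_anthropic(), and compile(verbose = TRUE) are taken from the examples elsewhere in this README.

```r
library(orchestr)
library(ellmer)

# Hypothetical sketch -- builder method names are assumptions.
triage <- agent("triage", chat = chat_anthropic(
  system_prompt = "Classify the request as 'math' or 'writing'."
))
solver <- agent("solver", chat = chat_anthropic(
  system_prompt = "You are a math expert."
))
writer <- agent("writer", chat = chat_anthropic(
  system_prompt = "You are a writing expert."
))

# Route to "solver" or "writer" based on the triage agent's output.
graph <- GraphBuilder$new()$
  add_node(triage)$
  add_node(solver)$
  add_node(writer)$
  add_conditional_edge("triage", function(state) {
    if (grepl("math", state$category)) "solver" else "writer"
  })$
  compile(verbose = TRUE)
```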
react_graph() wraps a single agent with state management and checkpointing.
ellmer's Chat class handles tool-call loops internally: when an agent has
registered tools, they are executed automatically during $chat().
```r
library(orchestr)
library(ellmer)

analyst <- agent("analyst", chat = chat_anthropic(
  system_prompt = "You analyze data. Use your tools to compute results."
))

graph <- react_graph(analyst, max_iterations = 5)
result <- graph$invoke(list(messages = list("What is the mean of c(1,2,3,4,5)?")))
```

pipeline_graph() chains agents in sequence. Each agent processes the state
and passes it to the next. One LLM call per agent in the pipeline.
```r
drafter <- agent("drafter", chat = chat_anthropic(
  system_prompt = "Write a short draft on the given topic."
))

editor <- agent("editor", chat = chat_anthropic(
  system_prompt = "Improve the following draft."
))

pipeline <- pipeline_graph(drafter, editor)
result <- pipeline$invoke(list(messages = list("Benefits of open source.")))
```

supervisor_graph() creates a supervisor that routes tasks to specialized
workers. The supervisor decides which worker to invoke (or to finish) by
calling an automatically injected route tool.
```r
supervisor <- agent("supervisor", chat = chat_anthropic(
  system_prompt = "You coordinate workers to solve tasks."
))

math_worker <- agent("math", chat = chat_anthropic(
  system_prompt = "You are a math expert. Solve math problems step by step."
))

writing_worker <- agent("writing", chat = chat_anthropic(
  system_prompt = "You are a writing expert. Help with writing tasks."
))

graph <- supervisor_graph(
  supervisor = supervisor,
  workers = list(math = math_worker, writing = writing_worker),
  max_iterations = 10
)

result <- graph$invoke(list(messages = list("Calculate the integral of x^2 from 0 to 1.")))
```

Each node in a graph that calls an LLM makes an API request. Be mindful of costs:
- react_graph(): The agent runs once, with ellmer handling any tool calls internally
- pipeline_graph(): One LLM call per agent in the pipeline
- supervisor_graph(max_iterations = 50): Up to 50+ LLM calls (supervisor routing plus worker execution). Start with low max_iterations values
- Use verbose = TRUE when compiling graphs to see execution flow: compile(verbose = TRUE)
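To make the supervisor bound concrete: each iteration involves a routing call by the supervisor plus at most one worker call, so a rough worst-case estimate is two calls per iteration. The two-calls-per-iteration figure is our assumption for illustration; actual counts depend on routing decisions.

```r
# Rough worst-case LLM call count for supervisor_graph():
# one supervisor routing call plus one worker call per iteration.
worst_case_calls <- function(max_iterations, calls_per_iteration = 2) {
  max_iterations * calls_per_iteration
}

worst_case_calls(10)  # max_iterations used in the example above -> 20
worst_case_calls(50)  # why max_iterations = 50 can mean ~100 calls -> 100
```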
Contributions are welcome! Please file issues on GitHub and submit pull requests.
MIT