# Loom

A distributed runtime for orchestrating AI agents and workflows on the BEAM.

> Weave intelligent workflows on the BEAM.

Installation • Quick Start • Architecture • Features • Examples • Roadmap
Inspired by LangGraph and Temporal, Loom uses the Erlang/Elixir actor model to run AI workflows as resilient, distributed processes.
Loom doesn't replace your LLM library — it provides the runtime and orchestration layer for running agent workflows reliably at scale.
- 🧵 Actor-based AI agents — Each agent is a BEAM process with supervision
- 🔀 DAG workflow orchestration — Define complex pipelines as directed acyclic graphs
- ⚡ Parallel execution — Independent steps run concurrently, automatically
- 🛡️ Fault-tolerant execution — OTP supervisors, retries, and graceful error handling
- 🌐 Distributed runtime — Schedule tasks across multiple BEAM nodes
- 📡 Telemetry built-in — Observable by default with `:telemetry` events
- 🔌 Pluggable LLM providers — OpenAI, Anthropic, Ollama out of the box
## Installation

Add `loom` to your `mix.exs`:

```elixir
def deps do
  [
    {:loom, "~> 0.1.0"}
  ]
end
```

## Quick Start

```elixir
defmodule ResearchAgent do
  use Loom.Agent

  @impl true
  def run(input, _opts) do
    Loom.LLM.chat("Research: #{input}")
  end
end

defmodule SummaryAgent do
  use Loom.Agent

  @impl true
  def run(%{deps: %{research: data}}, _opts) do
    Loom.LLM.chat("Summarize: #{data}")
  end
end

workflow =
  Loom.Workflow.new("research_pipeline")
  |> Loom.Workflow.step(:research, ResearchAgent)
  |> Loom.Workflow.step(:summarize, SummaryAgent, deps: [:research])

{:ok, results} = Loom.run(workflow, "What is the BEAM?")
results[:research]  # => "The BEAM is..."
results[:summarize] # => "In summary..."
```

## Architecture

```
                 User
                   │
                   ▼
            ┌─────────────┐
            │  Workflow   │  DAG definition
            └──────┬──────┘
                   │
         ┌─────────┴─────────┐
         ▼                   ▼
  ┌─────────────┐     ┌─────────────┐
  │  Scheduler  │     │ State Store │
  └──────┬──────┘     └─────────────┘
         │
         ▼
  ┌─────────────┐
  │  Executor   │  Parallel group execution
  └──────┬──────┘
         │
    ┌────┼────┐
    ▼    ▼    ▼
  ┌───┐┌───┐┌───┐
  │ A ││ A ││ A │  Agent Pool (BEAM processes)
  └─┬─┘└─┬─┘└─┬─┘
    │    │    │
    ▼    ▼    ▼
   LLM Tools APIs
```
| Concept | BEAM Mapping | Description |
|---|---|---|
| Agent | Process | Autonomous unit of work with supervision |
| Workflow | DAG | Directed acyclic graph of agent steps |
| Scheduler | GenServer | Queue-based workflow dispatcher |
| Executor | Task.async_stream | Parallel step execution engine |
| Runtime | Node/Distribution | Multi-node task distribution |
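The Executor row above is the heart of the parallelism story: a group of ready steps is fanned out with `Task.async_stream/3`. A minimal sketch of that idea in plain Elixir (`GroupRunner` and the inline step functions are illustrative, not Loom's internal code):

```elixir
defmodule GroupRunner do
  # Fan a group of independent steps out across BEAM processes and
  # collect their results by name. Each step is a {name, fun} pair.
  def run_group(steps, input) do
    steps
    |> Task.async_stream(fn {name, fun} -> {name, fun.(input)} end,
      timeout: 5_000,
      max_concurrency: System.schedulers_online()
    )
    |> Enum.map(fn {:ok, pair} -> pair end)
    |> Map.new()
  end
end

steps = [
  web_search: fn q -> "web results for #{q}" end,
  wiki_search: fn q -> "wiki results for #{q}" end
]

results = GroupRunner.run_group(steps, "BEAM")
results[:web_search] # => "web results for BEAM"
```

`Task.async_stream/3` gives back-pressure for free: `max_concurrency` caps how many steps run at once, which is exactly the knob a scheduler needs.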
## Features

### Parallel Execution

Independent steps execute concurrently — no configuration needed:

```elixir
workflow =
  Loom.Workflow.new("parallel_search")
  |> Loom.Workflow.step(:web_search, WebSearchAgent)
  |> Loom.Workflow.step(:wiki_search, WikiSearchAgent)
  |> Loom.Workflow.step(:combine, CombineAgent, deps: [:web_search, :wiki_search])
```

Execution groups are computed automatically:

```
Group 1: [:web_search, :wiki_search]  ← parallel
Group 2: [:combine]                   ← after both complete
```
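Such groups can be derived by repeatedly peeling off the steps whose dependencies are already satisfied. A sketch of that levelling pass in plain Elixir (`Groups` is illustrative, not Loom's scheduler):

```elixir
defmodule Groups do
  # deps maps each step to the list of steps it depends on.
  # Repeatedly peel off the "ready" steps (no unmet deps); each peel is
  # one group whose members can run in parallel. Assumes a DAG — a cycle
  # would leave no step ready and recurse forever.
  def compute(deps) when map_size(deps) == 0, do: []

  def compute(deps) do
    {ready, blocked} = Enum.split_with(deps, fn {_step, ds} -> ds == [] end)
    ready_names = Enum.map(ready, fn {step, _ds} -> step end)
    remaining = Map.new(blocked, fn {step, ds} -> {step, ds -- ready_names} end)
    [Enum.sort(ready_names) | compute(remaining)]
  end
end

deps = %{
  web_search: [],
  wiki_search: [],
  combine: [:web_search, :wiki_search]
}

Groups.compute(deps)
# => [[:web_search, :wiki_search], [:combine]]
```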
### Fault Tolerance

Agents run under OTP supervision with configurable retry policies:

```elixir
defmodule ResilientAgent do
  use Loom.Agent, timeout: 10_000, max_retries: 5

  @impl true
  def run(input, _opts) do
    # If this fails, it retries up to 5 times
    Loom.LLM.chat("Process: #{input}")
  end

  @impl true
  def on_error(:timeout, _input), do: :retry
  def on_error(:rate_limited, _input), do: :retry
  def on_error(_other, _input), do: :abort
end
```

### Multi-Agent Swarms

Build multi-agent systems where each agent is a BEAM process:

```elixir
workflow =
  Loom.Workflow.new("coding_swarm")
  |> Loom.Workflow.step(:plan, PlannerAgent)
  |> Loom.Workflow.step(:code, CoderAgent, deps: [:plan])
  |> Loom.Workflow.step(:test, TesterAgent, deps: [:plan])
  |> Loom.Workflow.step(:review, ReviewerAgent, deps: [:code, :test])
```

```
     Planner
     /     \
  Coder   Tester   ← parallel
     \     /
    Reviewer
```
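Conceptually, a retry policy like `on_error/2` boils down to a recursive wrapper that consults the policy after each failure. A hedged sketch in plain Elixir (`Retry.with_retries/3` is illustrative, not Loom's API):

```elixir
defmodule Retry do
  # Run fun; on a raised or thrown error, ask the on_error policy
  # whether to :retry (up to max_retries more times) or give up.
  def with_retries(fun, on_error, max_retries) when max_retries >= 0 do
    try do
      {:ok, fun.()}
    catch
      kind, reason ->
        case {on_error.(reason), max_retries} do
          {:retry, n} when n > 0 -> with_retries(fun, on_error, n - 1)
          _ -> {:error, {kind, reason}}
        end
    end
  end
end

# A flaky call that fails twice with :rate_limited, then succeeds.
# (Elixir's built-in Agent just holds the failure counter here.)
{:ok, counter} = Agent.start_link(fn -> 0 end)

flaky = fn ->
  n = Agent.get_and_update(counter, fn n -> {n, n + 1} end)
  if n < 2, do: throw(:rate_limited), else: :done
end

result = Retry.with_retries(flaky, fn _reason -> :retry end, 5)
# result => {:ok, :done}
```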
### Distributed Execution

Run workflows across multiple BEAM nodes:

```elixir
# Nodes connect automatically
Node.connect(:"worker@host2")
Node.connect(:"worker@host3")

# Tasks are distributed across available nodes
Loom.run(workflow, input)
```

### Telemetry

Loom emits telemetry events for full observability:

```elixir
:telemetry.attach("my-handler", [:loom, :agent, :stop], fn _event, measurements, metadata, _config ->
  IO.puts("Agent #{metadata.agent} completed in #{measurements.duration}ms")
end, nil)
```

Events:

- `[:loom, :agent, :start | :stop | :error]`
- `[:loom, :workflow, :start | :stop | :error]`
- `[:loom, :step, :start | :stop | :error]`
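The `duration` a `:stop` handler receives is plain monotonic-clock arithmetic around the agent call. A dependency-free sketch of that measurement (illustrative, not Loom internals):

```elixir
defmodule Span do
  # Time a function call the way a [:loom, :agent, :stop] handler
  # would observe it: the result plus the elapsed duration.
  def measure(fun) do
    start = System.monotonic_time(:millisecond)
    result = fun.()
    {result, System.monotonic_time(:millisecond) - start}
  end
end

{result, duration_ms} =
  Span.measure(fn ->
    Process.sleep(20)
    :done
  end)
# result => :done; duration_ms is at least 20
```

Using `System.monotonic_time/1` rather than wall-clock time matters here: it can't jump backwards under NTP adjustments, so durations stay non-negative.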
### Configuration

```elixir
# config/config.exs
config :loom,
  llm_provider: :openai,  # :openai | :anthropic | :ollama
  llm_model: "gpt-4o",
  llm_api_key: "sk-...",
  max_concurrent_workflows: 10,
  agent_timeout: 30_000,
  agent_max_retries: 3
```

## Examples

See the `examples/` directory:
- research_agent.exs — Research → Analyze → Summarize pipeline
- coding_agent.exs — Planner → Coder/Tester → Reviewer swarm
Run an example:

```shell
export LOOM_LLM_API_KEY=sk-...
mix run examples/research_agent.exs
```

## Roadmap

- Agent runtime with supervision
- DAG workflow orchestration
- Scheduler with concurrency control
- Pluggable LLM client
- Telemetry integration
- Workflow state persistence
- Distributed execution improvements
- Streaming pipelines
- Advanced retry strategies
- LiveView dashboard
- Workflow visualization
- Execution replay
- Tool/function calling framework
- Agent communication channels
- Plugin system
## Why the BEAM?

| | Python (LangGraph) | TypeScript (CrewAI) | Elixir (Loom) |
|---|---|---|---|
| Concurrency | Threading/asyncio | Event loop | BEAM processes |
| Fault tolerance | Manual | Manual | OTP supervisors |
| Distribution | Complex | Complex | Built-in |
| Scalability | Limited | Limited | Millions of processes |
The BEAM was designed for precisely the problems AI agent systems face: massive concurrency, fault tolerance, and distribution.
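The "millions of processes" claim is easy to sanity-check yourself: BEAM processes weigh a few kilobytes each, so spawning a hundred thousand of them typically takes well under a second on a laptop:

```elixir
# Spawn 100_000 processes, each waiting for a message, then stop them all.
parent = self()

pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> send(parent, :stopped)
      end
    end)
  end

# Ask every process to stop and confirm each one replied.
Enum.each(pids, &send(&1, :stop))

ok_count =
  Enum.count(1..100_000, fn _ ->
    receive do
      :stopped -> true
    after
      5_000 -> false
    end
  end)
# ok_count => 100_000
```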
## Contributing

Contributions are welcome! Please see `CONTRIBUTING.md` for guidelines.

```shell
git clone https://github.com/MeghanBao/loom
cd loom
mix deps.get
mix test
```

## License

MIT License — see `LICENSE` for details.
Loom — Weave intelligent workflows on the BEAM.