Host Layer
CLI/daemon orchestrates runs, provisions skills/config, manages KVM VM lifecycle, and records run events.
Architecture · Quick Start · OCI Support · Host Mounts · Snapshots · Observability
## What You Get

- **Isolated execution** — Each stage runs inside its own micro-VM boundary, not shared-process containers.
- **Policy-enforced runtime** — Command allowlists, resource limits, seccomp-BPF, and controlled network egress.
- **Skill-native model** — MCP servers, SKILL files, and CLI tools mounted as declared capabilities.
- **Composable pipelines** — Sequential `.pipe()`, parallel `.fan_out()`, with explicit stage-level failure domains.
- **Claude Code native runtime** — Each stage runs `claude-code`, backed by Claude or Ollama via provider mode.
- **OCI-native** — Auto-pulls guest images from GHCR; mount container images as base OS or skill providers.
- **Observability native** — OTLP traces, metrics, structured logs, and stage-level telemetry emitted by design.
- **Persistent host mounts** — Share host directories into guest VMs via 9p/virtiofs with read-only or read-write mode.
- **No root required** — Usermode SLIRP networking via smoltcp (no TAP devices).

> Isolation is the primitive. Pipelines are compositions of bounded execution environments.

## Why Not Containers?

Containers share a host kernel — sufficient for general isolation, but AI agents executing tools, code, and external integrations create shared failure domains. VoidBox binds each agent stage to its own micro-VM boundary, enforced by hardware virtualization rather than advisory process controls. See [Architecture](https://the-void-ia.github.io/void-box/docs/architecture/) ([source](docs/architecture.md)) for the full security model.

---

## Quick Start

```bash
cargo add void-box
```

```rust
use void_box::agent_box::VoidBox;
use void_box::skill::Skill;

// (skill definitions for `hn_api` and `reasoning`, e.g.
//  Skill::agent("claude-code"), elided in this excerpt)

let researcher = VoidBox::new("hn_researcher")
    .skill(hn_api)
    .skill(reasoning)
    .llm(LlmProvider::ollama("qwen3-coder"))
    .memory_mb(1024)
    .network(true)
    .prompt("Analyze top HN stories for AI engineering trends")
    .build()?;
```

```yaml
# hackernews_agent.yaml
api_version: v1
# (spec body elided)
agent:
  timeout_secs: 600
```

```bash
voidbox run --file hackernews_agent.yaml
```
Void-Box
void-box is a composable agent runtime where each agent runs in a hardware-isolated micro-VM. On Linux this uses KVM; on macOS (Apple Silicon) it uses Virtualization.framework (VZ). The core equation is:

```
VoidBox = Agent(Skills) + Isolation
```

A VoidBox binds declared skills (MCP servers, CLI tools, procedural knowledge files, reasoning engines) to an isolated execution environment. Each run is provisioned through host orchestration and enforced by guest runtime controls. Boxes compose into pipelines where output flows between stages, each in a fresh VM.
```
┌────────────────────────────────────────────────────────────┐
│                    User / Daemon / CLI                     │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │ VoidBox (agent_box.rs)                               │  │
│  │   name:   "analyst"                                  │  │
│  │   prompt: "Analyze AAPL..."                          │  │
│  │   skills: [claude-code, financial-data.md,           │  │
│  │            market-mcp]                               │  │
│  │   config: memory=1024MB, vcpus=1, network=true       │  │
│  └──────────────────────────┬───────────────────────────┘  │
│                             │ resolve_guest_image()        │
│                             │   → .build() → .run()        │
│  ┌──────────────────────────▼───────────────────────────┐  │
│  │ OCI Client (voidbox-oci/)                            │  │
│  │   guest image → kernel + initramfs (auto-pull,       │  │
│  │                 cached)                              │  │
│  │   base image  → rootfs (pivot_root)                  │  │
│  │   OCI skills  → read-only mounts (/skills/...)       │  │
│  │   cache: ~/.voidbox/oci/{blobs,rootfs,guest}/        │  │
│  └──────────────────────────┬───────────────────────────┘  │
│                             │                              │
│  ┌──────────────────────────▼───────────────────────────┐  │
│  │ Sandbox (sandbox/)                                   │  │
│  │   ┌─────────────┐   ┌──────────────┐                 │  │
│  │   │ MockSandbox │   │ LocalSandbox │                 │  │
│  │   │ (testing)   │   │ (KVM / VZ)   │                 │  │
│  │   └─────────────┘   └──────┬───────┘                 │  │
│  └────────────────────────────┼─────────────────────────┘  │
│                               │                            │
│  ┌────────────────────────────▼─────────────────────────┐  │
│  │ MicroVm (vmm/)                                       │  │
│  │   KVM VM · vCPU thread · VsockDevice (AF_VSOCK)      │  │
│  │   VirtioNet (SLIRP)                                  │  │
│  │   Linux/KVM: virtio-blk (OCI base rootfs)            │  │
│  │   9p/virtiofs: skills + host mounts                  │  │
│  │   Seccomp-BPF on VMM thread                          │  │
│  └──────────────────────┬───────────────┬───────────────┘  │
│                         │               │                  │
└═════════════════════════╪═══════════════╪══════════════════┘
   Hardware Isolation     │               │
                          │ vsock:1234    │ SLIRP NAT
┌─────────────────────────▼───────────────▼──────────────────┐
│                  Guest VM (Linux kernel)                   │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │ guest-agent (PID 1)                                  │  │
│  │  - Authenticates via session secret (kernel cmdline) │  │
│  │  - Reads /etc/voidbox/allowed_commands.json          │  │
│  │  - Reads /etc/voidbox/resource_limits.json           │  │
│  │  - Applies setrlimit + command allowlist             │  │
│  │  - Drops privileges to uid:1000                      │  │
│  │  - Listens on vsock port 1234                        │  │
│  │  - pivot_root to OCI rootfs (if sandbox.image set)   │  │
│  └──────────────────────────┬───────────────────────────┘  │
│                             │ fork+exec                    │
│  ┌──────────────────────────▼───────────────────────────┐  │
│  │ claude-code (or claudio mock)                        │  │
│  │   --output-format stream-json                        │  │
│  │   --dangerously-skip-permissions                     │  │
│  │   Skills: ~/.claude/skills/*.md                      │  │
│  │   MCP: ~/.claude/mcp.json                            │  │
│  │   OCI skills: /skills/{python,go,...} (read-only)    │  │
│  │   LLM: Claude API / Ollama (via SLIRP → host:11434)  │  │
│  └──────────────────────────────────────────────────────┘  │
│                                                            │
│  eth0: 10.0.2.15/24   gw: 10.0.2.2   dns: 10.0.2.3         │
└────────────────────────────────────────────────────────────┘
```
```
1. VoidBox::new("name")        User declares skills, prompt, config
   │
2. resolve_guest_image()       Resolve kernel + initramfs (5-step chain)
   │                           Pulls from GHCR if no local paths found
   │
3. .build()                    Creates Sandbox (mock or local VM backend: KVM/VZ)
   │                           Mounts OCI rootfs + skill images if configured
   │
4. .run(input)                 Execution begins
   │
   ├─ provision_security()     Write resource limits + allowlist to /etc/voidbox/
   ├─ provision_skills()       Write SKILL.md files to ~/.claude/skills/
   │                           Write mcp.json to ~/.claude/
   ├─ write input              Write /workspace/input.json (if piped from previous stage)
   │
   ├─ sandbox.exec_claude()    Send ExecRequest over vsock
   │       │
   │   [vsock port 1234]
   │       │
   │   guest-agent receives    Validates session secret
   │       │                   Checks command allowlist
   │       │                   Applies resource limits (setrlimit)
   │       │                   Drops privileges (uid:1000)
   │       │
   │   fork+exec claude-code   Runs with --output-format stream-json
   │       │
   │   claude-code executes    Reads skills, calls LLM, uses tools
   │       │
   │   ExecResponse sent       stdout/stderr/exit_code over vsock
   │       │
   ├─ parse stream-json        Extract ClaudeExecResult (tokens, cost, tools)
   ├─ read output file         /workspace/output.json
   │
5. StageResult                 box_name, claude_result, file_output
```
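The step-5 result can be sketched as plain structs. This is a hypothetical shape inferred from the field names in the flow (`box_name`, `claude_result`, `file_output`) and the token/cost attributes mentioned in the parse step; the actual void-box types may differ.

```rust
// Hypothetical sketch of the stage result shape; not the actual void-box types.
pub struct ClaudeExecResult {
    pub result_text: String, // final text output from claude-code
    pub tokens_in: u64,      // token/cost fields, as extracted from stream-json
    pub tokens_out: u64,
    pub cost_usd: f64,
}

pub struct StageResult {
    pub box_name: String,                // which VoidBox produced this result
    pub claude_result: ClaudeExecResult, // parsed from stream-json output
    pub file_output: Option<Vec<u8>>,    // contents of /workspace/output.json, if any
}
```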
```rust
Pipeline::named("analysis", box1)
    .pipe(box2)                // Sequential: box1.output → box2.input
    .fan_out(vec![box3, box4]) // Parallel: both receive box2.output
    .pipe(box5)                // Sequential: merged [box3, box4] → box5.input
    .run()
```

Stage flow:

```
box1.run(None)          → carry_data = output bytes
box2.run(carry_data)    → carry_data = output bytes
[box3, box4].run(carry) → carry_data = JSON array merge
box5.run(carry_data)    → PipelineResult
```

For parallel stages (`fan_out`), each box runs as a separate task in a `tokio::task::JoinSet`. Their outputs are merged as a JSON array for the next stage.
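The merge semantics can be sketched without the VM machinery: each parallel stage yields an output blob, and the results are combined into a JSON array for the next stage. This is an illustrative stand-in (plain threads instead of a tokio `JoinSet`, string concatenation instead of real JSON handling), not VoidBox's implementation.

```rust
use std::thread;

// Illustrative merge: outputs of parallel stages become one JSON array.
fn merge_fan_out(outputs: Vec<String>) -> String {
    format!("[{}]", outputs.join(","))
}

// Run stage closures concurrently and merge, mimicking fan_out semantics.
// Joining in spawn order keeps the merged array order deterministic.
fn run_fan_out(stages: Vec<fn() -> String>) -> String {
    let handles: Vec<thread::JoinHandle<String>> =
        stages.into_iter().map(|f| thread::spawn(f)).collect();
    let outputs: Vec<String> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    merge_fan_out(outputs)
}
```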
void-box uses smoltcp-based usermode networking (SLIRP) — no root, no TAP devices, no bridge configuration.
```
Guest VM                               Host
┌─────────────────────┐                ┌──────────────────┐
│ eth0: 10.0.2.15/24  │                │                  │
│ gw:   10.0.2.2      │── virtio-net ──│  SLIRP stack     │
│ dns:  10.0.2.3      │    (MMIO)      │  (smoltcp)       │
└─────────────────────┘                │                  │
                                       │  10.0.2.2 → NAT  │
                                       │  → 127.0.0.1     │
                                       └──────────────────┘
```
+10.0.2.15/2410.0.2.2 (mapped to host 127.0.0.1)10.0.2.3 (forwarded to host resolver):11434) via 10.0.2.2CLI/daemon orchestrates runs, provisions skills/config, manages KVM VM lifecycle, and records run events.
`guest-agent` authenticates host requests, enforces command allowlist + rlimits, drops privilege, then executes `claude-code`.

The vsock protocol frames requests/responses and supports streaming chunks plus telemetry. See the Wire Protocol page for frame format details.

KVM isolation + seccomp + session auth + policy controls + controlled networking. See the Security page for defense-in-depth details.
spec -> env provision -> skill mount -> claude-code run -> events/report
For pipelines, stages compose sequentially and in fan-out mode with explicit failure boundaries per stage.
The CLI provides run/validate/status/log flows. The TUI layers an interactive command mode on top of the daemon HTTP endpoints. Both communicate with the same daemon process.

- CLI commands: `serve`, `run`, `validate`, `status`, `logs`, `tui`
- TUI commands: `/run`, `/input`, `/status`, `/logs`, `/cancel`, `/history`
Daemon endpoints: `POST /v1/runs`, `GET /v1/runs/:id`, `GET /v1/runs/:id/events`, `POST /v1/runs/:id/cancel`, `POST /v1/sessions/:id/messages`, `GET /v1/sessions/:id/messages`

| Command | Description |
|---|---|
| `voidbox serve` | Start the daemon HTTP server. All other commands (except `validate`) require the daemon to be running. |
| `voidbox run --file <spec>` | Execute a spec file (agent, pipeline, or workflow). Submits the run to the daemon and streams events to stdout. |
| `voidbox validate --file <spec>` | Validate a spec file without running it. Checks schema, skill references, and sandbox configuration for errors. |
| `voidbox status` | Show daemon status and active runs. Displays run IDs, current stage, and elapsed time. |
| `voidbox logs <run-id>` | Stream logs for a specific run. Follows the event stream until the run completes or is cancelled. |
| `voidbox tui` | Launch the interactive TUI. Connects to the daemon and provides a command prompt for managing runs. |
| `voidbox snapshot create --config-hash <hash>` | Create a snapshot from a running or stopped VM, keyed by its configuration hash. |
| `voidbox snapshot list` | List all stored snapshots with their hash prefixes, sizes, and creation timestamps. |
| `voidbox snapshot delete <hash-prefix>` | Delete a snapshot by its hash prefix. Removes the state file and memory dumps. |
The daemon exposes an HTTP API used by both the CLI and TUI. These endpoints are also available for direct integration:
| Method | Path | Description |
|---|---|---|
| POST | `/v1/runs` | Create a new run. Accepts a spec payload and returns a run ID. |
| GET | `/v1/runs/:id` | Get run status. Returns current state, stage progress, and timing information. |
| GET | `/v1/runs/:id/events` | Stream run events via Server-Sent Events (SSE). Provides real-time progress updates. |
| POST | `/v1/runs/:id/cancel` | Cancel a running run. Sends SIGKILL to the guest process and tears down the VM. |
| POST | `/v1/sessions/:id/messages` | Send a message to an active session. Used for interactive/conversational workflows. |
| GET | `/v1/sessions/:id/messages` | Get session messages. Returns the full message history for a session. |
The TUI provides an interactive command prompt connected to the daemon. Available commands:
| Command | Description |
|---|---|
| `/run` | Start a new run from a spec file or inline definition. |
| `/input` | Send input to an active interactive session. |
| `/status` | Display the status of all active and recent runs. |
| `/logs` | Stream logs for a specific run ID. |
| `/cancel` | Cancel a running run by its ID. |
| `/history` | Show the history of completed runs with their results. |
The current TUI is functional but minimal: polling-oriented and plain text. A richer panel-based, live-streaming UX is planned and can be layered on top of the existing SSE event streaming APIs without changes to the daemon.
Every pipeline run in VoidBox is fully instrumented. The event system provides structured identity fields on every action, OTLP-compatible traces and metrics, and structured logs — all designed to keep capability and boundary context explicit.
- `run_id` — unique identifier for the pipeline run.
- `box_name` — the VoidBox that emitted the event.
- `skill_id` — which skill is active.
- `environment_id` — the execution environment (VM) identifier.
- `mode` — execution mode (single, pipeline, workflow).
- `stream` — output stream (stdout, stderr).
- `seq` — monotonic sequence number for ordering.
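Sketched as a struct, the identity fields look like this. The shape is hypothetical (derived from the field list above); the actual void-box event types may differ.

```rust
// Hypothetical sketch of the per-event identity fields listed above.
#[derive(Debug, PartialEq)]
pub enum Mode { Single, Pipeline, Workflow }

#[derive(Debug, PartialEq)]
pub enum Stream { Stdout, Stderr }

pub struct EventIdentity {
    pub run_id: String,           // unique identifier for the pipeline run
    pub box_name: String,         // the VoidBox that emitted the event
    pub skill_id: Option<String>, // active skill, if any
    pub environment_id: String,   // execution environment (VM) identifier
    pub mode: Mode,
    pub stream: Stream,
    pub seq: u64,                 // monotonic ordering within the run
}
```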
| Event | Description |
|---|---|
| `run.started` | Pipeline run has begun execution. |
| `run.finished` | Pipeline run completed successfully. |
| `run.failed` | Pipeline run failed with an error. |
| `run.cancelled` | Pipeline run was cancelled by the user. |
| `env.provisioned` | Guest environment has been provisioned (skills, config, mounts). |
| `skill.mounted` | A skill has been written to the guest filesystem. |
| `box.started` | A VoidBox has started execution within a stage. |
| `workflow.planned` | A workflow planner has generated a pipeline plan. |
| `log.chunk` | A chunk of streaming output from the guest (stdout/stderr). |
| `log.closed` | The output stream for a box has closed. |
VoidBox emits OpenTelemetry-compatible traces that capture the full execution hierarchy:
```
Pipeline span
 ├─ Stage 1 span (box_name="data_analyst")
 │   ├─ tool_call event: Read("input.json")
 │   ├─ tool_call event: Bash("curl ...")
 │   └─ attributes: tokens_in, tokens_out, cost_usd, model
 └─ Stage 2 span (box_name="quant_analyst")
     └─ ...
```

Each stage span includes attributes for token counts, cost, model used, and individual tool call events. Fan-out stages create parallel child spans under the same pipeline parent.
- **Traces** — Full distributed traces exported via OTLP gRPC. Pipeline, stage, and tool-call spans with rich attributes for token usage, cost, model, and timing.
- **Metrics** — Token counts (input/output), cost in USD, execution duration, and VM lifecycle timing. Exported as OTLP metrics alongside traces.
- **Logs** — All log output is prefixed with `[vm:NAME]` for easy filtering. Stream-json output from `claude-code` is parsed into structured events.
The guest-agent reads /proc/stat and /proc/meminfo periodically, sending TelemetryBatch messages over vsock. The host-side TelemetryAggregator ingests these and exports as OTLP metrics.
| Env var | Description |
|---|---|
| `VOIDBOX_OTLP_ENDPOINT` | OTLP gRPC endpoint (e.g. `http://localhost:4317`) |
| `OTEL_SERVICE_NAME` | Service name for traces (default: `void-box`) |
OpenTelemetry support is enabled at compile time:

```bash
cargo build --features opentelemetry
```

For a full OTLP setup walkthrough with Jaeger or Grafana, see the Observability Setup guide.
The guest-side telemetry pipeline works independently from the host tracing system:
1. The guest-agent samples `/proc/stat` (CPU usage) and `/proc/meminfo` (memory usage).
2. Samples are packed into a `TelemetryBatch` message and sent to the host over the vsock channel.
3. The host-side `TelemetryAggregator` receives batches, computes deltas, and exports them as OTLP metrics.

This gives visibility into guest resource consumption without any instrumentation inside the workload itself.
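The delta step can be sketched as follows. `/proc/stat` exposes cumulative tick counters, so utilization over an interval comes from the difference between successive samples. This is simplified, hypothetical code, not the actual `TelemetryAggregator`.

```rust
// Cumulative CPU tick counters, as sampled from /proc/stat.
#[derive(Clone, Copy)]
pub struct CpuSample {
    pub busy_ticks: u64,  // user + system + ... (cumulative since boot)
    pub total_ticks: u64, // busy + idle (cumulative since boot)
}

// Utilization over the interval between two samples: delta(busy) / delta(total).
pub fn cpu_utilization(prev: CpuSample, curr: CpuSample) -> f64 {
    let busy = curr.busy_ticks.saturating_sub(prev.busy_ticks) as f64;
    let total = curr.total_ticks.saturating_sub(prev.total_ticks) as f64;
    if total == 0.0 { 0.0 } else { busy / total }
}
```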
Daemon run and session state uses a provider abstraction, allowing different storage backends:

- **disk** (default) — File-based persistence. Run state and events are stored as JSON files on the local filesystem. No external dependencies.
- **sqlite / valkey** — Example adapter implementations for sqlite and valkey (Redis-compatible) backends. Useful for shared state in multi-node deployments.
VoidBox can mount host directories into the guest VM using sandbox.mounts. Each mount specifies a host path, a guest mount point, and a mode ("ro" or "rw", default "ro").
Read-write mounts write directly to the host directory — data persists across VM restarts since the host directory survives. This is the primary mechanism for stateful workloads.
Mounts are backed by one of two transports:

- **9p (virtio-9p)** — kernel-based 9P filesystem protocol over virtio transport.
- **virtiofs** — Apple Virtualization.framework native shared directory support.

Each mount entry has three fields:
| Field | Description | Default |
|---|---|---|
| `host` | Path on the host filesystem | (required) |
| `guest` | Mount point inside the guest VM | (required) |
| `mode` | `"ro"` (read-only) or `"rw"` (read-write) | `"ro"` |
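The three fields map naturally onto a small struct with `"ro"` as the default mode. This is a hypothetical sketch for illustration, not the void-box mount API (mounts are declared in YAML under `sandbox.mounts`).

```rust
// Hypothetical mount-entry shape matching the three fields above.
#[derive(Debug, PartialEq)]
pub enum MountMode { Ro, Rw }

pub struct Mount {
    pub host: String,    // path on the host filesystem (required)
    pub guest: String,   // mount point inside the guest VM (required)
    pub mode: MountMode, // defaults to read-only
}

impl Mount {
    pub fn new(host: &str, guest: &str) -> Self {
        // Mode defaults to "ro", mirroring the table above.
        Mount { host: host.to_string(), guest: guest.to_string(), mode: MountMode::Ro }
    }

    pub fn rw(mut self) -> Self {
        self.mode = MountMode::Rw;
        self
    }
}
```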
```yaml
sandbox:
  mounts:
    - host: ./data
      guest: /data
      mode: rw   # persistent — host directory survives VM restarts
    - host: ./config
      guest: /config
      mode: ro   # read-only (default)
```
Use `mode: rw` mounts to persist agent output, databases, or generated artifacts across VM restarts; the host directory is the source of truth. Use `mode: ro` mounts to inject configuration files, credentials, or reference data into the guest without risk of modification.
Mount a host directory as `rw` to collect logs, results, or checkpoint files that survive the VM lifecycle — ideal for long-running pipelines.
Mount your project source tree as `ro` to let the agent read and analyze code without modifying the host copy.
The documentation covers:

- **CLI + TUI** — Command reference (`run`, `validate`, `status`, `logs`) and TUI interaction model.
- **Events + Observability** — Structured run events, session persistence, and traceability.
- **OCI Containers** — Guest images, base images, and OCI skill mounts for composing language runtimes.
- **Snapshots** — Sub-second VM restore via snapshot/restore with COW mmap — base and diff snapshot types.
- **Host Mounts** — Share host directories into guest VMs via 9p/virtiofs with read-only or read-write mode.
- **Security Model** — Defense in depth: KVM isolation, seccomp-BPF, session authentication, guest hardening, SLIRP controls.
- **Wire Protocol** — AF_VSOCK framing, message types, session authentication, and network layout.
VoidBox supports OCI container images in three ways: as a pre-built guest image containing the kernel and initramfs, as a base image providing the full guest root filesystem, and as OCI skills that mount additional container images as read-only tool providers.
Images are pulled from Docker Hub, GHCR, or any OCI-compliant registry and cached locally at `~/.voidbox/oci/`.
**Guest image (`sandbox.guest_image`)** — Pre-built kernel + initramfs distributed as a `FROM scratch` OCI image containing two files: `vmlinuz` and `rootfs.cpio.gz`. Auto-pulled from GHCR on first run — no local toolchain needed.
VoidBox resolves the kernel/initramfs using a 5-step chain:
1. `sandbox.kernel` / `sandbox.initramfs` (explicit paths in spec)
2. `VOID_BOX_KERNEL` / `VOID_BOX_INITRAMFS` (env vars)
3. `sandbox.guest_image` (explicit OCI ref)
4. `ghcr.io/the-void-ia/voidbox-guest:v{version}` (default auto-pull)
5. None → mock fallback (`mode: auto`)

Cache layout: `~/.voidbox/oci/guest/<sha256>/` containing `vmlinuz`, `rootfs.cpio.gz`, and a `<sha256>.done` marker.
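The chain above amounts to first-match resolution. The sketch below is illustrative pseudologic, not the actual `resolve_guest_image()` implementation; the parameter names, the `allow_pull` flag, and the return shape are assumptions.

```rust
// Illustrative resolution order for the guest kernel + initramfs.
#[derive(Debug, PartialEq)]
pub enum GuestImage {
    Paths { kernel: String, initramfs: String }, // steps 1-2: explicit files
    Oci(String),                                 // steps 3-4: an OCI reference
    Mock,                                        // step 5: fallback (mode: auto)
}

pub fn resolve_guest_image(
    spec: Option<(String, String)>,  // sandbox.kernel / sandbox.initramfs
    env: Option<(String, String)>,   // VOID_BOX_KERNEL / VOID_BOX_INITRAMFS
    guest_image: Option<String>,     // sandbox.guest_image OCI ref
    allow_pull: bool,                // whether the default GHCR auto-pull applies
    version: &str,                   // version used for the default image tag
) -> GuestImage {
    if let Some((kernel, initramfs)) = spec {
        return GuestImage::Paths { kernel, initramfs };
    }
    if let Some((kernel, initramfs)) = env {
        return GuestImage::Paths { kernel, initramfs };
    }
    if let Some(oci) = guest_image {
        return GuestImage::Oci(oci);
    }
    if allow_pull {
        GuestImage::Oci(format!("ghcr.io/the-void-ia/voidbox-guest:v{version}"))
    } else {
        GuestImage::Mock
    }
}
```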
**Base image (`sandbox.image`)** — A full container image (e.g. `python:3.12-slim`) used as the guest root filesystem. The guest-agent switches root with overlayfs + `pivot_root` (or a secure switch-root fallback when the kernel returns `EINVAL` for initramfs root).
- **Linux/KVM** — The host builds a cached ext4 disk artifact from the extracted OCI rootfs and attaches it as virtio-blk (the guest sees `/dev/vda`).
- **macOS/VZ** — The rootfs remains directory-mounted via a virtiofs path. No block device needed.
Security properties are preserved across both paths: the OCI root switch is driven only by kernel cmdline flags set by the trusted host, the command allowlist + authenticated vsock control channel still gate execution, and the writable layer is tmpfs-backed while the base OCI lowerdir remains read-only.

Cache layout: `~/.voidbox/oci/rootfs/<sha256>/` (full layer extraction with whiteout handling).
**OCI skills** — Mount additional container images as read-only tool providers at arbitrary guest paths. Each skill image is pulled, extracted, and mounted independently — no `sandbox.image` required. This lets you compose language runtimes (Python, Go, Java, etc.) without baking them into the initramfs.
```yaml
api_version: v1
kind: agent
name: multi-tool-agent

sandbox:
  mode: auto
  memory_mb: 2048
  vcpus: 2
  network: true

llm:
  provider: ollama
  model: "qwen2.5-coder:7b"

agent:
  prompt: >
    You have Python, Go, and Java available as mounted skills.
    Set up PATH to include the skill binaries:
    export PATH=/skills/python/usr/local/bin:/skills/go/usr/local/go/bin:/skills/java/bin:$PATH

    Write a "Hello from <language>" one-liner in each language and run all three.
    Report which versions are installed.
  skills:
    - "agent:claude-code"
    - image: "python:3.12-slim"
      mount: "/skills/python"
    - image: "golang:1.23-alpine"
      mount: "/skills/go"
    - image: "eclipse-temurin:21-jdk-alpine"
      mount: "/skills/java"
  timeout_secs: 300
```
```
~/.voidbox/oci/
  blobs/    # Content-addressed blob cache
  rootfs/   # Extracted OCI rootfs layers (per sha256)
  guest/    # Guest image cache (kernel + initramfs)
```
+The voidbox-oci/ crate provides the OCI distribution client:
| Module | Purpose |
|---|---|
| `registry.rs` | OCI Distribution HTTP client (anonymous + bearer auth, HTTP for localhost) |
| `manifest.rs` | Manifest / image index parsing, platform selection |
| `cache.rs` | Content-addressed blob cache + rootfs/guest done markers |
| `unpack.rs` | Layer extraction (full rootfs with whiteouts, or selective guest file extraction) |
| `lib.rs` | `OciClient`: `pull()`, `resolve_rootfs()`, `resolve_guest_files()` |
All OCI examples are in examples/specs/oci/:
| Spec | Description |
|---|---|
| `agent.yaml` | Single agent with `sandbox.image: python:3.12-slim` |
| `workflow.yaml` | Workflow with `sandbox.image: alpine:3.20` (no LLM) |
| `pipeline.yaml` | Multi-language pipeline: Python base + Go and Java OCI skills |
| `skills.yaml` | OCI skills only (Python, Go, Java) mounted into default initramfs |
| `guest-image-workflow.yaml` | Workflow using `sandbox.guest_image` for auto-pulled kernel + initramfs |
```bash
# x86_64: use the host kernel
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
cargo run --bin voidbox -- run \
  --file examples/specs/oci/skills.yaml

# arm64: use the prebuilt vmlinux
VOID_BOX_KERNEL=target/vmlinux-arm64 \
VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
cargo run --bin voidbox -- run \
  --file examples/specs/oci/skills.yaml
```
VoidBox uses `claude-code` as the canonical agent runtime. This is the execution identity inside every guest environment — regardless of which LLM provider backend is configured, the guest always runs `claude-code`.
**Claude API (default)** — VoidBox provisions your `ANTHROPIC_API_KEY` into the guest environment and `claude-code` calls the Claude API directly over the SLIRP network.

**Ollama** — When configured for Ollama, VoidBox still runs `claude-code` — only the provider backend changes. The guest reaches Ollama on the host via the SLIRP gateway (`10.0.2.2:11434`). claude-code's provider compatibility layer handles the translation.
```bash
# Default provider path (Claude API)
cargo run --bin voidbox -- run --file examples/specs/hackernews_agent.json

# Same spec, Ollama backend through claude-code compatibility
VOIDBOX_LLM_PROVIDER=ollama \
VOIDBOX_LLM_MODEL=qwen2.5-coder:7b \
cargo run --bin voidbox -- run --file examples/specs/hackernews_agent.json
```
Skills are the composable units of capability injected into a VoidBox. Each type is provisioned differently in the guest environment:

| Type | Constructor | Provisioned as | Example |
|---|---|---|---|
| Agent | `Skill::agent("claude-code")` | Reasoning engine designation | The LLM itself |
| File | `Skill::file("path/to/SKILL.md")` | `~/.claude/skills/{name}.md` | Domain methodology |
| Remote | `Skill::remote("owner/repo/skill")` | Fetched from GitHub, written to `skills/` | `obra/superpowers/brainstorming` |
| MCP | `Skill::mcp("server-name")` | Entry in `~/.claude/mcp.json` | Structured tool server |
| CLI | `Skill::cli("jq")` | Expected in guest initramfs | Binary tool |
Inside the micro-VM, the guest-agent runs as PID 1 and controls the full execution lifecycle:
1. Reads `/etc/voidbox/allowed_commands.json` (command allowlist) and `/etc/voidbox/resource_limits.json` (rlimits).
2. Applies `setrlimit` constraints for memory, file descriptors, and process count.
3. Drops privileges to `uid:1000` before executing any user workload.
4. Executes `claude-code` with `--output-format stream-json`, streaming structured output back to the host over vsock.

Skills are pre-provisioned before execution: file skills are written to `~/.claude/skills/`, MCP entries go to `~/.claude/mcp.json`, and OCI skill images are mounted read-only at their declared paths.
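As a minimal sketch, the allowlist step boils down to a set-membership check on the requested binary before fork+exec. This is hypothetical code; the real guest-agent also performs auth, rlimits, and privilege drop, and its JSON parsing is elided here.

```rust
use std::collections::HashSet;

// Build the allowlist from the entries provisioned by the host at
// /etc/voidbox/allowed_commands.json (JSON parsing elided in this sketch).
pub fn allowlist(cmds: &[&str]) -> HashSet<String> {
    cmds.iter().map(|c| c.to_string()).collect()
}

// Gate applied before fork+exec: reject anything not explicitly allowed.
pub fn check_command(allow: &HashSet<String>, argv0: &str) -> Result<(), String> {
    if allow.contains(argv0) {
        Ok(())
    } else {
        Err(format!("command not in allowlist: {argv0}"))
    }
}
```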
Runs fail fast when the guest image or runtime is incompatible with the required execution mode. This prevents silent failures where a spec expects capabilities not present in the guest. Use production guest image builds for runtime examples.
VoidBox uses a layered security model with five distinct isolation boundaries. Each layer provides independent protection — compromise of one layer does not grant access through subsequent layers.
```
Layer 1: Hardware isolation (KVM)
  — Separate kernel, memory space, devices per VM

Layer 2: Seccomp-BPF (VMM process)
  — VMM thread restricted to KVM ioctls + vsock + networking syscalls

Layer 3: Session authentication (vsock)
  — 32-byte random secret, per-VM, injected at boot

Layer 4: Guest hardening (guest-agent)
  — Command allowlist, rlimits, privilege drop, timeout watchdog

Layer 5: Network isolation (SLIRP)
  — Rate limiting, max connections, CIDR deny list
```
+Each VoidBox runs in its own micro-VM with a separate kernel, memory space, and devices. Hardware virtualization enforces isolation — not advisory process controls. On macOS, Apple's Virtualization.framework provides equivalent hypervisor-level isolation.
The VMM thread is restricted via seccomp-BPF to only the syscalls needed for KVM operation: KVM ioctls, vsock communication, and networking syscalls. All other syscalls are blocked at the kernel level.
Every VM gets a unique 32-byte random session secret, injected via kernel command line. The guest-agent requires this secret on every request.
```
Host                                        Guest
 |                                            |
 +-- getrandom(32 bytes)                      |
 +-- hex-encode -> kernel cmdline             |
 |     voidbox.secret=abc123...               |
 |                                            |
 |                   boot                     |
 | -----------------------------------------> |
 |                                            +-- parse /proc/cmdline
 |                                            +-- store in OnceLock
 |                                            |
 +-- ExecRequest { secret: "abc123..." }      |
 | -----------------------------------------> |
 |                                            +-- verify secret
 |                                            +-- execute if match
 | <----------------------------------------- |
 |     ExecResponse { ... }                   |
```
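The guest side of this handshake can be sketched as cmdline parsing plus an equality check. This is illustrative code: the `voidbox.secret` parameter name comes from the diagram, the real implementation may differ, and a production check should use a constant-time comparison.

```rust
// Extract the session secret from the kernel cmdline (/proc/cmdline contents).
pub fn parse_secret(cmdline: &str) -> Option<&str> {
    cmdline
        .split_whitespace()
        .find_map(|kv| kv.strip_prefix("voidbox.secret="))
}

// Verify the secret presented in an ExecRequest against the boot-time secret.
pub fn verify(boot_secret: &str, presented: &str) -> bool {
    // NOTE: a real implementation should use a constant-time comparison
    // to avoid timing side channels.
    boot_secret == presented
}
```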
The guest-agent (PID 1) enforces four independent controls:

- **Command allowlist** — Only approved binaries execute. The allowlist is read from `/etc/voidbox/allowed_commands.json`, provisioned by the trusted host at boot.
- **Resource limits** — `setrlimit` enforces memory, file descriptor, and process count limits. Read from `/etc/voidbox/resource_limits.json`.
- **Privilege drop** — Child processes run as `uid:1000`. The guest-agent drops privileges before executing any command, preventing root access inside the VM.
- **Timeout watchdog** — A watchdog timer sends SIGKILL to child processes that exceed the configured timeout, preventing runaway execution.
VoidBox uses smoltcp-based usermode networking (SLIRP) — no root, no TAP devices, no bridge configuration.
SLIRP controls include rate limiting, a maximum connection count, and a CIDR deny list (via `ipnet`) that blocks access to specified network ranges.

Snapshot cloning shares identical VM state across restored instances. Three areas require awareness:
- **Shared entropy** — Restored VMs inherit the same `/dev/urandom` pool. Mitigated by: fresh CID per restore, hardware RDRAND re-seeding on rdtsc.
- **Shared memory layout** — Clones share guest page table layout. Mitigated by: short-lived tasks, no direct network addressability (SLIRP NAT), command allowlist limiting attack surface.
- **Shared session secret** — Restored VMs reuse the snapshot's stored session secret for vsock authentication (the secret is baked into the guest's kernel cmdline in snapshot memory). Per-restore secret rotation would require guest-side support.
+VoidBox supports sub-second VM restore via snapshot/restore. Snapshots capture the full VM state (vCPU registers, memory, devices) and restore via COW mmap — the guest resumes execution without re-booting the kernel or re-running initialization.
All snapshot features are explicit opt-in only. If you never set a snapshot field, the system behaves exactly as before — cold boot, zero snapshot code runs.
| Type | When Created | Contents | Use Case |
|---|---|---|---|
| Base | After cold boot, VM stopped | Full memory dump + all KVM state | Golden image for repeated boots |
| Diff | After dirty tracking enabled, VM stopped | Only modified pages since base | Layered caching (base + delta) |
# Applies to all boxes
+sandbox:
+  memory_mb: 256
+  snapshot: "abc123def456"
+
+pipeline:
+  boxes:
+    - name: analyst
+      prompt: "analyze data"
+      sandbox:
+        snapshot: "def789"
+    - name: coder
+      prompt: "write code"
+      # no snapshot = cold boot
+ use void_box::agent_box::VoidBox;
+
+// Cold boot (default — no snapshot)
+let box1 = VoidBox::new("analyst")
+ .prompt("analyze data")
+ .memory_mb(256)
+ .build()?;
+
+// Restore from snapshot (explicit opt-in)
+let box2 = VoidBox::new("analyst")
+ .prompt("analyze data")
+ .snapshot("/path/to/snapshot/dir") // or hash prefix
+ .build()?;
+# Create a snapshot from a running VM
+voidbox snapshot create --config-hash <hash>
+
+# List stored snapshots
+voidbox snapshot list
+
+# Delete a snapshot
+voidbox snapshot delete <hash-prefix>
+
+# Run with a snapshot (via spec)
+voidbox run --file spec.yaml # spec has sandbox.snapshot set
+# POST /runs with snapshot override
+curl -X POST http://localhost:8080/runs \
+ -H 'Content-Type: application/json' \
+ -d '{"file": "workflow.yaml", "snapshot": "abc123def456"}'
+None — the system behaves identically to before if untouched.
+Measured on Linux/KVM with 256 MB RAM, 1 vCPU, userspace virtio-vsock:
+| Phase | Time | Notes |
|---|---|---|
| Cold boot | ~10 ms | |
| Base snapshot | ~420 ms | Full 256 MB memory dump |
| Base restore | ~1.3 ms | COW mmap, lazy page loading |
| Diff snapshot | ~270 ms | Only dirty pages (~1.5 MB, 0.6% of RAM) |
| Diff restore | ~3 ms | Base COW mmap + dirty page overlay |
| Base speedup | ~8x | Cold boot / base restore |
| Diff savings | 99.4% | Memory file size reduction |
~/.void-box/snapshots/
+ <hash-prefix>/ # first 16 chars of config hash
+ state.bin # bincode: VmSnapshot (vCPU regs, irqchip, PIT, vsock, config)
+ memory.mem # full memory dump (base)
+ memory.diff # dirty pages only (diff snapshots)
+The 7-step restore process:
+1. VmSnapshot::load(dir): read state.bin (vCPU, irqchip, PIT, vsock, config)
+2. Vm::new(memory_mb): create KVM VM with matching memory size
+3. restore_memory(mem, path): COW mmap(MAP_PRIVATE|MAP_FIXED), lazy page loading
+4. vm.restore_irqchip(state): restore PIC master/slave + IOAPIC
+5. VirtioVsockMmio::restore(): restore vsock device registers (userspace backend)
+6. create_vcpu_restored(state): per-vCPU restore (see register restore order below)
+7. vCPU threads resume: guest continues execution from snapshot point
+Memory restore uses kernel MAP_PRIVATE lazy page loading — pages are demand-faulted from the file, writes create anonymous copies. No userfaultfd required.
The restore sequence in cpu.rs is order-sensitive. Getting it wrong causes silent guest crashes (kernel panic → reboot via port 0x64).
1. MSRs: KVM_SET_MSRS
+2. sregs: KVM_SET_SREGS (segment regs, CR0/CR3/CR4)
+3. LAPIC: KVM_SET_LAPIC + periodic timer bootstrap (see below)
+4. vcpu_events: KVM_SET_VCPU_EVENTS (exception/interrupt state)
+5. XCRs (XCR0): KVM_SET_XCRS — MUST come before xsave
+6. xsave (FPU/SSE): KVM_SET_XSAVE — depends on XCR0 for feature mask
+7. regs: KVM_SET_REGS (GP registers, RIP, RFLAGS)
+XCR0 restore is critical. XCR0 controls which XSAVE features (x87, SSE, AVX) are active. Without it, the guest's XRSTORS instruction triggers a #GP because the default XCR0 only enables x87, but the guest's XSAVE area references SSE/AVX features.
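The ordering constraint can be made explicit in code. The sketch below is illustrative only (it is not the actual cpu.rs code, and the step names are invented): it encodes the restore sequence as data and asserts the XCR0-before-XSAVE invariant so a reordering would fail loudly instead of causing a silent guest #GP.

```rust
// Illustrative sketch, not the actual cpu.rs restore code.
#[derive(Debug, PartialEq, Clone, Copy)]
enum RestoreStep { Msrs, Sregs, Lapic, VcpuEvents, Xcrs, Xsave, Regs }

/// The order-sensitive per-vCPU restore sequence.
const RESTORE_ORDER: [RestoreStep; 7] = [
    RestoreStep::Msrs,
    RestoreStep::Sregs,
    RestoreStep::Lapic,
    RestoreStep::VcpuEvents,
    RestoreStep::Xcrs,  // XCR0 must be set first ...
    RestoreStep::Xsave, // ... because XSAVE's feature mask depends on it.
    RestoreStep::Regs,
];

fn index_of(step: RestoreStep) -> usize {
    RESTORE_ORDER.iter().position(|&s| s == step).unwrap()
}

fn main() {
    // The invariant that prevents the guest #GP described above.
    assert!(index_of(RestoreStep::Xcrs) < index_of(RestoreStep::Xsave));
    // MSRs and sregs establish CPU mode before general registers land.
    assert!(index_of(RestoreStep::Sregs) < index_of(RestoreStep::Regs));
}
```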
When the guest was idle (NO_HZ) at snapshot time, the LAPIC timer is masked with vector=0 (LVTT=0x10000). After restore, no timer interrupt ever fires, so the scheduler never runs. The restore code detects this state and bootstraps a periodic LAPIC timer (mode=periodic, vector=0xEC, TMICT=0x200000, TDCR=divide-by-1) to kick the scheduler back to life.
+The userspace virtio-vsock backend must be used for VMs that will be snapshotted. The kernel vhost backend (/dev/vhost-vsock) does not expose internal vring indices, making queue state capture incomplete. The userspace backend tracks last_avail_idx/last_used_idx directly, ensuring clean snapshot/restore of the virtqueue state.
The snapshot stores the VM's actual CID (assigned at cold boot). On restore, the same CID is reused — the guest kernel caches the CID during virtio-vsock probe and silently drops packets with mismatched dst_cid.
Every layer has an optional snapshot field that defaults to None:
| Layer | Field | Type | Default |
|---|---|---|---|
| SandboxBuilder | .snapshot(path) | Option<PathBuf> | None |
| BoxConfig | snapshot | Option<PathBuf> | None |
| SandboxSpec (YAML) | sandbox.snapshot | Option<String> | None |
| BoxSandboxOverride | sandbox.snapshot | Option<String> | None |
| CreateRunRequest (API) | snapshot | Option<String> | None |
Resolution chain: per-box override → top-level spec → None (cold boot).
When a snapshot string is provided, the runtime resolves it as:
+A direct path to a snapshot directory, or ~/.void-box/snapshots/<prefix>/ (if state.bin exists there). No env var fallback, no auto-detection.
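That resolution can be sketched as a pure function. Names here are hypothetical (not the actual runtime API), and the filesystem check is injected so the logic is testable without touching disk:

```rust
use std::path::{Path, PathBuf};

/// Resolve a snapshot spec to a directory containing state.bin.
/// `exists` is injected so the logic is testable without a real filesystem.
fn resolve_snapshot(
    spec: &str,
    home: &Path,
    exists: &dyn Fn(&Path) -> bool,
) -> Option<PathBuf> {
    // 1. Treat the spec as a literal snapshot directory path.
    let direct = PathBuf::from(spec);
    if exists(&direct.join("state.bin")) {
        return Some(direct);
    }
    // 2. Treat it as a hash prefix under the snapshot cache.
    let cached = home.join(".void-box/snapshots").join(spec);
    if exists(&cached.join("state.bin")) {
        return Some(cached);
    }
    // No env var fallback, no auto-detection: caller cold-boots.
    None
}

fn main() {
    let home = Path::new("/home/user");
    // Pretend only the cached snapshot "def789" exists.
    let exists = |p: &Path| p.starts_with("/home/user/.void-box/snapshots/def789");
    assert!(resolve_snapshot("def789", home, &exists).is_some());
    assert!(resolve_snapshot("missing", home, &exists).is_none());
}
```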
+- evict_lru(max_bytes) removes oldest snapshots first
+- compute_layer_hash(base, layer, content) for deterministic cache keys
+- list_snapshots() / voidbox snapshot list
+- delete_snapshot(prefix) / voidbox snapshot delete <prefix>
+The snapshot cache is stored at ~/.void-box/snapshots/.
+Host and guest communicate over AF_VSOCK (port 1234) using the void-box-protocol crate. The protocol uses simple length-prefixed binary framing with JSON payloads.
+---------------+-----------+--------------------+
+| length (4 B) | type (1B) | payload (N bytes) |
++---------------+-----------+--------------------+
+| Field | Size | Description |
|---|---|---|
| length | 4 bytes | u32 little-endian, payload size only (excludes the 5-byte header) |
| type | 1 byte | Message type discriminant |
| payload | N bytes | JSON-encoded body |
| Type Byte | Direction | Message | Description |
|---|---|---|---|
| 0x01 | host → guest | ExecRequest | Execute a command (program, args, env, timeout) |
| 0x02 | guest → host | ExecResponse | Command result (stdout, stderr, exit_code) |
| 0x03 | both | Ping | Session authentication handshake |
| 0x04 | guest → host | Pong | Authentication reply with protocol version |
| 0x05 | host → guest | Shutdown | Request guest shutdown |
| 0x0A | host → guest | SubscribeTelemetry | Start telemetry stream |
| 0x0B | host → guest | WriteFile | Write file to guest filesystem |
| 0x0C | guest → host | WriteFileResponse | Write file acknowledgement |
| 0x0D | host → guest | MkdirP | Create directory tree |
| 0x0E | guest → host | MkdirPResponse | Mkdir acknowledgement |
| 0x0F | guest → host | ExecOutputChunk | Streaming output chunk (stream, data, seq) |
| 0x10 | host → guest | ExecOutputAck | Flow control ack (optional) |
| 0x11 | both | SnapshotReady | Guest signals readiness for live snapshot |
The maximum payload size is 64 MB — this prevents OOM from untrusted length fields. Messages exceeding the limit are rejected before allocation.
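The framing can be sketched as follows. The payload is shown as raw JSON bytes (serde is omitted to keep the sketch self-contained), and the decoder enforces the 64 MB cap before touching the payload:

```rust
/// Encode a message: 4-byte LE payload length, 1-byte type, payload.
fn encode_frame(msg_type: u8, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(5 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    frame.push(msg_type);
    frame.extend_from_slice(payload);
    frame
}

/// Decode a complete frame, enforcing the 64 MB payload cap.
fn decode_frame(buf: &[u8]) -> Option<(u8, &[u8])> {
    const MAX_PAYLOAD: u32 = 64 * 1024 * 1024;
    let len = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?);
    if len > MAX_PAYLOAD {
        return None; // rejected before any allocation
    }
    let msg_type = *buf.get(4)?;
    let payload = buf.get(5..5 + len as usize)?;
    Some((msg_type, payload))
}

fn main() {
    // 0x03 = Ping, carrying a JSON body.
    let frame = encode_frame(0x03, br#"{"secret":"abc123"}"#);
    let (msg_type, payload) = decode_frame(&frame).unwrap();
    assert_eq!(msg_type, 0x03);
    assert_eq!(payload, br#"{"secret":"abc123"}"#);
}
```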
+32-byte hex token injected as voidbox.secret=<hex> in kernel cmdline. The guest-agent reads it from /proc/cmdline at boot and requires it in every ExecRequest.
The Debug impl for ExecRequest redacts environment variables matching KEY, SECRET, TOKEN, PASSWORD patterns — preventing accidental credential exposure in logs.
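That redaction rule can be sketched like this (illustrative only; the real Debug impl is not shown in this document, but the pattern list matches the description above):

```rust
const SENSITIVE: [&str; 4] = ["KEY", "SECRET", "TOKEN", "PASSWORD"];

/// Redact values of env vars whose names contain a sensitive pattern.
fn redact_env(env: &[(String, String)]) -> Vec<(String, String)> {
    env.iter()
        .map(|(name, value)| {
            let upper = name.to_uppercase();
            let shown = if SENSITIVE.iter().any(|p| upper.contains(p)) {
                "<redacted>".to_string()
            } else {
                value.clone()
            };
            (name.clone(), shown)
        })
        .collect()
}

fn main() {
    let env: Vec<(String, String)> = vec![
        ("ANTHROPIC_API_KEY".into(), "sk-ant-xxx".into()),
        ("HOME".into(), "/root".into()),
    ];
    let out = redact_env(&env);
    assert_eq!(out[0].1, "<redacted>"); // matches KEY
    assert_eq!(out[1].1, "/root");      // passes through
}
```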
VoidBox uses smoltcp-based usermode networking (SLIRP) — no root, no TAP devices, no bridge configuration.
+Guest VM Host
++---------------------+ +------------------+
+| eth0: 10.0.2.15/24 | | |
+| gw: 10.0.2.2 |-- virtio-net ------| SLIRP stack |
+| dns: 10.0.2.3 | (MMIO) | (smoltcp) |
++---------------------+ | |
+ | 10.0.2.2 -> NAT |
+ | -> 127.0.0.1 |
+ +------------------+
+| Endpoint | Address | Description |
|---|---|---|
| Guest IP | 10.0.2.15/24 | Static IP assigned to guest eth0 |
| Gateway | 10.0.2.2 | Mapped to host 127.0.0.1 — guest reaches host services via this address |
| DNS | 10.0.2.3 | Forwarded to host resolver |
Outbound TCP/UDP is NATed through the host. The guest reaches host services (e.g. Ollama on :11434) via 10.0.2.2.
Void-Box
diff --git a/site/guides/getting-started/index.html b/site/guides/getting-started/index.html
new file mode 100644
index 0000000..f1cdd12
--- /dev/null
+++ b/site/guides/getting-started/index.html
@@ -0,0 +1,142 @@
+
+
+
+
+
+
+
+ This guide walks you through installing VoidBox, defining your first agent, and running it. By the end you will have an isolated micro-VM executing a prompt-driven workflow.
+You need /dev/kvm available (most cloud instances and bare-metal servers).
+Add VoidBox as a library dependency:
+cargo add void-box
+ Or install the CLI binary:
+cargo install void-box
+ Declare skills, bind them to an isolated execution boundary, and run:
+use void_box::agent_box::VoidBox;
+use void_box::skill::Skill;
+use void_box::llm::LlmProvider;
+
+// Skills = declared capabilities
+let hn_api = Skill::file("skills/hackernews-api.md")
+ .description("HN API via curl + jq");
+
+let reasoning = Skill::agent("claude-code")
+ .description("Autonomous reasoning and code execution");
+
+// VoidBox = Agent(Skills) + Isolation
+let researcher = VoidBox::new("hn_researcher")
+ .skill(hn_api)
+ .skill(reasoning)
+ .llm(LlmProvider::ollama("qwen3-coder"))
+ .memory_mb(1024)
+ .network(true)
+ .prompt("Analyze top HN stories for AI engineering trends")
+ .build()?;
+
+ The same agent defined declaratively:
+# hackernews_agent.yaml
+api_version: v1
+kind: agent
+name: hn_researcher
+
+sandbox:
+ mode: auto
+ memory_mb: 1024
+ network: true
+
+llm:
+ provider: ollama
+ model: qwen3-coder
+
+agent:
+ prompt: "Analyze top HN stories for AI engineering trends"
+ skills:
+ - "file:skills/hackernews-api.md"
+ - "agent:claude-code"
+ timeout_secs: 600
+
+ With the Rust API:
+cargo run
+ With the CLI and a YAML spec:
+voidbox run --file hackernews_agent.yaml
+ When you run a VoidBox agent, the following sequence executes automatically:
+
@@ -32,6 +33,10 @@
Practical implementation walkthroughs.
.pipe() and .fan_out() — fresh VM per stage, no state leaks.
Declarative YAML Specs: define agents and pipelines as config files and run them with void-box run.
diff --git a/site/guides/observability-setup/index.html b/site/guides/observability-setup/index.html
new file mode 100644
index 0000000..c916180
--- /dev/null
+++ b/site/guides/observability-setup/index.html
@@ -0,0 +1,109 @@
+
+
+
+
+
+
+
+ Every pipeline run is fully instrumented out of the box. Each VM stage emits spans and metrics via OTLP, giving you end-to-end visibility across isolated execution boundaries — from pipeline orchestration down to individual tool calls inside each micro-VM.
+ +
+
+
Structured logs are [vm:NAME] prefixed and trace-correlated for easy filtering.
Build with the opentelemetry feature flag and set the OTLP endpoint:
cargo build --features opentelemetry
+ Then set the endpoint environment variable when running:
+VOIDBOX_OTLP_ENDPOINT=http://localhost:4317 \
+cargo run --bin voidbox -- run --file agent.yaml
+ | Environment Variable | Description |
|---|---|
| VOIDBOX_OTLP_ENDPOINT | OTLP gRPC endpoint (e.g. http://localhost:4317) |
| OTEL_SERVICE_NAME | Service name for traces (default: void-box) |
Traces follow a hierarchical structure from the pipeline level down to individual tool calls within each VM stage:
+Pipeline span
+ └─ Stage 1 span (box_name="data_analyst")
+ ├─ tool_call event: Read("input.json")
+ ├─ tool_call event: Bash("curl ...")
+ └─ attributes: tokens_in, tokens_out, cost_usd, model
+ └─ Stage 2 span (box_name="quant_analyst")
+ └─ ...
+ Each stage span carries attributes for token counts, cost, model used, and duration. Tool call events are recorded as span events within the stage span.
+ +The guest-agent inside each micro-VM periodically reads /proc/stat and /proc/meminfo, then sends TelemetryBatch messages over vsock to the host. On the host side, the TelemetryAggregator ingests these batches and exports them as OTLP metrics.
Guest telemetry gives you per-VM resource utilization without any agent-side instrumentation. CPU and memory metrics flow automatically as long as the guest-agent is running.
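The guest-side sampling can be sketched as a parser over /proc/meminfo text. This is illustrative (the actual guest-agent code is not shown here); in the real agent the input would come from fs::read_to_string("/proc/meminfo"):

```rust
/// Parse a field like "MemAvailable" out of /proc/meminfo text,
/// returning its value in kB.
fn meminfo_kb(contents: &str, field: &str) -> Option<u64> {
    contents.lines().find_map(|line| {
        let rest = line.strip_prefix(field)?.strip_prefix(':')?;
        rest.trim().split_whitespace().next()?.parse().ok()
    })
}

fn main() {
    let sample = "MemTotal:       262144 kB\nMemAvailable:   131072 kB\n";
    assert_eq!(meminfo_kb(sample, "MemAvailable"), Some(131_072));
    assert_eq!(meminfo_kb(sample, "SwapTotal"), None);
}
```

Values sampled this way would be batched into TelemetryBatch messages and shipped over vsock, as described above.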
+The repository includes a ready-to-run, pre-configured observability stack in the playground/ directory.
See the playground/ directory in the repository for setup instructions.
diff --git a/site/guides/pipeline-composition/index.html b/site/guides/pipeline-composition/index.html
index fd30feb..f0c46b3 100644
--- a/site/guides/pipeline-composition/index.html
+++ b/site/guides/pipeline-composition/index.html
@@ -3,7 +3,7 @@
-
Void-Box
diff --git a/site/guides/running-on-linux/index.html b/site/guides/running-on-linux/index.html
new file mode 100644
index 0000000..4bce1d8
--- /dev/null
+++ b/site/guides/running-on-linux/index.html
@@ -0,0 +1,163 @@
+
+
+
+
+
+
+
+ VoidBox runs natively on any Linux host with /dev/kvm. This guide covers zero-setup mode, manual image builds, mock mode for development, and running the test suite.
On a Linux host with /dev/kvm, VoidBox auto-pulls a pre-built guest image (kernel + initramfs) from GHCR on first run. No manual build steps required:
# Just works — guest image is pulled and cached automatically
+ANTHROPIC_API_KEY=sk-ant-xxx \
+cargo run --bin voidbox -- run --file examples/specs/oci/agent.yaml
+
+# Or with Ollama
+cargo run --bin voidbox -- run --file examples/specs/oci/workflow.yaml
+ The guest image (ghcr.io/the-void-ia/voidbox-guest) contains the kernel and initramfs with guest-agent, busybox, and common tools. It is cached at ~/.voidbox/oci/guest/ after the first pull.
VoidBox resolves the kernel and initramfs using a 5-step chain:
+1. sandbox.kernel / sandbox.initramfs in the spec (explicit paths)
+2. VOID_BOX_KERNEL / VOID_BOX_INITRAMFS env vars
+3. sandbox.guest_image in the spec (explicit OCI ref)
+4. ghcr.io/the-void-ia/voidbox-guest:v{version} (auto-pull)
+5. mode: auto
+To use a custom guest image or disable auto-pull:
+sandbox:
+ # Use a specific guest image
+ guest_image: "ghcr.io/the-void-ia/voidbox-guest:latest"
+
+ # Or disable auto-pull (empty string)
+ # guest_image: ""
+
+ If you prefer to build the guest image locally:
+# Build base guest initramfs (guest-agent + tools; no Claude bundle)
+scripts/build_guest_image.sh
+
+# Download a kernel
+scripts/download_kernel.sh
+
+# Run with explicit paths
+ANTHROPIC_API_KEY=sk-ant-xxx \
+VOID_BOX_KERNEL=target/vmlinuz-amd64 \
+VOID_BOX_INITRAMFS=/tmp/void-box-rootfs.cpio.gz \
+cargo run --example trading_pipeline
+ For a production Claude-capable initramfs:
+# Build production rootfs/initramfs with native claude-code + CA certs + sandbox user
+scripts/build_claude_rootfs.sh
+ | Script | Purpose |
|---|---|
| scripts/build_guest_image.sh | Base runtime image for general VM/OCI work |
| scripts/build_claude_rootfs.sh | Production image for direct Claude runtime in guest |
| scripts/build_test_image.sh | Deterministic test image with claudio mock |
No KVM required. Mock mode lets you develop and test pipeline logic without hardware virtualization:
+cargo run --example quick_demo
+cargo run --example trading_pipeline
+cargo run --example parallel_pipeline
+ Run a parallel pipeline with per-box model overrides using environment variables:
+OLLAMA_MODEL=phi4-mini \
+OLLAMA_MODEL_QUANT=qwen3-coder \
+OLLAMA_MODEL_SENTIMENT=phi4-mini \
+VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
+VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
+cargo run --example parallel_pipeline
+ cargo test --lib
+ cargo test --test skill_pipeline
+ cargo test --test integration
+ scripts/build_test_image.sh
+VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
+VOID_BOX_INITRAMFS=/tmp/void-box-test-rootfs.cpio.gz \
+cargo test --test e2e_skill_pipeline -- --ignored --test-threads=1
+
+
+ VoidBox runs natively on Apple Silicon Macs using Apple's Virtualization.framework. No Docker or Linux VM required.
+ +VoidBox on macOS uses Virtualization.framework (VZ) directly on Apple Silicon (M1 or later). This gives you the same hardware-isolated micro-VM execution model as Linux/KVM, with no container runtime dependency.
+Install the musl cross-compilation toolchain. This compiles from source and takes approximately 30 minutes the first time:
+# Install the musl cross-compilation toolchain
+brew install filosottile/musl-cross/musl-cross
+ Add the Rust target for Linux ARM64:
+rustup target add aarch64-unknown-linux-musl
+ Download the kernel, build the guest initramfs, compile, codesign, and run:
+# Download an ARM64 Linux kernel (cached in target/)
+scripts/download_kernel.sh
+
+# Build the guest initramfs (cross-compiles guest-agent, downloads claude-code + busybox)
+scripts/build_claude_rootfs.sh
+
+# Build the example and sign it with the virtualization entitlement
+cargo build --example ollama_local
+codesign --force --sign - --entitlements voidbox.entitlements target/debug/examples/ollama_local
+
+# Run (Ollama must be listening on 0.0.0.0:11434)
+OLLAMA_MODEL=qwen3-coder \
+VOID_BOX_KERNEL=target/vmlinux-arm64 \
+VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
+target/debug/examples/ollama_local
+ Every cargo build invalidates the code signature. You must re-run codesign after each rebuild, or macOS will refuse to grant the virtualization entitlement and the VM will fail to start.
When using the voidbox CLI, cargo run automatically codesigns before executing (via .cargo/config.toml runner). Just run:
cargo run --bin voidbox -- run --file examples/specs/oci/guest-image-workflow.yaml
+ If running the binary directly (e.g. ./target/debug/voidbox), you must codesign manually first:
codesign --force --sign - --entitlements voidbox.entitlements target/debug/voidbox
+ Then run with the required environment variables:
+VOID_BOX_KERNEL=target/vmlinux-arm64 \
+VOID_BOX_INITRAMFS=target/void-box-rootfs.cpio.gz \
+./target/debug/voidbox run --file examples/specs/oci/agent.yaml
+
diff --git a/site/index.html b/site/index.html
index 5bf4b84..1f0bdb3 100644
--- a/site/index.html
+++ b/site/index.html
@@ -3,7 +3,7 @@
- Agent runtime boundaries with the simplicity of declarative skills.
@@ -157,11 +162,97 @@
+Three steps from declaration to isolated execution.
+Define capabilities as MCP servers, SKILL files, CLI tools, or OCI images. Skills are what your agent can do.
+Set memory, vCPUs, network access, and mounts. Each sandbox is a hardware-isolated micro-VM boundary.
+VoidBox boots a micro-VM, provisions skills, runs claude-code, and returns results. No shared kernel. No escape surface.
+One command. Binary + kernel + initramfs — everything bundled.
+curl -fsSL https://raw.githubusercontent.com/the-void-ia/void-box/main/scripts/install.sh | sh
+ brew tap the-void-ia/tap
+brew install voidbox
+ curl -fsSLO https://github.com/the-void-ia/void-box/releases/download/v0.1.2/voidbox_0.1.2_amd64.deb
+sudo dpkg -i voidbox_0.1.2_amd64.deb
+ sudo rpm -i https://github.com/the-void-ia/void-box/releases/download/v0.1.2/voidbox-0.1.2-1.x86_64.rpm
+ v0.1.2 · All releases →
+Real agent runs inside hardware-isolated micro-VMs.
+OpenClaw agent — live demo running inside a VoidBox micro-VM
+
+ Code review pipeline — multi-stage analysis with fan-out
+