feat(acp): remote ACP — run agent sessions on the server, stream to thin client #40
Summary
Today ACP agents (Claude, Codex, Gemini) run as local subprocesses on the machine running axon serve. When using the thin client / hive system (issue #23), the client device connects over WebSocket but can't run ACP sessions because it doesn't have Claude/Codex/Gemini installed locally. Remote ACP lets the subprocess run on the server while the thin client streams the session over WebSocket — full agent capability from any device with just the axon binary and a token.
Current Execution Flow
```
Frontend/Client
  │  WS: { type: "execute", mode: "pulse_chat", input: "...", flags: { agent, model, session_id } }
  ▼
crates/web/execute/sync_mode/pulse_chat.rs
  → get_or_create_acp_connection()
  → resolve_acp_adapter_command()        // finds 'claude', 'codex', or 'gemini' binary
  ▼
crates/services/acp.rs:140 — AcpClientScaffold::spawn_adapter()
  → tokio::process::Command::new(program)   // ← SPAWNS LOCAL SUBPROCESS
  → stdin/stdout piped to ACP SDK
  ▼
crates/services/acp/session.rs
  → initialize_connection()              // ACP protocol over pipes
  → AcpBridgeClient callbacks stream events back over WS
```
The bottleneck is spawn_adapter() — it assumes the binary is reachable locally. Everything above and below it is already transport-agnostic.
The Key Insight
The ACP SDK only requires AsyncRead + AsyncWrite traits for the subprocess stdio. If those traits are implemented over a WebSocket tunnel instead of OS pipes, the rest of the pipeline (protocol handling, bridge callbacks, permission gates, session state) is unchanged.
This means remote ACP is an I/O adapter swap, not an architectural rewrite.
Design
Two execution modes
Mode A — Server-side subprocess (default, current behavior):
Thin Client → WS → Server → spawns Claude subprocess locally → streams events back
Used when: server has Claude/Codex/Gemini installed (homelab axon host)
Mode B — Client-side subprocess (remote ACP):
Thin Client → WS → Server → WS tunnel → Thin Client spawns Claude locally → events tunnel back
Used when: Claude is installed on the CLIENT device, not the server (e.g., dev laptop with Claude Code installed, using axon as the session/memory backend)
The server is always the session authority — it holds session state, memory, and routes events to the frontend. The subprocess location is a deployment detail.
New WS message types (client ↔ server)
```ts
// Server → Thin Client: "please spawn this for me"
{ type: "acp_spawn_request", session_id: string, program: string, args: string[], cwd?: string }

// Thin Client → Server: "spawned OK" or error
{ type: "acp_spawn_ack", session_id: string, ok: boolean, error?: string }

// Bidirectional: raw ACP protocol bytes tunneled over WS
{ type: "acp_pipe_data", session_id: string, stream: "stdin" | "stdout" | "stderr", data: string /* base64 */ }

// Thin Client → Server: process exited
{ type: "acp_process_exit", session_id: string, exit_code: number }

// Server → Thin Client: forwarded permission response from frontend user
{ type: "acp_permission_response", session_id: string, tool_call_id: string, decision: "allow" | "deny" }
```
Implementation in spawn_adapter()
```rust
pub enum SpawnMode {
    Local,                // current behavior — spawn subprocess here
    RemoteClient(ConnId), // delegate to connected thin client
}

// In AcpClientScaffold::spawn_adapter():
match spawn_mode {
    SpawnMode::Local => {
        // existing tokio::process::Command logic (unchanged)
    }
    SpawnMode::RemoteClient(conn_id) => {
        // 1. Send acp_spawn_request to conn_id over WS
        // 2. Wait for acp_spawn_ack
        // 3. Return (stdin_tx, stdout_rx) backed by WS tunnel
        //    → implement AsyncRead/AsyncWrite over the pipe_data channel
        // 4. Rest of pipeline unchanged (ACP SDK doesn't know the difference)
    }
}
```
Thin client side (crates/client.rs)
New handler in the thin client's WS message loop:
```rust
WsMessage::AcpSpawnRequest { session_id, program, args, cwd } => {
    // validate program against ALLOWED_MODES (same allowlist as server)
    let child = tokio::process::Command::new(&program)
        .args(&args)
        .current_dir(cwd.as_deref().unwrap_or("."))
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;
    // pump child stdio ↔ WS acp_pipe_data messages
    // forward permission requests from child to server
    ws.send(AcpSpawnAck { session_id, ok: true, error: None }).await?;
}
```
Session routing
When a thin client connects and sends hive_register, the server notes its ConnId. When a new ACP session is requested:
- Check `cfg.acp_spawn_mode` (config: `server` | `client` | `auto`)
- If `auto`: try server-local spawn first; if the binary is not found, fall back to a registered hive client
- If `client`: always delegate to the originating client's `ConnId`
- Track `spawn_conn_id` on the `AcpConnectionHandle` for routing subsequent pipe data
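The routing decision above can be sketched as a pure function. Everything here is illustrative: binary detection (e.g. a `which`-style lookup) and the hive registry query are assumed to happen before the call, and `u64` stands in for the real `ConnId` type.

```rust
#[derive(Debug, PartialEq)]
enum SpawnMode {
    Local,             // spawn the subprocess on the server
    RemoteClient(u64), // delegate to this connected thin client (ConnId stand-in)
}

fn resolve_spawn_mode(
    configured: &str,        // "server" | "client" | "auto" from axon.toml
    server_has_binary: bool, // adapter binary found on the server's PATH
    hive_client: Option<u64>, // registered, ACP-capable thin client, if any
) -> Result<SpawnMode, String> {
    match configured {
        "server" => Ok(SpawnMode::Local),
        "client" => hive_client
            .map(SpawnMode::RemoteClient)
            .ok_or_else(|| "no ACP-capable hive client registered".into()),
        "auto" => {
            // Prefer a server-local spawn; fall back to the hive client.
            if server_has_binary {
                Ok(SpawnMode::Local)
            } else {
                hive_client
                    .map(SpawnMode::RemoteClient)
                    .ok_or_else(|| "binary not found locally and no hive client".into())
            }
        }
        other => Err(format!("invalid acp_spawn_mode: {other}")),
    }
}
```

Keeping this as a pure function makes the `auto` fallback trivially unit-testable, independent of WS plumbing.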
Auth + security
- `acp_spawn_request` only sent to authenticated hive clients (token validated on `hive_register`)
- Program allowlist enforced on the CLIENT side (same `ALLOWED_MODES`/`ALLOWED_FLAGS` as server)
- Pipe data is opaque bytes: the server never inspects the payload, just routes it
- Permission requests from the remote subprocess still flow to the web UI user (server mediates)
- Session state always lives on the server; the thin client subprocess is stateless from the server's perspective
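Client-side allowlist enforcement from the first two bullets might look like this minimal sketch. The allowlist entries and the helper name are illustrative; the real `ALLOWED_MODES` lives in code shared with the server.

```rust
/// Programs the thin client is willing to spawn on behalf of the server.
/// Illustrative stand-in for the shared ALLOWED_MODES allowlist.
const ALLOWED_PROGRAMS: &[&str] = &["claude", "codex", "gemini"];

fn validate_spawn_request(program: &str) -> Result<(), String> {
    // Reject path components outright: the request names a program, not a
    // path, so the client resolves it via its own PATH.
    if program.contains('/') || program.contains('\\') {
        return Err(format!("program must be a bare name, got {program:?}"));
    }
    if !ALLOWED_PROGRAMS.contains(&program) {
        return Err(format!("{program:?} is not in the allowlist"));
    }
    Ok(())
}
```

Running the check on the client (not just the server) means a compromised or buggy server still cannot make the client execute arbitrary binaries.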
Config
```toml
# axon.toml
[acp]
spawn_mode = "auto"   # "server" | "client" | "auto"
```
```shell
AXON_ACP_SPAWN_MODE=auto   # env override
```
Files
| File | Action |
|---|---|
| `crates/services/acp.rs` | Add `SpawnMode` enum; branch `spawn_adapter()` on mode |
| `crates/services/acp/session.rs` | `spawn_adapter_with_io()` accepts `(AsyncRead, AsyncWrite)` (already mostly there) |
| `crates/web/hive.rs` | Track `acp_capable: bool` on `HiveEntry`; lookup by `ConnId` |
| `crates/web.rs` | Route `acp_pipe_data` / `acp_spawn_ack` / `acp_process_exit` WS messages |
| `crates/client.rs` (issue #23) | Handle `acp_spawn_request`; pump child stdio ↔ WS |
| `crates/core/config/types/config.rs` | Add `acp_spawn_mode` field |
| `docs/MCP.md` | Document remote ACP execution model |
| `docs/sessions/` | Session log when implemented |
Acceptance Criteria
- `SpawnMode::Local`: existing behavior unchanged, all current tests pass
- `SpawnMode::RemoteClient(conn_id)`: subprocess spawned on thin client, events stream back
- ACP SDK receives `(AsyncRead, AsyncWrite)` regardless of spawn mode; no protocol changes
- Permission requests from remote subprocess correctly routed to web UI user
- `AXON_ACP_SPAWN_MODE=auto` tries server-local first, falls back to registered hive client
- Thin client enforces `ALLOWED_MODES` allowlist before spawning
- Session state (history, memory, context) always stored server-side
- `hive_register` response indicates whether client is ACP-capable
- `cargo clippy` clean, all existing ACP tests pass
- Thin client binary handles `acp_spawn_request` message and pumps stdio ↔ WS