Bridge ACP adapters running on a macOS host into OrbStack, Docker, or any other remote environment over raw TCP or HTTP CONNECT.
acpnet was built for one specific pain point:
- `acpx` and OpenClaw can run inside a container
- `codex` and `claude code` often live on the macOS host
- ACP adapters expect to be spawned locally over stdio
This project turns that local stdio boundary into a network hop while keeping the ACP stream intact.
- Runs a host-side bridge server that starts ACP adapters such as `@zed-industries/codex-acp` and `@zed-industries/claude-agent-acp`
- Runs a client-side shim inside a container or remote machine
- Forwards ACP stdio traffic over:
- raw TCP
- HTTP CONNECT
- Optionally rewrites absolute paths in ACP NDJSON messages so container paths and host paths can differ
acpx is designed to spawn ACP adapters locally. In a containerized setup, that means:
- the container can run `acpx`
- the host can run `codex`/`claude`
- but the two cannot talk over local stdio directly
acpnet fills that gap without patching acpx, codex-acp, claude-agent-acp, or OpenClaw.
These were manually tested on March 15, 2026 on macOS + OrbStack:
| Scenario | Status |
|---|---|
| Local raw TCP bridge with generic stdio process (`cat`) | Verified |
| Container `acpx codex` -> host Codex over raw TCP | Verified |
| Container `acpx claude` -> host Claude Code over raw TCP | Verified |
| Container path `/workspace/...` -> host path `/Users/...` with `--map`, Codex | Verified |
| Container path `/workspace/...` -> host path `/Users/...` with `--map`, Claude Code | Verified |
| Container `acpx codex` -> host Codex over HTTP CONNECT | Verified |
| Container `acpx claude` -> host Claude Code over HTTP CONNECT | Verified |
```
container / remote env
  acpx / OpenClaw / any ACP client
       |
       | spawn
       v
  acpnet client
       |
       | TCP or HTTP CONNECT
       v
macOS host
  acpnet serve
       |
       | spawn
       v
  codex-acp / claude-agent-acp
       |
       v
  codex / claude code
```
The bridge uses a small handshake, then tunnels the remaining ACP traffic.
- Client connects over raw TCP or HTTP CONNECT
- Client sends one JSON line:

  ```json
  {"token":"...","agent":"codex","cwd":"/workspace/project"}
  ```

- Server validates the token, resolves the target adapter, maps the working directory if needed, and starts the host-side adapter
- Server responds with one JSON line:

  ```json
  {"ok":true}
  ```

- The rest of the stream is forwarded bidirectionally
When path mappings are configured, the bridge rewrites absolute paths inside JSON lines in both directions. This is what makes `/workspace/...` inside the container work against `/Users/...` on the host.

`--map` is required whenever the client-side path and the host-side path differ.
Example:
- container or OpenClaw side project path: `/app`
- macOS host project path: `/Users/zhangwei/work/my-project`
Start the host bridge like this:
```
acpnet serve \
  --listen 0.0.0.0:4601 \
  --token "$TOKEN" \
  --map /app=/Users/zhangwei/work/my-project
```

What `--map` does:
- maps the incoming client `cwd` before the host adapter starts
- rewrites absolute paths inside ACP JSON lines in both directions
- keeps the container seeing `/app/...` while the host agent sees `/Users/...`
When you need it:
- container path and host path are different
- OpenClaw or `acpx` runs in Docker/OrbStack with a mounted project
- the host agent must work on the same files through a different absolute path
When you do not need it:
- the client and host both use the same absolute path
Common mistake:
- client runs in `/app`
- host bridge starts without `--map`
- host rejects the request with an error such as `stat "/app": no such file or directory`
Important limitations:
- `acpnet` bridges ACP traffic, not filesystems
- if the client runs on a different machine, the host must also have the same project files
- `--map` only translates paths; it does not copy or mount code
- if the host does not have the project locally, Codex or Claude Code cannot work on it
Install the published build:
```
brew install zhangweiii/tap/acpnet
```

Upgrade to the latest published build:

```
brew upgrade acpnet
```

Confirm the installed version:

```
acpnet version
```

Build from source:

```
git clone https://github.com/your-org/acpnet.git
cd acpnet
go build -o dist/acpnet-darwin-arm64 .
GOOS=linux GOARCH=arm64 go build -o dist/acpnet-linux-arm64 .
```

Raw TCP only:
```
TOKEN='replace-with-a-random-secret'
./dist/acpnet-darwin-arm64 serve \
  --listen 0.0.0.0:4601 \
  --token "$TOKEN"
```

Raw TCP + HTTP CONNECT:
```
TOKEN='replace-with-a-random-secret'
./dist/acpnet-darwin-arm64 serve \
  --listen 0.0.0.0:4601 \
  --http-listen 0.0.0.0:4603 \
  --http-path /v1/connect \
  --token "$TOKEN"
```

With path mapping:
```
./dist/acpnet-darwin-arm64 serve \
  --listen 0.0.0.0:4601 \
  --http-listen 0.0.0.0:4603 \
  --token "$TOKEN" \
  --map /workspace=/Users/zhangwei/work
```

If the container uses `/app` instead of `/workspace`, map that path instead:
```
./dist/acpnet-darwin-arm64 serve \
  --listen 0.0.0.0:4601 \
  --token "$TOKEN" \
  --map /app=/Users/zhangwei/work/my-project
```

Raw TCP:
```
/workspace/acpnet/dist/acpnet-linux-arm64 \
  client \
  --server tcp://host.docker.internal:4601 \
  --token "$TOKEN" \
  --agent codex
```

HTTP CONNECT:
```
/workspace/acpnet/dist/acpnet-linux-arm64 \
  client \
  --server http://host.docker.internal:4603/v1/connect \
  --token "$TOKEN" \
  --agent codex
```

If `--server` does not include a scheme, it defaults to raw TCP.
The cleanest setup is to override `acpx` agent aliases in `~/.acpx/config.json`.
Raw TCP:

```json
{
  "agents": {
    "codex": {
      "command": "/workspace/acpnet/dist/acpnet-linux-arm64 client --server tcp://host.docker.internal:4601 --token YOUR_TOKEN --agent codex"
    },
    "claude": {
      "command": "/workspace/acpnet/dist/acpnet-linux-arm64 client --server tcp://host.docker.internal:4601 --token YOUR_TOKEN --agent claude"
    }
  }
}
```

HTTP CONNECT:

```json
{
  "agents": {
    "codex": {
      "command": "/workspace/acpnet/dist/acpnet-linux-arm64 client --server http://host.docker.internal:4603/v1/connect --token YOUR_TOKEN --agent codex"
    },
    "claude": {
      "command": "/workspace/acpnet/dist/acpnet-linux-arm64 client --server http://host.docker.internal:4603/v1/connect --token YOUR_TOKEN --agent claude"
    }
  }
}
```

Then inside the container:

```
acpx codex exec "Reply with exactly OK."
acpx claude exec "Reply with exactly OK."
```

This bridge is designed to work well with containerized OpenClaw setups that delegate through acpx.
Recommended pattern:
- Run OpenClaw inside the container
- Install and enable the OpenClaw `acpx` plugin
- Configure container-local `~/.acpx/config.json` to point `codex`/`claude` to `acpnet client`
- Run `acpnet serve` on the host
This avoids patching OpenClaw source code.
The repository includes a verification script for the published Homebrew build.
Local checks only:

```
./scripts/verify-brew-e2e.sh
```

Local checks plus container checks:

```
./scripts/verify-brew-e2e.sh --container
```

If your environment already has a suitable image with node, npm, and npx, override the default image:

```
ACPNET_E2E_IMAGE=agent0ai/agent-zero:latest ./scripts/verify-brew-e2e.sh --container
```

What the script validates:
- the brew-installed host `acpnet serve`
- raw TCP and HTTP CONNECT transport
- local `acpx -> acpnet -> host codex`
- local `acpx -> acpnet -> host claude`
- optional container `acpx -> release Linux acpnet client -> brew host acpnet`
Requirements:
- local `acpnet` installed from Homebrew
- `npx`
- `codex` for Codex checks
- `claude` for Claude checks
- `docker` only when using `--container`
Environment overrides:
- `ACPNET_E2E_IMAGE`: container image to use for `--container`
- `ACPNET_E2E_WORKSPACE`: host path mapped as `/workspace`
- `ACPNET_E2E_REPO_OWNER` / `ACPNET_E2E_REPO_NAME`: release source override
If you do not override adapter commands, the host server uses:
- codex: `npx -y @zed-industries/codex-acp@0.10.0`
- claude: `npx -y @zed-industries/claude-agent-acp@0.21.0`
Override them if you need different versions:
```
./dist/acpnet-darwin-arm64 serve \
  --token "$TOKEN" \
  --codex-cmd 'npx -y @zed-industries/codex-acp@0.10.0' \
  --claude-cmd 'npx -y @zed-industries/claude-agent-acp@0.21.0'
```

When HTTP mode is enabled:
```
curl http://127.0.0.1:4603/healthz
```

Example response:

```json
{"ok":true,"path":"/v1/connect","version":"dev"}
```

`acpnet serve` example:

```
acpnet serve \
  --listen 0.0.0.0:4601 \
  --http-listen 0.0.0.0:4603 \
  --http-path /v1/connect \
  --token YOUR_TOKEN \
  --map /workspace=/Users/zhangwei/work \
  --codex-cmd 'npx -y @zed-industries/codex-acp@0.10.0' \
  --claude-cmd 'npx -y @zed-industries/claude-agent-acp@0.21.0'
```

`acpnet client` example:

```
acpnet client \
  --server tcp://host.docker.internal:4601 \
  --token YOUR_TOKEN \
  --agent codex \
  --cwd /workspace/project
```

Print the version:

```
acpnet version
```

Run the unit tests:

```
go test ./...
```

The tests cover:

- JSON path rewrite logic
- raw TCP bridge round trip
- HTTP CONNECT bridge round trip
Real Codex / Claude Code integrations require local credentials and are not suitable for public CI.
- The bridge is not anonymous. Use a strong random token.
- Raw TCP should usually be bound to a private interface.
- HTTP CONNECT mode is convenient for remote routing, but you should still put it behind your own network boundary or reverse proxy.
- Path rewriting is intentionally simple: it rewrites absolute path prefixes in JSON values. That is sufficient for ACP NDJSON traffic, but you should still test your exact workflow before exposing it broadly.
- TLS termination and reverse-proxy deployment examples
- Optional allowlists for source IPs or agents
- Better metrics and structured logs
- Packaged examples for OpenClaw + OrbStack
MIT