
Conversation


@josephschmitt commented Sep 1, 2025

Adds an optional Docker-backed server mode for the TUI and headless server to isolate the runtime environment without sacrificing TUI performance.

The workspace isolation is especially useful in enterprise environments. Local machines often contain tools and credentials that can do real damage if misused, and letting an AI agent run on your local machine with the same access to your entire system that you have can be dangerous.

Instead, this runs the opencode server inside a Docker container with only the current working directory mounted as a volume. The agent can then modify only those files and has no other access to the host system. Since only the server runs in the container, not the TUI itself, the performance penalty should be relatively minimal.
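Conceptually, the --docker flag amounts to something like the following sketch (a hypothetical helper, not the actual implementation in this PR; the real spawn logic, flags, and defaults live in the code):

```ts
// Hypothetical sketch of what --docker roughly amounts to: run only the
// opencode server in a container, with nothing but the current directory mounted.
import { spawn } from "node:child_process"

function startDockerServer(port: number, image = "opencodeai/opencode:server") {
  const args = [
    "run", "--rm",
    "-v", `${process.cwd()}:/workspace`, // the agent can only touch the CWD
    "-w", "/workspace",
    "-p", `${port}:8080`,                // host port -> container server port
    image,
  ]
  // The TUI stays native on the host and talks to http://localhost:<port>.
  return spawn("docker", args, { stdio: "inherit" })
}
```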

Why

  • Improve security/isolation by running the server in a container
  • Avoid host tooling/version conflicts while keeping the TUI native on the host
  • Keep this fully optional; default behavior is unchanged

What

  • TUI/Serve: --docker flag to start the server in Docker, mounting $PWD to /workspace and mapping a host port to container 8080.
  • Image: default to opencodeai/opencode:server; support --docker-image.
  • Local builds: support --dockerfile, --docker-context, --docker-build for building a local image; added script/docker-build and docker:build script.
  • Auth: sync only opencode-managed provider credentials to the server (PUT /auth/:id) and inject only provider-defined env vars (from models.dev) into the container (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY). No $HOME/XDG dirs are mounted. (Sketched below this list.)
  • Dockerfile: based on oven/bun; installs minimal tools (git, curl, unzip, tar, nodejs, npm, golang) and runs bun run /app/src/index.ts serve --hostname 0.0.0.0 --port 8080.
  • CI: GitHub Action to publish opencodeai/opencode:server on release (multi-arch).
  • Docs: README snippet for Docker usage.
  • Config: add server.docker (bool) and server.image so plain opencode can auto-use Docker server mode by default (see the config example under Usage).
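To illustrate the auth bullet above, here is a rough sketch of the credential sync and env-var injection. The PUT /auth/:id endpoint and the env-var filtering come from the description; the function names and types below are placeholders, not the PR's actual code:

```ts
// Hypothetical sketch of the credential sync: PUT each opencode-managed
// provider credential to the containerized server, and build the -e flags
// for provider-defined env vars only.
type ProviderAuth = { id: string; credential: unknown }

async function syncAuth(serverUrl: string, auths: ProviderAuth[]): Promise<void> {
  for (const auth of auths) {
    await fetch(`${serverUrl}/auth/${auth.id}`, {
      method: "PUT",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(auth.credential),
    })
  }
}

// Only env vars that a provider declares (per models.dev), e.g. OPENAI_API_KEY,
// are forwarded; nothing else from the host environment reaches the container.
function providerEnvFlags(keys: string[]): string[] {
  return keys
    .filter((key) => process.env[key] !== undefined)
    .flatMap((key) => ["-e", `${key}=${process.env[key]}`])
}
```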

Usage

  • TUI: opencode --docker (uses Hub image) or opencode --docker --docker-image opencode:local after a local build
  • Serve: opencode serve --docker --port 8080
  • Build: bun run docker:build (tags both opencodeai/opencode:server and opencode:local)
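If you'd rather not pass --docker every time, the server.docker and server.image options mentioned under "What" could look something like this in the opencode config file (only the key names come from this PR; the exact file name and nesting are my assumption):

```json
{
  "server": {
    "docker": true,
    "image": "opencodeai/opencode:server"
  }
}
```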

Notes

  • Backwards-compatible and opt-in.
  • Only provider credentials are synced; no other host secrets are exposed.

@@ -0,0 +1,50 @@
name: Publish Docker Image

@josephschmitt commented Sep 1, 2025


This is an attempt to automate some of the infrastructure by automatically publishing a new opencode Docker image to Docker Hub whenever a new opencode release goes out. The image would have the latest opencode server pre-built.

Comment on lines 139 to 146
if (!docker) {
  UI.error("docker not found, starting server locally")
  return Server.listen({ port: args.port, hostname: args.hostname })
}
@josephschmitt (Contributor Author)

For discussion: do we want to allow this fallback? If I'm depending on opencode being isolated, I might not notice that Docker failed and that it quietly fell back to running locally.

Comment on lines +20 to +22
RUN sed -i 's/"@opencode-ai\/sdk": "workspace:\*"/"@opencode-ai\/sdk": "latest"/g' package.json && \
sed -i 's/"@opencode-ai\/plugin": "workspace:\*"/"@opencode-ai\/plugin": "latest"/g' package.json && \
node -e 'const fs=require("fs"); const root=JSON.parse(fs.readFileSync("/tmp/root.package.json","utf8")); const pkg=JSON.parse(fs.readFileSync("package.json","utf8")); const cat=(root.workspaces&&root.workspaces.catalog)||{}; if(pkg.dependencies){for(const k of Object.keys(pkg.dependencies)) if(pkg.dependencies[k]==="catalog:") pkg.dependencies[k]=cat[k]||pkg.dependencies[k];} if(pkg.devDependencies){for(const k of Object.keys(pkg.devDependencies)) if(pkg.devDependencies[k]==="catalog:") pkg.devDependencies[k]=cat[k]||pkg.devDependencies[k];} fs.writeFileSync("package.json", JSON.stringify(pkg, null, 2));'
@josephschmitt (Contributor Author)

Kind of a hack to avoid putting the entire monorepo into the Docker context; open to other suggestions.

…r not ready; prevent TUI crash/connection refused

gw0 commented Nov 10, 2025

Why only run the Opencode server in a container? Why would one trust the TUI to run on the host? One of the strengths of terminal interfaces is that you can run the whole thing inside a container without any issues (compared to GUIs, which require X11 support on the host).

With this approach, won't the issue of how to define projects and link them with sessions reappear? In the container all projects will be mounted at /workspace, so all sessions will believe they live there.

Eventually one will want to give the Opencode server access to additional tools, install binaries, and add custom mount points, so the docs should contain instructions for this. For example, to keep the host system clean, one might want to give the Opencode container access to a "firewalled" Docker socket (Tecnativa/docker-socket-proxy) and allow it to docker exec into other dev containers where the actual code is being built and run.
