feat(ccy): persistent/resumable containers across login sessions #12
Description
Use Case
Run CCY containers that persist across logout/login, allowing sessions to be shared between local desktop and SSH access.
- Start a CCY session on local desktop, detach, re-attach over SSH
- Container continues running when user is logged out
- Re-attach to an existing session rather than always starting fresh
Constraints
- Must support multiple simultaneous CCY containers per project — e.g. two separate Claude Code sessions on the same project at the same time
- Single-attach prevention only matters per session, not per project
- Containers must be individually addressable/resumable
Technical Brainstorm
Persistence (survive logout)
`loginctl enable-linger <username>`
- Single command, allows user systemd services to keep running post-logout
- Foundation everything else builds on
- Already partially addressed by `play-systemd-user-tweaks.yml`
Podman Quadlets (`.container` files in `~/.config/containers/systemd/`)
- Declarative systemd unit; survives reboot if the `[Install]` section is enabled
- Requires Podman 4.4+ / Fedora 38+ (fine for the F43 branch)
- Version-controllable, fits project's IaC approach
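As a concrete sketch of the Quadlet option, a minimal `.container` unit might look like the following (image name, unit name, and label values are placeholders, not taken from the repo):

```ini
# ~/.config/containers/systemd/ccy-myproject.container  (hypothetical unit)
[Container]
Image=localhost/claude-yolo:latest
ContainerName=ccy-myproject
Label=ccy.project=myproject

[Install]
# Start with the user session; with lingering enabled the service also
# survives logout and comes back after reboot
WantedBy=default.target
```

After `systemctl --user daemon-reload`, Quadlet generates a `ccy-myproject.service` that can be started with `systemctl --user start ccy-myproject`.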
`podman generate systemd`
- Older imperative equivalent, generates unit from running container
- Less preferred vs Quadlets
Attach/Detach (session problem)
`podman attach`
- One client at a time — hard PTY kernel limitation
- Detach with `ctrl-p ctrl-q`, re-attach with `podman attach`
- Not suitable for sharing between local + SSH simultaneously
tmux inside container
- Run tmux as container entrypoint
- Clients do: `podman exec -it <name> tmux attach -t main`
- True detach/reattach; multiple simultaneous viewers possible
- Session survives SSH disconnect
- Natural fit since CCY already targets developer workflows
`podman exec -it` (multiple)
- Each exec is a separate shell — no shared state
- Not suitable if goal is resuming the same Claude Code session
Multiple containers per project
This is the key complication. If multiple CCY containers can run for the same project simultaneously, we can't use a single fixed container name like `ccy-<project>`.
Options for naming/addressing:
- Numeric suffix: `ccy-<project>-1`, `ccy-<project>-2` — assigned at creation, stored in `.claude/ccy/sessions/`
- Random ID suffix: `ccy-<project>-<random8>` — no collision, discoverable via `podman ps --filter label=ccy.project=<project>`
- Podman labels: tag containers with `ccy.project`, `ccy.worktree`, `ccy.started-by`; discoverable with filters
- Session file in worktree: each worktree's `.claude/ccy/.session-id` stores the container ID for that specific session
Preferred approach: labels + session file per worktree
```
label ccy.project=<project-name>
label ccy.worktree=<worktree-path>
label ccy.branch=<git-branch>
```
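A small helper could turn those labels into `podman run` arguments. A hypothetical sketch — the `ccy_labels` name and how project/branch are resolved are assumptions, not code from the wrapper:

```shell
# Hypothetical helper: build --label arguments for a new CCY container
# from the worktree path and branch. Pure string work, no podman needed.
ccy_labels() {
  local worktree="$1" branch="$2"
  printf -- '--label ccy.project=%s --label ccy.worktree=%s --label ccy.branch=%s' \
    "$(basename "$worktree")" "$worktree" "$branch"
}

# Example (word-splitting the output is fine here because the placeholder
# values contain no spaces):
#   podman run -d $(ccy_labels "$PWD" "$(git branch --show-current)") ...
```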
Resume logic:
```shell
# Find containers for this project+worktree
EXISTING=$(podman ps --filter "label=ccy.worktree=$(pwd)" -q)
if [ -n "$EXISTING" ]; then
  podman exec -it "$EXISTING" tmux attach -t main
  exit 0
fi
# Start new container
```

Preventing double-attach to same session
Once tmux is inside the container, multiple clients can attach. For single-attach enforcement:
- flock wrapper: shell lockfile around `tmux attach`
- tmux `lock-session`: read-only lock when the session is occupied
- Simplest option: do nothing — two people attaching to the same tmux session is actually sometimes useful (pair/observe)
May not need to enforce this. Worth deciding at implementation time.
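If enforcement is ever wanted, the flock option is only a few lines. A sketch, assuming one lockfile per session under `/tmp` (the `attach_once` name and lockfile path are assumptions):

```shell
# Sketch of the flock option: one lockfile per session; a second attach
# attempt fails immediately instead of mirroring the first client.
attach_once() {
  local session_id="$1"; shift
  # -n: non-blocking -- fail at once if another client holds the lock
  flock -n "/tmp/ccy-${session_id}.lock" "$@" \
    || { echo "session ${session_id} already attached" >&2; return 1; }
}

# Usage:
#   attach_once "$SESSION_ID" podman exec -it "$SESSION_ID" tmux attach -t main
```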
CCY wrapper changes needed
Current flow: `podman run -it --rm ...` — fire and forget
New flow:
- Check for existing container by label (`ccy.worktree=$(pwd)`)
- If found: `podman exec -it <id> tmux attach -t main`
- If not found: start new container with tmux as entrypoint, drop `--rm`, add labels
- On container exit: container stops but is not auto-removed (allow manual cleanup or explicit `--rm` flag)
- Store container ID in `.claude/ccy/.session-id` for fast lookup (fall back to the label filter)
Dockerfile changes needed
- Add `tmux` to base image
- Change entrypoint to launch a tmux session wrapping the existing entrypoint
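Both changes are small. A sketch, assuming a Fedora-based image and an existing `entrypoint.sh` at the path below (both assumptions):

```dockerfile
# Hypothetical fragment -- base image and entrypoint path are placeholders
RUN dnf install -y tmux && dnf clean all

# Wrap the existing entrypoint in a named tmux session so clients can later
# run: podman exec -it <name> tmux attach -t main
ENTRYPOINT ["tmux", "new-session", "-s", "main", "/usr/local/bin/entrypoint.sh"]
```

Note that tmux in the foreground needs a terminal, so the container would be started with a pseudo-tty allocated (e.g. `podman run -dt`).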
Playbook changes needed
- `play-install-claude-yolo.yml`: add `loginctl enable-linger` task
- Possibly a `play-ccy-cleanup.bash` or cron/systemd timer to prune stopped CCY containers
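The linger task is a one-liner in Ansible terms; a sketch (the `ccy_user` variable name is an assumption):

```yaml
# loginctl records lingering as a file under /var/lib/systemd/linger/,
# which makes the task idempotent via `creates`
- name: Enable lingering so user services survive logout
  ansible.builtin.command: loginctl enable-linger {{ ccy_user }}
  args:
    creates: "/var/lib/systemd/linger/{{ ccy_user }}"
```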
Open Questions
- Should stopped containers be auto-removed after N hours of inactivity? Or manual cleanup only?
- Do we want `--rm`-equivalent behavior on explicit exit (ctrl-d out of Claude), vs detach keeping it alive?
- How should the user discover/list resumable sessions? (a `ccy --list` subcommand?)
- Single-attach enforcement: enforce or just allow tmux multi-attach?
- Container name collision: use labels only, or also enforce unique names?
Related Files
- `files/var/local/claude-yolo/claude-yolo` — main CCY wrapper (start/attach logic here)
- `files/var/local/claude-yolo/Dockerfile` — base image (add tmux here)
- `files/var/local/claude-yolo/entrypoint.sh` — container entrypoint
- `playbooks/imports/optional/common/play-install-claude-yolo.yml` — deployment playbook
- `playbooks/imports/play-systemd-user-tweaks.yml` — linger/systemd user config