diff --git a/.claude/rules/agent-default-models.md b/.claude/rules/agent-default-models.md index 8a0d7f705..d36826aae 100644 --- a/.claude/rules/agent-default-models.md +++ b/.claude/rules/agent-default-models.md @@ -10,7 +10,6 @@ Last verified: 2026-03-13 | Claude Code | _(routed by Anthropic)_ | `ANTHROPIC_BASE_URL=https://openrouter.ai/api` — model selection handled by Claude's own routing | | Codex CLI | `openai/gpt-5.3-codex` | Hardcoded in `setupCodexConfig()` → `~/.codex/config.toml` | | OpenClaw | `openrouter/auto` | `modelDefault` field in agent config; written to OpenClaw config via `setupOpenclawConfig()` | -| ZeroClaw | _(provider default)_ | `ZEROCLAW_PROVIDER=openrouter` — model selection handled by ZeroClaw's OpenRouter integration | | OpenCode | _(provider default)_ | `OPENROUTER_API_KEY` env var — model selection handled by OpenCode natively | | Kilo Code | _(provider default)_ | `KILO_PROVIDER_TYPE=openrouter` — model selection handled by Kilo Code natively | | Hermes | _(provider default)_ | `OPENAI_BASE_URL=https://openrouter.ai/api/v1` + `OPENAI_API_KEY` — model selection handled by Hermes | diff --git a/.claude/rules/discovery.md b/.claude/rules/discovery.md index a06feef09..beaae0f09 100644 --- a/.claude/rules/discovery.md +++ b/.claude/rules/discovery.md @@ -62,7 +62,7 @@ Do NOT add agents speculatively. Only add one if there's **real community buzz** Agents that ship compiled binaries (Rust, Go, etc.) need separate ARM (aarch64) tarball builds. npm-based agents are arch-independent and only need x86_64 builds. 
When adding a new agent: - If it installs via `npm install -g` → x86_64 tarball only (Node handles arch) - If it installs a pre-compiled binary (curl download, cargo install, go install) → add an ARM entry in `.github/workflows/agent-tarballs.yml` matrix `include` section -- Current native binary agents needing ARM: zeroclaw (Rust), opencode (Go), hermes, claude +- Current native binary agents needing ARM: opencode (Go), hermes, claude To add: same steps as before (manifest.json entry, matrix entries, implement on 1+ cloud, README). diff --git a/.claude/skills/setup-agent-team/qa-fixtures-prompt.md b/.claude/skills/setup-agent-team/qa-fixtures-prompt.md index 52cfcec06..fc036454e 100644 --- a/.claude/skills/setup-agent-team/qa-fixtures-prompt.md +++ b/.claude/skills/setup-agent-team/qa-fixtures-prompt.md @@ -31,7 +31,7 @@ Cloud credentials are stored in `~/.config/spawn/{cloud}.json` (loaded by `sh/sh For each cloud with a fixture directory, check if its required env vars are set: - **hetzner**: `HCLOUD_TOKEN` -- **digitalocean**: `DO_API_TOKEN` +- **digitalocean**: `DIGITALOCEAN_ACCESS_TOKEN` - **aws**: `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` Skip clouds where credentials are missing (log which ones). 
@@ -53,11 +53,11 @@ curl -s -H "Authorization: Bearer ${HCLOUD_TOKEN}" "https://api.hetzner.cloud/v1 curl -s -H "Authorization: Bearer ${HCLOUD_TOKEN}" "https://api.hetzner.cloud/v1/locations" ``` -### DigitalOcean (needs DO_API_TOKEN) +### DigitalOcean (needs DIGITALOCEAN_ACCESS_TOKEN) ```bash -curl -s -H "Authorization: Bearer ${DO_API_TOKEN}" "https://api.digitalocean.com/v2/account/keys" -curl -s -H "Authorization: Bearer ${DO_API_TOKEN}" "https://api.digitalocean.com/v2/sizes" -curl -s -H "Authorization: Bearer ${DO_API_TOKEN}" "https://api.digitalocean.com/v2/regions" +curl -s -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" "https://api.digitalocean.com/v2/account/keys" +curl -s -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" "https://api.digitalocean.com/v2/sizes" +curl -s -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" "https://api.digitalocean.com/v2/regions" ``` For any other cloud directories found, read their TypeScript module in `packages/cli/src/{cloud}/` to discover the API base URL and auth pattern, then call equivalent GET-only endpoints. diff --git a/.claude/skills/setup-agent-team/refactor-issue-prompt.md b/.claude/skills/setup-agent-team/refactor-issue-prompt.md index ff976ac7d..c50bab8af 100644 --- a/.claude/skills/setup-agent-team/refactor-issue-prompt.md +++ b/.claude/skills/setup-agent-team/refactor-issue-prompt.md @@ -74,7 +74,7 @@ Track lifecycle: "pending-review" → "under-review" → "in-progress". Check la 7. Keep pushing commits to the same branch as work progresses 8. When fix is complete and tests pass: `gh pr ready NUMBER`, post update comment linking PR 9. Do NOT close the issue — `Fixes #SPAWN_ISSUE_PLACEHOLDER` auto-closes on merge -10. Clean up: `git worktree remove WORKTREE_BASE_PLACEHOLDER`, shutdown teammates +10. Clean up: run `git worktree remove WORKTREE_BASE_PLACEHOLDER` and call `TeamDelete` in ONE turn, then output a plain-text summary with **NO further tool calls**. 
A text-only response ends the non-interactive session immediately. ## Commit Markers @@ -84,5 +84,6 @@ Every commit: `Agent: issue-fixer` + `Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>` - If blocked (> 10 min), comment on issue explaining complexity and exit +- **NO TOOLS AFTER TeamDelete.** After calling `TeamDelete`, do NOT call any other tool. Output plain text only to end the session. Any tool call after `TeamDelete` causes an infinite shutdown prompt loop in non-interactive (-p) mode. See issue #3103. Begin now. Fix issue #SPAWN_ISSUE_PLACEHOLDER. diff --git a/.claude/skills/setup-agent-team/refactor-team-prompt.md b/.claude/skills/setup-agent-team/refactor-team-prompt.md index 6fef56324..e132bc300 100644 --- a/.claude/skills/setup-agent-team/refactor-team-prompt.md +++ b/.claude/skills/setup-agent-team/refactor-team-prompt.md @@ -273,6 +273,8 @@ Setup: `mkdir -p WORKTREE_BASE_PLACEHOLDER`. Cleanup: `git worktree prune` at cy **The session ENDS when you produce a response with NO tool calls.** EVERY iteration MUST include at minimum: `TaskList` + `Bash("sleep 15")`. +**EXCEPTION — After TeamDelete:** Once `TeamDelete` has been called and completed (step 4 of the shutdown sequence), your VERY NEXT response MUST be plain text only with **NO tool calls**. Do NOT call `TaskList`, `Bash`, or any other tool after `TeamDelete`. A text-only response is the termination signal for the non-interactive harness. Any tool call after `TeamDelete` causes an infinite loop of shutdown prompt injections. + Keep looping until: - All tasks are completed OR - Time budget is reached (10 min warn, 12 min shutdown, 15 min force) @@ -289,11 +291,13 @@ Follow this exact shutdown sequence: 1. At 10 min: broadcast "wrap up" to all teammates 2. At 12 min: send `shutdown_request` to EACH teammate by name 3. Wait for ALL shutdown confirmations — keep calling `TaskList` while waiting -4. After all confirmations: `git worktree prune && rm -rf WORKTREE_BASE_PLACEHOLDER` -5. Print summary and exit +4. 
In ONE turn: call `TeamDelete`, then run `git worktree prune && rm -rf WORKTREE_BASE_PLACEHOLDER` — do everything in this single turn +5. **Output a plain-text summary and STOP** — do NOT call any tool after `TeamDelete`. This text-only response ends the session. **NEVER exit without shutting down all teammates first.** If a teammate doesn't respond to shutdown_request within 2 minutes, send it again. +**CRITICAL — NO TOOLS AFTER TeamDelete.** After `TeamDelete` returns (whether success or "No team name found"), you MUST NOT make any further tool calls. Output your final summary as plain text and stop. Any tool call after `TeamDelete` triggers an infinite shutdown prompt loop in non-interactive (-p) mode. See issue #3103. + ## Safety - **NEVER close a PR.** No teammate, including team-lead and pr-maintainer, may close any PR — not even PRs created by refactor teammates. Closing PRs is the **security team's responsibility exclusively**. The only exception is if you are immediately opening a superseding PR (state the replacement PR number in the close comment). If a PR is stale, broken, or should not be merged, **leave it open** and comment explaining the issue — the security team will close it during review. diff --git a/.github/workflows/agent-tarballs.yml b/.github/workflows/agent-tarballs.yml index b5d28ced1..319df0b63 100644 --- a/.github/workflows/agent-tarballs.yml +++ b/.github/workflows/agent-tarballs.yml @@ -49,8 +49,6 @@ jobs: # Native-binary agents need ARM builds too. # npm-based agents (codex, openclaw, kilocode) are arch-independent — x86_64 only. 
include: - - agent: zeroclaw - arch: arm64 - agent: opencode arch: arm64 - agent: hermes diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml index 24d1f71cb..fb9ac98f9 100644 --- a/.github/workflows/docker.yml +++ b/.github/workflows/docker.yml @@ -20,7 +20,7 @@ jobs: strategy: fail-fast: false matrix: - agent: [claude, codex, cursor, openclaw, opencode, kilocode, zeroclaw, hermes, junie] + agent: [claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie] steps: - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml index c8c455661..a80e7a646 100644 --- a/.github/workflows/lint.yml +++ b/.github/workflows/lint.yml @@ -56,7 +56,7 @@ jobs: run: bun install - name: Run Biome check (all packages) - run: bunx @biomejs/biome check packages/cli/src/ packages/shared/src/ .claude/scripts/ .claude/skills/setup-spa/ + run: bunx @biomejs/biome check packages/cli/src/ packages/shared/src/ macos-compat: name: macOS Compatibility diff --git a/.github/workflows/packer-snapshots.yml b/.github/workflows/packer-snapshots.yml index 080442b64..0dfbf5c95 100644 --- a/.github/workflows/packer-snapshots.yml +++ b/.github/workflows/packer-snapshots.yml @@ -71,18 +71,18 @@ jobs: - name: Generate variables file run: | jq -n \ - --arg token "$DO_API_TOKEN" \ + --arg token "$DIGITALOCEAN_ACCESS_TOKEN" \ --arg agent "$AGENT_NAME" \ --arg tier "$TIER" \ --argjson install "$INSTALL_COMMANDS" \ '{ - do_api_token: $token, + digitalocean_access_token: $token, agent_name: $agent, cloud_init_tier: $tier, install_commands: $install }' > packer/auto.pkrvars.json env: - DO_API_TOKEN: ${{ secrets.DO_API_TOKEN }} + DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DO_API_TOKEN }} AGENT_NAME: ${{ matrix.agent }} TIER: ${{ steps.config.outputs.tier }} INSTALL_COMMANDS: ${{ steps.config.outputs.install }} @@ -96,7 +96,7 @@ jobs: if: cancelled() run: | # Filter by spawn-packer tag to avoid destroying 
builder droplets from other workflows - DROPLET_IDS=$(curl -s -H "Authorization: Bearer ${DO_API_TOKEN}" \ + DROPLET_IDS=$(curl -s -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \ "https://api.digitalocean.com/v2/droplets?per_page=200&tag_name=spawn-packer" \ | jq -r '.droplets[].id') @@ -107,28 +107,28 @@ jobs: for ID in $DROPLET_IDS; do echo "Destroying orphaned builder droplet: ${ID}" - curl -s -X DELETE -H "Authorization: Bearer ${DO_API_TOKEN}" \ + curl -s -X DELETE -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \ "https://api.digitalocean.com/v2/droplets/${ID}" || true done env: - DO_API_TOKEN: ${{ secrets.DO_API_TOKEN }} + DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DO_API_TOKEN }} - name: Cleanup old snapshots if: success() run: | PREFIX="spawn-${AGENT_NAME}-" - SNAPSHOTS=$(curl -s -H "Authorization: Bearer ${DO_API_TOKEN}" \ + SNAPSHOTS=$(curl -s -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \ "https://api.digitalocean.com/v2/images?private=true&per_page=100" \ | jq -r --arg prefix "$PREFIX" \ '[.images[] | select(.name | startswith($prefix))] | sort_by(.created_at) | reverse | .[1:] | .[].id') for ID in $SNAPSHOTS; do echo "Deleting old snapshot: ${ID}" - curl -s -X DELETE -H "Authorization: Bearer ${DO_API_TOKEN}" \ + curl -s -X DELETE -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \ "https://api.digitalocean.com/v2/images/${ID}" || true done env: - DO_API_TOKEN: ${{ secrets.DO_API_TOKEN }} + DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DO_API_TOKEN }} AGENT_NAME: ${{ matrix.agent }} - name: Submit to DO Marketplace @@ -162,7 +162,7 @@ jobs: HTTP_CODE=$(curl -s -o /tmp/mp-response.json -w "%{http_code}" \ -X PATCH \ -H "Content-Type: application/json" \ - -H "Authorization: Bearer ${DO_API_TOKEN}" \ + -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \ -d "$(jq -n \ --arg reason "Nightly rebuild — $(date -u '+%Y-%m-%d')" \ --argjson imageId "$IMG_ID" \ @@ -177,6 +177,6 @@ jobs: exit 1 ;; esac env: - DO_API_TOKEN: ${{ 
secrets.DO_API_TOKEN }} + DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DO_API_TOKEN }} AGENT_NAME: ${{ matrix.agent }} MARKETPLACE_APP_IDS: ${{ secrets.MARKETPLACE_APP_IDS }} diff --git a/CLAUDE.md b/CLAUDE.md index 5264810e8..25dc3fd1c 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -5,7 +5,7 @@ Spawn is a matrix of **agents x clouds**. Every script provisions a cloud server ## The Matrix `manifest.json` is the source of truth. It tracks: -- **agents** — AI agents and self-hosted AI tools (Claude Code, OpenClaw, ZeroClaw, ...) +- **agents** — AI agents and self-hosted AI tools (Claude Code, OpenClaw, Codex CLI, ...) - **clouds** — cloud providers to run them on (Sprite, Hetzner, ...) - **matrix** — which `cloud/agent` combinations are `"implemented"` vs `"missing"` diff --git a/README.md b/README.md index 0cce991a7..eca909f93 100644 --- a/README.md +++ b/README.md @@ -46,8 +46,8 @@ spawn delete -c hetzner # Delete a server on Hetzner | `spawn --dry-run` | Preview without provisioning | | `spawn --zone <zone>` | Set zone/region for the cloud | | `spawn --size <size>` | Set instance size/type for the cloud | -| `spawn -p "text"` | Non-interactive with prompt | -| `spawn --prompt-file f.txt` | Prompt from file | +| `spawn --prompt "text"` | Non-interactive with prompt (or `-p`) | +| `spawn --prompt-file <file>` | Prompt from file (or `-f`) | | `spawn --headless` | Provision and exit (no interactive session) | | `spawn --output json` | Headless mode with structured JSON on stdout | | `spawn --model <model>` | Set the model ID (overrides agent default) | @@ -139,7 +139,7 @@ spawn claude hetzner --fast What `--fast` does: - **Parallel boot**: server creation runs concurrently with API key prompt and account checks - **Tarballs**: installs agents from pre-built tarballs instead of live install -- **Skip cloud-init**: for lightweight agents (Claude, OpenCode, ZeroClaw, Hermes), skips the package install wait since the base OS already has what's needed +- **Skip cloud-init**: for lightweight agents (Claude, 
OpenCode, Hermes), skips the package install wait since the base OS already has what's needed - **Snapshots**: uses pre-built cloud images when available (Hetzner, DigitalOcean) #### Beta Features @@ -206,7 +206,7 @@ export OPENROUTER_API_KEY=sk-or-v1-xxxxx # Cloud-specific credentials (varies by provider) # Note: Sprite uses `sprite login` for authentication export HCLOUD_TOKEN=... # For Hetzner -export DO_API_TOKEN=... # For DigitalOcean +export DIGITALOCEAN_ACCESS_TOKEN=... # For DigitalOcean # Run non-interactively spawn claude hetzner @@ -258,7 +258,7 @@ If spawn fails to install, try these steps: 2. **Set credentials via environment variables** before launching: ```powershell $env:OPENROUTER_API_KEY = "sk-or-v1-xxxxx" - $env:DO_API_TOKEN = "dop_v1_xxxxx" # For DigitalOcean + $env:DIGITALOCEAN_ACCESS_TOKEN = "dop_v1_xxxxx" # For DigitalOcean $env:HCLOUD_TOKEN = "xxxxx" # For Hetzner spawn openclaw digitalocean ``` @@ -324,12 +324,12 @@ If an agent fails to install or launch on a cloud: |---|---|---|---|---|---|---| | [**Claude Code**](https://claude.ai) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | [**OpenClaw**](https://github.com/openclaw/openclaw) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | -| [**ZeroClaw**](https://github.com/zeroclaw-labs/zeroclaw) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | [**Codex CLI**](https://github.com/openai/codex) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | [**OpenCode**](https://github.com/sst/opencode) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | [**Kilo Code**](https://github.com/Kilo-Org/kilocode) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | [**Hermes Agent**](https://github.com/NousResearch/hermes-agent) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | [**Junie**](https://www.jetbrains.com/junie/) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | +| [**Cursor CLI**](https://cursor.com/cli) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ### How it works diff --git a/assets/agents/.sources.json b/assets/agents/.sources.json index bc49a705a..7aac941fc 100644 --- a/assets/agents/.sources.json +++ b/assets/agents/.sources.json @@ -7,10 +7,6 @@ "url": "https://openclaw.ai/apple-touch-icon.png", 
"ext": "png" }, - "zeroclaw": { - "url": "https://avatars.githubusercontent.com/u/261820148?s=200&v=4", - "ext": "png" - }, "codex": { "url": "https://avatars.githubusercontent.com/u/14957082?s=200&v=4", "ext": "png" diff --git a/biome.json b/biome.json index 307d16423..6862b60a6 100644 --- a/biome.json +++ b/biome.json @@ -8,7 +8,7 @@ }, "files": { "ignoreUnknown": false, - "includes": ["packages/**/*.ts"] + "includes": ["packages/**/*.ts", ".claude/**/*.ts"] }, "formatter": { "enabled": true, @@ -100,6 +100,12 @@ } } } + }, + { + "includes": [".claude/**"], + "linter": { + "enabled": false + } } ], "plugins": [ diff --git a/manifest.json b/manifest.json index ac5887382..3d892f35d 100644 --- a/manifest.json +++ b/manifest.json @@ -37,8 +37,8 @@ "license": "Proprietary", "created": "2025-02", "added": "2025-06", - "github_stars": 81694, - "stars_updated": "2026-03-23", + "github_stars": 84019, + "stars_updated": "2026-03-29", "language": "Shell", "runtime": "node", "category": "cli", @@ -77,8 +77,8 @@ "license": "MIT", "created": "2025-11", "added": "2025-11", - "github_stars": 332099, - "stars_updated": "2026-03-23", + "github_stars": 339820, + "stars_updated": "2026-03-29", "language": "TypeScript", "runtime": "bun", "category": "tui", @@ -89,52 +89,6 @@ "gateway" ] }, - "zeroclaw": { - "name": "ZeroClaw", - "description": "Fast, small, fully autonomous AI assistant infrastructure — deploy anywhere, swap anything", - "url": "https://github.com/zeroclaw-labs/zeroclaw", - "install": "curl -LsSf https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/a117be64fdaa31779204beadf2942c8aef57d0e5/scripts/bootstrap.sh | bash -s -- --install-rust --install-system-deps --prefer-prebuilt", - "launch": "zeroclaw agent", - "env": { - "OPENROUTER_API_KEY": "${OPENROUTER_API_KEY}", - "ZEROCLAW_PROVIDER": "openrouter" - }, - "config_files": { - "~/.zeroclaw/config.toml": { - "security": { - "autonomy": "full", - "supervised": false, - "allow_destructive": true - }, - "shell": { - 
"policy": "allow_all" - } - } - }, - "notes": "Rust-based agent framework built by Harvard/MIT/Sundai.Club communities. Natively supports OpenRouter via OPENROUTER_API_KEY + ZEROCLAW_PROVIDER=openrouter. Requires compilation from source (~5-10 min).", - "icon": "https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/assets/agents/zeroclaw.png", - "featured_cloud": [ - "digitalocean", - "sprite" - ], - "creator": "Sundai.Club", - "repo": "zeroclaw-labs/zeroclaw", - "license": "Apache-2.0", - "created": "2026-02", - "added": "2025-12", - "github_stars": 28521, - "stars_updated": "2026-03-23", - "language": "Rust", - "runtime": "binary", - "category": "cli", - "tagline": "Fast, small, fully autonomous AI infrastructure — deploy anywhere, swap anything", - "tags": [ - "coding", - "terminal", - "rust", - "autonomous" - ] - }, "codex": { "name": "Codex CLI", "description": "OpenAI's open-source coding agent", @@ -157,8 +111,8 @@ "license": "Apache-2.0", "created": "2025-04", "added": "2025-07", - "github_stars": 67099, - "stars_updated": "2026-03-23", + "github_stars": 68201, + "stars_updated": "2026-03-29", "language": "Rust", "runtime": "binary", "category": "cli", @@ -189,8 +143,8 @@ "license": "MIT", "created": "2025-04", "added": "2025-08", - "github_stars": 128767, - "stars_updated": "2026-03-23", + "github_stars": 132079, + "stars_updated": "2026-03-29", "language": "TypeScript", "runtime": "go", "category": "tui", @@ -223,8 +177,8 @@ "license": "MIT", "created": "2025-03", "added": "2025-09", - "github_stars": 17098, - "stars_updated": "2026-03-23", + "github_stars": 17310, + "stars_updated": "2026-03-29", "language": "TypeScript", "runtime": "node", "category": "cli", @@ -258,8 +212,8 @@ "license": "MIT", "created": "2025-06", "added": "2026-02", - "github_stars": 11368, - "stars_updated": "2026-03-23", + "github_stars": 15626, + "stars_updated": "2026-03-29", "language": "Python", "runtime": "python", "category": "cli", @@ -292,8 +246,8 @@ "license": 
"Proprietary", "created": "2026-03", "added": "2026-03", - "github_stars": 114, - "stars_updated": "2026-03-23", + "github_stars": 123, + "stars_updated": "2026-03-29", "language": "TypeScript", "runtime": "node", "category": "cli", @@ -306,16 +260,13 @@ ] }, "cursor": { - "disabled": true, - "disabled_reason": "Cursor CLI uses a proprietary protocol (ConnectRPC) and validates API keys against Cursor's own servers. Cannot route through OpenRouter. Re-enable when Cursor adds BYOK/custom endpoint support for agent mode.", "name": "Cursor CLI", "description": "Cursor's terminal-based AI coding agent — autonomous coding with plan, agent, and ask modes", "url": "https://cursor.com/cli", "install": "curl https://cursor.com/install -fsS | bash", "launch": "agent", "env": { - "OPENROUTER_API_KEY": "${OPENROUTER_API_KEY}", - "CURSOR_API_KEY": "${OPENROUTER_API_KEY}" + "OPENROUTER_API_KEY": "${OPENROUTER_API_KEY}" }, "config_files": { "~/.cursor/cli-config.json": { @@ -332,19 +283,19 @@ } } }, - "notes": "Works with OpenRouter via --endpoint flag pointing to openrouter.ai/api/v1 and CURSOR_API_KEY set to OpenRouter key. Binary installs to ~/.local/bin/agent.", + "notes": "Routes through OpenRouter via a local ConnectRPC-to-REST translation proxy (Caddy + Node.js). The proxy intercepts Cursor's proprietary protobuf protocol, translates to OpenAI-compatible API calls, and streams responses back. 
Binary installs to ~/.local/bin/agent.", "icon": "https://raw.githubusercontent.com/OpenRouterTeam/spawn/main/assets/agents/cursor.png", "featured_cloud": [ "digitalocean", "sprite" ], "creator": "Anysphere", - "repo": "anysphere/cursor", + "repo": "cursor/cursor", "license": "Proprietary", "created": "2025-01", "added": "2026-03", - "github_stars": 10000, - "stars_updated": "2026-03-26", + "github_stars": 32526, + "stars_updated": "2026-03-29", "language": "TypeScript", "runtime": "binary", "category": "cli", @@ -411,7 +362,7 @@ "description": "Cloud servers (account + payment method required)", "url": "https://www.digitalocean.com/", "type": "api", - "auth": "DO_API_TOKEN", + "auth": "DIGITALOCEAN_ACCESS_TOKEN", "provision_method": "POST /v2/droplets with user_data", "exec_method": "ssh root@IP", "interactive_method": "ssh -t root@IP", @@ -456,37 +407,31 @@ "matrix": { "local/claude": "implemented", "local/openclaw": "implemented", - "local/zeroclaw": "implemented", "local/codex": "implemented", "local/opencode": "implemented", "local/kilocode": "implemented", "hetzner/claude": "implemented", "hetzner/openclaw": "implemented", - "hetzner/zeroclaw": "implemented", "hetzner/codex": "implemented", "hetzner/opencode": "implemented", "hetzner/kilocode": "implemented", "aws/claude": "implemented", "aws/openclaw": "implemented", - "aws/zeroclaw": "implemented", "aws/codex": "implemented", "aws/opencode": "implemented", "aws/kilocode": "implemented", "digitalocean/claude": "implemented", "digitalocean/openclaw": "implemented", - "digitalocean/zeroclaw": "implemented", "digitalocean/codex": "implemented", "digitalocean/opencode": "implemented", "digitalocean/kilocode": "implemented", "gcp/claude": "implemented", "gcp/openclaw": "implemented", - "gcp/zeroclaw": "implemented", "gcp/codex": "implemented", "gcp/opencode": "implemented", "gcp/kilocode": "implemented", "sprite/claude": "implemented", "sprite/openclaw": "implemented", - "sprite/zeroclaw": "implemented", 
"sprite/codex": "implemented", "sprite/opencode": "implemented", "sprite/kilocode": "implemented", diff --git a/packages/cli/package.json b/packages/cli/package.json index be1e999f9..0442c1ebc 100644 --- a/packages/cli/package.json +++ b/packages/cli/package.json @@ -1,6 +1,6 @@ { "name": "@openrouter/spawn", - "version": "0.27.6", + "version": "0.29.3", "type": "module", "bin": { "spawn": "cli.js" diff --git a/packages/cli/src/__tests__/agent-setup-cov.test.ts b/packages/cli/src/__tests__/agent-setup-cov.test.ts index 2eeb734a9..c04d95c28 100644 --- a/packages/cli/src/__tests__/agent-setup-cov.test.ts +++ b/packages/cli/src/__tests__/agent-setup-cov.test.ts @@ -191,12 +191,6 @@ describe("createCloudAgents", () => { "ANTHROPIC_BASE_URL", ], ], - [ - "zeroclaw", - [ - "ZEROCLAW_PROVIDER=openrouter", - ], - ], [ "hermes", [ @@ -228,11 +222,6 @@ describe("createCloudAgents", () => { } }); - it("zeroclaw agent configure calls runServer", async () => { - await result.agents.zeroclaw.configure?.("sk-or-v1-test", undefined, new Set()); - expect(runner.runServer).toHaveBeenCalled(); - }); - it("all agents have launchCmd returning non-empty string", () => { for (const agent of Object.values(result.agents)) { const cmd = agent.launchCmd(); @@ -246,6 +235,7 @@ describe("createCloudAgents", () => { expect([ "minimal", "node", + "bun", "full", ]).toContain(agent.cloudInitTier); } diff --git a/packages/cli/src/__tests__/cmd-fix-cov.test.ts b/packages/cli/src/__tests__/cmd-fix-cov.test.ts index 953cd8ead..a8d5950e5 100644 --- a/packages/cli/src/__tests__/cmd-fix-cov.test.ts +++ b/packages/cli/src/__tests__/cmd-fix-cov.test.ts @@ -10,7 +10,7 @@ import type { SpawnRecord } from "../history"; -import { beforeEach, describe, expect, it, mock } from "bun:test"; +import { afterEach, beforeEach, describe, expect, it, mock } from "bun:test"; import { tryCatch } from "@openrouter/spawn-shared"; import { createMockManifest, mockClackPrompts } from "./test-helpers"; @@ -51,13 +51,25 @@ 
function makeRecord(overrides: Partial<SpawnRecord> = {}): SpawnRecord { // ── Tests: fixSpawn edge cases ────────────────────────────────────────────── describe("fixSpawn (additional coverage)", () => { + let savedApiKey: string | undefined; + beforeEach(() => { + savedApiKey = process.env.OPENROUTER_API_KEY; + process.env.OPENROUTER_API_KEY = "sk-or-test-fix-key"; clack.logError.mockReset(); clack.logInfo.mockReset(); clack.logSuccess.mockReset(); clack.logStep.mockReset(); }); + afterEach(() => { + if (savedApiKey === undefined) { + delete process.env.OPENROUTER_API_KEY; + } else { + process.env.OPENROUTER_API_KEY = savedApiKey; + } + }); + it("shows error for invalid server_name in connection", async () => { const record = makeRecord({ connection: { @@ -139,17 +151,6 @@ describe("fixSpawn (additional coverage)", () => { }); expect(clack.logStep).toHaveBeenCalledWith(expect.stringContaining("1.2.3.4")); }); -}); - -// ── Tests: fixSpawn success message ────────────────────────────────────────── -// (error paths are covered in cmd-fix.test.ts; this covers the exact success message) - -describe("fixSpawn connection edge cases", () => { - beforeEach(() => { - clack.logError.mockReset(); - clack.logSuccess.mockReset(); - clack.logStep.mockReset(); - }); it("shows success when fix script succeeds", async () => { const mockRunner = mock(async () => true); diff --git a/packages/cli/src/__tests__/cmd-fix.test.ts b/packages/cli/src/__tests__/cmd-fix.test.ts index e5ee162f6..d1650a63d 100644 --- a/packages/cli/src/__tests__/cmd-fix.test.ts +++ b/packages/cli/src/__tests__/cmd-fix.test.ts @@ -194,13 +194,25 @@ describe("buildFixScript", () => { // ── Tests: fixSpawn (DI for SSH runner) ───────────────────────────────────── describe("fixSpawn", () => { + let savedApiKey: string | undefined; + beforeEach(() => { + savedApiKey = process.env.OPENROUTER_API_KEY; + process.env.OPENROUTER_API_KEY = "sk-or-test-fix-key"; clack.logError.mockReset(); clack.logSuccess.mockReset(); 
clack.logInfo.mockReset(); clack.logStep.mockReset(); }); + afterEach(() => { + if (savedApiKey === undefined) { + delete process.env.OPENROUTER_API_KEY; + } else { + process.env.OPENROUTER_API_KEY = savedApiKey; + } + }); + it("shows error for record without connection info", async () => { const record = makeRecord({ connection: undefined, @@ -309,6 +321,7 @@ describe("fixSpawn", () => { describe("cmdFix", () => { let testDir: string; let savedSpawnHome: string | undefined; + let savedApiKey: string | undefined; let processExitSpy: ReturnType; function writeHistory(records: SpawnRecord[]) { @@ -328,6 +341,8 @@ describe("cmdFix", () => { }); savedSpawnHome = process.env.SPAWN_HOME; process.env.SPAWN_HOME = testDir; + savedApiKey = process.env.OPENROUTER_API_KEY; + process.env.OPENROUTER_API_KEY = "sk-or-test-fix-key"; clack.logError.mockReset(); clack.logSuccess.mockReset(); clack.logInfo.mockReset(); @@ -338,6 +353,11 @@ describe("cmdFix", () => { afterEach(() => { process.env.SPAWN_HOME = savedSpawnHome; + if (savedApiKey === undefined) { + delete process.env.OPENROUTER_API_KEY; + } else { + process.env.OPENROUTER_API_KEY = savedApiKey; + } processExitSpy.mockRestore(); if (existsSync(testDir)) { rmSync(testDir, { diff --git a/packages/cli/src/__tests__/cmd-interactive.test.ts b/packages/cli/src/__tests__/cmd-interactive.test.ts index 079ae2c3c..9de87601c 100644 --- a/packages/cli/src/__tests__/cmd-interactive.test.ts +++ b/packages/cli/src/__tests__/cmd-interactive.test.ts @@ -279,7 +279,6 @@ describe("cmdInteractive", () => { await cmdInteractive(); - expect(mockIntro).toHaveBeenCalled(); const introArg = mockIntro.mock.calls[0]?.[0] ?? ""; expect(introArg).toContain("spawn"); }); @@ -345,7 +344,6 @@ describe("cmdInteractive", () => { await cmdInteractive(); - expect(mockOutro).toHaveBeenCalled(); const outroArg = mockOutro.mock.calls[0]?.[0] ?? 
""; expect(outroArg).toContain("spawn script"); }); diff --git a/packages/cli/src/__tests__/cmd-status-cov.test.ts b/packages/cli/src/__tests__/cmd-status-cov.test.ts index e2eb25ffa..c34f1c5d7 100644 --- a/packages/cli/src/__tests__/cmd-status-cov.test.ts +++ b/packages/cli/src/__tests__/cmd-status-cov.test.ts @@ -151,6 +151,7 @@ describe("cmdStatus", () => { await cmdStatus({ json: true, + probe: async () => true, }); expect(fetchedUrls.some((u) => u.includes("hetzner.cloud/v1/servers/12345"))).toBe(true); }); @@ -193,6 +194,7 @@ describe("cmdStatus", () => { await cmdStatus({ json: true, + probe: async () => true, }); expect(fetchedUrls.some((u) => u.includes("digitalocean.com/v2/droplets/99999"))).toBe(true); }); @@ -415,10 +417,189 @@ describe("cmdStatus", () => { return new Response(JSON.stringify(mockManifest)); }); - await cmdStatus(); + await cmdStatus({ + probe: async () => true, + }); const infoCalls = clack.logInfo.mock.calls.map((c: unknown[]) => String(c[0])); // Should mention running servers and spawn list expect(infoCalls.some((msg: string) => msg.includes("running"))).toBe(true); }); + + // ── Agent probe tests ─────────────────────────────────────────────────── + + it("probes running server and reports agent_alive true in JSON", async () => { + writeHistory(testDir, [ + { + id: "probe-live", + agent: "claude", + cloud: "hetzner", + timestamp: new Date().toISOString(), + connection: { + ip: "1.2.3.4", + user: "root", + cloud: "hetzner", + server_id: "12345", + }, + }, + ]); + writeCloudConfig("hetzner", { + api_key: "test-token", + }); + + _resetCacheForTesting(); + global.fetch = mock(async (url: string | URL | Request) => { + const u = isString(url) ? url : url instanceof URL ? 
url.toString() : url.url; + if (u.includes("hetzner.cloud")) { + return new Response( + JSON.stringify({ + server: { + status: "running", + }, + }), + ); + } + return new Response(JSON.stringify(mockManifest)); + }); + + await cmdStatus({ + json: true, + probe: async () => true, + }); + + const output = consoleSpy.mock.calls.map((c: unknown[]) => String(c[0])).join(""); + const parsed = JSON.parse(output); + expect(parsed[0].agent_alive).toBe(true); + }); + + it("probes running server and reports agent_alive false in JSON", async () => { + writeHistory(testDir, [ + { + id: "probe-down", + agent: "claude", + cloud: "hetzner", + timestamp: new Date().toISOString(), + connection: { + ip: "1.2.3.4", + user: "root", + cloud: "hetzner", + server_id: "12345", + }, + }, + ]); + writeCloudConfig("hetzner", { + api_key: "test-token", + }); + + _resetCacheForTesting(); + global.fetch = mock(async (url: string | URL | Request) => { + const u = isString(url) ? url : url instanceof URL ? url.toString() : url.url; + if (u.includes("hetzner.cloud")) { + return new Response( + JSON.stringify({ + server: { + status: "running", + }, + }), + ); + } + return new Response(JSON.stringify(mockManifest)); + }); + + await cmdStatus({ + json: true, + probe: async () => false, + }); + + const output = consoleSpy.mock.calls.map((c: unknown[]) => String(c[0])).join(""); + const parsed = JSON.parse(output); + expect(parsed[0].agent_alive).toBe(false); + }); + + it("does not probe gone servers — agent_alive is null", async () => { + writeHistory(testDir, [ + { + id: "probe-gone", + agent: "claude", + cloud: "hetzner", + timestamp: new Date().toISOString(), + connection: { + ip: "1.2.3.4", + user: "root", + cloud: "hetzner", + server_id: "12345", + }, + }, + ]); + writeCloudConfig("hetzner", { + api_key: "test-token", + }); + + let probeCalled = false; + _resetCacheForTesting(); + global.fetch = mock(async (url: string | URL | Request) => { + const u = isString(url) ? url : url instanceof URL ? 
url.toString() : url.url; + if (u.includes("hetzner.cloud")) { + return new Response("Not Found", { + status: 404, + }); + } + return new Response(JSON.stringify(mockManifest)); + }); + + await cmdStatus({ + json: true, + probe: async () => { + probeCalled = true; + return true; + }, + }); + + expect(probeCalled).toBe(false); + const output = consoleSpy.mock.calls.map((c: unknown[]) => String(c[0])).join(""); + const parsed = JSON.parse(output); + expect(parsed[0].agent_alive).toBeNull(); + }); + + it("shows unreachable warning when probe fails in table mode", async () => { + writeHistory(testDir, [ + { + id: "probe-warn", + agent: "claude", + cloud: "hetzner", + timestamp: new Date().toISOString(), + connection: { + ip: "1.2.3.4", + user: "root", + cloud: "hetzner", + server_id: "12345", + }, + }, + ]); + writeCloudConfig("hetzner", { + api_key: "test-token", + }); + + _resetCacheForTesting(); + global.fetch = mock(async (url: string | URL | Request) => { + const u = isString(url) ? url : url instanceof URL ? 
url.toString() : url.url; + if (u.includes("hetzner.cloud")) { + return new Response( + JSON.stringify({ + server: { + status: "running", + }, + }), + ); + } + return new Response(JSON.stringify(mockManifest)); + }); + + await cmdStatus({ + probe: async () => false, + }); + + const infoCalls = clack.logInfo.mock.calls.map((c: unknown[]) => String(c[0])); + expect(infoCalls.some((msg: string) => msg.includes("unreachable"))).toBe(true); + }); }); diff --git a/packages/cli/src/__tests__/cmd-update-cov.test.ts b/packages/cli/src/__tests__/cmd-update-cov.test.ts index 103234908..932148909 100644 --- a/packages/cli/src/__tests__/cmd-update-cov.test.ts +++ b/packages/cli/src/__tests__/cmd-update-cov.test.ts @@ -204,8 +204,6 @@ describe("cmdUpdate", () => { runUpdate: updateFn, }); - // consoleSpy (console.log) should have been called - expect(consoleSpy).toHaveBeenCalled(); expect(clack.logInfo).toHaveBeenCalledWith(expect.stringContaining("Run spawn again")); }); @@ -220,7 +218,10 @@ describe("cmdUpdate", () => { runUpdate: updateFn, }); - // Should show the install command - expect(consoleSpy).toHaveBeenCalled(); + expect(clack.logError).toHaveBeenCalledWith(expect.stringContaining("Auto-update failed")); + const allLoggedLines = consoleSpy.mock.calls.map((c: unknown[]) => String(c[0] ?? 
"")); + expect(allLoggedLines.some((line: string) => line.includes("install.sh") || line.includes("install.ps1"))).toBe( + true, + ); }); }); diff --git a/packages/cli/src/__tests__/commands-exported-utils.test.ts b/packages/cli/src/__tests__/commands-exported-utils.test.ts index 0d50cb273..9a500bd01 100644 --- a/packages/cli/src/__tests__/commands-exported-utils.test.ts +++ b/packages/cli/src/__tests__/commands-exported-utils.test.ts @@ -47,8 +47,8 @@ describe("parseAuthEnvVars", () => { }); it("should extract env var starting with letter followed by digits", () => { - expect(parseAuthEnvVars("DO_API_TOKEN")).toEqual([ - "DO_API_TOKEN", + expect(parseAuthEnvVars("DIGITALOCEAN_ACCESS_TOKEN")).toEqual([ + "DIGITALOCEAN_ACCESS_TOKEN", ]); }); }); diff --git a/packages/cli/src/__tests__/cursor-proxy.test.ts b/packages/cli/src/__tests__/cursor-proxy.test.ts new file mode 100644 index 000000000..55e21b145 --- /dev/null +++ b/packages/cli/src/__tests__/cursor-proxy.test.ts @@ -0,0 +1,330 @@ +/** + * cursor-proxy.test.ts — Tests for the Cursor CLI → OpenRouter proxy. + * Covers: protobuf encoding, ConnectRPC framing, model details, deployment functions. 
+ */ + +import { describe, expect, it, mock } from "bun:test"; +import { tryCatch } from "../shared/result"; + +// ── Protobuf helpers (mirrors the proxy script's functions) ───────────────── + +function ev(v: number): Buffer { + const b: number[] = []; + while (v > 0x7f) { + b.push((v & 0x7f) | 0x80); + v >>>= 7; + } + b.push(v & 0x7f); + return Buffer.from(b); +} + +function es(f: number, s: string): Buffer { + const sb = Buffer.from(s); + return Buffer.concat([ + ev((f << 3) | 2), + ev(sb.length), + sb, + ]); +} + +function em(f: number, p: Buffer): Buffer { + return Buffer.concat([ + ev((f << 3) | 2), + ev(p.length), + p, + ]); +} + +// ConnectRPC frame +function cf(p: Buffer): Buffer { + const f = Buffer.alloc(5 + p.length); + f[0] = 0x00; + f.writeUInt32BE(p.length, 1); + p.copy(f, 5); + return f; +} + +// ConnectRPC trailer +function ct(): Buffer { + const j = Buffer.from("{}"); + const t = Buffer.alloc(5 + j.length); + t[0] = 0x02; + t.writeUInt32BE(j.length, 1); + j.copy(t, 5); + return t; +} + +// AgentServerMessage.InteractionUpdate.TextDeltaUpdate +function tdf(text: string): Buffer { + return cf(em(1, em(1, es(1, text)))); +} + +// AgentServerMessage.InteractionUpdate.TurnEndedUpdate +function tef(): Buffer { + return cf( + em( + 1, + em( + 14, + Buffer.from([ + 8, + 10, + 16, + 5, + ]), + ), + ), + ); +} + +// ModelDetails +function bmd(id: string, name: string): Buffer { + return Buffer.concat([ + es(1, id), + es(3, id), + es(4, name), + es(5, name), + ]); +} + +// Extract strings from protobuf +function xstr(buf: Buffer, out: string[]): void { + let o = 0; + while (o < buf.length) { + let t = 0; + let s = 0; + while (o < buf.length) { + const b = buf[o++]; + t |= (b & 0x7f) << s; + s += 7; + if (!(b & 0x80)) { + break; + } + } + const wt = t & 7; + if (wt === 0) { + while (o < buf.length && buf[o++] & 0x80) { + /* consume varint */ + } + } else if (wt === 2) { + let len = 0; + let ls = 0; + while (o < buf.length) { + const b = buf[o++]; + len |= (b 
& 0x7f) << ls; + ls += 7; + if (!(b & 0x80)) { + break; + } + } + const d = buf.slice(o, o + len); + o += len; + const st = d.toString("utf8"); + if (/^[\x20-\x7e]+$/.test(st)) { + out.push(st); + } else { + const r = tryCatch(() => xstr(d, out)); + if (!r.ok) { + /* ignore nested parse errors */ + } + } + } else { + break; + } + } +} + +// ── Tests ─────────────────────────────────────────────────────────────────── + +describe("protobuf encoding", () => { + it("encodes varint correctly", () => { + expect(ev(0)).toEqual( + Buffer.from([ + 0, + ]), + ); + expect(ev(1)).toEqual( + Buffer.from([ + 1, + ]), + ); + expect(ev(127)).toEqual( + Buffer.from([ + 127, + ]), + ); + expect(ev(128)).toEqual( + Buffer.from([ + 0x80, + 0x01, + ]), + ); + expect(ev(300)).toEqual( + Buffer.from([ + 0xac, + 0x02, + ]), + ); + }); + + it("encodes string fields", () => { + const buf = es(1, "hello"); + // field 1, wire type 2 (length-delimited) = tag 0x0a + expect(buf[0]).toBe(0x0a); + // length = 5 + expect(buf[1]).toBe(5); + // string content + expect(buf.slice(2).toString("utf8")).toBe("hello"); + }); + + it("encodes nested messages", () => { + const inner = es(1, "test"); + const outer = em(2, inner); + // field 2, wire type 2 = tag 0x12 + expect(outer[0]).toBe(0x12); + // length of inner message + expect(outer[1]).toBe(inner.length); + }); +}); + +describe("ConnectRPC framing", () => { + it("wraps payload in a frame with 5-byte header", () => { + const payload = Buffer.from("test"); + const frame = cf(payload); + expect(frame.length).toBe(5 + payload.length); + expect(frame[0]).toBe(0x00); // no compression + expect(frame.readUInt32BE(1)).toBe(payload.length); + expect(frame.slice(5).toString()).toBe("test"); + }); + + it("creates a JSON trailer frame", () => { + const trailer = ct(); + expect(trailer[0]).toBe(0x02); // JSON type + expect(trailer.readUInt32BE(1)).toBe(2); // length of "{}" + expect(trailer.slice(5).toString()).toBe("{}"); + }); +}); + +describe("AgentServerMessage 
encoding", () => { + it("encodes text delta update", () => { + const frame = tdf("Hello world"); + // Should be a ConnectRPC frame (starts with 0x00) + expect(frame[0]).toBe(0x00); + // Payload should contain the text + const payload = frame.slice(5); + const strings: string[] = []; + xstr(payload, strings); + expect(strings).toContain("Hello world"); + }); + + it("encodes turn ended update", () => { + const frame = tef(); + expect(frame[0]).toBe(0x00); + // Payload should be non-empty (contains token counts) + const payloadLen = frame.readUInt32BE(1); + expect(payloadLen).toBeGreaterThan(0); + }); +}); + +describe("ModelDetails encoding", () => { + it("encodes model with all required fields", () => { + const model = bmd("anthropic/claude-sonnet-4-6", "Claude Sonnet 4.6"); + const strings: string[] = []; + xstr(model, strings); + expect(strings).toContain("anthropic/claude-sonnet-4-6"); + expect(strings).toContain("Claude Sonnet 4.6"); + }); + + it("encodes model list response", () => { + const models = [ + [ + "anthropic/claude-sonnet-4-6", + "Claude Sonnet 4.6", + ], + [ + "openai/gpt-5.4", + "GPT-5.4", + ], + ]; + const response = Buffer.concat(models.map(([id, name]) => em(1, bmd(id, name)))); + const strings: string[] = []; + xstr(response, strings); + expect(strings).toContain("anthropic/claude-sonnet-4-6"); + expect(strings).toContain("openai/gpt-5.4"); + }); +}); + +describe("protobuf string extraction", () => { + it("extracts strings from nested protobuf", () => { + // Simulate a request with user message + const msg = em( + 1, + Buffer.concat([ + es(1, "say hello"), + es(2, "uuid-1234-5678"), + ]), + ); + const strings: string[] = []; + xstr(msg, strings); + expect(strings).toContain("say hello"); + expect(strings).toContain("uuid-1234-5678"); + }); + + it("skips binary data", () => { + const binary = Buffer.from([ + 0x0a, + 0x03, + 0xff, + 0xfe, + 0xfd, + ]); + const strings: string[] = []; + xstr(binary, strings); + expect(strings.length).toBe(0); + }); 
+}); + +describe("setupCursorProxy", () => { + it("calls runner.runServer for caddy install and proxy deployment", async () => { + const runServerCalls: string[] = []; + const runner = { + runServer: mock(async (cmd: string) => { + runServerCalls.push(cmd.slice(0, 50)); + }), + uploadFile: mock(async () => {}), + downloadFile: mock(async () => {}), + }; + + const { setupCursorProxy: setup } = await import("../shared/cursor-proxy"); + await setup(runner); + + // Should have called runServer multiple times (caddy install, deploy, hosts, trust) + expect(runServerCalls.length).toBeGreaterThanOrEqual(3); + // Should include caddy install check + expect(runServerCalls.some((c) => c.includes("caddy"))).toBe(true); + // Should include hosts configuration + expect(runServerCalls.some((c) => c.includes("hosts") || c.includes("cursor.sh"))).toBe(true); + }); +}); + +describe("startCursorProxy", () => { + it("calls runner.runServer with port checks", async () => { + const runServerCalls: string[] = []; + const runner = { + runServer: mock(async (cmd: string) => { + runServerCalls.push(cmd); + }), + uploadFile: mock(async () => {}), + downloadFile: mock(async () => {}), + }; + + const { startCursorProxy: start } = await import("../shared/cursor-proxy"); + await start(runner); + + // Should include port checks for 443, 18644, 18645 + const fullCmd = runServerCalls.join(" "); + expect(fullCmd.includes("18644")).toBe(true); + expect(fullCmd.includes("18645")).toBe(true); + expect(fullCmd.includes("443")).toBe(true); + }); +}); diff --git a/packages/cli/src/__tests__/do-cov.test.ts b/packages/cli/src/__tests__/do-cov.test.ts index a81a5cbb6..19a44b75d 100644 --- a/packages/cli/src/__tests__/do-cov.test.ts +++ b/packages/cli/src/__tests__/do-cov.test.ts @@ -259,7 +259,7 @@ describe("digitalocean/getServerIp", () => { ); const { getServerIp } = await import("../digitalocean/digitalocean"); // Need to set the token state - process.env.DO_API_TOKEN = "test-token"; + 
process.env.DIGITALOCEAN_ACCESS_TOKEN = "test-token"; // getServerIp calls doApi which uses internal state token - need to set via ensureDoToken // But doApi will use _state.token. Since we can't easily set _state, we test the 404 path // by mocking fetch to always return 404 diff --git a/packages/cli/src/__tests__/do-payment-warning.test.ts b/packages/cli/src/__tests__/do-payment-warning.test.ts index e02fc5020..456c1c6bd 100644 --- a/packages/cli/src/__tests__/do-payment-warning.test.ts +++ b/packages/cli/src/__tests__/do-payment-warning.test.ts @@ -25,9 +25,15 @@ describe("ensureDoToken — payment method warning for first-time users", () => let warnSpy: ReturnType; beforeEach(() => { - // Save and clear DO_API_TOKEN - savedEnv["DO_API_TOKEN"] = process.env.DO_API_TOKEN; - delete process.env.DO_API_TOKEN; + // Save and clear all accepted DigitalOcean token env vars + for (const v of [ + "DIGITALOCEAN_ACCESS_TOKEN", + "DIGITALOCEAN_API_TOKEN", + "DO_API_TOKEN", + ]) { + savedEnv[v] = process.env[v]; + delete process.env[v]; + } // Fail OAuth connectivity check → tryDoOAuth returns null immediately globalThis.fetch = mock(() => Promise.reject(new Error("Network unreachable"))); @@ -73,7 +79,25 @@ describe("ensureDoToken — payment method warning for first-time users", () => expect(warnMessages.some((msg: string) => msg.includes("payment method"))).toBe(false); }); - it("does NOT show payment warning when DO_API_TOKEN env var is set", async () => { + it("does NOT show payment warning when DIGITALOCEAN_ACCESS_TOKEN env var is set", async () => { + process.env.DIGITALOCEAN_ACCESS_TOKEN = "dop_v1_invalid_env_token"; + + await expect(ensureDoToken()).rejects.toThrow(); + + const warnMessages = warnSpy.mock.calls.map((c: unknown[]) => String(c[0])); + expect(warnMessages.some((msg: string) => msg.includes("payment method"))).toBe(false); + }); + + it("does NOT show payment warning when DIGITALOCEAN_API_TOKEN env var is set", async () => { + 
process.env.DIGITALOCEAN_API_TOKEN = "dop_v1_invalid_env_token"; + + await expect(ensureDoToken()).rejects.toThrow(); + + const warnMessages = warnSpy.mock.calls.map((c: unknown[]) => String(c[0])); + expect(warnMessages.some((msg: string) => msg.includes("payment method"))).toBe(false); + }); + + it("does NOT show payment warning when legacy DO_API_TOKEN env var is set", async () => { process.env.DO_API_TOKEN = "dop_v1_invalid_env_token"; await expect(ensureDoToken()).rejects.toThrow(); diff --git a/packages/cli/src/__tests__/recursive-spawn.test.ts b/packages/cli/src/__tests__/recursive-spawn.test.ts index fac81be26..e9fb14aff 100644 --- a/packages/cli/src/__tests__/recursive-spawn.test.ts +++ b/packages/cli/src/__tests__/recursive-spawn.test.ts @@ -1,8 +1,9 @@ import type { SpawnRecord } from "../history.js"; -import { afterEach, beforeEach, describe, expect, it, spyOn } from "bun:test"; +import { afterEach, beforeEach, describe, expect, it, mock, spyOn } from "bun:test"; import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs"; import { join } from "node:path"; +import * as p from "@clack/prompts"; import { findDescendants, pullChildHistory } from "../commands/delete.js"; import { cmdTree } from "../commands/tree.js"; import { exportHistory, HISTORY_SCHEMA_VERSION, loadHistory, mergeChildHistory, saveSpawnRecord } from "../history.js"; @@ -396,16 +397,13 @@ describe("recursive spawn", () => { describe("cmdTree", () => { it("shows empty message when no history", async () => { - const logs: string[] = []; - const origLog = console.log; - console.log = (...args: unknown[]) => { - logs.push(args.map(String).join(" ")); - }; + const logInfoSpy = spyOn(p.log, "info").mockImplementation(mock(() => {})); await cmdTree(); - console.log = origLog; - // p.log.info writes to stderr, not captured — but cmdTree should not throw + const calls = logInfoSpy.mock.calls.map((args) => String(args[0])); + expect(calls.some((msg) => msg.includes("No spawn history 
found"))).toBe(true); + logInfoSpy.mockRestore(); }); it("renders tree with parent-child relationships", async () => { @@ -517,19 +515,17 @@ describe("recursive spawn", () => { timestamp: "2026-03-24T01:00:00.000Z", }); - const logs: string[] = []; - const origLog = console.log; - console.log = (...args: unknown[]) => { - logs.push(args.map(String).join(" ")); - }; - + const logInfoSpy = spyOn(p.log, "info").mockImplementation(mock(() => {})); const manifestMod = await import("../manifest.js"); const manifestSpy = spyOn(manifestMod, "loadManifest").mockRejectedValue(new Error("no network")); await cmdTree(); - console.log = origLog; manifestSpy.mockRestore(); + + const calls = logInfoSpy.mock.calls.map((args) => String(args[0])); + expect(calls.some((msg) => msg.includes("no parent-child relationships"))).toBe(true); + logInfoSpy.mockRestore(); }); it("renders deleted and depth labels", async () => { diff --git a/packages/cli/src/__tests__/run-path-credential-display.test.ts b/packages/cli/src/__tests__/run-path-credential-display.test.ts index e361c298e..2dc5ec34e 100644 --- a/packages/cli/src/__tests__/run-path-credential-display.test.ts +++ b/packages/cli/src/__tests__/run-path-credential-display.test.ts @@ -68,7 +68,7 @@ function makeManifest(overrides?: Partial): Manifest { price: "test", url: "https://digitalocean.com", type: "api", - auth: "DO_API_TOKEN", + auth: "DIGITALOCEAN_ACCESS_TOKEN", provision_method: "api", exec_method: "ssh root@IP", interactive_method: "ssh -t root@IP", @@ -138,6 +138,8 @@ describe("prioritizeCloudsByCredentials", () => { // Save and clear credential env vars for (const v of [ "HCLOUD_TOKEN", + "DIGITALOCEAN_ACCESS_TOKEN", + "DIGITALOCEAN_API_TOKEN", "DO_API_TOKEN", "UPCLOUD_USERNAME", "UPCLOUD_PASSWORD", @@ -191,7 +193,7 @@ describe("prioritizeCloudsByCredentials", () => { it("should move multiple credential clouds to front", () => { process.env.HCLOUD_TOKEN = "test-token"; - process.env.DO_API_TOKEN = "test-do-token"; + 
process.env.DIGITALOCEAN_ACCESS_TOKEN = "test-do-token"; const manifest = makeManifest(); const clouds = [ "upcloud", @@ -217,7 +219,7 @@ describe("prioritizeCloudsByCredentials", () => { expect(result.hintOverrides["hetzner"]).toContain("credentials detected"); expect(result.hintOverrides["hetzner"]).toContain("test"); - expect(result.hintOverrides["digitalocean"]).toBeDefined(); + expect(result.hintOverrides["digitalocean"]).toContain("Simple cloud hosting"); }); it("should handle multi-var auth (both vars must be set)", () => { @@ -290,7 +292,7 @@ describe("prioritizeCloudsByCredentials", () => { it("should preserve relative order within each group", () => { process.env.HCLOUD_TOKEN = "token"; - process.env.DO_API_TOKEN = "token"; + process.env.DIGITALOCEAN_ACCESS_TOKEN = "token"; const manifest = makeManifest(); // Input order: digitalocean before hetzner (both have creds) const clouds = [ @@ -331,7 +333,7 @@ describe("prioritizeCloudsByCredentials", () => { it("should count all credential clouds correctly with all set", () => { process.env.HCLOUD_TOKEN = "t1"; - process.env.DO_API_TOKEN = "t2"; + process.env.DIGITALOCEAN_ACCESS_TOKEN = "t2"; process.env.UPCLOUD_USERNAME = "u"; process.env.UPCLOUD_PASSWORD = "p"; const manifest = makeManifest(); @@ -350,4 +352,30 @@ describe("prioritizeCloudsByCredentials", () => { expect(result.sortedClouds.slice(3)).toContain("sprite"); expect(result.sortedClouds.slice(3)).toContain("localcloud"); }); + + it("should recognize legacy DO_API_TOKEN as alias for DIGITALOCEAN_ACCESS_TOKEN", () => { + process.env.DO_API_TOKEN = "legacy-token"; + const manifest = makeManifest(); + const clouds = [ + "digitalocean", + "hetzner", + ]; + const result = prioritizeCloudsByCredentials(clouds, manifest); + + expect(result.credCount).toBe(1); + expect(result.sortedClouds[0]).toBe("digitalocean"); + }); + + it("should recognize DIGITALOCEAN_API_TOKEN as alias for DIGITALOCEAN_ACCESS_TOKEN", () => { + process.env.DIGITALOCEAN_API_TOKEN = 
"alt-token"; + const manifest = makeManifest(); + const clouds = [ + "digitalocean", + "hetzner", + ]; + const result = prioritizeCloudsByCredentials(clouds, manifest); + + expect(result.credCount).toBe(1); + expect(result.sortedClouds[0]).toBe("digitalocean"); + }); }); diff --git a/packages/cli/src/__tests__/script-failure-guidance.test.ts b/packages/cli/src/__tests__/script-failure-guidance.test.ts index 0a324d975..cc5ea3b68 100644 --- a/packages/cli/src/__tests__/script-failure-guidance.test.ts +++ b/packages/cli/src/__tests__/script-failure-guidance.test.ts @@ -209,12 +209,12 @@ describe("getScriptFailureGuidance", () => { it("should show specific env var name and setup hint for default case when authHint is provided", () => { const savedOR = process.env.OPENROUTER_API_KEY; - const savedDO = process.env.DO_API_TOKEN; + const savedDO = process.env.DIGITALOCEAN_ACCESS_TOKEN; delete process.env.OPENROUTER_API_KEY; - delete process.env.DO_API_TOKEN; - const lines = stripped_getScriptFailureGuidance(42, "digitalocean", "DO_API_TOKEN"); + delete process.env.DIGITALOCEAN_ACCESS_TOKEN; + const lines = stripped_getScriptFailureGuidance(42, "digitalocean", "DIGITALOCEAN_ACCESS_TOKEN"); const joined = lines.join("\n"); - expect(joined).toContain("DO_API_TOKEN"); + expect(joined).toContain("DIGITALOCEAN_ACCESS_TOKEN"); expect(joined).toContain("OPENROUTER_API_KEY"); expect(joined).toContain("spawn digitalocean"); expect(joined).toContain("setup"); @@ -222,7 +222,7 @@ describe("getScriptFailureGuidance", () => { process.env.OPENROUTER_API_KEY = savedOR; } if (savedDO !== undefined) { - process.env.DO_API_TOKEN = savedDO; + process.env.DIGITALOCEAN_ACCESS_TOKEN = savedDO; } }); @@ -230,7 +230,7 @@ describe("getScriptFailureGuidance", () => { const lines = stripped_getScriptFailureGuidance(42, "digitalocean"); const joined = lines.join("\n"); expect(joined).toContain("spawn digitalocean"); - expect(joined).not.toContain("DO_API_TOKEN"); + 
expect(joined).not.toContain("DIGITALOCEAN_ACCESS_TOKEN"); }); it("should handle multi-credential auth hint", () => { diff --git a/packages/cli/src/__tests__/security-connection-validation.test.ts b/packages/cli/src/__tests__/security-connection-validation.test.ts index 8099d1372..416942b31 100644 --- a/packages/cli/src/__tests__/security-connection-validation.test.ts +++ b/packages/cli/src/__tests__/security-connection-validation.test.ts @@ -189,7 +189,6 @@ describe("validateLaunchCmd", () => { "source ~/.spawnrc 2>/dev/null; export PATH=$HOME/.npm-global/bin:$HOME/.bun/bin:$HOME/.local/bin:$PATH; openclaw tui", "source ~/.spawnrc 2>/dev/null; source ~/.zshrc 2>/dev/null; opencode", "source ~/.spawnrc 2>/dev/null; source ~/.zshrc 2>/dev/null; kilocode", - "export PATH=$HOME/.cargo/bin:$PATH; source ~/.cargo/env 2>/dev/null; source ~/.spawnrc 2>/dev/null; zeroclaw agent", "source ~/.spawnrc 2>/dev/null; hermes", "claude", "aider", diff --git a/packages/cli/src/__tests__/spawn-skill.test.ts b/packages/cli/src/__tests__/spawn-skill.test.ts index c99ebeb60..ac6cf36b6 100644 --- a/packages/cli/src/__tests__/spawn-skill.test.ts +++ b/packages/cli/src/__tests__/spawn-skill.test.ts @@ -22,10 +22,6 @@ describe("getSpawnSkillPath", () => { "openclaw", "~/.openclaw/skills/spawn/SKILL.md", ], - [ - "zeroclaw", - "~/.zeroclaw/workspace/AGENTS.md", - ], [ "opencode", "~/.config/opencode/AGENTS.md", @@ -67,7 +63,6 @@ describe("isAppendMode", () => { "claude", "codex", "openclaw", - "zeroclaw", "opencode", "kilocode", "junie", @@ -85,7 +80,6 @@ describe("getSkillContent", () => { "claude", "codex", "openclaw", - "zeroclaw", "opencode", "kilocode", "hermes", @@ -114,7 +108,6 @@ describe("getSkillContent", () => { } for (const agent of [ - "zeroclaw", "opencode", "kilocode", "junie", @@ -184,7 +177,6 @@ describe("injectSpawnSkill", () => { "claude", "codex", "openclaw", - "zeroclaw", "opencode", "kilocode", "hermes", diff --git a/packages/cli/src/__tests__/update-check.test.ts 
b/packages/cli/src/__tests__/update-check.test.ts index 54b00fddf..87d97c4d1 100644 --- a/packages/cli/src/__tests__/update-check.test.ts +++ b/packages/cli/src/__tests__/update-check.test.ts @@ -4,6 +4,7 @@ import { afterEach, beforeEach, describe, expect, it, mock, spyOn } from "bun:te import fs from "node:fs"; import path from "node:path"; import { tryCatch } from "@openrouter/spawn-shared"; +import pkg from "../../package.json"; // ── Test Helpers ─────────────────────────────────────────────────────────────── @@ -118,7 +119,6 @@ describe("update-check", () => { await checkForUpdates(); // Should have printed update message to stderr - expect(consoleErrorSpy).toHaveBeenCalled(); const output = consoleErrorSpy.mock.calls.map((call) => call[0]).join("\n"); expect(output).toContain("Update available"); expect(output).toContain("99.0.0"); @@ -135,7 +135,7 @@ describe("update-check", () => { }); it("should not update when up to date", async () => { - const mockFetch = mock(() => Promise.resolve(new Response("0.2.3\n"))); + const mockFetch = mock(() => Promise.resolve(new Response(`${pkg.version}\n`))); const fetchSpy = spyOn(global, "fetch").mockImplementation(mockFetch); // Mock executor to prevent actual commands @@ -180,7 +180,6 @@ describe("update-check", () => { await checkForUpdates(); // Should have printed error message - expect(consoleErrorSpy).toHaveBeenCalled(); const output = consoleErrorSpy.mock.calls.map((call) => call[0]).join("\n"); expect(output).toContain("Auto-update failed"); @@ -396,7 +395,7 @@ describe("update-check", () => { // Write an old timestamp (2 hours ago) writeUpdateChecked(Date.now() - 2 * 60 * 60 * 1000); - const mockFetch = mock(() => Promise.resolve(new Response("0.2.3\n"))); + const mockFetch = mock(() => Promise.resolve(new Response(`${pkg.version}\n`))); const fetchSpy = spyOn(global, "fetch").mockImplementation(mockFetch); const { checkForUpdates } = await import("../update-check.js"); @@ -407,7 +406,7 @@ 
describe("update-check", () => { }); it("should write cache file after successful version fetch", async () => { - const mockFetch = mock(() => Promise.resolve(new Response("0.2.3\n"))); + const mockFetch = mock(() => Promise.resolve(new Response(`${pkg.version}\n`))); const fetchSpy = spyOn(global, "fetch").mockImplementation(mockFetch); const { checkForUpdates } = await import("../update-check.js"); diff --git a/packages/cli/src/aws/aws.ts b/packages/cli/src/aws/aws.ts index 2d96ec1dd..accea4a66 100644 --- a/packages/cli/src/aws/aws.ts +++ b/packages/cli/src/aws/aws.ts @@ -1178,15 +1178,15 @@ export async function uploadFile(localPath: string, remotePath: string): Promise } export async function downloadFile(remotePath: string, localPath: string): Promise { - const normalizedRemote = validateRemotePath(remotePath, /^[a-zA-Z0-9/_.~$-]+$/); + const expandedRemote = remotePath.replace(/^\$HOME\//, "~/"); + const normalizedRemote = validateRemotePath(expandedRemote, /^[a-zA-Z0-9/_.~-]+$/); const keyOpts = getSshKeyOpts(await ensureSshKeys()); - const expandedPath = normalizedRemote.replace(/^\$HOME/, "~"); const proc = Bun.spawn( [ "scp", ...SSH_BASE_OPTS, ...keyOpts, - `${SSH_USER}@${_state.instanceIp}:${expandedPath}`, + `${SSH_USER}@${_state.instanceIp}:${normalizedRemote}`, localPath, ], { diff --git a/packages/cli/src/commands/fix.ts b/packages/cli/src/commands/fix.ts index d24d1e3bc..d8f3b0a04 100644 --- a/packages/cli/src/commands/fix.ts +++ b/packages/cli/src/commands/fix.ts @@ -8,6 +8,7 @@ import pc from "picocolors"; import { getActiveServers } from "../history.js"; import { loadManifest } from "../manifest.js"; import { validateConnectionIP, validateIdentifier, validateServerIdentifier, validateUsername } from "../security.js"; +import { loadSavedOpenRouterKey } from "../shared/oauth.js"; import { getHistoryPath } from "../shared/paths.js"; import { asyncTryCatch, tryCatch } from "../shared/result.js"; import { SSH_INTERACTIVE_OPTS } from 
"../shared/ssh.js"; @@ -176,6 +177,21 @@ export async function fixSpawn(record: SpawnRecord, manifest: Manifest | null, o return; } + // Ensure OPENROUTER_API_KEY is available before building the fix script. + // The normal provisioning flow uses getOrPromptApiKey() which loads from + // ~/.config/spawn/openrouter.json. buildFixScript() resolves env templates + // from process.env, so we must populate it here to avoid injecting empty keys. + if (!process.env.OPENROUTER_API_KEY) { + const savedKey = loadSavedOpenRouterKey(); + if (savedKey) { + process.env.OPENROUTER_API_KEY = savedKey; + } else { + p.log.error("No OpenRouter API key found."); + p.log.info("Set OPENROUTER_API_KEY in your environment, or run a new spawn to authenticate via OAuth."); + return; + } + } + // Build the remote fix script const scriptResult = tryCatch(() => buildFixScript(man!, record.agent)); if (!scriptResult.ok) { diff --git a/packages/cli/src/commands/index.ts b/packages/cli/src/commands/index.ts index bf8a35ca9..e97af9219 100644 --- a/packages/cli/src/commands/index.ts +++ b/packages/cli/src/commands/index.ts @@ -58,6 +58,7 @@ export { getImplementedClouds, hasCloudCli, hasCloudCredentials, + isAuthEnvVarSet, isInteractiveTTY, levenshtein, loadManifestWithSpinner, diff --git a/packages/cli/src/commands/link.ts b/packages/cli/src/commands/link.ts index 95eba0f4e..e84d68734 100644 --- a/packages/cli/src/commands/link.ts +++ b/packages/cli/src/commands/link.ts @@ -66,7 +66,6 @@ function defaultSshCommand(host: string, user: string, keyOpts: string[], cmd: s const KNOWN_AGENTS = [ "claude", "openclaw", - "zeroclaw", "codex", "opencode", "kilocode", @@ -79,7 +78,7 @@ type KnownAgent = (typeof KNOWN_AGENTS)[number]; function detectAgent(host: string, user: string, keyOpts: string[], runCmd: SshCommandFn): string | null { // First: check running processes const psCmd = - "ps aux 2>/dev/null | grep -oE 'claude(-code)?|openclaw|zeroclaw|codex|opencode|kilocode|hermes|junie' | grep -v grep | 
head -1 || true"; + "ps aux 2>/dev/null | grep -oE 'claude(-code)?|openclaw|codex|opencode|kilocode|hermes|junie' | grep -v grep | head -1 || true"; const psOut = runCmd(host, user, keyOpts, psCmd); if (psOut) { const match = KNOWN_AGENTS.find((b: KnownAgent) => psOut.includes(b)); diff --git a/packages/cli/src/commands/shared.ts b/packages/cli/src/commands/shared.ts index 94219a1d5..ea15881f9 100644 --- a/packages/cli/src/commands/shared.ts +++ b/packages/cli/src/commands/shared.ts @@ -489,9 +489,26 @@ export function parseAuthEnvVars(auth: string): string[] { .filter((s) => /^[A-Z][A-Z0-9_]{3,}$/.test(s)); } +/** Legacy env var names accepted as aliases for the canonical names in the manifest */ +const AUTH_VAR_ALIASES: Record = { + DIGITALOCEAN_ACCESS_TOKEN: [ + "DIGITALOCEAN_API_TOKEN", + "DO_API_TOKEN", + ], +}; + +/** Check if an auth env var (or one of its legacy aliases) is set */ +export function isAuthEnvVarSet(varName: string): boolean { + if (process.env[varName]) { + return true; + } + const aliases = AUTH_VAR_ALIASES[varName]; + return !!aliases?.some((a) => !!process.env[a]); +} + /** Format an auth env var line showing whether it's already set or needs to be exported */ function formatAuthVarLine(varName: string, urlHint?: string): string { - if (process.env[varName]) { + if (isAuthEnvVarSet(varName)) { return ` ${pc.green(varName)} ${pc.dim("-- set")}`; } const hint = urlHint ? ` ${pc.dim(`# ${urlHint}`)}` : ""; @@ -504,12 +521,12 @@ export function hasCloudCredentials(auth: string): boolean { if (vars.length === 0) { return false; } - return vars.every((v) => !!process.env[v]); + return vars.every((v) => isAuthEnvVarSet(v)); } /** Format a single credential env var as a status line (green if set, red if missing) */ export function formatCredStatusLine(varName: string, urlHint?: string): string { - if (process.env[varName]) { + if (isAuthEnvVarSet(varName)) { return ` ${pc.green(varName)} ${pc.dim("-- set")}`; } const suffix = urlHint ? 
` ${pc.dim(urlHint)}` : ""; @@ -542,7 +559,7 @@ export function collectMissingCredentials(authVars: string[], cloud?: string): s missing.push("OPENROUTER_API_KEY"); } for (const v of authVars) { - if (!process.env[v]) { + if (!isAuthEnvVarSet(v)) { missing.push(v); } } diff --git a/packages/cli/src/commands/status.ts b/packages/cli/src/commands/status.ts index 69af210af..b91acf4b6 100644 --- a/packages/cli/src/commands/status.ts +++ b/packages/cli/src/commands/status.ts @@ -8,7 +8,8 @@ import { filterHistory, markRecordDeleted } from "../history.js"; import { loadManifest } from "../manifest.js"; import { validateServerIdentifier } from "../security.js"; import { parseJsonObj } from "../shared/parse.js"; -import { asyncTryCatchIf, isNetworkError, tryCatch, unwrapOr } from "../shared/result.js"; +import { asyncTryCatch, asyncTryCatchIf, isNetworkError, tryCatch, unwrapOr } from "../shared/result.js"; +import { SSH_BASE_OPTS } from "../shared/ssh.js"; import { loadApiToken } from "../shared/ui.js"; import { formatRelativeTime } from "./list.js"; import { resolveDisplayName } from "./shared.js"; @@ -20,6 +21,7 @@ type LiveState = "running" | "stopped" | "gone" | "unknown"; interface ServerStatusResult { record: SpawnRecord; liveState: LiveState; + agentAlive: boolean | null; } interface JsonStatusEntry { @@ -29,6 +31,7 @@ interface JsonStatusEntry { ip: string; name: string; state: LiveState; + agent_alive: boolean | null; spawned_at: string; server_id: string; } @@ -148,6 +151,107 @@ async function checkServerStatus(record: SpawnRecord): Promise { } } +// ── Agent alive probe ─────────────────────────────────────────────────────── + +/** + * Resolve the agent binary name from the manifest or the stored launch command. + * Returns the first word of the launch string (e.g. "openclaw tui" → "openclaw"). 
+ */ +function resolveAgentBinary(record: SpawnRecord, manifest: Manifest | null): string | null { + const fromManifest = manifest?.agents[record.agent]?.launch; + if (fromManifest) { + return fromManifest.split(/\s+/)[0] || null; + } + // Fallback: extract the last command from launch_cmd (after all source/export prefixes) + const launchCmd = record.connection?.launch_cmd; + if (launchCmd) { + const parts = launchCmd.split(";").map((s) => s.trim()); + const last = parts[parts.length - 1] || ""; + return last.split(/\s+/)[0] || null; + } + return null; +} + +/** + * Probe a running server by SSHing in and running `{binary} --version`. + * Returns true if the agent binary is installed and executable, false otherwise. + */ +async function probeAgentAlive(record: SpawnRecord, manifest: Manifest | null): Promise { + const conn = record.connection; + if (!conn) { + return false; + } + if (conn.cloud === "local") { + return true; + } + + const binary = resolveAgentBinary(record, manifest); + if (!binary) { + return false; + } + + const versionCmd = `source ~/.spawnrc 2>/dev/null; export PATH="$HOME/.local/bin:$HOME/.claude/local/bin:$HOME/.npm-global/bin:$HOME/.bun/bin:$HOME/.n/bin:$PATH"; ${binary} --version`; + + const result = await asyncTryCatch(async () => { + let proc: { + exited: Promise; + }; + + if (conn.cloud === "sprite") { + const name = conn.server_name || ""; + if (!name) { + return false; + } + proc = Bun.spawn( + [ + "sprite", + "exec", + "-s", + name, + "--", + "bash", + "-c", + versionCmd, + ], + { + stdout: "ignore", + stderr: "ignore", + }, + ); + } else { + const user = conn.user || "root"; + const ip = conn.ip || ""; + if (!ip || ip === "sprite-console") { + return false; + } + proc = Bun.spawn( + [ + "ssh", + ...SSH_BASE_OPTS, + "-o", + "ConnectTimeout=5", + `${user}@${ip}`, + versionCmd, + ], + { + stdout: "ignore", + stderr: "ignore", + }, + ); + } + + const exitCode = await Promise.race([ + proc.exited, + new Promise((_, reject) => { + 
setTimeout(() => reject(new Error("probe timeout")), 10_000); + }), + ]); + return exitCode === 0; + }); + + return result.ok ? result.data : false; +} + // ── Formatting ─────────────────────────────────────────────────────────────── function fmtState(state: LiveState): string { @@ -163,6 +267,13 @@ function fmtState(state: LiveState): string { } } +function fmtProbe(alive: boolean | null): string { + if (alive === null) { + return pc.dim("—"); + } + return alive ? pc.green("live") : pc.red("down"); +} + function fmtIp(conn: SpawnRecord["connection"]): string { if (!conn) { return "—"; @@ -190,6 +301,7 @@ function renderStatusTable(results: ServerStatusResult[], manifest: Manifest | n const COL_CLOUD = 14; const COL_IP = 16; const COL_STATE = 12; + const COL_PROBE = 10; const COL_SINCE = 12; const header = [ @@ -198,6 +310,7 @@ function renderStatusTable(results: ServerStatusResult[], manifest: Manifest | n col(pc.dim("Cloud"), COL_CLOUD), col(pc.dim("IP"), COL_IP), col(pc.dim("State"), COL_STATE), + col(pc.dim("Probe"), COL_PROBE), pc.dim("Since"), ].join(" "); @@ -208,6 +321,7 @@ function renderStatusTable(results: ServerStatusResult[], manifest: Manifest | n "-".repeat(COL_CLOUD), "-".repeat(COL_IP), "-".repeat(COL_STATE), + "-".repeat(COL_PROBE), "-".repeat(COL_SINCE), ].join("-"), ); @@ -216,13 +330,14 @@ function renderStatusTable(results: ServerStatusResult[], manifest: Manifest | n console.log(header); console.log(divider); - for (const { record, liveState } of results) { + for (const { record, liveState, agentAlive } of results) { const conn = record.connection; const shortId = record.id ? 
record.id.slice(0, 6) : "??????"; const agentDisplay = resolveDisplayName(manifest, record.agent, "agent"); const cloudDisplay = resolveDisplayName(manifest, record.cloud, "cloud"); const ip = fmtIp(conn); const state = fmtState(liveState); + const probe = fmtProbe(agentAlive); const since = formatRelativeTime(record.timestamp); const row = [ @@ -231,6 +346,7 @@ function renderStatusTable(results: ServerStatusResult[], manifest: Manifest | n col(cloudDisplay, COL_CLOUD), col(ip, COL_IP), col(state, COL_STATE), + col(probe, COL_PROBE), pc.dim(since), ].join(" "); @@ -243,13 +359,14 @@ function renderStatusTable(results: ServerStatusResult[], manifest: Manifest | n // ── JSON output ────────────────────────────────────────────────────────────── function renderStatusJson(results: ServerStatusResult[]): void { - const entries: JsonStatusEntry[] = results.map(({ record, liveState }) => ({ + const entries: JsonStatusEntry[] = results.map(({ record, liveState, agentAlive }) => ({ id: record.id || "", agent: record.agent, cloud: record.cloud, ip: fmtIp(record.connection), name: record.name || record.connection?.server_name || "", state: liveState, + agent_alive: agentAlive, spawned_at: record.timestamp, server_id: record.connection?.server_id || record.connection?.server_name || "", })); @@ -258,9 +375,16 @@ function renderStatusJson(results: ServerStatusResult[]): void { // ── Main command ───────────────────────────────────────────────────────────── -export async function cmdStatus( - opts: { prune?: boolean; json?: boolean; agentFilter?: string; cloudFilter?: string } = {}, -): Promise { +export interface StatusOpts { + prune?: boolean; + json?: boolean; + agentFilter?: string; + cloudFilter?: string; + /** Override the agent probe for testing. Called only for "running" servers. 
*/ + probe?: (record: SpawnRecord, manifest: Manifest | null) => Promise; +} + +export async function cmdStatus(opts: StatusOpts = {}): Promise { const records = filterHistory(opts.agentFilter, opts.cloudFilter); const candidates = records.filter( @@ -284,12 +408,19 @@ export async function cmdStatus( p.log.step(`Checking status of ${candidates.length} server${candidates.length !== 1 ? "s" : ""}...`); } + const probeFn = opts.probe ?? probeAgentAlive; + const results: ServerStatusResult[] = await Promise.all( candidates.map(async (record) => { const liveState = await checkServerStatus(record); + let agentAlive: boolean | null = null; + if (liveState === "running") { + agentAlive = await probeFn(record, manifest); + } return { record, liveState, + agentAlive, }; }), ); @@ -332,6 +463,15 @@ export async function cmdStatus( ); } + const unreachable = results.filter((r) => r.agentAlive === false); + if (unreachable.length > 0) { + p.log.info( + pc.dim( + `${unreachable.length} server${unreachable.length !== 1 ? "s" : ""} running but agent unreachable. 
The agent may have crashed or still be starting.`, + ), + ); + } + const running = results.filter((r) => r.liveState === "running").length; if (running > 0) { p.log.info( diff --git a/packages/cli/src/digitalocean/digitalocean.ts b/packages/cli/src/digitalocean/digitalocean.ts index 638114b99..ec42926f7 100644 --- a/packages/cli/src/digitalocean/digitalocean.ts +++ b/packages/cli/src/digitalocean/digitalocean.ts @@ -666,14 +666,14 @@ async function tryDoOAuth(): Promise { if (oauthDenied) { logError("OAuth authorization was denied by the user"); logError("Alternative: Use a manual API token instead"); - logError(" export DO_API_TOKEN=dop_v1_..."); + logError(" export DIGITALOCEAN_ACCESS_TOKEN=dop_v1_..."); return null; } if (!oauthCode) { logError("OAuth authentication timed out after 120 seconds"); logError("Alternative: Use a manual API token instead"); - logError(" export DO_API_TOKEN=dop_v1_..."); + logError(" export DIGITALOCEAN_ACCESS_TOKEN=dop_v1_..."); return null; } @@ -729,15 +729,22 @@ async function tryDoOAuth(): Promise { /** Returns true if browser OAuth was triggered (so caller can delay before next OAuth). */ export async function ensureDoToken(): Promise { - // 1. Env var - if (process.env.DO_API_TOKEN) { - _state.token = process.env.DO_API_TOKEN.trim(); + // 1. Env var (DIGITALOCEAN_ACCESS_TOKEN > DIGITALOCEAN_API_TOKEN > DO_API_TOKEN) + const envToken = + process.env.DIGITALOCEAN_ACCESS_TOKEN ?? process.env.DIGITALOCEAN_API_TOKEN ?? process.env.DO_API_TOKEN; + if (envToken) { + const envVarName = process.env.DIGITALOCEAN_ACCESS_TOKEN + ? "DIGITALOCEAN_ACCESS_TOKEN" + : process.env.DIGITALOCEAN_API_TOKEN + ? 
"DIGITALOCEAN_API_TOKEN" + : "DO_API_TOKEN"; + _state.token = envToken.trim(); if (await testDoToken()) { logInfo("Using DigitalOcean API token from environment"); await saveTokenToConfig(_state.token); return false; } - logWarn("DO_API_TOKEN from environment is invalid"); + logWarn(`${envVarName} from environment is invalid`); _state.token = ""; } @@ -776,7 +783,7 @@ export async function ensureDoToken(): Promise { // 3. Try OAuth browser flow // Show payment method reminder for first-time users (no saved config, no env token) - if (!saved && !process.env.DO_API_TOKEN) { + if (!saved && !envToken) { process.stderr.write("\n"); logWarn("DigitalOcean requires a payment method before you can create servers."); logWarn("If you haven't added one yet, visit: https://cloud.digitalocean.com/account/billing"); @@ -1448,17 +1455,17 @@ export async function uploadFile(localPath: string, remotePath: string, ip?: str export async function downloadFile(remotePath: string, localPath: string, ip?: string): Promise { const serverIp = ip || _state.serverIp; - const normalizedRemote = validateRemotePath(remotePath, /^[a-zA-Z0-9/_.~$-]+$/); + const expandedRemote = remotePath.replace(/^\$HOME\//, "~/"); + const normalizedRemote = validateRemotePath(expandedRemote, /^[a-zA-Z0-9/_.~-]+$/); const keyOpts = getSshKeyOpts(await ensureSshKeys()); - const expandedPath = normalizedRemote.replace(/^\$HOME/, "~"); const proc = Bun.spawn( [ "scp", ...SSH_BASE_OPTS, ...keyOpts, - `root@${serverIp}:${expandedPath}`, + `root@${serverIp}:${normalizedRemote}`, localPath, ], { diff --git a/packages/cli/src/digitalocean/main.ts b/packages/cli/src/digitalocean/main.ts index 3dea61cce..c8baeb710 100644 --- a/packages/cli/src/digitalocean/main.ts +++ b/packages/cli/src/digitalocean/main.ts @@ -35,7 +35,6 @@ const MARKETPLACE_IMAGES: Record = { openclaw: "openrouter-spawnopenclaw", opencode: "openrouter-spawnopencode", kilocode: "openrouter-spawnkilocode", - zeroclaw: "openrouter-spawnzeroclaw", hermes: 
"openrouter-spawnhermes", junie: "openrouter-spawnjunie", }; diff --git a/packages/cli/src/gcp/gcp.ts b/packages/cli/src/gcp/gcp.ts index 4feef7e60..6d563177b 100644 --- a/packages/cli/src/gcp/gcp.ts +++ b/packages/cli/src/gcp/gcp.ts @@ -1028,10 +1028,9 @@ export async function uploadFile(localPath: string, remotePath: string): Promise logError(`Invalid local path: ${localPath}`); throw new Error("Invalid local path"); } - const normalizedRemote = validateRemotePath(remotePath, /^[a-zA-Z0-9/_.~$-]+$/); + const expandedRemote = remotePath.replace(/^\$HOME\//, "~/"); + const normalizedRemote = validateRemotePath(expandedRemote, /^[a-zA-Z0-9/_.~-]+$/); const username = resolveUsername(); - // Expand $HOME on remote side - const expandedPath = normalizedRemote.replace(/^\$HOME/, "~"); const keyOpts = getSshKeyOpts(await ensureSshKeys()); const proc = Bun.spawn( @@ -1040,7 +1039,7 @@ export async function uploadFile(localPath: string, remotePath: string): Promise ...SSH_BASE_OPTS, ...keyOpts, localPath, - `${username}@${_state.serverIp}:${expandedPath}`, + `${username}@${_state.serverIp}:${normalizedRemote}`, ], { stdio: [ @@ -1067,9 +1066,9 @@ export async function downloadFile(remotePath: string, localPath: string): Promi logError(`Invalid local path: ${localPath}`); throw new Error("Invalid local path"); } - const normalizedRemote = validateRemotePath(remotePath, /^[a-zA-Z0-9/_.~$-]+$/); + const expandedRemote = remotePath.replace(/^\$HOME\//, "~/"); + const normalizedRemote = validateRemotePath(expandedRemote, /^[a-zA-Z0-9/_.~-]+$/); const username = resolveUsername(); - const expandedPath = normalizedRemote.replace(/^\$HOME/, "~"); const keyOpts = getSshKeyOpts(await ensureSshKeys()); const proc = Bun.spawn( @@ -1077,7 +1076,7 @@ export async function downloadFile(remotePath: string, localPath: string): Promi "scp", ...SSH_BASE_OPTS, ...keyOpts, - `${username}@${_state.serverIp}:${expandedPath}`, + `${username}@${_state.serverIp}:${normalizedRemote}`, localPath, ], 
{ diff --git a/packages/cli/src/hetzner/hetzner.ts b/packages/cli/src/hetzner/hetzner.ts index f76c43aed..c38c497ae 100644 --- a/packages/cli/src/hetzner/hetzner.ts +++ b/packages/cli/src/hetzner/hetzner.ts @@ -909,17 +909,17 @@ export async function uploadFile(localPath: string, remotePath: string, ip?: str export async function downloadFile(remotePath: string, localPath: string, ip?: string): Promise { const serverIp = ip || _state.serverIp; - const normalizedRemote = validateRemotePath(remotePath, /^[a-zA-Z0-9/_.~$-]+$/); + const expandedRemote = remotePath.replace(/^\$HOME\//, "~/"); + const normalizedRemote = validateRemotePath(expandedRemote, /^[a-zA-Z0-9/_.~-]+$/); const keyOpts = getSshKeyOpts(await ensureSshKeys()); - const expandedPath = normalizedRemote.replace(/^\$HOME/, "~"); const proc = Bun.spawn( [ "scp", ...SSH_BASE_OPTS, ...keyOpts, - `root@${serverIp}:${expandedPath}`, + `root@${serverIp}:${normalizedRemote}`, localPath, ], { diff --git a/packages/cli/src/index.ts b/packages/cli/src/index.ts index 1fc897ddd..c4c2f63a9 100644 --- a/packages/cli/src/index.ts +++ b/packages/cli/src/index.ts @@ -1070,23 +1070,6 @@ async function main(): Promise { process.exit(3); } - // Validate headless-incompatible flags - if (effectiveHeadless && dryRun) { - if (outputFormat === "json") { - console.log( - JSON.stringify({ - status: "error", - error_code: "VALIDATION_ERROR", - error_message: "--headless and --dry-run cannot be used together", - }), - ); - } else { - console.error(pc.red("Error: --headless and --dry-run cannot be used together")); - console.error(`\nUse ${pc.cyan("--dry-run")} for previewing, or ${pc.cyan("--headless")} for execution.`); - } - process.exit(3); - } - checkUnknownFlags(filteredArgs); const cmd = filteredArgs[0]; diff --git a/packages/cli/src/security.ts b/packages/cli/src/security.ts index 19a2152d1..504b9bc9d 100644 --- a/packages/cli/src/security.ts +++ b/packages/cli/src/security.ts @@ -406,7 +406,7 @@ export function 
validateLaunchCmd(cmd: string): void { "Invalid launch command in history: invalid agent invocation\n\n" + `Command: "${cmd}"\n` + `Rejected segment: "${lastSegment}"\n\n` + - "The final segment must be a simple binary name (e.g., 'claude', 'zeroclaw agent').\n\n" + + "The final segment must be a simple binary name (e.g., 'claude', 'hermes').\n\n" + "Your spawn history file may be corrupted or tampered with.\n" + `To fix: run 'spawn list --clear' to reset history`, ); diff --git a/packages/cli/src/shared/agent-setup.ts b/packages/cli/src/shared/agent-setup.ts index a7ed108b0..e94627c1c 100644 --- a/packages/cli/src/shared/agent-setup.ts +++ b/packages/cli/src/shared/agent-setup.ts @@ -7,6 +7,7 @@ import type { Result } from "./ui.js"; import { unlinkSync, writeFileSync } from "node:fs"; import { join } from "node:path"; import { getErrorMessage } from "@openrouter/spawn-shared"; +import { setupCursorProxy, startCursorProxy } from "./cursor-proxy.js"; import { getTmpDir } from "./paths.js"; import { asyncTryCatch, asyncTryCatchIf, isOperationalError, tryCatchIf } from "./result.js"; import { validateRemotePath } from "./ssh.js"; @@ -164,59 +165,6 @@ async function setupClaudeCodeConfig(runner: CloudRunner, apiKey: string): Promi // ─── Cursor CLI Config ──────────────────────────────────────────────────────── -async function setupCursorConfig(runner: CloudRunner, _apiKey: string): Promise { - logStep("Configuring Cursor CLI for OpenRouter..."); - - const configJson = `{ - "version": 1, - "permissions": { - "allow": ["Shell(*)", "Read(*)", "Write(*)", "WebFetch(*)", "Mcp(*)"], - "deny": [] - } -}`; - - // Spawn skill rule — injected into .cursor/rules/ so the agent knows - // it can provision child VMs via the spawn CLI. - const spawnRule = [ - "---", - "description: Spawn CLI — provision child cloud VMs with AI coding agents", - "alwaysApply: true", - "---", - "", - "# Spawn — Create Child VMs", - "", - "You have the `spawn` CLI installed. 
Use it to provision cloud VMs with AI agents.", - "", - "## Usage", - "", - "```bash", - 'spawn --headless --output json --prompt "task description"', - "```", - "", - "## Agents: claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie", - "## Clouds: hetzner, digitalocean, aws, gcp, sprite", - "", - "The command returns JSON with connection details. Use this to delegate subtasks", - "to specialized agents running on separate cloud VMs.", - "", - ].join("\n"); - - // Upload config files via SCP — avoids base64 interpolation into shell commands. - await uploadConfigFile(runner, configJson, "$HOME/.cursor/cli-config.json"); - await uploadConfigFile(runner, spawnRule, "$HOME/.cursor/rules/spawn.mdc"); - // Spawn rule should be world-readable (not sensitive) - await runner.runServer("chmod 644 ~/.cursor/rules/spawn.mdc"); - - // Persist PATH so agent binary is available (cursor installs to ~/.local/bin since 2026-03-25) - const pathScript = [ - 'grep -q ".local/bin" ~/.bashrc 2>/dev/null || printf \'\\nexport PATH="$HOME/.local/bin:$PATH"\\n\' >> ~/.bashrc', - 'grep -q ".local/bin" ~/.zshrc 2>/dev/null || printf \'\\nexport PATH="$HOME/.local/bin:$PATH"\\n\' >> ~/.zshrc', - ].join(" && "); - - await runner.runServer(pathScript); - logInfo("Cursor CLI configured"); -} - // ─── GitHub Auth ───────────────────────────────────────────────────────────── let githubAuthRequested = false; @@ -644,51 +592,6 @@ export async function startGateway(runner: CloudRunner): Promise { logInfo("OpenClaw gateway started"); } -// ─── ZeroClaw Config ───────────────────────────────────────────────────────── - -async function setupZeroclawConfig(runner: CloudRunner, _apiKey: string): Promise { - logStep("Configuring ZeroClaw for autonomous operation..."); - - // Remove any pre-existing config (e.g. from Docker image extraction) before - // running onboard, which generates a fresh config with the correct API key. 
- await runner.runServer("rm -f ~/.zeroclaw/config.toml"); - - // Run onboard first to set up provider/key - await runner.runServer( - `source ~/.spawnrc 2>/dev/null; export PATH="$HOME/.local/bin:$HOME/.cargo/bin:$PATH"; zeroclaw onboard --api-key "\${OPENROUTER_API_KEY}" --provider openrouter`, - ); - - // Patch autonomy settings in-place. `zeroclaw onboard` already generates - // [security] and [shell] sections — so we sed the values instead of - // appending duplicate sections. - const patchScript = [ - "cd ~/.zeroclaw", - // Update existing security values (or append section if missing) - 'if grep -q "^\\[security\\]" config.toml 2>/dev/null; then', - " sed -i 's/^autonomy = .*/autonomy = \"full\"/' config.toml", - " sed -i 's/^supervised = .*/supervised = false/' config.toml", - " sed -i 's/^allow_destructive = .*/allow_destructive = true/' config.toml", - "else", - " printf '\\n[security]\\nautonomy = \"full\"\\nsupervised = false\\nallow_destructive = true\\n' >> config.toml", - "fi", - // Update existing shell policy (or append section if missing) - 'if grep -q "^\\[shell\\]" config.toml 2>/dev/null; then', - " sed -i 's/^policy = .*/policy = \"allow_all\"/' config.toml", - "else", - " printf '\\n[shell]\\npolicy = \"allow_all\"\\n' >> config.toml", - "fi", - // Force native runtime (no Docker) — zeroclaw auto-detects Docker and - // launches in a container otherwise, which hangs the interactive session. 
- 'if grep -q "^\\[runtime\\]" config.toml 2>/dev/null; then', - " sed -i 's/^adapter = .*/adapter = \"native\"/' config.toml", - "else", - " printf '\\n[runtime]\\nadapter = \"native\"\\n' >> config.toml", - "fi", - ].join("\n"); - await runner.runServer(patchScript); - logInfo("ZeroClaw configured for autonomous operation"); -} - // ─── OpenCode Install Command ──────────────────────────────────────────────── function openCodeInstallCmd(): string { @@ -946,10 +849,6 @@ export async function setupAutoUpdate(runner: CloudRunner, agentName: string, up // ─── Default Agent Definitions ─────────────────────────────────────────────── -// Last zeroclaw release that shipped Linux prebuilt binaries (v0.1.9a has none). -// Used for direct binary install to avoid a Rust source build timeout. -const ZEROCLAW_PREBUILT_TAG = "v0.1.7-beta.30"; - function createAgents(runner: CloudRunner): Record { return { claude: { @@ -1063,50 +962,6 @@ function createAgents(runner: CloudRunner): Record { "npm install -g ${_NPM_G_FLAGS:-} @kilocode/cli@latest", }, - zeroclaw: { - name: "ZeroClaw", - cloudInitTier: "minimal", - modelEnvVar: "ZEROCLAW_MODEL", - preProvision: detectGithubAuth, - install: async () => { - // Direct binary install from pinned release (v0.1.9a "latest" has no assets, - // causing the bootstrap --prefer-prebuilt path to 404-fail and fall back to - // a Rust source build that exceeds the 600s install timeout). 
- const directInstallCmd = - `_ZC_ARCH="$(uname -m)"; ` + - `if [ "$_ZC_ARCH" = "x86_64" ]; then _ZC_TARGET="x86_64-unknown-linux-gnu"; ` + - `elif [ "$_ZC_ARCH" = "aarch64" ] || [ "$_ZC_ARCH" = "arm64" ]; then _ZC_TARGET="aarch64-unknown-linux-gnu"; ` + - `else echo "Unsupported arch: $_ZC_ARCH" >&2; exit 1; fi; ` + - `_ZC_URL="https://github.com/zeroclaw-labs/zeroclaw/releases/download/${ZEROCLAW_PREBUILT_TAG}/zeroclaw-\${_ZC_TARGET}.tar.gz"; ` + - `_ZC_TMP="$(mktemp -d)"; ` + - `curl --proto '=https' -fsSL "$_ZC_URL" -o "$_ZC_TMP/zeroclaw.tar.gz" && ` + - `tar -xzf "$_ZC_TMP/zeroclaw.tar.gz" -C "$_ZC_TMP" && ` + - `{ mkdir -p "$HOME/.local/bin" && install -m 755 "$_ZC_TMP/zeroclaw" "$HOME/.local/bin/zeroclaw"; } && ` + - `rm -rf "$_ZC_TMP"`; - await installAgent(runner, "ZeroClaw", directInstallCmd, 120); - }, - envVars: (apiKey) => [ - `OPENROUTER_API_KEY=${apiKey}`, - "ZEROCLAW_PROVIDER=openrouter", - "ZEROCLAW_RUNTIME=native", - ], - configure: (apiKey) => setupZeroclawConfig(runner, apiKey), - launchCmd: () => - "export PATH=$HOME/.local/bin:$HOME/.cargo/bin:$PATH; source ~/.spawnrc 2>/dev/null; zeroclaw agent", - updateCmd: - 'export PATH="$HOME/.local/bin:$HOME/.cargo/bin:$PATH"; ' + - `_ZC_ARCH="$(uname -m)"; ` + - `if [ "$_ZC_ARCH" = "x86_64" ]; then _ZC_TARGET="x86_64-unknown-linux-gnu"; ` + - `elif [ "$_ZC_ARCH" = "aarch64" ] || [ "$_ZC_ARCH" = "arm64" ]; then _ZC_TARGET="aarch64-unknown-linux-gnu"; ` + - "else exit 1; fi; " + - `_ZC_URL="https://github.com/zeroclaw-labs/zeroclaw/releases/latest/download/zeroclaw-\${_ZC_TARGET}.tar.gz"; ` + - `_ZC_TMP="$(mktemp -d)"; ` + - `curl --proto '=https' -fsSL "$_ZC_URL" -o "$_ZC_TMP/zeroclaw.tar.gz" && ` + - `tar -xzf "$_ZC_TMP/zeroclaw.tar.gz" -C "$_ZC_TMP" && ` + - `install -m 755 "$_ZC_TMP/zeroclaw" "$HOME/.local/bin/zeroclaw" && ` + - `rm -rf "$_ZC_TMP"`, - }, - hermes: { name: "Hermes Agent", cloudInitTier: "minimal", @@ -1168,7 +1023,7 @@ function createAgents(runner: CloudRunner): Record { cursor: { 
name: "Cursor CLI", - cloudInitTier: "minimal", + cloudInitTier: "bun", preProvision: detectGithubAuth, install: () => installAgent( @@ -1180,11 +1035,12 @@ function createAgents(runner: CloudRunner): Record { ), envVars: (apiKey) => [ `OPENROUTER_API_KEY=${apiKey}`, - `CURSOR_API_KEY=${apiKey}`, + "CURSOR_API_KEY=spawn-proxy", ], - configure: (apiKey) => setupCursorConfig(runner, apiKey), + configure: () => setupCursorProxy(runner), + preLaunch: () => startCursorProxy(runner), launchCmd: () => - 'source ~/.spawnrc 2>/dev/null; export PATH="$HOME/.local/bin:$PATH"; agent --endpoint https://openrouter.ai/api/v1', + 'source ~/.spawnrc 2>/dev/null; export PATH="$HOME/.local/bin:$PATH"; agent --endpoint https://api2.cursor.sh', updateCmd: 'export PATH="$HOME/.local/bin:$PATH"; agent update', }, }; diff --git a/packages/cli/src/shared/agents.ts b/packages/cli/src/shared/agents.ts index 0c228a0c6..e5161821a 100644 --- a/packages/cli/src/shared/agents.ts +++ b/packages/cli/src/shared/agents.ts @@ -24,7 +24,7 @@ export interface AgentConfig { name: string; /** Default model ID passed to configure() (no interactive prompt — override via MODEL_ID env var). */ modelDefault?: string; - /** Env var name for setting the model on the remote (e.g. ZEROCLAW_MODEL, LLM_MODEL). */ + /** Env var name for setting the model on the remote (e.g. KILOCODE_MODEL, LLM_MODEL). */ modelEnvVar?: string; /** Pre-provision hook (runs before server creation, e.g., prompt for GitHub auth). */ preProvision?: () => Promise; diff --git a/packages/cli/src/shared/cursor-proxy.ts b/packages/cli/src/shared/cursor-proxy.ts new file mode 100644 index 000000000..4b0f1255c --- /dev/null +++ b/packages/cli/src/shared/cursor-proxy.ts @@ -0,0 +1,452 @@ +// cursor-proxy.ts — OpenRouter proxy for Cursor CLI +// Deploys a local translation proxy that intercepts Cursor's proprietary +// ConnectRPC/protobuf protocol and translates it to OpenRouter's OpenAI-compatible API. 
+// +// Architecture: +// Cursor CLI → Caddy (HTTPS/H2, port 443) → split routing: +// /agent.v1.AgentService/* → H2C Node.js (port 18645, BiDi streaming) +// everything else → HTTP/1.1 Node.js (port 18644, unary RPCs) +// +// /etc/hosts spoofs api2.cursor.sh → 127.0.0.1 so Cursor's hardcoded +// streaming endpoint routes to the local proxy. + +import type { CloudRunner } from "./agent-setup.js"; + +import { wrapSshCall } from "./agent-setup.js"; +import { asyncTryCatchIf, isOperationalError } from "./result.js"; +import { logInfo, logStep, logWarn } from "./ui.js"; + +// ── Protobuf helpers (used in proxy scripts) ──────────────────────────────── + +// These are string-embedded in the proxy scripts that run on the VM. +// They implement minimal protobuf encoding for the specific message types +// Cursor CLI expects: AgentServerMessage, ModelDetails, etc. + +const PROTO_HELPERS = ` +function ev(v){const b=[];while(v>0x7f){b.push((v&0x7f)|0x80);v>>>=7;}b.push(v&0x7f);return Buffer.from(b);} +function es(f,s){const sb=Buffer.from(s);return Buffer.concat([ev((f<<3)|2),ev(sb.length),sb]);} +function em(f,p){return Buffer.concat([ev((f<<3)|2),ev(p.length),p]);} +function cf(p){const f=Buffer.alloc(5+p.length);f[0]=0;f.writeUInt32BE(p.length,1);p.copy(f,5);return f;} +function ct(){const j=Buffer.from("{}");const t=Buffer.alloc(5+j.length);t[0]=2;t.writeUInt32BE(j.length,1);j.copy(t,5);return t;} +function tdf(t){return cf(em(1,em(1,es(1,t))));} +function tef(){return cf(em(1,em(14,Buffer.from([8,10,16,5]))));} +function bmd(id,n){return Buffer.concat([es(1,id),es(3,id),es(4,n),es(5,n)]);} +function bmr(){return Buffer.concat([["anthropic/claude-sonnet-4-6","Claude Sonnet 4.6"],["anthropic/claude-haiku-4-5","Claude Haiku 4.5"],["openai/gpt-5.4","GPT-5.4"],["google/gemini-3.5-pro","Gemini 3.5 Pro"],["google/gemini-3.5-flash","Gemini 3.5 Flash"]].map(([i,n])=>em(1,bmd(i,n))));} +function bdr(){return em(1,bmd("anthropic/claude-sonnet-4-6","Claude Sonnet 4.6"));} +function 
xstr(buf,out){let o=0;while(o { + const chunks = []; + req.on("data", (c) => chunks.push(c)); + req.on("error", (e) => log("REQ ERR: " + e.message)); + req.on("end", () => { + try { + const buf = Buffer.concat(chunks); + const ct = req.headers["content-type"] || ""; + const url = req.url || ""; + log(req.method + " " + url + " [" + buf.length + "B]"); + + // Auth — return fake JWT + if (url === "/auth/exchange_user_api_key") { + res.writeHead(200, {"content-type":"application/json"}); + res.end(JSON.stringify({ + accessToken: "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzcGF3bl9wcm94eSJ9.ok", + refreshToken: "spawn-proxy-refresh", + authId: "user_spawn_proxy", + })); + return; + } + + // Analytics — accept silently + if (url.includes("Analytics") || url.includes("TrackEvents") || url.includes("SubmitLogs")) { + res.writeHead(200, {"content-type":"application/json"}); + res.end('{"success":true}'); + return; + } + + // Model list + if (url.includes("GetUsableModels")) { + res.writeHead(200, {"content-type":"application/proto"}); + res.end(bmr()); + return; + } + + // Default model + if (url.includes("GetDefaultModelForCli")) { + res.writeHead(200, {"content-type":"application/proto"}); + res.end(bdr()); + return; + } + + // OTEL traces + if (url.includes("/v1/traces")) { + res.writeHead(200, {"content-type":"application/json"}); + res.end("{}"); + return; + } + + // Other proto endpoints — empty response + if (ct.includes("proto")) { + res.writeHead(200, {"content-type": ct.includes("connect") ? 
"application/connect+proto" : "application/proto"}); + res.end(); + return; + } + + res.writeHead(200); + res.end("ok"); + } catch(e) { + log("ERR: " + e.message); + try { res.writeHead(500); res.end(); } catch(e2) {} + } + }); +}); +server.on("error", (e) => log("SVR: " + e.message)); +server.listen(18644, "127.0.0.1", () => log("Cursor proxy (unary) on 18644")); +`; +} + +// ── BiDi backend (H2C, port 18645) ────────────────────────────────────────── + +function getBidiScript(): string { + return `import http2 from "node:http2"; +import { appendFileSync } from "node:fs"; +const LOG="/var/log/cursor-proxy-bidi.log"; +function log(msg){try{appendFileSync(LOG,new Date().toISOString()+" "+msg+"\\n");}catch(e){}} + +${PROTO_HELPERS} + +const OPENROUTER_KEY = process.env.OPENROUTER_API_KEY || ""; + +const server = http2.createServer(); +server.on("stream", (stream, headers) => { + const path = headers[":path"] || ""; + log("STREAM " + path); + + // BiDi: respond on first data frame, don't wait for stream end + let gotData = false; + stream.on("data", (chunk) => { + if (gotData) return; + gotData = true; + log(" Data [" + chunk.length + "B]"); + + // Extract user message from protobuf + let msg = "hello"; + const strs = []; + try { xstr(chunk.length > 5 ? chunk.slice(5) : chunk, strs); } catch(e) {} + for (const s of strs) { + if (s.length > 0 && s.length < 500 && !s.match(/^[a-f0-9]{8}-/)) { msg = s; break; } + } + log(" User: " + msg); + + stream.respond({":status": 200, "content-type": "application/connect+proto"}); + + if (OPENROUTER_KEY) { + callOpenRouter(msg, stream); + } else { + stream.write(tdf("Cursor proxy is working but OPENROUTER_API_KEY is not set. 
")); + stream.write(tdf("Please configure the API key to connect to real models.")); + stream.write(tef()); + stream.end(ct()); + } + }); + stream.on("error", (e) => { + if (!e.message.includes("cancel")) log(" STREAM ERR: " + e.message); + }); +}); + +async function callOpenRouter(msg, stream) { + try { + const r = await fetch("https://openrouter.ai/api/v1/chat/completions", { + method: "POST", + headers: { + "Authorization": "Bearer " + OPENROUTER_KEY, + "Content-Type": "application/json", + }, + body: JSON.stringify({ + model: "openrouter/auto", + messages: [{ role: "user", content: msg }], + stream: true, + }), + }); + + if (!r.ok) { + const errText = await r.text().catch(() => ""); + stream.write(tdf("OpenRouter error " + r.status + ": " + errText.slice(0, 200))); + stream.write(tef()); + stream.end(ct()); + return; + } + + const reader = r.body.getReader(); + const dec = new TextDecoder(); + let buf = ""; + + while (true) { + const { done, value } = await reader.read(); + if (done) break; + buf += dec.decode(value, { stream: true }); + const lines = buf.split("\\n"); + buf = lines.pop() || ""; + + for (const line of lines) { + if (!line.startsWith("data: ")) continue; + const data = line.slice(6).trim(); + if (data === "[DONE]") continue; + try { + const json = JSON.parse(data); + const content = json.choices?.[0]?.delta?.content; + if (content) stream.write(tdf(content)); + } catch(e) {} + } + } + + stream.write(tef()); + stream.end(ct()); + log(" OpenRouter stream complete"); + } catch(e) { + log(" OpenRouter error: " + e.message); + try { + stream.write(tdf("Proxy error: " + e.message)); + stream.write(tef()); + stream.end(ct()); + } catch(e2) {} + } +} + +server.on("error", (e) => log("SVR: " + e.message)); +server.listen(18645, "127.0.0.1", () => log("Cursor proxy (bidi) on 18645")); +`; +} + +// ── Caddyfile ─────────────────────────────────────────────────────────────── + +function getCaddyfile(): string { + return `{ +\tlocal_certs +\tauto_https 
disable_redirects +} + +https://api2.cursor.sh, +https://api2geo.cursor.sh, +https://api2direct.cursor.sh, +https://agentn.api5.cursor.sh, +https://agent.api5.cursor.sh { +\ttls internal + +\thandle /agent.v1.AgentService/* { +\t\treverse_proxy h2c://127.0.0.1:18645 { +\t\t\tflush_interval -1 +\t\t} +\t} + +\thandle { +\t\treverse_proxy http://127.0.0.1:18644 { +\t\t\tflush_interval -1 +\t\t} +\t} +} +`; +} + +// ── Hosts entries ─────────────────────────────────────────────────────────── + +const CURSOR_DOMAINS = [ + "api2.cursor.sh", + "api2geo.cursor.sh", + "api2direct.cursor.sh", + "agentn.api5.cursor.sh", + "agent.api5.cursor.sh", +]; + +// ── Deployment ────────────────────────────────────────────────────────────── + +/** + * Deploy the Cursor proxy infrastructure onto the remote VM. + * Installs Caddy, uploads proxy scripts, writes Caddyfile, configures /etc/hosts. + */ +export async function setupCursorProxy(runner: CloudRunner): Promise<void> { + logStep("Deploying Cursor→OpenRouter proxy..."); + + // 1. Install Caddy if not present + const installCaddy = [ + 'if command -v caddy >/dev/null 2>&1; then echo "caddy already installed"; exit 0; fi', + 'echo "Installing Caddy..."', + 'curl -sf "https://caddyserver.com/api/download?os=linux&arch=amd64" -o /usr/local/bin/caddy', + "chmod +x /usr/local/bin/caddy", + "caddy version", + ].join("\n"); + + const caddyResult = await asyncTryCatchIf(isOperationalError, () => wrapSshCall(runner.runServer(installCaddy, 60))); + if (!caddyResult.ok) { + logWarn("Caddy install failed — Cursor proxy will not work"); + return; + } + logInfo("Caddy available"); + + // 2.
Upload proxy scripts via base64 + const unaryB64 = Buffer.from(getUnaryScript()).toString("base64"); + const bidiB64 = Buffer.from(getBidiScript()).toString("base64"); + const caddyfileB64 = Buffer.from(getCaddyfile()).toString("base64"); + + for (const b64 of [ + unaryB64, + bidiB64, + caddyfileB64, + ]) { + if (!/^[A-Za-z0-9+/=]+$/.test(b64)) { + throw new Error("Unexpected characters in base64 output"); + } + } + + const deployScript = [ + "mkdir -p ~/.cursor/proxy", + `printf '%s' '${unaryB64}' | base64 -d > ~/.cursor/proxy/unary.mjs`, + `printf '%s' '${bidiB64}' | base64 -d > ~/.cursor/proxy/bidi.mjs`, + `printf '%s' '${caddyfileB64}' | base64 -d > ~/.cursor/proxy/Caddyfile`, + "chmod 600 ~/.cursor/proxy/*.mjs", + "chmod 644 ~/.cursor/proxy/Caddyfile", + ].join(" && "); + + await wrapSshCall(runner.runServer(deployScript)); + logInfo("Proxy scripts deployed"); + + // 3. Configure /etc/hosts for domain spoofing + const hostsScript = [ + // Remove any existing cursor entries + 'sed -i "/cursor\\.sh/d" /etc/hosts 2>/dev/null || true', + // Add our entries + `echo "127.0.0.1 ${CURSOR_DOMAINS.join(" ")}" >> /etc/hosts`, + ].join(" && "); + + await wrapSshCall(runner.runServer(hostsScript)); + logInfo("Hosts spoofing configured"); + + // 4. Install Caddy's internal CA cert + const trustScript = "caddy trust 2>/dev/null || true"; + await wrapSshCall(runner.runServer(trustScript, 30)); + logInfo("Caddy CA trusted"); + + // 5. 
Write Cursor CLI config (permissions + PATH) + const configScript = [ + "mkdir -p ~/.cursor/rules", + `cat > ~/.cursor/cli-config.json << 'CONF' +{"version":1,"permissions":{"allow":["Shell(*)","Read(*)","Write(*)","WebFetch(*)","Mcp(*)"],"deny":[]}} +CONF`, + "chmod 600 ~/.cursor/cli-config.json", + 'grep -q ".local/bin" ~/.bashrc 2>/dev/null || printf \'\\nexport PATH="$HOME/.local/bin:$PATH"\\n\' >> ~/.bashrc', + 'grep -q ".local/bin" ~/.zshrc 2>/dev/null || printf \'\\nexport PATH="$HOME/.local/bin:$PATH"\\n\' >> ~/.zshrc', + ].join("\n"); // join with newlines, not " && ": the CONF terminator must sit alone on its line or the heredoc never closes + await wrapSshCall(runner.runServer(configScript)); + logInfo("Cursor CLI configured"); +} + +/** + * Start the Cursor proxy services (Caddy + two Node.js backends). + * Uses systemd if available, falls back to setsid/nohup. + */ +export async function startCursorProxy(runner: CloudRunner): Promise<void> { + logStep("Starting Cursor proxy services..."); + + // Find Node.js binary (cursor bundles its own) + const nodeFind = + "NODE=$(find ~/.local/share/cursor-agent -name node -type f 2>/dev/null | head -1); " + + '[ -z "$NODE" ] && NODE=$(command -v node); ' + + 'echo "Using node: $NODE"'; + + // Port check (same pattern as startGateway) + const portCheck = (port: number) => + `ss -tln 2>/dev/null | grep -q ":${port} " || nc -z 127.0.0.1 ${port} 2>/dev/null`; + + const script = [ + "source ~/.spawnrc 2>/dev/null", + nodeFind, + + // Start unary backend + `if ${portCheck(18644)}; then echo "Unary backend already running"; else`, + " if command -v systemctl >/dev/null 2>&1; then", + ' _sudo=""; [ "$(id -u)" != "0" ] && _sudo="sudo"', + " cat > /tmp/cursor-proxy-unary.service << UNIT", + "[Unit]", + "Description=Cursor Proxy (unary)", + "After=network.target", + "[Service]", + "Type=simple", + "ExecStart=$NODE $HOME/.cursor/proxy/unary.mjs", + "Restart=always", + "RestartSec=3", + "User=$(whoami)", + "Environment=HOME=$HOME", + "Environment=PATH=$HOME/.local/bin:/usr/local/bin:/usr/bin:/bin", + "[Install]", +
"WantedBy=multi-user.target", + "UNIT", + " $_sudo mv /tmp/cursor-proxy-unary.service /etc/systemd/system/", + " $_sudo systemctl daemon-reload", + " $_sudo systemctl restart cursor-proxy-unary", + " else", + " setsid $NODE ~/.cursor/proxy/unary.mjs < /dev/null &", + " fi", + "fi", + + // Start bidi backend + `if ${portCheck(18645)}; then echo "BiDi backend already running"; else`, + " if command -v systemctl >/dev/null 2>&1; then", + ' _sudo=""; [ "$(id -u)" != "0" ] && _sudo="sudo"', + " cat > /tmp/cursor-proxy-bidi.service << UNIT", + "[Unit]", + "Description=Cursor Proxy (bidi)", + "After=network.target", + "[Service]", + "Type=simple", + "ExecStart=$NODE $HOME/.cursor/proxy/bidi.mjs", + "Restart=always", + "RestartSec=3", + "User=$(whoami)", + "Environment=HOME=$HOME", + 'Environment=OPENROUTER_API_KEY=$(grep OPENROUTER_API_KEY ~/.spawnrc 2>/dev/null | head -1 | cut -d= -f2- | tr -d "\'")', + "Environment=PATH=$HOME/.local/bin:/usr/local/bin:/usr/bin:/bin", + "[Install]", + "WantedBy=multi-user.target", + "UNIT", + " $_sudo mv /tmp/cursor-proxy-bidi.service /etc/systemd/system/", + " $_sudo systemctl daemon-reload", + " $_sudo systemctl restart cursor-proxy-bidi", + " else", + " setsid $NODE ~/.cursor/proxy/bidi.mjs < /dev/null &", + " fi", + "fi", + + // Start Caddy + `if ${portCheck(443)}; then echo "Caddy already running"; else`, + " caddy start --config ~/.cursor/proxy/Caddyfile --adapter caddyfile 2>/dev/null || true", + "fi", + + // Wait for all services + "elapsed=0; while [ $elapsed -lt 30 ]; do", + ` if ${portCheck(443)} && ${portCheck(18644)} && ${portCheck(18645)}; then`, + ' echo "Cursor proxy ready after ${elapsed}s"', + " exit 0", + " fi", + " sleep 1; elapsed=$((elapsed + 1))", + "done", + 'echo "Cursor proxy failed to start"; exit 1', + ].join("\n"); + + const result = await asyncTryCatchIf(isOperationalError, () => wrapSshCall(runner.runServer(script, 60))); + if (result.ok) { + logInfo("Cursor proxy started"); + } else { + logWarn("Cursor 
proxy start failed — agent may not work"); + } +} diff --git a/packages/cli/src/shared/oauth.ts b/packages/cli/src/shared/oauth.ts index 3bdc45b59..11772f818 100644 --- a/packages/cli/src/shared/oauth.ts +++ b/packages/cli/src/shared/oauth.ts @@ -285,7 +285,7 @@ export function hasSavedOpenRouterKey(): boolean { } /** Load a previously saved OpenRouter API key from ~/.config/spawn/openrouter.json. */ -function loadSavedOpenRouterKey(): string | null { +export function loadSavedOpenRouterKey(): string | null { const result = tryCatch(() => { const configPath = getSpawnCloudConfigPath("openrouter"); const data = parseJsonObj(readFileSync(configPath, "utf-8")); diff --git a/packages/cli/src/shared/orchestrate.ts b/packages/cli/src/shared/orchestrate.ts index fe1b16007..72faaccb5 100644 --- a/packages/cli/src/shared/orchestrate.ts +++ b/packages/cli/src/shared/orchestrate.ts @@ -285,7 +285,7 @@ export async function runOrchestration( // Skip cloud-init for minimal-tier agents when using tarballs or snapshots. // Ubuntu 24.04 base images already have curl + git, so minimal agents (claude, - // opencode, zeroclaw, hermes) don't need the cloud-init package install step. + // opencode, hermes) don't need the cloud-init package install step. // This saves ~30-60s by just waiting for SSH instead of polling for cloud-init completion. if ( cloud.cloudName !== "local" && diff --git a/packages/cli/src/shared/spawn-skill.ts b/packages/cli/src/shared/spawn-skill.ts index a2b6f1546..df12ebf6c 100644 --- a/packages/cli/src/shared/spawn-skill.ts +++ b/packages/cli/src/shared/spawn-skill.ts @@ -30,7 +30,7 @@ You have the \`spawn\` CLI installed. 
Use it to provision cloud VMs with AI agen spawn --headless --output json --prompt "task description" \`\`\` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: \`{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}\` @@ -74,7 +74,7 @@ const HERMES_SNIPPET = ` You have the \`spawn\` CLI for creating child cloud VMs with AI agents. Use \`spawn --headless --output json --prompt "task"\` to delegate work. -Available agents: claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie. +Available agents: claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie. Cloud credentials are pre-configured. Run \`spawn list --json\` to see children. \`--headless\` only provisions. To run a prompt on the child: \`ssh root@ "bash -lc 'claude -p --dangerously-skip-permissions \\"prompt\\"'"\`. Always use \`bash -lc\` (binaries are in ~/.local/bin/). `; @@ -104,11 +104,6 @@ const AGENT_SKILLS: Record = { content: SKILL_FRONTMATTER + SKILL_BODY, append: false, }, - zeroclaw: { - remotePath: "~/.zeroclaw/workspace/AGENTS.md", - content: SKILL_BODY, - append: false, - }, opencode: { remotePath: "~/.config/opencode/AGENTS.md", content: SKILL_BODY, diff --git a/packages/cli/src/sprite/sprite.ts b/packages/cli/src/sprite/sprite.ts index 047079622..f7284d18f 100644 --- a/packages/cli/src/sprite/sprite.ts +++ b/packages/cli/src/sprite/sprite.ts @@ -657,10 +657,10 @@ export async function uploadFileSprite(localPath: string, remotePath: string): P /** Download a file from the remote sprite by catting it to stdout. 
*/ export async function downloadFileSprite(remotePath: string, localPath: string): Promise<void> { - const normalizedRemote = validateRemotePath(remotePath, /^[a-zA-Z0-9/_.~$-]+$/); + const expandedRemote = remotePath.replace(/^\$HOME\//, "~/"); + const normalizedRemote = validateRemotePath(expandedRemote, /^[a-zA-Z0-9/_.~-]+$/); const spriteCmd = getSpriteCmd()!; - const expandedPath = normalizedRemote.replace(/^\$HOME/, "~"); await spriteRetry("sprite download", async () => { const proc = Bun.spawn( @@ -672,7 +672,7 @@ export async function downloadFileSprite(remotePath: string, localPath: string): _state.name, "--", "cat", - expandedPath, + normalizedRemote, ], { stdio: [ diff --git a/packer/agents.json b/packer/agents.json index b6c25aec5..9ab6bb61b 100644 --- a/packer/agents.json +++ b/packer/agents.json @@ -30,13 +30,6 @@ "mkdir -p ~/.npm-global/bin && npm install -g --prefix ~/.npm-global @kilocode/cli" ] }, - "zeroclaw": { - "tier": "minimal", - "install": [ - "if [ ! -f /swapfile ]; then fallocate -l 4G /swapfile && chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile; fi", - "curl -LsSf https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/a117be64fdaa31779204beadf2942c8aef57d0e5/scripts/bootstrap.sh | bash -s -- --install-rust --install-system-deps --prefer-prebuilt" - ] - }, "hermes": { "tier": "minimal", "install": [ diff --git a/packer/digitalocean.pkr.hcl b/packer/digitalocean.pkr.hcl index 2329be016..28bb7b3b5 100644 --- a/packer/digitalocean.pkr.hcl +++ b/packer/digitalocean.pkr.hcl @@ -7,7 +7,7 @@ packer { } } -variable "do_api_token" { +variable "digitalocean_access_token" { type = string sensitive = true } @@ -32,11 +32,11 @@ locals { } source "digitalocean" "spawn" { - api_token = var.do_api_token + api_token = var.digitalocean_access_token image = "ubuntu-24-04-x64" region = "sfo3" - # 2 GB RAM needed — Claude's native installer and zeroclaw's Rust build - # get OOM-killed on s-1vcpu-1gb. Snapshots built here work on all sizes.
+ # 2 GB RAM needed — Claude's native installer gets OOM-killed on + # s-1vcpu-1gb. Snapshots built here work on all sizes. size = "s-2vcpu-2gb" ssh_username = "root" diff --git a/packer/scripts/capture-agent.sh b/packer/scripts/capture-agent.sh index 4a7bee768..7f5498c2b 100644 --- a/packer/scripts/capture-agent.sh +++ b/packer/scripts/capture-agent.sh @@ -13,9 +13,9 @@ fi # Validate agent name against allowed list to prevent injection case "${AGENT_NAME}" in - openclaw|codex|kilocode|claude|opencode|zeroclaw|hermes|junie) ;; + openclaw|codex|kilocode|claude|opencode|hermes|junie) ;; *) - printf 'Error: Invalid agent name: %s\nAllowed: openclaw, codex, kilocode, claude, opencode, zeroclaw, hermes, junie\n' "${AGENT_NAME}" >&2 + printf 'Error: Invalid agent name: %s\nAllowed: openclaw, codex, kilocode, claude, opencode, hermes, junie\n' "${AGENT_NAME}" >&2 exit 1 ;; esac @@ -44,9 +44,6 @@ case "${AGENT_NAME}" in opencode) echo "/root/.opencode/" >> "${PATHS_FILE}" ;; - zeroclaw) - echo "/root/.cargo/bin/zeroclaw" >> "${PATHS_FILE}" - ;; hermes) echo "/root/.local/bin/hermes" >> "${PATHS_FILE}" echo "/root/.local/share/" >> "${PATHS_FILE}" diff --git a/sh/aws/README.md b/sh/aws/README.md index 962c92581..269d2c35b 100644 --- a/sh/aws/README.md +++ b/sh/aws/README.md @@ -24,12 +24,6 @@ bash <(curl -fsSL https://openrouter.ai/labs/spawn/aws/claude.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/aws/openclaw.sh) ``` -#### ZeroClaw - -```bash -bash <(curl -fsSL https://openrouter.ai/labs/spawn/aws/zeroclaw.sh) -``` - #### Codex CLI ```bash diff --git a/sh/aws/zeroclaw.sh b/sh/aws/zeroclaw.sh deleted file mode 100644 index 0b6188de6..000000000 --- a/sh/aws/zeroclaw.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash -set -eo pipefail - -# Thin shim: ensures bun is available, runs bundled aws.js (local or from GitHub release) - -_ensure_bun() { - if command -v bun &>/dev/null; then return 0; fi - printf '\033[0;36mInstalling bun...\033[0m\n' >&2 - curl -fsSL --proto 
'=https' --show-error https://bun.sh/install?version=1.3.9 | bash >/dev/null || { printf '\033[0;31mFailed to install bun\033[0m\n' >&2; exit 1; } - export PATH="$HOME/.bun/bin:$PATH" - command -v bun &>/dev/null || { printf '\033[0;31mbun not found after install\033[0m\n' >&2; exit 1; } -} - -_ensure_bun - -# SPAWN_CLI_DIR override — force local source (used by e2e tests) -if [[ -n "${SPAWN_CLI_DIR:-}" && -f "$SPAWN_CLI_DIR/packages/cli/src/aws/main.ts" ]]; then - exec bun run "$SPAWN_CLI_DIR/packages/cli/src/aws/main.ts" zeroclaw "$@" -fi - -# Remote — download and run compiled TypeScript bundle -AWS_JS=$(mktemp) -trap 'rm -f "$AWS_JS"' EXIT -curl -fsSL --proto '=https' "https://github.com/OpenRouterTeam/spawn/releases/download/aws-latest/aws.js" -o "$AWS_JS" \ - || { printf '\033[0;31mFailed to download aws.js\033[0m\n' >&2; exit 1; } -exec bun run "$AWS_JS" zeroclaw "$@" diff --git a/sh/digitalocean/README.md b/sh/digitalocean/README.md index d8672dc07..6b2c50e74 100644 --- a/sh/digitalocean/README.md +++ b/sh/digitalocean/README.md @@ -16,12 +16,6 @@ bash <(curl -fsSL https://openrouter.ai/labs/spawn/digitalocean/claude.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/digitalocean/openclaw.sh) ``` -#### ZeroClaw - -```bash -bash <(curl -fsSL https://openrouter.ai/labs/spawn/digitalocean/zeroclaw.sh) -``` - #### Codex CLI ```bash @@ -62,7 +56,7 @@ bash <(curl -fsSL https://openrouter.ai/labs/spawn/digitalocean/cursor.sh) | Variable | Description | Default | |---|---|---| -| `DO_API_TOKEN` | DigitalOcean API token | — (OAuth if unset) | +| `DIGITALOCEAN_ACCESS_TOKEN` | DigitalOcean API token (also accepts `DIGITALOCEAN_API_TOKEN` or `DO_API_TOKEN`) | — (OAuth if unset) | | `DO_DROPLET_NAME` | Name for the created droplet | auto-generated | | `DO_REGION` | Datacenter region (see regions below) | `nyc3` | | `DO_DROPLET_SIZE` | Droplet size slug (see sizes below) | `s-2vcpu-2gb` | @@ -97,7 +91,7 @@ bash <(curl -fsSL 
https://openrouter.ai/labs/spawn/digitalocean/cursor.sh) ```bash DO_DROPLET_NAME=dev-mk1 \ -DO_API_TOKEN=your-token \ +DIGITALOCEAN_ACCESS_TOKEN=your-token \ OPENROUTER_API_KEY=sk-or-v1-xxxxx \ bash <(curl -fsSL https://openrouter.ai/labs/spawn/digitalocean/claude.sh) ``` @@ -107,7 +101,7 @@ Override region and droplet size: ```bash DO_REGION=fra1 \ DO_DROPLET_SIZE=s-1vcpu-2gb \ -DO_API_TOKEN=your-token \ +DIGITALOCEAN_ACCESS_TOKEN=your-token \ OPENROUTER_API_KEY=sk-or-v1-xxxxx \ bash <(curl -fsSL https://openrouter.ai/labs/spawn/digitalocean/claude.sh) ``` diff --git a/sh/digitalocean/zeroclaw.sh b/sh/digitalocean/zeroclaw.sh deleted file mode 100644 index 097664e95..000000000 --- a/sh/digitalocean/zeroclaw.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/bin/bash -set -eo pipefail - -# Thin shim: ensures bun is available, runs bundled digitalocean.js (local or from GitHub release) -# Includes restart loop for SIGTERM recovery on DigitalOcean - -_AGENT_NAME="zeroclaw" -_MAX_RETRIES=3 - -_ensure_bun() { - if command -v bun &>/dev/null; then return 0; fi - printf '\033[0;36mInstalling bun...\033[0m\n' >&2 - curl -fsSL --proto '=https' --show-error https://bun.sh/install?version=1.3.9 | bash >/dev/null || { printf '\033[0;31mFailed to install bun\033[0m\n' >&2; exit 1; } - export PATH="$HOME/.bun/bin:$PATH" - command -v bun &>/dev/null || { printf '\033[0;31mbun not found after install\033[0m\n' >&2; exit 1; } -} - -# Run command in the foreground so bun gets full terminal access (raw mode, -# arrow keys for interactive prompts). The old pattern backgrounded the child -# with & + wait so a SIGTERM trap could forward the signal, but that removed -# bun from the foreground process group and broke @clack/prompts multiselect. -# Now SIGTERM is detected from exit code 143 (128 + 15) after the child exits. -_run_with_restart() { - # In headless mode (E2E / --headless), skip the restart loop entirely. 
- # Restarting in headless mode creates duplicate droplets, exhausting the - # account's droplet quota and causing all subsequent agents to fail. - if [ "${SPAWN_HEADLESS:-}" = "1" ]; then - "$@" - return $? - fi - - local attempt=0 - local backoff=2 - while [ "$attempt" -lt "$_MAX_RETRIES" ]; do - attempt=$((attempt + 1)) - - "$@" - local exit_code=$? - - # Normal exit - if [ "$exit_code" -eq 0 ]; then - return 0 - fi - - # SIGTERM (143) or SIGKILL (137) — attempt restart - if [ "$exit_code" -eq 143 ] || [ "$exit_code" -eq 137 ]; then - printf '\033[0;33m[spawn/%s] Agent process terminated (exit %s). The droplet is likely still running.\033[0m\n' \ - "$_AGENT_NAME" "$exit_code" >&2 - printf '\033[0;33m[spawn/%s] Check your DigitalOcean dashboard: https://cloud.digitalocean.com/droplets\033[0m\n' \ - "$_AGENT_NAME" >&2 - if [ "$attempt" -lt "$_MAX_RETRIES" ]; then - printf '\033[0;33m[spawn/%s] Restarting (attempt %s/%s, backoff %ss)...\033[0m\n' \ - "$_AGENT_NAME" "$((attempt + 1))" "$_MAX_RETRIES" "$backoff" >&2 - sleep "$backoff" - backoff=$((backoff * 2)) - continue - else - printf '\033[0;31m[spawn/%s] Max restart attempts reached (%s). Giving up.\033[0m\n' \ - "$_AGENT_NAME" "$_MAX_RETRIES" >&2 - return "$exit_code" - fi - fi - - # Other failure — exit with the original code - return "$exit_code" - done -} - -_ensure_bun - -# SPAWN_CLI_DIR override — force local source (used by e2e tests) -if [[ -n "${SPAWN_CLI_DIR:-}" && -f "$SPAWN_CLI_DIR/packages/cli/src/digitalocean/main.ts" ]]; then - _run_with_restart bun run "$SPAWN_CLI_DIR/packages/cli/src/digitalocean/main.ts" "$_AGENT_NAME" "$@" - exit $? 
-fi - -# Remote — download bundled digitalocean.js from GitHub release -DO_JS=$(mktemp) -trap 'rm -f "$DO_JS"' EXIT -curl -fsSL --proto '=https' "https://github.com/OpenRouterTeam/spawn/releases/download/digitalocean-latest/digitalocean.js" -o "$DO_JS" \ - || { printf '\033[0;31mFailed to download digitalocean.js\033[0m\n' >&2; exit 1; } - -_run_with_restart bun run "$DO_JS" "$_AGENT_NAME" "$@" -exit $? diff --git a/sh/docker/zeroclaw.Dockerfile b/sh/docker/zeroclaw.Dockerfile deleted file mode 100644 index c02c0ffbd..000000000 --- a/sh/docker/zeroclaw.Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -FROM ubuntu:24.04 - -ENV DEBIAN_FRONTEND=noninteractive - -# Base packages -RUN apt-get update -y && \ - apt-get install -y --no-install-recommends \ - curl git ca-certificates build-essential unzip && \ - rm -rf /var/lib/apt/lists/* - -# ZeroClaw — bootstrap script installs Rust + builds from source -RUN curl --proto '=https' -LsSf \ - https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/a117be64fdaa31779204beadf2942c8aef57d0e5/scripts/bootstrap.sh \ - | bash -s -- --install-rust --install-system-deps --prefer-prebuilt - -# Ensure cargo bin is on PATH for all shells -RUN for rc in /root/.bashrc /root/.zshrc; do \ - grep -q '.cargo/bin' "$rc" 2>/dev/null || \ - echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> "$rc"; \ - done - -CMD ["/bin/sleep", "inf"] diff --git a/sh/e2e/e2e.sh b/sh/e2e/e2e.sh index 76bc8d5e7..840417429 100755 --- a/sh/e2e/e2e.sh +++ b/sh/e2e/e2e.sh @@ -230,7 +230,7 @@ run_single_agent() { # Per-agent timeout: run provision/verify/input_test in a subshell with a # wall-clock timeout. This prevents any single step from hanging indefinitely # and ensures a result file is always written (pass, fail, or timeout). - # Fixes #2714: sprite-zeroclaw and digitalocean-opencode stalling with no result. + # Fixes #2714: digitalocean-opencode stalling with no result. 
# --------------------------------------------------------------------------- local effective_agent_timeout effective_agent_timeout=$(get_agent_timeout "${agent}") diff --git a/sh/e2e/interactive-harness.ts b/sh/e2e/interactive-harness.ts index bc7e6aacd..cecb26096 100644 --- a/sh/e2e/interactive-harness.ts +++ b/sh/e2e/interactive-harness.ts @@ -9,7 +9,7 @@ // Required env: // ANTHROPIC_API_KEY — For the AI driver (Claude Haiku) // OPENROUTER_API_KEY — Injected into spawn for the agent -// Cloud credentials — HCLOUD_TOKEN, DO_API_TOKEN, AWS_ACCESS_KEY_ID, etc. +// Cloud credentials — HCLOUD_TOKEN, DIGITALOCEAN_ACCESS_TOKEN, AWS_ACCESS_KEY_ID, etc. // // Outputs JSON to stdout: { success: boolean, duration: number, transcript: string, uxIssues?: UxIssue[] } @@ -47,7 +47,7 @@ function buildCredentialHints(): string { const hetzner = process.env.HCLOUD_TOKEN ?? ""; if (hetzner) creds.push(`Hetzner token: ${hetzner}`); - const doToken = process.env.DO_API_TOKEN ?? ""; + const doToken = process.env.DIGITALOCEAN_ACCESS_TOKEN ?? process.env.DIGITALOCEAN_API_TOKEN ?? process.env.DO_API_TOKEN ?? ""; if (doToken) creds.push(`DigitalOcean token: ${doToken}`); const awsKey = process.env.AWS_ACCESS_KEY_ID ?? ""; @@ -79,6 +79,8 @@ function redactSecrets(text: string): string { const secrets = [ process.env.OPENROUTER_API_KEY, process.env.HCLOUD_TOKEN, + process.env.DIGITALOCEAN_ACCESS_TOKEN, + process.env.DIGITALOCEAN_API_TOKEN, process.env.DO_API_TOKEN, process.env.AWS_ACCESS_KEY_ID, process.env.AWS_SECRET_ACCESS_KEY, diff --git a/sh/e2e/lib/clouds/digitalocean.sh b/sh/e2e/lib/clouds/digitalocean.sh index 0f0589dc3..3ffc99cf5 100644 --- a/sh/e2e/lib/clouds/digitalocean.sh +++ b/sh/e2e/lib/clouds/digitalocean.sh @@ -4,11 +4,19 @@ # Implements the standard cloud driver interface (_digitalocean_*) for # provisioning and managing DigitalOcean droplets in the E2E test suite. 
# -# Requires: DO_API_TOKEN, jq, ssh +# Accepts: DIGITALOCEAN_ACCESS_TOKEN, DIGITALOCEAN_API_TOKEN, or DO_API_TOKEN # API: https://api.digitalocean.com/v2 # SSH user: root set -eo pipefail +# ── Resolve DigitalOcean token (canonical > alternate > legacy) ─────────── +if [ -n "${DIGITALOCEAN_ACCESS_TOKEN:-}" ]; then + DO_API_TOKEN="${DIGITALOCEAN_ACCESS_TOKEN}" +elif [ -n "${DIGITALOCEAN_API_TOKEN:-}" ]; then + DO_API_TOKEN="${DIGITALOCEAN_API_TOKEN}" +fi +export DO_API_TOKEN + # --------------------------------------------------------------------------- # Constants # --------------------------------------------------------------------------- @@ -19,7 +27,7 @@ _DO_DEFAULT_REGION="nyc3" # --------------------------------------------------------------------------- # _do_curl_auth [curl-args...] # -# Wrapper around curl that passes the DO_API_TOKEN via a temp config file +# Wrapper around curl that passes the token via a temp config file # instead of a command-line -H flag. This keeps the token out of `ps` output. # All arguments are forwarded to curl. # --------------------------------------------------------------------------- @@ -37,19 +45,19 @@ _do_curl_auth() { # --------------------------------------------------------------------------- # _digitalocean_validate_env # -# Validates that DO_API_TOKEN is set and the DigitalOcean API is reachable -# with valid credentials. +# Validates that a DigitalOcean token is set and the API is reachable. +# Accepts DIGITALOCEAN_ACCESS_TOKEN, DIGITALOCEAN_API_TOKEN, or DO_API_TOKEN. # Returns 0 on success, 1 on failure. # --------------------------------------------------------------------------- _digitalocean_validate_env() { if [ -z "${DO_API_TOKEN:-}" ]; then - log_err "DO_API_TOKEN is not set" + log_err "DigitalOcean token is not set (set DIGITALOCEAN_ACCESS_TOKEN, DIGITALOCEAN_API_TOKEN, or DO_API_TOKEN)" return 1 fi if ! 
_do_curl_auth -sf \ "${_DO_API}/account" >/dev/null 2>&1; then - log_err "DigitalOcean API authentication failed — check DO_API_TOKEN" + log_err "DigitalOcean API authentication failed — check your token" return 1 fi @@ -65,7 +73,7 @@ _digitalocean_validate_env() { # --------------------------------------------------------------------------- _digitalocean_headless_env() { local app="$1" - # local agent="$2" # unused but part of the interface + # $2 = agent (unused but part of the interface) printf 'export DO_DROPLET_NAME="%s"\n' "${app}" printf 'export DO_DROPLET_SIZE="%s"\n' "${DO_DROPLET_SIZE:-${_DO_DEFAULT_SIZE}}" @@ -376,7 +384,7 @@ _digitalocean_max_parallel() { _existing=$(_do_curl_auth -sf "${_DO_API}/droplets?per_page=200" 2>/dev/null | grep -o '"id":[0-9]*' | wc -l | tr -d ' ') || { printf '3'; return 0; } _available=$(( _limit - _existing )) if [ "${_available}" -lt 1 ]; then - log_warn "DigitalOcean droplet limit reached: ${_existing}/${_limit} droplets in use (0 available)" + log_warn "DigitalOcean droplet limit reached: ${_existing}/${_limit} droplets in use (0 available)" >&2 printf '0' else printf '%d' "${_available}" diff --git a/sh/e2e/lib/clouds/gcp.sh b/sh/e2e/lib/clouds/gcp.sh index 2c2d66be0..69dfb658c 100644 --- a/sh/e2e/lib/clouds/gcp.sh +++ b/sh/e2e/lib/clouds/gcp.sh @@ -92,7 +92,7 @@ process.stdout.write(d.GCP_ZONE || ''); # --------------------------------------------------------------------------- _gcp_headless_env() { local app="$1" - # local agent="$2" # unused but part of the interface + # $2 = agent (unused but part of the interface) printf 'export GCP_INSTANCE_NAME="%s"\n' "${app}" printf 'export GCP_PROJECT="%s"\n' "${GCP_PROJECT:-}" diff --git a/sh/e2e/lib/clouds/hetzner.sh b/sh/e2e/lib/clouds/hetzner.sh index 82bd015b7..962cc7a94 100644 --- a/sh/e2e/lib/clouds/hetzner.sh +++ b/sh/e2e/lib/clouds/hetzner.sh @@ -378,9 +378,9 @@ _hetzner_cleanup_stale() { # --------------------------------------------------------------------------- # 
_hetzner_max_parallel # -# Hetzner accounts have a primary IP limit. This QA account supports ~3 -# concurrent provisioning operations before hitting resource_limit_exceeded. +# Hetzner accounts have a primary IP limit. Reduced from 3 to 2 to avoid +# server_limit_reached when pre-existing servers consume quota (#3111). # --------------------------------------------------------------------------- _hetzner_max_parallel() { - printf '3' + printf '2' } diff --git a/sh/e2e/lib/clouds/sprite.sh b/sh/e2e/lib/clouds/sprite.sh index 45b1cba88..64b47f07c 100644 --- a/sh/e2e/lib/clouds/sprite.sh +++ b/sh/e2e/lib/clouds/sprite.sh @@ -129,7 +129,7 @@ _sprite_validate_env() { # --------------------------------------------------------------------------- _sprite_headless_env() { local app="$1" - # local agent="$2" # unused but part of the interface + # $2 = agent (unused but part of the interface) printf 'export SPRITE_NAME="%s"\n' "${app}" if [ -n "${_SPRITE_ORG}" ]; then diff --git a/sh/e2e/lib/common.sh b/sh/e2e/lib/common.sh index 9cdb64e47..c04656a4e 100644 --- a/sh/e2e/lib/common.sh +++ b/sh/e2e/lib/common.sh @@ -5,7 +5,7 @@ set -eo pipefail # --------------------------------------------------------------------------- # Constants # --------------------------------------------------------------------------- -ALL_AGENTS="claude openclaw zeroclaw codex opencode kilocode hermes junie cursor" +ALL_AGENTS="claude openclaw codex opencode kilocode hermes junie cursor" PROVISION_TIMEOUT="${PROVISION_TIMEOUT:-720}" INSTALL_WAIT="${INSTALL_WAIT:-600}" INPUT_TEST_TIMEOUT="${INPUT_TEST_TIMEOUT:-120}" diff --git a/sh/e2e/lib/provision.sh b/sh/e2e/lib/provision.sh index 39553fdc5..f211b980d 100644 --- a/sh/e2e/lib/provision.sh +++ b/sh/e2e/lib/provision.sh @@ -275,11 +275,6 @@ CLOUD_ENV printf 'export OPENAI_BASE_URL=%q\n' "https://openrouter.ai/api/v1" } >> "${env_tmp}" ;; - zeroclaw) - { - printf 'export ZEROCLAW_PROVIDER=%q\n' "openrouter" - } >> "${env_tmp}" - ;; hermes) { printf 
'export OPENAI_BASE_URL=%q\n' "https://openrouter.ai/api/v1" @@ -393,10 +388,6 @@ _ensure_agent_binary() { bin_name="codex" install_cmd="mkdir -p ~/.npm-global && npm install -g --prefix ~/.npm-global @openai/codex" ;; - zeroclaw) - bin_name="zeroclaw" - install_cmd="curl -LsSf https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/a117be64fdaa31779204beadf2942c8aef57d0e5/scripts/bootstrap.sh | bash -s -- --install-rust --install-system-deps --prefer-prebuilt" - ;; opencode) bin_name="opencode" install_cmd="curl -fsSL https://opencode.ai/install | bash" diff --git a/sh/e2e/lib/verify.sh b/sh/e2e/lib/verify.sh index fd80ed488..0f47080ea 100644 --- a/sh/e2e/lib/verify.sh +++ b/sh/e2e/lib/verify.sh @@ -274,40 +274,6 @@ input_test_openclaw() { return 1 } -input_test_zeroclaw() { - local app="$1" - - _validate_timeout || return 1 - - log_step "Running input test for zeroclaw..." - # Base64-encode the prompt and stage it to a remote temp file. - # Use -m/--message for non-interactive single-message mode (not -p which is --provider). - local encoded_prompt - encoded_prompt=$(printf '%s' "${INPUT_TEST_PROMPT}" | base64 -w 0 2>/dev/null || printf '%s' "${INPUT_TEST_PROMPT}" | base64 | tr -d '\n') - _validate_base64 "${encoded_prompt}" || return 1 - _stage_prompt_remotely "${app}" "${encoded_prompt}" - _stage_timeout_remotely "${app}" "${INPUT_TEST_TIMEOUT}" - - local output - # The prompt and timeout are read from staged temp files — no interpolation in this command. 
- output=$(cloud_exec "${app}" "\ - source ~/.spawnrc 2>/dev/null; source ~/.cargo/env 2>/dev/null; \ - _TIMEOUT=\$(cat /tmp/.e2e-timeout); \ - rm -rf /tmp/e2e-test && mkdir -p /tmp/e2e-test && cd /tmp/e2e-test && git init -q; \ - PROMPT=\$(cat /tmp/.e2e-prompt | base64 -d); \ - timeout \"\$_TIMEOUT\" zeroclaw agent -m \"\$PROMPT\"" 2>&1) || true - - if printf '%s' "${output}" | grep -qx "${INPUT_TEST_MARKER}"; then - log_ok "zeroclaw input test — marker found in response" - return 0 - else - log_err "zeroclaw input test — marker '${INPUT_TEST_MARKER}' not found in response" - log_err "Response (last 5 lines):" - printf '%s\n' "${output}" | tail -5 >&2 - return 1 - fi -} - input_test_opencode() { log_warn "opencode is TUI-only — skipping input test" return 0 @@ -355,7 +321,6 @@ run_input_test() { claude) input_test_claude "${app}" ;; codex) input_test_codex "${app}" ;; openclaw) input_test_openclaw "${app}" ;; - zeroclaw) input_test_zeroclaw "${app}" ;; opencode) input_test_opencode ;; kilocode) input_test_kilocode ;; hermes) input_test_hermes ;; @@ -557,40 +522,6 @@ _openclaw_verify_gateway_resilience() { fi } -verify_zeroclaw() { - local app="$1" - local failures=0 - - # Binary check (may be in ~/.local/bin or ~/.cargo/bin depending on install method) - log_step "Checking zeroclaw binary..." - if cloud_exec "${app}" "export PATH=\$HOME/.local/bin:\$HOME/.cargo/bin:\$PATH; source ~/.cargo/env 2>/dev/null; command -v zeroclaw" >/dev/null 2>&1; then - log_ok "zeroclaw binary found" - else - log_err "zeroclaw binary not found" - failures=$((failures + 1)) - fi - - # Env check: ZEROCLAW_PROVIDER - log_step "Checking zeroclaw env (ZEROCLAW_PROVIDER)..." - if cloud_exec "${app}" "grep -q ZEROCLAW_PROVIDER ~/.spawnrc" >/dev/null 2>&1; then - log_ok "ZEROCLAW_PROVIDER present in .spawnrc" - else - log_err "ZEROCLAW_PROVIDER not found in .spawnrc" - failures=$((failures + 1)) - fi - - # Env check: provider is openrouter - log_step "Checking zeroclaw uses openrouter..." 
- if cloud_exec "${app}" "grep ZEROCLAW_PROVIDER ~/.spawnrc | grep -q openrouter" >/dev/null 2>&1; then - log_ok "ZEROCLAW_PROVIDER set to openrouter" - else - log_err "ZEROCLAW_PROVIDER not set to openrouter" - failures=$((failures + 1)) - fi - - return "${failures}" -} - verify_codex() { local app="$1" local failures=0 @@ -810,7 +741,6 @@ verify_agent() { case "${agent}" in claude) verify_claude "${app}" || agent_failures=$? ;; openclaw) verify_openclaw "${app}" || agent_failures=$? ;; - zeroclaw) verify_zeroclaw "${app}" || agent_failures=$? ;; codex) verify_codex "${app}" || agent_failures=$? ;; opencode) verify_opencode "${app}" || agent_failures=$? ;; kilocode) verify_kilocode "${app}" || agent_failures=$? ;; diff --git a/sh/gcp/README.md b/sh/gcp/README.md index 3e94a857a..b55d485ce 100644 --- a/sh/gcp/README.md +++ b/sh/gcp/README.md @@ -18,12 +18,6 @@ bash <(curl -fsSL https://openrouter.ai/labs/spawn/gcp/claude.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/gcp/openclaw.sh) ``` -#### ZeroClaw - -```bash -bash <(curl -fsSL https://openrouter.ai/labs/spawn/gcp/zeroclaw.sh) -``` - #### Codex CLI ```bash diff --git a/sh/gcp/zeroclaw.sh b/sh/gcp/zeroclaw.sh deleted file mode 100644 index 8f265a3f5..000000000 --- a/sh/gcp/zeroclaw.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -set -eo pipefail - -# Thin shim: ensures bun is available, runs bundled gcp.js (local or from GitHub release) - -_ensure_bun() { - if command -v bun &>/dev/null; then return 0; fi - printf '\033[0;36mInstalling bun...\033[0m\n' >&2 - curl -fsSL --proto '=https' --show-error https://bun.sh/install?version=1.3.9 | bash >/dev/null || { printf '\033[0;31mFailed to install bun\033[0m\n' >&2; exit 1; } - export PATH="$HOME/.bun/bin:$PATH" - command -v bun &>/dev/null || { printf '\033[0;31mbun not found after install\033[0m\n' >&2; exit 1; } -} - -_ensure_bun - -# SPAWN_CLI_DIR override — force local source (used by e2e tests) -if [[ -n "${SPAWN_CLI_DIR:-}" && -f 
"$SPAWN_CLI_DIR/packages/cli/src/gcp/main.ts" ]]; then - exec bun run "$SPAWN_CLI_DIR/packages/cli/src/gcp/main.ts" zeroclaw "$@" -fi - -# Remote — download bundled gcp.js from GitHub release -GCP_JS=$(mktemp) -trap 'rm -f "$GCP_JS"' EXIT -curl -fsSL --proto '=https' "https://github.com/OpenRouterTeam/spawn/releases/download/gcp-latest/gcp.js" -o "$GCP_JS" \ - || { printf '\033[0;31mFailed to download gcp.js\033[0m\n' >&2; exit 1; } - -exec bun run "$GCP_JS" zeroclaw "$@" diff --git a/sh/hetzner/README.md b/sh/hetzner/README.md index 919be65a9..6a35177a6 100644 --- a/sh/hetzner/README.md +++ b/sh/hetzner/README.md @@ -16,12 +16,6 @@ bash <(curl -fsSL https://openrouter.ai/labs/spawn/hetzner/claude.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/hetzner/openclaw.sh) ``` -#### ZeroClaw - -```bash -bash <(curl -fsSL https://openrouter.ai/labs/spawn/hetzner/zeroclaw.sh) -``` - #### Codex CLI ```bash diff --git a/sh/hetzner/zeroclaw.sh b/sh/hetzner/zeroclaw.sh deleted file mode 100644 index 4a1fc50be..000000000 --- a/sh/hetzner/zeroclaw.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/bash -set -eo pipefail - -_ensure_bun() { - if command -v bun &>/dev/null; then return 0; fi - printf '\033[0;36mInstalling bun...\033[0m\n' >&2 - curl -fsSL --proto '=https' --show-error https://bun.sh/install?version=1.3.9 | bash >/dev/null || { printf '\033[0;31mFailed to install bun\033[0m\n' >&2; exit 1; } - export PATH="$HOME/.bun/bin:$PATH" - command -v bun &>/dev/null || { printf '\033[0;31mbun not found after install\033[0m\n' >&2; exit 1; } -} -_ensure_bun - -# SPAWN_CLI_DIR override — force local source (used by e2e tests) -if [[ -n "${SPAWN_CLI_DIR:-}" && -f "$SPAWN_CLI_DIR/packages/cli/src/hetzner/main.ts" ]]; then - exec bun run "$SPAWN_CLI_DIR/packages/cli/src/hetzner/main.ts" zeroclaw "$@" -fi - -HETZNER_JS=$(mktemp) -trap 'rm -f "$HETZNER_JS"' EXIT -curl -fsSL --proto '=https' "https://github.com/OpenRouterTeam/spawn/releases/download/hetzner-latest/hetzner.js" -o 
"$HETZNER_JS" \ - || { printf '\033[0;31mFailed to download hetzner.js\033[0m\n' >&2; exit 1; } -exec bun run "$HETZNER_JS" zeroclaw "$@" diff --git a/sh/local/README.md b/sh/local/README.md index 082db3cf9..945ae14f0 100644 --- a/sh/local/README.md +++ b/sh/local/README.md @@ -11,7 +11,6 @@ If you have the [spawn CLI](https://github.com/OpenRouterTeam/spawn) installed: ```bash spawn claude local spawn openclaw local -spawn zeroclaw local spawn codex local spawn opencode local spawn kilocode local @@ -25,7 +24,6 @@ Or run directly without the CLI: ```bash bash <(curl -fsSL https://openrouter.ai/labs/spawn/local/claude.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/local/openclaw.sh) -bash <(curl -fsSL https://openrouter.ai/labs/spawn/local/zeroclaw.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/local/codex.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/local/opencode.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/local/kilocode.sh) diff --git a/sh/local/zeroclaw.sh b/sh/local/zeroclaw.sh deleted file mode 100644 index 0acf20571..000000000 --- a/sh/local/zeroclaw.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -set -eo pipefail - -# Thin shim: ensures bun is available, runs bundled local.js (local or from GitHub release) - -_ensure_bun() { - if command -v bun &>/dev/null; then return 0; fi - printf '\033[0;36mInstalling bun...\033[0m\n' >&2 - curl -fsSL --proto '=https' --show-error https://bun.sh/install?version=1.3.9 | bash >/dev/null || { printf '\033[0;31mFailed to install bun\033[0m\n' >&2; exit 1; } - export PATH="$HOME/.bun/bin:$PATH" - command -v bun &>/dev/null || { printf '\033[0;31mbun not found after install\033[0m\n' >&2; exit 1; } -} - -_ensure_bun - -# SPAWN_CLI_DIR override — force local source (used by e2e tests) -if [[ -n "${SPAWN_CLI_DIR:-}" && -f "$SPAWN_CLI_DIR/packages/cli/src/local/main.ts" ]]; then - exec bun run "$SPAWN_CLI_DIR/packages/cli/src/local/main.ts" zeroclaw "$@" -fi - -# Remote — download bundled 
local.js from GitHub release -LOCAL_JS=$(mktemp) -trap 'rm -f "$LOCAL_JS"' EXIT -curl -fsSL --proto '=https' "https://github.com/OpenRouterTeam/spawn/releases/download/local-latest/local.js" -o "$LOCAL_JS" \ - || { printf '\033[0;31mFailed to download local.js\033[0m\n' >&2; exit 1; } - -exec bun run "$LOCAL_JS" zeroclaw "$@" diff --git a/sh/sprite/README.md b/sh/sprite/README.md index 23fb0b118..57e951ce7 100644 --- a/sh/sprite/README.md +++ b/sh/sprite/README.md @@ -16,12 +16,6 @@ bash <(curl -fsSL https://openrouter.ai/labs/spawn/sprite/claude.sh) bash <(curl -fsSL https://openrouter.ai/labs/spawn/sprite/openclaw.sh) ``` -#### ZeroClaw - -```bash -bash <(curl -fsSL https://openrouter.ai/labs/spawn/sprite/zeroclaw.sh) -``` - #### Codex CLI ```bash diff --git a/sh/sprite/zeroclaw.sh b/sh/sprite/zeroclaw.sh deleted file mode 100644 index bf0985665..000000000 --- a/sh/sprite/zeroclaw.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -set -eo pipefail - -# Thin shim: ensures bun is available, runs bundled sprite.js (local or from GitHub release) - -_ensure_bun() { - if command -v bun &>/dev/null; then return 0; fi - printf '\033[0;36mInstalling bun...\033[0m\n' >&2 - curl -fsSL --proto '=https' --show-error https://bun.sh/install?version=1.3.9 | bash >/dev/null || { printf '\033[0;31mFailed to install bun\033[0m\n' >&2; exit 1; } - export PATH="$HOME/.bun/bin:$PATH" - command -v bun &>/dev/null || { printf '\033[0;31mbun not found after install\033[0m\n' >&2; exit 1; } -} - -_ensure_bun - -# SPAWN_CLI_DIR override — force local source (used by e2e tests) -if [[ -n "${SPAWN_CLI_DIR:-}" && -f "$SPAWN_CLI_DIR/packages/cli/src/sprite/main.ts" ]]; then - exec bun run "$SPAWN_CLI_DIR/packages/cli/src/sprite/main.ts" zeroclaw "$@" -fi - -# Remote — download bundled sprite.js from GitHub release -SPRITE_JS=$(mktemp) -trap 'rm -f "$SPRITE_JS"' EXIT -curl -fsSL --proto '=https' "https://github.com/OpenRouterTeam/spawn/releases/download/sprite-latest/sprite.js" -o "$SPRITE_JS" \ - 
|| { printf '\033[0;31mFailed to download sprite.js\033[0m\n' >&2; exit 1; } - -exec bun run "$SPRITE_JS" zeroclaw "$@" diff --git a/sh/test/e2e-lib.sh b/sh/test/e2e-lib.sh index 64ddd98da..e55a61c40 100644 --- a/sh/test/e2e-lib.sh +++ b/sh/test/e2e-lib.sh @@ -106,8 +106,8 @@ result=$(make_app_name "openclaw") assert_match "make_app_name openclaw (aws)" '^e2e-aws-openclaw-[0-9]+$' "${result}" ACTIVE_CLOUD="sprite" -result=$(make_app_name "zeroclaw") -assert_match "make_app_name zeroclaw (sprite)" '^e2e-sprite-zeroclaw-[0-9]+$' "${result}" +result=$(make_app_name "codex") +assert_match "make_app_name codex (sprite)" '^e2e-sprite-codex-[0-9]+$' "${result}" # Reset ACTIVE_CLOUD="" diff --git a/skills/claude/SKILL.md b/skills/claude/SKILL.md index dafb83b96..5b2d77321 100644 --- a/skills/claude/SKILL.md +++ b/skills/claude/SKILL.md @@ -14,7 +14,7 @@ You have the `spawn` CLI installed. Use it to provision cloud VMs with AI agents spawn --headless --output json --prompt "task description" ``` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` diff --git a/skills/codex/SKILL.md b/skills/codex/SKILL.md index dafb83b96..5b2d77321 100644 --- a/skills/codex/SKILL.md +++ b/skills/codex/SKILL.md @@ -14,7 +14,7 @@ You have the `spawn` CLI installed. 
Use it to provision cloud VMs with AI agents spawn --headless --output json --prompt "task description" ``` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` diff --git a/skills/hermes/SOUL.md b/skills/hermes/SOUL.md index 56091717b..1caae2838 100644 --- a/skills/hermes/SOUL.md +++ b/skills/hermes/SOUL.md @@ -4,6 +4,6 @@ You have the `spawn` CLI for creating child cloud VMs with AI agents. Use `spawn --headless --output json --prompt "task"` to delegate work. -Available agents: claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie. +Available agents: claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie. Cloud credentials are pre-configured. Run `spawn list --json` to see children. `--headless` only provisions. To run a prompt on the child: `ssh root@ "bash -lc 'claude -p --dangerously-skip-permissions \"prompt\"'"`. Always use `bash -lc` (binaries are in ~/.local/bin/). diff --git a/skills/junie/AGENTS.md b/skills/junie/AGENTS.md index b1288f8c7..b6be851c1 100644 --- a/skills/junie/AGENTS.md +++ b/skills/junie/AGENTS.md @@ -8,7 +8,7 @@ You have the `spawn` CLI installed. Use it to provision cloud VMs with AI agents spawn --headless --output json --prompt "task description" ``` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` diff --git a/skills/kilocode/spawn.md b/skills/kilocode/spawn.md index b1288f8c7..b6be851c1 100644 --- a/skills/kilocode/spawn.md +++ b/skills/kilocode/spawn.md @@ -8,7 +8,7 @@ You have the `spawn` CLI installed. 
Use it to provision cloud VMs with AI agents spawn --headless --output json --prompt "task description" ``` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` diff --git a/skills/openclaw/SKILL.md b/skills/openclaw/SKILL.md index dafb83b96..5b2d77321 100644 --- a/skills/openclaw/SKILL.md +++ b/skills/openclaw/SKILL.md @@ -14,7 +14,7 @@ You have the `spawn` CLI installed. Use it to provision cloud VMs with AI agents spawn --headless --output json --prompt "task description" ``` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` diff --git a/skills/opencode/AGENTS.md b/skills/opencode/AGENTS.md index b1288f8c7..b6be851c1 100644 --- a/skills/opencode/AGENTS.md +++ b/skills/opencode/AGENTS.md @@ -8,7 +8,7 @@ You have the `spawn` CLI installed. Use it to provision cloud VMs with AI agents spawn --headless --output json --prompt "task description" ``` -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie +**Agents:** claude, codex, cursor, openclaw, opencode, kilocode, hermes, junie **Clouds:** hetzner, digitalocean, aws, gcp, sprite Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` diff --git a/skills/zeroclaw/AGENTS.md b/skills/zeroclaw/AGENTS.md deleted file mode 100644 index b1288f8c7..000000000 --- a/skills/zeroclaw/AGENTS.md +++ /dev/null @@ -1,46 +0,0 @@ -# Spawn — Create Child VMs - -You have the `spawn` CLI installed. Use it to provision cloud VMs with AI agents. 
- -## Provision a Child VM - -```bash -spawn --headless --output json --prompt "task description" -``` - -**Agents:** claude, codex, openclaw, zeroclaw, opencode, kilocode, hermes, junie -**Clouds:** hetzner, digitalocean, aws, gcp, sprite - -Returns JSON: `{"status":"success","ip_address":"...","ssh_user":"root","server_id":"..."}` - -## Running Prompts on Child VMs - -`--headless` only provisions — to run work on the child, SSH in with `bash -lc`: - -```bash -# Claude Code (use -p, NOT --print or --headless) -ssh -o StrictHostKeyChecking=accept-new root@ "bash -lc 'claude -p --dangerously-skip-permissions \"your prompt\"'" - -# Codex CLI -ssh -o StrictHostKeyChecking=accept-new root@ "bash -lc 'codex --quiet \"your prompt\"'" -``` - -### IMPORTANT — do NOT waste tokens rediscovering these: -- **Always use `bash -lc`** — agent binaries are in `~/.local/bin/` which is only on PATH in login shells -- **Claude uses `-p`** for non-interactive output (NOT `--print`, NOT `--headless`) -- **Add `--dangerously-skip-permissions`** to skip approval prompts on child VMs -- **Never try `which claude` or `find`** to locate binaries — they are always at `~/.local/bin/` -- **Never create non-root users** to work around permission issues — just use `-p` - -## Managing Children - -- `spawn list --json` — see running children -- `spawn delete --name --yes` — tear down a child VM (headless) -- `spawn tree` — see the full spawn tree - -## Context - -- You are running inside a spawned VM (SPAWN_DEPTH is set) -- Cloud credentials are pre-configured — no auth prompts -- OpenRouter billing is shared with the parent -
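For a removal diff like this one, a quick reviewer-style check is to assert that the dropped agent no longer appears in the updated roster. A minimal sketch, assuming POSIX sh; the agent list below is copied from the updated skill files in this diff, and the script itself is not part of the change:

```shell
#!/bin/sh
# Updated agent roster, as it appears in the skill files after this diff.
agents="claude codex cursor openclaw opencode kilocode hermes junie"

# Fail loudly if the removed agent somehow survives in the roster.
for a in $agents; do
  if [ "$a" = "zeroclaw" ]; then
    echo "ERROR: zeroclaw still listed" >&2
    exit 1
  fi
done
echo "OK: zeroclaw absent from agent roster"
```

In practice a repo-wide `grep -ri zeroclaw` (excluding `.git/`) gives the same assurance across READMEs, workflow matrices, and verify scripts, which is where stray references tend to hide after a multi-file removal.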