Merged
40 changes: 34 additions & 6 deletions cmd/shellforge/main.go
@@ -155,9 +155,26 @@ model := ""
// ── Step 1: Ollama (skip on headless server) ──

Copilot AI Apr 5, 2026

The step header comment says Ollama is “skip[ped] on headless server”, but server mode now performs remote Ollama (OLLAMA_HOST) configuration. Update this comment to match the new behavior so future edits don’t reintroduce the old assumption.

Suggested change
// ── Step 1: Ollama (skip on headless server) ──
// ── Step 1: Ollama setup (local install or remote OLLAMA_HOST on server) ──

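Related note: the new prompt accepts any non-empty `OLLAMA_HOST` string. A minimal sketch of pre-validating the value with `net/url` before echoing it back — `validateOllamaHost` is a hypothetical helper for illustration, not code from this PR:

```go
package main

import (
	"fmt"
	"net/url"
)

// validateOllamaHost is a hypothetical helper: the setup flow accepts
// any non-empty string, but a quick net/url parse catches obvious
// mistakes like a missing scheme before OLLAMA_HOST is echoed back.
func validateOllamaHost(host string) error {
	u, err := url.Parse(host)
	if err != nil {
		return err
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return fmt.Errorf("OLLAMA_HOST must start with http:// or https://, got %q", host)
	}
	if u.Host == "" {
		return fmt.Errorf("OLLAMA_HOST is missing a host: %q", host)
	}
	return nil
}

func main() {
	fmt.Println(validateOllamaHost("http://192.168.1.100:11434")) // <nil>
	fmt.Println(validateOllamaHost("192.168.1.100:11434"))        // non-nil error
}
```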
steps++
if isServer {
fmt.Printf("── Step %d/%d: Ollama (skipped — server mode) ──\n", steps, total)
fmt.Println(" Detected: Linux, no GPU — skipping local model setup")
fmt.Println(" Use CLI drivers instead: shellforge run claude, copilot, codex, gemini")
fmt.Printf("── Step %d/%d: Ollama (server mode) ──\n", steps, total)
fmt.Println(" Detected: Linux, no GPU — remote Ollama configuration")
fmt.Println()
fmt.Print(" Configure remote Ollama (OLLAMA_HOST) for GPU endpoint? [Y/n] ")
if confirm(reader) {
fmt.Print(" Enter OLLAMA_HOST (e.g., http://192.168.1.100:11434): ")
ollamaHost := readLine(reader)
if ollamaHost != "" {
fmt.Printf(" → Set OLLAMA_HOST=%s before running shellforge\n", ollamaHost)
Comment on lines +161 to +166

Copilot AI Apr 5, 2026

In server mode, these prompts use confirm()/readLine(), which treat EOF/empty input as “yes”. If shellforge setup is run non-interactively (stdin closed/EOF), it will auto-enter the remote configuration flow and emit potentially confusing output. Consider detecting non-TTY/EOF and defaulting to “no” (skip prompts) in server mode.

fmt.Println(" ✓ Remote Ollama configured")
} else {
fmt.Println(" ⚠ No OLLAMA_HOST set — will use default (localhost:11434)")
}
} else {
fmt.Println(" Skipped remote Ollama configuration")
}
fmt.Println(" Note: Use CLI drivers for API-based inference:")
fmt.Println(" shellforge run claude \"review open PRs\"")
fmt.Println(" shellforge run copilot \"update docs\"")
fmt.Println(" shellforge run codex \"generate tests\"")
fmt.Println()
} else {
fmt.Printf("── Step %d/%d: Ollama (local LLM inference) ──\n", steps, total)
@@ -300,10 +317,12 @@ fmt.Println()
steps++
fmt.Printf("── Step %d/%d: Agent drivers ──\n", steps, total)

// On Mac/GPU: offer Goose (local models via Ollama). On server: skip, show API drivers.
if !isServer {
// Offer Goose for both local and remote Ollama (works with OLLAMA_HOST)
if _, err := exec.LookPath("goose"); err != nil {
fmt.Println(" Goose — AI agent with native Ollama support (actually executes tools)")
if isServer {
fmt.Println(" Note: Works with remote Ollama via OLLAMA_HOST environment variable")
}
fmt.Print(" Install Goose? [Y/n] ")
if confirm(reader) {
fmt.Println(" → Installing Goose...")
@@ -314,12 +333,19 @@ run("sh", "-c", "curl -fsSL https://github.com/block/goose/releases/download/sta
}
if _, err := exec.LookPath("goose"); err == nil {
fmt.Println(" ✓ Goose installed")
if isServer {
fmt.Println(" → Run 'goose configure' and set OLLAMA_HOST for remote GPU endpoint")
} else {
fmt.Println(" → Run 'goose configure' to set up Ollama provider")
}
} else {
fmt.Println(" ⚠ Install failed — try: brew install --cask block-goose")
}
}
Comment on lines 341 to 344

Copilot AI Apr 5, 2026


The install-failure hint always suggests brew install --cask block-goose, which is misleading on Linux/server mode (where brew likely isn’t available and the attempted installer was the curl/bash script). Adjust the fallback guidance based on GOOS (e.g., re-run the GitHub installer script, link to Goose install docs, or provide apt/yum instructions) so server users aren’t sent down the wrong path.

Suggested change
} else {
fmt.Println(" ⚠ Install failed — try: brew install --cask block-goose")
}
}
} else {
if runtime.GOOS == "darwin" {
fmt.Println(" ⚠ Install failed — try: brew install --cask block-goose")
} else {
fmt.Println(" ⚠ Install failed — try re-running:")
fmt.Println(" curl -fsSL https://github.com/block/goose/releases/download/stable/download_cli.sh | bash")
fmt.Println(" See Goose install docs: https://github.com/block/goose")
}
}
}

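The suggested change branches on `runtime.GOOS` alone; a further hedged variant could also probe PATH for `brew`, since macOS without Homebrew would get a dead-end hint (`gooseInstallHint` is illustrative, not part of the PR):

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// gooseInstallHint picks a failure hint from the platform and from
// whether brew is actually on PATH — a sketch extending the review
// suggestion, which branches on runtime.GOOS alone.
func gooseInstallHint() string {
	if runtime.GOOS == "darwin" {
		if _, err := exec.LookPath("brew"); err == nil {
			return "try: brew install --cask block-goose"
		}
		return "install Homebrew first, or re-run the Goose installer script"
	}
	return "try re-running: curl -fsSL https://github.com/block/goose/releases/download/stable/download_cli.sh | bash"
}

func main() {
	fmt.Println(" ⚠ Install failed — " + gooseInstallHint())
}
```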
} else {
if isServer {
fmt.Println(" ✓ Goose installed (works with remote Ollama via OLLAMA_HOST)")
} else {
fmt.Println(" ✓ Goose installed (local model driver)")
}
}
@@ -397,7 +423,9 @@ fmt.Println("║ Setup Complete ║")
fmt.Println("╚══════════════════════════════════════╝")
fmt.Println()
if isServer {
fmt.Println(" Server mode — use CLI drivers:")
fmt.Println(" Server mode — remote Ollama configuration available")
fmt.Println(" Set OLLAMA_HOST for remote GPU endpoint")
fmt.Println(" shellforge run goose \"describe this project\" (works with OLLAMA_HOST)")
fmt.Println(" shellforge run claude \"review open PRs\"")
fmt.Println(" shellforge run copilot \"update docs\"")
fmt.Println(" shellforge run codex \"generate tests\"")