Conversation
⚠️ Caution: Review failed. The pull request is closed.

📝 Walkthrough

This PR introduces Ollama local model support by adding environment configuration options, backend detection logic to identify alternative API modes, and frontend UI indicators. Changes span documentation, configuration files, client-side API detection, server-side settings endpoints, and TypeScript UI components.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~15 minutes

🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@CLAUDE.md`:
- Around lines 259-274: In the "Ollama Local Models (Optional)" section, update
the markdown to satisfy MD034/MD040 by wrapping the bare URL in angle brackets
(e.g. <https://ollama.com>) and adding a language identifier to the fenced env
block (use "dotenv") so the code fence reads ```dotenv; modify the fenced block
under the Ollama section accordingly and keep the existing env lines unchanged.
🧹 Nitpick comments (4)
.claude/settings.local.json (1)

1-6: Avoid machine-specific absolute paths in allowed commands. Using a fixed `C:\Projects\...` path makes this permission rule brittle for other clones. If the command runs from the repo root, a repo-relative pattern keeps it portable.

♻️ Suggested tweak (repo-relative)

- "Bash(copy \"C:\\\\Projects\\\\autocoder\\\\assets\\\\ollama.png\" \"C:\\\\Projects\\\\autocoder\\\\ui\\\\public\\\\ollama.png\")",
+ "Bash(copy \"./assets/ollama.png\" \"./ui/public/ollama.png\")",
server/routers/settings.py (1)

41-51: Forward reference works, but the logic is duplicated with `client.py`. The call to `_is_ollama_mode()` at line 45 before its definition at line 48 works in Python (functions are resolved at call time), but this pattern can be confusing. Consider reordering so `_is_ollama_mode()` is defined before `_is_glm_mode()`. More importantly, this detection logic is duplicated in `client.py` (lines 263-264). For maintainability, consider extracting Ollama detection to a shared module.

♻️ Reorder functions for clarity

+def _is_ollama_mode() -> bool:
+    """Check if Ollama API is configured via environment variables."""
+    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
+    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
+
+
 def _is_glm_mode() -> bool:
     """Check if GLM API is configured via environment variables."""
     base_url = os.getenv("ANTHROPIC_BASE_URL", "")
     # GLM mode is when ANTHROPIC_BASE_URL is set but NOT pointing to Ollama
     return bool(base_url) and not _is_ollama_mode()
-
-
-def _is_ollama_mode() -> bool:
-    """Check if Ollama API is configured via environment variables."""
-    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
-    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
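If the shared-module route is taken, a minimal sketch of what that could look like is below. The module path (`server/api_mode.py`) and the public function names are assumptions for illustration; only the detection rules themselves mirror what the PR already does.

```python
# Hypothetical shared module, e.g. server/api_mode.py (path is an assumption).
# Both server/routers/settings.py and client.py could import these helpers
# instead of each keeping its own copy of the detection logic.
import os


def is_ollama_mode() -> bool:
    """True when ANTHROPIC_BASE_URL points at a local Ollama server."""
    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url


def is_glm_mode() -> bool:
    """True when ANTHROPIC_BASE_URL is set but does not point at Ollama."""
    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
    return bool(base_url) and not is_ollama_mode()
```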
ui/src/App.tsx (1)

301-310: Use a CSS variable for the background color to support dark mode. The hardcoded `bg-white` doesn't adapt to dark mode, unlike the GLM badge, which uses `bg-[var(--color-neo-glm)]`. The Ollama badge should use the design system's card color variable instead.

♻️ Suggested improvement for dark mode consistency

 {/* Ollama Mode Indicator */}
 {settings?.ollama_mode && (
   <div
-    className="flex items-center gap-1.5 px-2 py-1 bg-white rounded border-2 border-neo-border shadow-neo-sm"
+    className="flex items-center gap-1.5 px-2 py-1 bg-[var(--color-neo-card)] rounded border-2 border-neo-border shadow-neo-sm"
     title="Using Ollama local models (configured via .env)"
   >
     <img src="/ollama.png" alt="Ollama" className="w-5 h-5" />
     <span className="text-xs font-bold text-neo-text">Ollama</span>
   </div>
 )}
client.py (1)

260-270: Ollama detection logic is duplicated and has limited coverage. The identical detection logic exists in `server/routers/settings.py` (lines 48-51 in `_is_ollama_mode()`). Consider extracting this to a shared utility module to maintain consistency and avoid duplication. Additionally, the current check only handles the default Ollama port and IPv4 loopback:

- Custom Ollama ports (e.g., `localhost:8080`) won't be detected
- IPv6 loopback (`[::1]:11434`) is not covered
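One possible way to broaden the check is to parse the host out of the URL with `urllib.parse` and treat any loopback host as Ollama regardless of port. This is only a sketch under that assumption (the function name is hypothetical, and whether a loopback URL on a non-default port should count as Ollama is a product decision, not something the PR states):

```python
from urllib.parse import urlparse


def is_ollama_url(base_url: str) -> bool:
    """Heuristic: any loopback host counts as Ollama, whatever the port (assumption)."""
    if not base_url:
        return False
    # urlparse only fills in .hostname when a scheme or leading "//" is present
    if "://" not in base_url:
        base_url = "//" + base_url
    host = urlparse(base_url).hostname or ""
    return host in {"localhost", "127.0.0.1", "::1"}


# is_ollama_url("http://localhost:11434")        -> True
# is_ollama_url("http://[::1]:11434")            -> True
# is_ollama_url("https://example.com/anthropic") -> False
```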
CLAUDE.md (lines 259-274):

### Ollama Local Models (Optional)

Run coding agents using local models via Ollama v0.14.0+:

1. Install Ollama: https://ollama.com
2. Start Ollama: `ollama serve`
3. Pull a coding model: `ollama pull qwen3-coder`
4. Configure `.env`:
   ```
   ANTHROPIC_BASE_URL=http://localhost:11434
   ANTHROPIC_AUTH_TOKEN=ollama
   API_TIMEOUT_MS=3000000
   ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
   ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
   ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
   ```
Fix markdownlint MD034/MD040 in the new Ollama section.
Wrap the bare URL and add a language to the fenced block to satisfy the lint rules.
✍️ Suggested doc fix
-1. Install Ollama: https://ollama.com
+1. Install Ollama: <https://ollama.com>
...
- ```
+ ```dotenv
ANTHROPIC_BASE_URL=http://localhost:11434
ANTHROPIC_AUTH_TOKEN=ollama
API_TIMEOUT_MS=3000000
ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
   ```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
### Ollama Local Models (Optional)

Run coding agents using local models via Ollama v0.14.0+:

1. Install Ollama: <https://ollama.com>
2. Start Ollama: `ollama serve`
3. Pull a coding model: `ollama pull qwen3-coder`
4. Configure `.env`:
   ```dotenv
   ANTHROPIC_BASE_URL=http://localhost:11434
   ANTHROPIC_AUTH_TOKEN=ollama
   API_TIMEOUT_MS=3000000
   ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
   ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
   ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
   ```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
263-263: Bare URL used
(MD034, no-bare-urls)
267-267: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@CLAUDE.md` around lines 259-274, in the "Ollama Local Models (Optional)"
section, update the markdown to satisfy MD034/MD040 by wrapping the bare URL in
angle brackets (e.g. <https://ollama.com>) and adding a language identifier to
the fenced env block (use "dotenv") so the code fence reads ```dotenv; modify
the fenced block under the Ollama section accordingly and keep the existing env
lines unchanged.
- Remove assets/ollama.png (duplicate of ui/public/ollama.png)
- Remove .claude/settings.local.json from tracking
- Add .claude/settings.local.json to .gitignore

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…er.md)

- Added spec-builder agent with YAML frontmatter (name: spec-builder, model: opus, color: green)
- Documented all 6 DSPy pipeline stages with api/ module references:
  1. detect_task_type() -> api/task_type_detector.py
  2. derive_tool_policy() -> api/tool_policy.py
  3. derive_budget() -> api/tool_policy.py
  4. generate_spec_name() -> api/spec_name_generator.py
  5. generate_validators_from_steps() -> api/validator_generator.py
  6. SpecBuilder.build() -> api/spec_builder.py
- Includes pipeline data flow diagram and API module reference table
- Marked feature AutoForgeAI#104 as passing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
… docs

- Remove .env.example (config moved to docker/.env)
- Add .dockerignore for clean Docker builds
- Add docker/scripts/ (entrypoint, healthcheck, load, build, wait)
- Add docker/test-project/repo-concierge snapshot for containerized testing
- Add docs/PROJECT_STATUS.md with project overview
- Add spec/docker-test-env.md Docker test environment spec
- Add anthropic>=0.40.0 to requirements.txt
- Update progress notes for Feature AutoForgeAI#104 regression test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
add ollama support
Summary by CodeRabbit
New Features
Documentation
Chores