
add ollama support #104

Merged
leonvanzyl merged 2 commits into master from ollama-support on Jan 26, 2026

Conversation

@leonvanzyl leonvanzyl (Collaborator) commented Jan 26, 2026

Summary by CodeRabbit

  • New Features

    • Added support for running with local Ollama models as an alternative to the standard API.
    • Added Ollama mode indicator in the application header when local models are configured.
  • Documentation

    • Added comprehensive setup guide for using local Ollama models, including configuration steps and model recommendations.
    • Updated environment variable examples with Ollama configuration options.
  • Chores

    • Updated .gitignore for local settings.

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai coderabbitai bot commented Jan 26, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough


This PR introduces Ollama local model support by adding environment configuration options, backend detection logic to identify alternative API modes, and frontend UI indicators. Changes span documentation, configuration files, client-side API detection, server-side settings endpoints, and TypeScript UI components.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Configuration & Documentation**<br>`.env.example`, `CLAUDE.md`, `.gitignore` | Adds environment variables for Ollama configuration (base URL, auth token, timeouts, model names), a new documentation section on local Ollama setup with model recommendations, and ignores the local Claude Code settings file. |
| **Backend API Detection & Responses**<br>`client.py`, `server/routers/settings.py`, `server/schemas.py` | Detects Ollama mode via `ANTHROPIC_BASE_URL`, conditionally gates Claude-specific beta features when using alternative APIs, and exposes an `ollama_mode` boolean in the settings response schema and endpoint handlers (sketched below). |
| **Frontend Type Definitions & UI**<br>`ui/src/lib/types.ts`, `ui/src/hooks/useProjects.ts`, `ui/src/App.tsx` | Adds an `ollama_mode` field to the `Settings` interface and default settings, and renders a conditional UI indicator (pill badge with Ollama icon and tooltip) in the header when Ollama mode is active. |
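For orientation, here is a minimal sketch of how the detection and `ollama_mode` flag described above might fit together on the backend, assuming a FastAPI router and a Pydantic response schema (the `SettingsResponse` and `get_settings` names are illustrative; the `_is_ollama_mode()` body matches the helper quoted in the review comments below):

```python
# Sketch only - names other than _is_ollama_mode() and ollama_mode are assumptions.
import os

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()


class SettingsResponse(BaseModel):
    # Surfaced to the frontend so it can render the Ollama badge in the header.
    ollama_mode: bool = False


def _is_ollama_mode() -> bool:
    """Check if Ollama API is configured via environment variables."""
    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url


@router.get("/settings", response_model=SettingsResponse)
def get_settings() -> SettingsResponse:
    # The flag is computed from the environment, so it reflects the current .env.
    return SettingsResponse(ollama_mode=_is_ollama_mode())
```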

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

Poem

🐰 A local model hops into town,
Ollama brings the brains down,
No cloud needed, just your machine,
The fastest coding you've seen!
Configuration flows through the stack,
There's no looking back! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'add ollama support' clearly and concisely summarizes the main change: introducing Ollama local model integration across configuration, backend logic, and frontend UI. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 83.33%, which is sufficient; the required threshold is 80.00%. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@CLAUDE.md`:
- Around line 259-274: In the "Ollama Local Models (Optional)" section update
the markdown to satisfy MD034/MD040 by wrapping the bare URL in angle brackets
(e.g. <https://ollama.com>) and add a language identifier to the fenced env
block (use "dotenv") so the code fence reads ```dotenv; modify the fenced block
under the Ollama section accordingly to include the language and keep the
existing env lines unchanged.
🧹 Nitpick comments (4)
.claude/settings.local.json (1)

1-6: Avoid machine-specific absolute paths in allowed commands.

Using a fixed C:\Projects\... path makes this permission rule brittle for other clones. If the command runs from the repo root, a repo-relative pattern keeps it portable.

♻️ Suggested tweak (repo-relative)
-      "Bash(copy \"C:\\\\Projects\\\\autocoder\\\\assets\\\\ollama.png\" \"C:\\\\Projects\\\\autocoder\\\\ui\\\\public\\\\ollama.png\")",
+      "Bash(copy \"./assets/ollama.png\" \"./ui/public/ollama.png\")",
server/routers/settings.py (1)

41-51: Forward reference works but logic is duplicated with client.py.

The call to _is_ollama_mode() at line 45 before its definition at line 48 works in Python (functions are resolved at call time), but this pattern can be confusing. Consider reordering so _is_ollama_mode() is defined before _is_glm_mode().

More importantly, this detection logic is duplicated in client.py (lines 263-264). For maintainability, consider extracting Ollama detection to a shared module.

♻️ Reorder functions for clarity
+def _is_ollama_mode() -> bool:
+    """Check if Ollama API is configured via environment variables."""
+    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
+    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
+
+
 def _is_glm_mode() -> bool:
     """Check if GLM API is configured via environment variables."""
     base_url = os.getenv("ANTHROPIC_BASE_URL", "")
     # GLM mode is when ANTHROPIC_BASE_URL is set but NOT pointing to Ollama
     return bool(base_url) and not _is_ollama_mode()
-
-
-def _is_ollama_mode() -> bool:
-    """Check if Ollama API is configured via environment variables."""
-    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
-    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
ui/src/App.tsx (1)

301-310: Use CSS variable for background color to support dark mode.

The hardcoded bg-white doesn't adapt to dark mode, unlike the GLM badge which uses bg-[var(--color-neo-glm)]. The Ollama badge should use the design system's card color variable instead.

♻️ Suggested improvement for dark mode consistency
 {/* Ollama Mode Indicator */}
 {settings?.ollama_mode && (
   <div
-    className="flex items-center gap-1.5 px-2 py-1 bg-white rounded border-2 border-neo-border shadow-neo-sm"
+    className="flex items-center gap-1.5 px-2 py-1 bg-[var(--color-neo-card)] rounded border-2 border-neo-border shadow-neo-sm"
     title="Using Ollama local models (configured via .env)"
   >
     <img src="/ollama.png" alt="Ollama" className="w-5 h-5" />
     <span className="text-xs font-bold text-neo-text">Ollama</span>
   </div>
 )}
client.py (1)

260-270: Ollama detection logic is duplicated and has limited coverage.

The identical detection logic exists in server/routers/settings.py (lines 48-51 in _is_ollama_mode()). Consider extracting this to a shared utility module to maintain consistency and avoid duplication.

Additionally, the current check only handles the default Ollama port and IPv4 loopback:

  • Custom Ollama ports (e.g., localhost:8080) won't be detected
  • IPv6 loopback ([::1]:11434) is not covered
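A hedged sketch of a broader check that parses the URL instead of matching substrings, covering custom ports and IPv6 loopback (whether non-default ports should count as Ollama at all is a policy call for the maintainers):

```python
# Sketch only: treat any loopback host in ANTHROPIC_BASE_URL as Ollama mode,
# regardless of port. This may be too broad if other local proxies are in use.
import os
from urllib.parse import urlparse


def _is_ollama_mode() -> bool:
    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
    if not base_url:
        return False
    host = urlparse(base_url).hostname or ""
    return host in ("localhost", "127.0.0.1", "::1")
```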

Comment on lines +259 to +274
### Ollama Local Models (Optional)

Run coding agents using local models via Ollama v0.14.0+:

1. Install Ollama: https://ollama.com
2. Start Ollama: `ollama serve`
3. Pull a coding model: `ollama pull qwen3-coder`
4. Configure `.env`:
```
ANTHROPIC_BASE_URL=http://localhost:11434
ANTHROPIC_AUTH_TOKEN=ollama
API_TIMEOUT_MS=3000000
ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
```

⚠️ Potential issue | 🟡 Minor

Fix markdownlint MD034/MD040 in the new Ollama section.

Wrap the bare URL and add a language to the fenced block to satisfy the lint rules.

✍️ Suggested doc fix
-1. Install Ollama: https://ollama.com
+1. Install Ollama: <https://ollama.com>
 ...
-   ```
+   ```dotenv
    ANTHROPIC_BASE_URL=http://localhost:11434
    ANTHROPIC_AUTH_TOKEN=ollama
    API_TIMEOUT_MS=3000000
    ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
    ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
    ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
    ```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

### Ollama Local Models (Optional)

Run coding agents using local models via Ollama v0.14.0+:

1. Install Ollama: <https://ollama.com>
2. Start Ollama: `ollama serve`
3. Pull a coding model: `ollama pull qwen3-coder`
4. Configure `.env`:

   ```dotenv
   ANTHROPIC_BASE_URL=http://localhost:11434
   ANTHROPIC_AUTH_TOKEN=ollama
   API_TIMEOUT_MS=3000000
   ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
   ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
   ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
   ```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

263-263: Bare URL used

(MD034, no-bare-urls)


267-267: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@CLAUDE.md` around lines 259 - 274, In the "Ollama Local Models (Optional)"
section update the markdown to satisfy MD034/MD040 by wrapping the bare URL in
angle brackets (e.g. <https://ollama.com>) and add a language identifier to the
fenced env block (use "dotenv") so the code fence reads ```dotenv; modify the
fenced block under the Ollama section accordingly to include the language and
keep the existing env lines unchanged.

- Remove assets/ollama.png (duplicate of ui/public/ollama.png)
- Remove .claude/settings.local.json from tracking
- Add .claude/settings.local.json to .gitignore

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@leonvanzyl leonvanzyl merged commit aa9e8b1 into master Jan 26, 2026
2 of 3 checks passed
@leonvanzyl leonvanzyl deleted the ollama-support branch January 26, 2026 07:50
rudiheydra added a commit to rudiheydra/AutoBuildr that referenced this pull request Jan 28, 2026
…er.md)

- Added spec-builder agent with YAML frontmatter (name: spec-builder, model: opus, color: green)
- Documented all 6 DSPy pipeline stages with api/ module references:
  1. detect_task_type() -> api/task_type_detector.py
  2. derive_tool_policy() -> api/tool_policy.py
  3. derive_budget() -> api/tool_policy.py
  4. generate_spec_name() -> api/spec_name_generator.py
  5. generate_validators_from_steps() -> api/validator_generator.py
  6. SpecBuilder.build() -> api/spec_builder.py
- Includes pipeline data flow diagram and API module reference table
- Marked feature AutoForgeAI#104 as passing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
rudiheydra added a commit to rudiheydra/AutoBuildr that referenced this pull request Feb 2, 2026
… docs

- Remove .env.example (config moved to docker/.env)
- Add .dockerignore for clean Docker builds
- Add docker/scripts/ (entrypoint, healthcheck, load, build, wait)
- Add docker/test-project/repo-concierge snapshot for containerized testing
- Add docs/PROJECT_STATUS.md with project overview
- Add spec/docker-test-env.md Docker test environment spec
- Add anthropic>=0.40.0 to requirements.txt
- Update progress notes for Feature AutoForgeAI#104 regression test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
CoreAspectStu pushed a commit to CoreAspectStu/autocoder-custom that referenced this pull request Feb 9, 2026