feat: 【wip】support ollama llm #331

Open
zzcr wants to merge 3 commits into dev from zzc/support-ollama-0214

Conversation

@zzcr (Member) commented on Feb 14, 2026

Pull Request (OpenTiny NEXT-SDKs)

PR Checklist

Please check if your PR fulfills the following requirements:

  • The commit message follows our Commit Message Guidelines
  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)

PR Type

What kind of change does this PR introduce?

  • Bugfix
  • Feature
  • Code style update (formatting, local variables)
  • Refactoring (no functional changes, no api changes)
  • Build-related changes
  • CI-related changes
  • Documentation-related changes
  • Other... Please describe:

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Does this PR introduce a breaking change?

  • Yes
  • No

Other information

Summary by CodeRabbit

  • New Features
  • Added support for the qwen3-vl:8b model with multimodal image uploads (image attachments up to 10MB).
  • Continues to include the deepseek-r1:1.5b model powered by Ollama, expanding the available model options.


coderabbitai bot commented Feb 14, 2026

Walkthrough

Adds an Ollama-based model configuration for qwen3-vl:8b and a new dependency ollama-ai-provider-v2 to the next-wxt package.

Changes

Model config update: packages/next-wxt/entrypoints/sidepanel/model-config.ts
Inserted a new DEFAULT_MODEL_CONFIGS entry for model qwen3-vl:8b with baseURL: "http://localhost:11434/api", providerType: createOllama, and multimodal settings (supportImages: true, maxFileSize: 10, supportedMimeTypes: ['image/']).

Dependency added: packages/next-wxt/package.json
Added dependency ollama-ai-provider-v2 at version ~3.3.0.
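
For orientation, here is a minimal sketch of what the new entry plausibly looks like, reconstructed from the walkthrough above. Only the fields named in the summary are grounded; label and the overall object shape are assumptions modeled on the existing deepseek-r1:1.5b entry quoted later in this review.

```ts
import { createOllama } from 'ollama-ai-provider-v2'

// Sketch only: fields not listed in the walkthrough are assumed.
const qwen3VlConfig = {
  label: 'qwen3-vl:8b',                  // assumed, by analogy with the deepseek entry
  model: 'qwen3-vl:8b',
  baseURL: 'http://localhost:11434/api', // default local Ollama HTTP endpoint
  providerType: createOllama,
  supportImages: true,                   // multimodal: accept image attachments
  maxFileSize: 10,                       // MB, per the summary (up to 10MB)
  supportedMimeTypes: ['image/']         // as quoted; presumably a MIME-type prefix match
}
```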

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐇 I nibble configs, neat and spry,
qwen3-vl:8b hops by,
Local Ollama hums a tune,
Images ready, under moon,
A rabbit cheer — deploy soon! ✨

🚥 Pre-merge checks: ✅ 4 passed

Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled)
Title Check: ✅ Passed (the title clearly summarizes the main change: adding support for Ollama LLM with multimodal capabilities via the qwen3-vl:8b model configuration)
Docstring Coverage: ✅ Passed (no functions found in the changed files to evaluate; check skipped)
Merge Conflict Detection: ✅ Passed (no merge conflicts detected when merging into dev)



zzcr changed the title from "feat: support ollama llm" to "feat: 【wip】support ollama llm" on Feb 14, 2026
coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
packages/next-wxt/package.json (1)

23-23: Inconsistent version management — consider using catalog: like other dependencies.

All other entries in dependencies use "catalog:" for centralized version management via the pnpm workspace catalog. Hardcoding "~3.3.0" here breaks that convention and may lead to version drift. Add ollama-ai-provider-v2 to the workspace catalog and reference it as "catalog:" here.

-    "ollama-ai-provider-v2": "~3.3.0",
+    "ollama-ai-provider-v2": "catalog:",
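
For reference, a minimal sketch of the pnpm workspace catalog convention the nitpick refers to; this repo's actual catalog contents are not shown in the PR, so the snippet is illustrative.

```yaml
# pnpm-workspace.yaml (illustrative; the repo's real catalog is not shown here)
catalog:
  ollama-ai-provider-v2: ~3.3.0
```

With the catalog entry in place, packages/next-wxt/package.json can reference "ollama-ai-provider-v2": "catalog:" and the pinned version lives in one place for the whole workspace.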

From packages/next-wxt/entrypoints/sidepanel/model-config.ts (quoted context, around line 50):

label: 'deepseek-r1:1.5b',
model: 'deepseek-r1:1.5b',
apiKey: 'sk-trial',
baseURL: 'http://localhost:11434/api',

⚠️ Potential issue | 🟡 Minor

Localhost URL in default config — will fail for users without a local Ollama instance.

baseURL: 'http://localhost:11434/api' is hardcoded in DEFAULT_MODEL_CONFIGS, which is shipped as the default. Users who haven't set up Ollama locally will see connection errors when this model is selected. Consider either hiding this config behind a feature flag, marking it as a local/dev-only option, or adding a clear label (e.g., label: 'deepseek-r1:1.5b (Local)') so users understand the prerequisite.

🤖 Prompt for AI Agents
In `@packages/next-wxt/entrypoints/sidepanel/model-config.ts` at line 50,
DEFAULT_MODEL_CONFIGS currently contains an entry with baseURL:
'http://localhost:11434/api' which will break for users without Ollama; update
that model config (the entry in DEFAULT_MODEL_CONFIGS that references the
localhost baseURL) to either remove or blank the baseURL and mark it as
local-only by adding a property like localOnly: true or featureFlag:
'ollamaLocal', and update its label (e.g., label: 'deepseek-r1:1.5b (Local)') so
the UI can hide or display it behind a feature flag or show a clear
local/dev-only indicator; ensure any UI code that reads DEFAULT_MODEL_CONFIGS
respects the new localOnly/featureFlag property to avoid presenting the option
to general users.
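
Below is a minimal sketch of the gating approach this prompt describes. The localOnly field and the enableLocalModels parameter are hypothetical names for illustration, not existing fields in the codebase.

```ts
// Hypothetical shape; localOnly is illustrative, not an existing field.
interface ModelConfig {
  label: string
  model: string
  apiKey?: string
  baseURL?: string
  localOnly?: boolean
}

const deepseekLocal: ModelConfig = {
  label: 'deepseek-r1:1.5b (Local)', // surfaces the local-Ollama prerequisite
  model: 'deepseek-r1:1.5b',
  apiKey: 'sk-trial',
  baseURL: 'http://localhost:11434/api',
  localOnly: true // UI hides this entry unless the user opts into local models
}

// Any UI reading the config list would then respect the flag:
function visibleConfigs(configs: ModelConfig[], enableLocalModels: boolean): ModelConfig[] {
  return configs.filter((c) => !c.localOnly || enableLocalModels)
}
```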

From packages/next-wxt/entrypoints/sidepanel/model-config.ts (quoted context, around line 53):

baseURL: 'http://localhost:11434/api',
providerType: createOllama,
useReActMode: false,
icon: IconModelDeepseek as unknown as Component

⚠️ Potential issue | 🟡 Minor

Missing markRaw() wrapper on icon.

Lines 32, 69, and 77 wrap the icon component with markRaw() to prevent Vue from making it deeply reactive. This entry (and the DeepSeek-R1 entry on line 43) omit it, which can cause Vue reactivity warnings in dev mode and unnecessary overhead.

Proposed fix
-    icon: IconModelDeepseek as unknown as Component
+    icon: markRaw(IconModelDeepseek as unknown as Component)

Also consider fixing line 43 for consistency.

🤖 Prompt for AI Agents
In `@packages/next-wxt/entrypoints/sidepanel/model-config.ts` at line 53, The icon
properties for the Deepseek entries are missing markRaw wrapping which causes
Vue to make the icon components reactive; update the icon assignments (e.g., the
line setting "icon: IconModelDeepseek as unknown as Component" and the
DeepSeek-R1 entry) to wrap the icon with Vue's markRaw (so use
markRaw(IconModelDeepseek) and markRaw for the DeepSeek-R1 icon) to prevent deep
reactivity and avoid dev warnings; locate these in the model-config entries for
the Deepseek models and replace the bare icon references with markRaw-wrapped
references.
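
For context, a short self-contained sketch of the markRaw fix; the icon import path is an assumption, since the PR only shows the identifier.

```ts
import { markRaw, type Component } from 'vue'
// Import path is illustrative; only the IconModelDeepseek identifier appears in the PR.
import IconModelDeepseek from './icons/IconModelDeepseek.vue'

// markRaw marks the component object so Vue never wraps it in a reactive proxy,
// silencing dev-mode warnings and skipping needless deep-reactivity bookkeeping.
const entry = {
  label: 'deepseek-r1:1.5b',
  icon: markRaw(IconModelDeepseek as unknown as Component)
}
```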
