Additional model provider templates — OpenRouter, Groq, Ollama, Anthropic #7

@strangeadvancedmarketing

Description

The Problem

engine/openclaw.template.json is currently wired for NVIDIA's API (Kimi K2.5). This is a great default — free tier, 131K context, fast — but it's the single biggest setup friction point for users who don't have an NVIDIA account or want to use a different provider.

What's Needed

Tested, working openclaw.json config blocks for additional providers. Each one should be a drop-in replacement for the model + provider block in the main template.

High priority:

  • OpenRouter — broadest model selection, single API key for everything
  • Groq — fastest inference, free tier, good for Llama models
  • Ollama — fully local, no API key, no cost. The "air-gapped" use case.
  • Anthropic — Claude as the AI running the framework (meta, but valid)
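For illustration, an OpenRouter block might look like the sketch below. The field names (`model`, `provider.baseUrl`, `provider.apiKey`) are assumptions inferred from the description in this issue, not a verified openclaw.json schema; the base URL is OpenRouter's standard OpenAI-compatible endpoint, and the model ID is one example of many available there.

```json
{
  "model": "meta-llama/llama-3.3-70b-instruct",
  "provider": {
    "baseUrl": "https://openrouter.ai/api/v1",
    "apiKey": "YOUR_API_KEY_HERE"
  }
}
```

A contributed config should replace this sketch with field names verified against engine/openclaw.template.json.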

Format

Each provider config should be a standalone snippet showing:

  1. The model field value
  2. The provider block with baseUrl and auth pattern
  3. Any provider-specific quirks (context limits, rate limits, unsupported features)
  4. Which models are recommended and why

These would live in a new docs/PROVIDERS.md file and be referenced from CONFIG_REFERENCE.md.
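As a concrete (hypothetical) instance of that four-point format, an Ollama entry could look like the following. Again, field names are assumptions based on the description above; the base URL is Ollama's default local OpenAI-compatible endpoint.

```json
{
  "model": "llama3.1:8b",
  "provider": {
    "baseUrl": "http://localhost:11434/v1",
    "apiKey": "ollama"
  }
}
```

Quirks to document for an entry like this: Ollama needs no API key, but some OpenAI-compatible clients reject an empty key, hence the placeholder string; context length depends on the model you pull; there are no rate limits since inference is local. A recommendation line might note that `llama3.1:8b` runs on modest hardware, with larger models available for machines with more memory.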

How To Contribute

  1. Get the framework running on a non-NVIDIA provider
  2. Document your working config (sanitize your API key — use YOUR_API_KEY_HERE)
  3. Note anything that behaved differently from the NVIDIA setup
  4. Open a PR adding your provider block to docs/PROVIDERS.md

Even a single working provider config is a meaningful contribution — each one removes a setup blocker for a different group of users.

Status

  • NVIDIA (Kimi K2.5) — shipped, production-validated
  • OpenRouter — open
  • Groq — open
  • Ollama — open
  • Anthropic — open
