
Support using installed CLI LLM tools instead of API keys for reflection #37

@nicholaswhittle

Description

Feature Request

Summary: Allow cm to use locally installed CLI-based LLM tools (e.g., claude, codex, gemini) for the reflection/summarization step, instead of requiring an API key (OpenAI/Anthropic) to be configured.

Motivation

Currently, cm requires an API key (e.g., OPENAI_API_KEY) to perform LLM-powered reflection (cm reflect, cm context, etc.). Many users already have CLI tools like claude (Claude Code), codex, or gemini-cli installed and authenticated — these tools handle their own auth, token management, and model selection.

It would be convenient to leverage these existing CLI tools as the LLM backend for reflection, avoiding the need to separately provision and manage API keys just for cm.

Proposed Behavior

  • Add a configuration option (e.g., llm_backend: cli) that tells cm to shell out to an installed CLI tool for LLM calls instead of hitting an API directly.
  • Allow the user to specify which CLI tool to use (e.g., claude, codex, gemini).
  • Fall back to the current API-key-based approach if no CLI tool is configured or available.
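The dispatch described above could look something like the following sketch (Python is an arbitrary choice here; the function name, config keys, and the convention of passing the prompt on stdin are all assumptions for illustration, not cm's actual interface):

```python
import shutil
import subprocess

def reflect_via_cli(prompt: str, config: dict) -> str:
    """Run a reflection prompt through a user-configured CLI LLM tool.

    Raises FileNotFoundError if the tool is not on PATH, so the caller
    can fall back to the existing API-key-based backend.
    """
    # "llm_cli_command" mirrors the proposed config option; hypothetical key.
    cmd = config.get("llm_cli_command", "claude")
    if shutil.which(cmd) is None:
        raise FileNotFoundError(
            f"{cmd} not found on PATH; falling back to API backend"
        )
    # Pass the prompt on stdin; many CLI LLM tools accept piped input,
    # though the exact invocation would differ per tool.
    result = subprocess.run(
        [cmd], input=prompt, capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

The caller would wrap this in a try/except and route to the API-key path on failure, which gives the fallback behavior from the last bullet for free.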

Benefits

  • No extra API key management — reuse existing CLI auth
  • Model flexibility — users can use whichever model/provider their CLI tool is configured for
  • Lower barrier to entry — users who already have Claude Code or Codex installed can use cm's reflection features immediately, without setting up a separate API key

Example

In `~/.cass-memory/config.json`:

```json
{
  "llm_backend": "cli",
  "llm_cli_command": "claude"
}
```

Then cm reflect would invoke the claude CLI to perform summarization rather than calling the OpenAI API directly.
