Feature Request
Summary: Allow cm to use locally installed CLI-based LLM tools (e.g., claude, codex, gemini) for the reflection/summarization step, instead of requiring an API key (OpenAI/Anthropic) to be configured.
Motivation
Currently, cm requires an API key (e.g., OPENAI_API_KEY) to perform LLM-powered reflection (cm reflect, cm context, etc.). Many users already have CLI tools like claude (Claude Code), codex, or gemini-cli installed and authenticated — these tools handle their own auth, token management, and model selection.
It would be convenient to leverage these existing CLI tools as the LLM backend for reflection, avoiding the need to separately provision and manage API keys just for cm.
Proposed Behavior
- Add a configuration option (e.g., llm_backend: cli) that tells cm to shell out to an installed CLI tool for LLM calls instead of hitting an API directly.
- Allow the user to specify which CLI tool to use (e.g., claude, codex, gemini).
- Fall back to the current API-key-based approach if no CLI tool is configured or available.
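The three bullets above could be wired together roughly as follows. This is a minimal sketch, not cm's actual implementation: the config keys (llm_backend, llm_cli_command, llm_cli_args) and the call_api fallback are assumed names for illustration.

```python
import shutil
import subprocess


def call_api(prompt: str, config: dict) -> str:
    """Placeholder for cm's existing API-key-based path (assumed name)."""
    raise RuntimeError("no API key configured")


def summarize(prompt: str, config: dict) -> str:
    """Use a local CLI tool for the LLM call when one is configured and
    installed; otherwise fall back to the current API-key-based path."""
    cli = config.get("llm_cli_command", "")
    if config.get("llm_backend") == "cli" and cli and shutil.which(cli):
        # Pipe the prompt over stdin and treat the tool's stdout as the reply.
        # llm_cli_args is a hypothetical key for tool-specific flags
        # (e.g. a non-interactive/print mode).
        result = subprocess.run(
            [cli, *config.get("llm_cli_args", [])],
            input=prompt,
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout.strip()
    return call_api(prompt, config)
```

shutil.which handles the "available" half of the fallback check, so a configured-but-uninstalled tool degrades gracefully to the API path instead of failing.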
Benefits
- No extra API key management — reuse existing CLI auth
- Model flexibility — users can use whichever model/provider their CLI tool is configured for
- Lower barrier to entry — users who have Claude Code or Codex installed can use cm reflect immediately without setting up a separate API key
Example
```json
# In ~/.cass-memory/config.json
{
  "llm_backend": "cli",
  "llm_cli_command": "claude"
}
```
Then cm reflect would invoke the claude CLI to perform summarization rather than calling the OpenAI API directly.
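Concretely, the config above plus a reflection prompt would translate into a subprocess call. A runnable sketch, where cat stands in for the claude CLI so the example works without Claude Code installed, and the config keys are the hypothetical ones from the example:

```python
import json
import subprocess

# The example config from above; "cat" stands in for "claude" here so the
# sketch runs anywhere.
config = json.loads('{"llm_backend": "cli", "llm_cli_command": "cat"}')

prompt = "Summarize these journal entries: ..."
if config["llm_backend"] == "cli":
    # cm reflect would pipe the reflection prompt to the tool's stdin and
    # read the summary from stdout; a real tool may need extra flags
    # (e.g. a non-interactive/print mode).
    summary = subprocess.run(
        [config["llm_cli_command"]],
        input=prompt,
        capture_output=True,
        text=True,
        check=True,
    ).stdout

print(summary)
```

With cat as the stand-in, stdout simply echoes the prompt back; with a real LLM CLI it would contain the generated summary.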