
Add LM Studio backend support#107

Merged
hackall360 merged 1 commit into LMStudio from codex/add-lm-studio-integration-support
Sep 27, 2025
Conversation

@hackall360 (Owner)

Summary

  • add a dedicated codex-lmstudio crate with alias resolution, readiness checks, and default model wiring
  • expose the LM Studio backend across the CLI and TUI, documenting the supported architectures and usage
  • add integration coverage for LM Studio aliases and include the new backend in the workspace configuration

Testing

  • cargo test -p codex-lmstudio
  • cargo test -p codex-exec --test all suite::lmstudio::exec_resolves_lmstudio_model_aliases -- --test-threads=1
  • cargo test -p codex-tui

https://chatgpt.com/codex/tasks/task_b_68d7b97a01c4832f838e912807a17559

@hackall360 hackall360 merged commit a02eb46 into LMStudio Sep 27, 2025
3 of 6 checks passed
@hackall360 hackall360 deleted the codex/add-lm-studio-integration-support branch September 27, 2025 11:26
@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

fn model_available(payload: &JsonValue, target_model: &str) -> bool {
    fn matches_entry(entry: &JsonValue, target: &str) -> bool {
        let normalized_target = target.trim().to_ascii_lowercase();
        // Short name: the segment after the last '/', e.g. "org/model" -> "model".
        let short_target = target
            .trim()
            .rsplit('/')
            .next()
            .map(str::to_ascii_lowercase)
            .unwrap_or_else(|| normalized_target.clone());
        let check = |candidate: &str| {
            let normalized_candidate = candidate.trim().to_ascii_lowercase();
            normalized_candidate == normalized_target
                || normalized_candidate == short_target
                || normalized_candidate.ends_with(&short_target)
        };
        entry
            .get("id")
            .and_then(|v| v.as_str())
            .map(check)
            .or_else(|| entry.get("name").and_then(|v| v.as_str()).map(check))
            .or_else(|| entry.get("model").and_then(|v| v.as_str()).map(check))
            .or_else(|| entry.as_str().map(check))
            .unwrap_or(false)
    }

    // Closing logic reconstructed for readability (the excerpt was truncated):
    // scan the OpenAI-style `data` array returned by /v1/models.
    payload
        .get("data")
        .and_then(|v| v.as_array())
        .map(|entries| entries.iter().any(|entry| matches_entry(entry, target_model)))
        .unwrap_or(false)
}

[P1] Readiness check misses LM Studio models with quantization suffixes

The model_available probe only considers a model present when id, name, or model exactly match the canonical identifier or end with the short name. LM Studio’s /v1/models responses commonly append quantization info to the identifier (e.g. Meta-Llama-3.1-8B-Instruct-Q4_0). Such IDs neither equal the canonical value nor end with it, so ensure_lmstudio_ready will error even when the requested model is installed, preventing the CLI/TUI from starting. Consider matching by substring or prefix rather than strict equality/ends_with to accept IDs with additional suffixes.
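The suggested relaxation could be sketched as follows. This is a minimal illustration, not the PR's actual fix: `candidate_matches` is a hypothetical helper, and the prefix-match rule (accepting any advertised ID that starts with the requested short name) is one possible way to tolerate quantization suffixes like -Q4_0.

```rust
/// Hypothetical relaxed matcher: a candidate ID from /v1/models counts as a
/// match when it equals the requested model's short name or merely starts
/// with it, so quantization suffixes such as "-Q4_0" no longer cause a miss.
fn candidate_matches(candidate: &str, target: &str) -> bool {
    // Short name: the segment after the last '/', lowercased for comparison.
    let short_target = target
        .trim()
        .rsplit('/')
        .next()
        .unwrap_or(target)
        .to_ascii_lowercase();
    let candidate = candidate.trim().to_ascii_lowercase();
    candidate == short_target || candidate.starts_with(&short_target)
}

fn main() {
    // A quantized ID now matches the canonical identifier.
    assert!(candidate_matches(
        "Meta-Llama-3.1-8B-Instruct-Q4_0",
        "meta-llama/Meta-Llama-3.1-8B-Instruct"
    ));
    // An unrelated model still does not match.
    assert!(!candidate_matches(
        "qwen2.5-7b-instruct",
        "Meta-Llama-3.1-8B-Instruct"
    ));
}
```

Prefix matching is deliberately narrower than a full substring check, which would risk false positives between similarly named models (e.g. a "-mini" variant matching inside a longer ID).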

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR.
