qiuzhi2046 (Owner) requested changes on Apr 1, 2026
- Issue: [P2] In src/pages/ModelCenter.tsx, the custom-openai snapshot persisted into models.providers can now wrongly satisfy unrelated custom-provider validation. If the user later goes through the manual custom-provider auth flow for the same base URL and model, the current matching logic may mistake the stale custom-openai snapshot for evidence that the new provider was written successfully.
- Issue: [P3] The new /models HTTP fallback in electron/main/openclaw-model-config.ts does not honor the caller's timeoutMs. The CLI path respects the timeout, but the fetch() in the fallback has no AbortController, so if the endpoint partially hangs, model discovery can stall longer than the API contract the UI expects.
- Summary: the feature direction is right and the happy-path persistence tests are valuable, but the new persistence path and the new fallback path each introduce an edge case: a false "auth succeeded" signal and a timeout regression, respectively.
- Tests: existing coverage includes config snapshot persistence, model-page state merging, and HTTP fallback model discovery; still missing are tests for the conflict between custom-provider validation and a pre-existing custom-openai snapshot, and for the HTTP fallback timeout constraint.
Thanks for contributing to Qclaw; feel free to resubmit the PR once these are addressed 🤗
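A minimal sketch of the guard the P2 comment asks for: exclude persisted local-compatible snapshots when looking for evidence that a manual custom-provider auth succeeded. The type, constant, and function names here are hypothetical, not the actual ModelCenter.tsx code.

```typescript
// Hypothetical shape of an entry persisted into models.providers.
interface ProviderSnapshot {
  id: string;
  baseUrl: string;
  models: string[];
}

// Provider ids whose snapshots come from local-compatible discovery and
// therefore must never count as proof of a manual custom-provider auth.
const LOCAL_COMPAT_IDS = new Set(["custom-openai", "ollama", "vllm"]);

function isCustomProviderAuthEvidence(snapshot: ProviderSnapshot): boolean {
  // A local-compatible snapshot may share the same base URL and model as a
  // manually configured custom provider; exclude it from the match so a
  // stale snapshot cannot masquerade as a fresh, successful auth.
  return !LOCAL_COMPAT_IDS.has(snapshot.id) && snapshot.models.length > 0;
}
```

The key point is that the match keys on the snapshot's provider id, not just on base URL and model, which is where the review says the current logic goes wrong.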
Contributor (Author)
Fixed both review points and added the corresponding tests; local targeted tests pass.
Change description
Fixes three related symptoms of the custom-openai local-compatible endpoint in Qclaw, including: even when /v1/models returns results, discovery can still report 0 models; and the custom-openai provider and its models can show as 0 again.
Root cause
- custom-openai relied entirely on openclaw models list, with no fallback to the real OpenAI-compatible /models endpoint when the CLI returned an empty result.
- Once a custom-openai snapshot lands in models.providers, it must also not be mistaken for evidence that a manual custom-provider auth succeeded.
- The fallback additionally needs to honor the caller's timeoutMs.
Change type: fix
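The first root-cause bullet implies a discovery order: prefer the CLI, and only when it yields nothing fall back to the OpenAI-compatible /models endpoint. A hedged sketch of that flow, with the two listers passed in as assumed helpers rather than real openclaw APIs:

```typescript
type Lister = (baseUrl: string) => Promise<string[]>;

// Hypothetical discovery flow: CLI first, HTTP /models only as a fallback.
async function discoverModels(
  baseUrl: string,
  listModelsViaCli: Lister,
  listModelsViaHttp: Lister,
): Promise<string[]> {
  const fromCli = await listModelsViaCli(baseUrl);
  if (fromCli.length > 0) return fromCli;
  // The CLI returned nothing, but the endpoint may still be live:
  // ask the OpenAI-compatible /models endpoint directly.
  return listModelsViaHttp(baseUrl);
}
```

Keeping the CLI result authoritative when non-empty means the fallback only changes behavior in the previously broken "0 models" case.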
Scope of impact
- Main process (electron/main/)
- Preload (electron/preload/)
- Renderer (src/)
Tests
Verified test files:
- electron/main/__tests__/openclaw-model-config.test.ts
- electron/main/__tests__/openclaw-model-config.local-env.test.ts
- src/pages/__tests__/model-center.test.tsx
- src/pages/__tests__/models-page-state.test.ts
Additional notes
Two follow-up fixes added per the review:
- Local-compatible snapshots such as custom-openai/ollama/vllm are no longer treated as evidence of a successful manual custom-provider auth.
- The custom-openai /models HTTP fallback now uses an AbortController and honors timeoutMs.
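The AbortController fix described above can be sketched as follows. This is a minimal standalone illustration of the pattern, not the actual openclaw-model-config.ts code; the function name and response handling are assumptions (the response shape assumed is the OpenAI-compatible `{ data: [{ id }] }`).

```typescript
// Sketch: an HTTP /models fallback that honors the caller's timeoutMs
// by wiring an AbortController's signal into fetch().
async function fetchModelsWithTimeout(
  baseUrl: string,
  timeoutMs: number,
): Promise<string[]> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${baseUrl}/models`, { signal: controller.signal });
    if (!res.ok) return [];
    const body = await res.json();
    // OpenAI-compatible shape: { data: [{ id: "model-name" }, ...] }
    return Array.isArray(body?.data)
      ? body.data.map((m: { id: string }) => m.id)
      : [];
  } catch {
    // Aborted (timeout hit) or network error: report no models rather
    // than letting discovery hang past the UI's expected contract.
    return [];
  } finally {
    clearTimeout(timer);
  }
}
```

With this shape, a partially hung endpoint is cut off after timeoutMs instead of stalling model discovery indefinitely.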