Conversation
- Add MIMI_DEEPSEEK_API_URL and deepseek provider support in llm_proxy.c
- Extend provider_is_openai() to include deepseek (OpenAI-compatible format)
- llm_api_url/host return api.deepseek.com when provider is deepseek
- Update set_model_provider CLI help: anthropic|openai|deepseek
- Update llm_proxy.h doc comment

To use DeepSeek, modify main/mimi_secrets.h:

#define MIMI_SECRET_API_KEY "your api key"
#define MIMI_SECRET_MODEL "deepseek-chat"
#define MIMI_SECRET_MODEL_PROVIDER "deepseek"
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

Adds "deepseek" as a supported LLM provider: CLI help and config updated, provider validation and NVS persistence hardened, routing/auth logic extended to treat DeepSeek as OpenAI-compatible for API paths, and request-body token field names adjusted per provider.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant LLM_Proxy
    participant NVS
    participant ExternalAPI
    CLI->>LLM_Proxy: set_model_provider("deepseek")
    LLM_Proxy->>LLM_Proxy: validate provider (anthropic|openai|deepseek)
    alt valid
        LLM_Proxy->>NVS: nvs_open / nvs_set_str / nvs_commit
        NVS-->>LLM_Proxy: ESP_OK
        LLM_Proxy-->>CLI: "Model provider set."
    else invalid
        LLM_Proxy-->>CLI: "Error: invalid provider..."
    end
    CLI->>LLM_Proxy: send chat request
    LLM_Proxy->>LLM_Proxy: provider_is_openai() -> true for deepseek
    LLM_Proxy->>ExternalAPI: build request (Bearer auth, deepseek URL, max_tokens)
    ExternalAPI-->>LLM_Proxy: response
    LLM_Proxy-->>CLI: relay response
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
main/cli/serial_cli.c (1)
864-868: ⚠️ Potential issue | 🟡 Minor: CLI help still shows the old provider default.
This command now advertises `deepseek` as a valid option, but the help text still prints `(default: anthropic)` via `MIMI_LLM_PROVIDER_DEFAULT`. With the new build-time secret default set to `deepseek`, that becomes misleading for users about the actual startup provider.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/cli/serial_cli.c` around lines 864 - 868, The CLI help shows the wrong startup provider because the default macro is out of date; update the definition of MIMI_LLM_PROVIDER_DEFAULT (or make the help use the runtime default accessor) so it reflects the new build-time default "deepseek" and then rebuild; specifically, locate the MIMI_LLM_PROVIDER_DEFAULT macro definition and change its value to the new default (or change the provider_cmd help construction to use the runtime provider getter instead of a hardcoded string) so provider_args/provider_cmd displays the correct default.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main/llm/llm_proxy.c`:
- Around line 193-202: The helpers llm_api_url() and llm_api_host() currently
default any unrecognized s_provider to Anthropic, which hides misconfigurations;
change them to detect unknown providers and return NULL (or an error indicator)
instead of returning MIMI_LLM_API_URL/"api.anthropic.com", and update callers to
handle that error path; additionally, add validation in llm_set_provider() to
accept only known values ("deepseek","openai","anthropic") and reject/return an
error on typos so s_provider cannot be set to arbitrary strings that later
resolve silently to Anthropic.
- Around line 185-188: The request builder currently treats DeepSeek as OpenAI
(via provider_is_openai and s_provider) and emits max_completion_tokens; update
the request-body construction so DeepSeek is branched separately: detect
s_provider == "deepseek" (or use provider_is_openai plus a deepseek check) and
when building the JSON for DeepSeek use the max_tokens field instead of
max_completion_tokens, leaving the OpenAI path unchanged; ensure the DeepSeek
branch sends the same value under "max_tokens" and keep existing behavior for
"openai".
---
ℹ️ Review info: configuration defaults | profile CHILL | plan Pro | Run ID f33a12d5-654a-4adf-a00a-bada9fb2f047
📒 Files selected for processing (4): main/cli/serial_cli.c, main/llm/llm_proxy.c, main/llm/llm_proxy.h, main/mimi_config.h
@madlas Thanks for your contribution! Did you test it on your board?
@IRONICBo Thank you for your attention! Yes, I have tested a few simple dialogue use cases. My board is an ESP32-S3.
- Use max_tokens for DeepSeek instead of max_completion_tokens
- Add provider validation in llm_set_provider (anthropic|openai|deepseek)
- Validate NVS provider on init, keep default if invalid
- CLI help default provider aligns with mimi_secrets
- Handle set_model_provider error return in CLI
🧹 Nitpick comments (1)
main/llm/llm_proxy.c (1)
811-829: Provider validation is correct; consider aligning error handling style.
The validation logic correctly rejects invalid providers before NVS persistence. The error handling for NVS operations is now proper (checking return values instead of aborting).
Note that `llm_set_api_key()` and `llm_set_model()` (lines 785-809) still use `ESP_ERROR_CHECK`, which aborts on NVS failures, while `llm_set_provider()` now returns errors gracefully. This inconsistency is minor but could be unified in a follow-up if desired.
♻️ Optional: Align error handling across all NVS setters

If graceful error returns are preferred over aborting, consider updating `llm_set_api_key()` and `llm_set_model()` similarly:

```diff
 esp_err_t llm_set_api_key(const char *api_key)
 {
     nvs_handle_t nvs;
-    ESP_ERROR_CHECK(nvs_open(MIMI_NVS_LLM, NVS_READWRITE, &nvs));
-    ESP_ERROR_CHECK(nvs_set_str(nvs, MIMI_NVS_KEY_API_KEY, api_key));
-    ESP_ERROR_CHECK(nvs_commit(nvs));
+    esp_err_t err = nvs_open(MIMI_NVS_LLM, NVS_READWRITE, &nvs);
+    if (err != ESP_OK) return err;
+    err = nvs_set_str(nvs, MIMI_NVS_KEY_API_KEY, api_key);
+    if (err == ESP_OK) err = nvs_commit(nvs);
     nvs_close(nvs);
+    if (err != ESP_OK) return err;
     safe_copy(s_api_key, sizeof(s_api_key), api_key);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/llm/llm_proxy.c` around lines 811 - 829, The review notes inconsistent NVS error handling: llm_set_provider returns esp_err_t errors instead of aborting, while llm_set_api_key and llm_set_model still call ESP_ERROR_CHECK which aborts on failures; to align behavior, refactor llm_set_api_key() and llm_set_model() to open NVS with nvs_open, call nvs_set_str and nvs_commit, close the handle, and return esp_err_t error codes instead of using ESP_ERROR_CHECK so their error paths mirror llm_set_provider (refer to functions llm_set_api_key, llm_set_model, and llm_set_provider for the pattern).
ℹ️ Review info: configuration defaults | profile CHILL | plan Pro | Run ID d7717973-b30e-4ef5-90bd-f3b027ddaa71
📒 Files selected for processing (3): main/cli/serial_cli.c, main/llm/llm_proxy.c, main/mimi_config.h
🚧 Files skipped from review as they are similar to previous changes (2): main/cli/serial_cli.c, main/mimi_config.h
Addressed the code review feedback: max_tokens for DeepSeek, provider validation, CLI help fix.
#define MIMI_SECRET_API_KEY "your api key"
#define MIMI_SECRET_MODEL "deepseek-chat"
#define MIMI_SECRET_MODEL_PROVIDER "deepseek"
Summary by CodeRabbit
New Features
Bug Fixes