Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration: defaults · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (7)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds Avian as a supported LLM provider: docs updated, a new …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LLMProxy
    participant ProviderAPI
    Client->>LLMProxy: Send chat request (provider hint)
    LLMProxy->>LLMProxy: evaluate provider_is_openai_compat(provider)
    alt provider == "avian" or "openai"
        LLMProxy->>ProviderAPI: POST /v1/chat/completions (api.avian.io or api.openai.com) with Bearer auth and OpenAI-style body
        ProviderAPI-->>LLMProxy: OpenAI-compatible response (choices, messages, finish_reason)
    else
        LLMProxy->>ProviderAPI: POST /v1/messages (api.anthropic.com) with x-api-key and Anthropic-style body
        ProviderAPI-->>LLMProxy: Anthropic-style response (blocks, stop_reason, tool_use)
    end
    LLMProxy-->>Client: Normalized response (content / tool calls / status)
```
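The routing in the diagram can be sketched in C. This is a minimal sketch: the function names (`provider_is_openai_compat`, `llm_api_url`) and the endpoints follow what the review describes, but the bodies are illustrative assumptions, not the project's actual code.

```c
#include <assert.h>
#include <string.h>

/* Currently selected provider; in the real project this is the stored
 * s_provider setting. Hard-coded here for illustration. */
static const char *s_provider = "avian";

/* True for providers that speak the OpenAI chat-completions dialect
 * (Bearer auth, /v1/chat/completions body format). */
static int provider_is_openai_compat(const char *provider) {
    return strcmp(provider, "openai") == 0 ||
           strcmp(provider, "avian") == 0;
}

/* Pick the endpoint URL based on the selected provider; Anthropic is the
 * fallback, using its /v1/messages API. */
static const char *llm_api_url(void) {
    if (strcmp(s_provider, "avian") == 0)
        return "https://api.avian.io/v1/chat/completions";
    if (strcmp(s_provider, "openai") == 0)
        return "https://api.openai.com/v1/chat/completions";
    return "https://api.anthropic.com/v1/messages";
}
```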
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (1)
main/cli/serial_cli.c (1)
645-645: Enforce provider enum at command entry.

Line 645 documents fixed values, but `cmd_set_model_provider` still accepts any string. Add validation before saving.

♻️ Suggested guard in `cmd_set_model_provider`:

```diff
 static int cmd_set_model_provider(int argc, char **argv)
 {
 @@
-    llm_set_provider(provider_args.provider->sval[0]);
+    const char *p = provider_args.provider->sval[0];
+    if (strcmp(p, "anthropic") != 0 &&
+        strcmp(p, "openai") != 0 &&
+        strcmp(p, "avian") != 0) {
+        printf("Invalid provider: %s. Use anthropic|openai|avian.\n", p);
+        return 1;
+    }
+    llm_set_provider(p);
     printf("Model provider set.\n");
     return 0;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/cli/serial_cli.c` at line 645, The CLI accepts any string for the provider even though provider_args.provider documents fixed values; update cmd_set_model_provider to validate the input against the allowed enum ("anthropic","openai","avian") before saving. In cmd_set_model_provider, read provider_args.provider value, compare it to the allowed set (case-sensitive or normalized as project expects), return or print an error and abort the change if it’s not one of the allowed values, and only proceed to persist the provider when validation succeeds; reference provider_args.provider and cmd_set_model_provider when implementing this guard.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main/llm/llm_proxy.c`:
- Around line 516-520: The code currently uses provider_is_openai_compat() to
decide the token field, causing Avian (which is "OpenAI-compatible" but expects
max_tokens) to receive max_completion_tokens and fail; change the branch so that
max_completion_tokens is sent only for the real OpenAI provider (e.g., use or
add a provider_is_openai() / provider_is_openai_strict() predicate) and
otherwise send max_tokens for Avian and all other providers (keep the
cJSON_AddNumberToObject call but swap field names accordingly). Update the
condition around the block where cJSON_AddNumberToObject(body, ...) is called
(the existing provider_is_openai_compat() check) to use the stricter OpenAI
check or an explicit provider_is_avian() check so Avian gets max_tokens.
In `@README_CN.md`:
- Line 34: Update the Chinese sentence on Line 34 that currently reads “所有数据存在本地
Flash” to the smoother phrasing “所有数据存储在本地 Flash 中” in README_CN.md; locate the
paragraph describing Telegram → ESP32-S3 → Agent loop (the sentence ending with
the provider list Anthropic/OpenAI/Avian) and replace that final sentence only
to use the new wording.
---
Nitpick comments:
In `@main/cli/serial_cli.c`:
- Line 645: The CLI accepts any string for the provider even though
provider_args.provider documents fixed values; update cmd_set_model_provider to
validate the input against the allowed enum ("anthropic","openai","avian")
before saving. In cmd_set_model_provider, read provider_args.provider value,
compare it to the allowed set (case-sensitive or normalized as project expects),
return or print an error and abort the change if it’s not one of the allowed
values, and only proceed to persist the provider when validation succeeds;
reference provider_args.provider and cmd_set_model_provider when implementing
this guard.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- README.md
- README_CN.md
- README_JA.md
- main/cli/serial_cli.c
- main/llm/llm_proxy.c
- main/mimi_config.h
- main/mimi_secrets.h.example
Thanks for the review @coderabbitai!
✅ max_tokens: Thanks for confirming that Avian's API supports the standard OpenAI `max_tokens` parameter.

✅ Chinese README wording: Perfect! Thank you for improving the phrasing to make it more natural.

The PR looks great — Avian is now cleanly integrated as an OpenAI-compatible provider! 🎉
Thanks for your contribution!
🧹 Nitpick comments (1)
main/llm/llm_proxy.c (1)
150-155: Consider defining host macros for consistency.

`llm_api_host()` uses hardcoded strings while `llm_api_url()` uses macros. If `MIMI_AVIAN_API_URL` is changed to a different domain, this function won't reflect the update.

♻️ Suggested approach

Define host macros in `mimi_config.h` alongside the URL macros:

```c
#define MIMI_AVIAN_API_HOST     "api.avian.io"
#define MIMI_OPENAI_API_HOST    "api.openai.com"
#define MIMI_ANTHROPIC_API_HOST "api.anthropic.com"
```

Then update `llm_api_host()`:

```diff
 static const char *llm_api_host(void) {
-    if (strcmp(s_provider, "avian") == 0) return "api.avian.io";
-    if (strcmp(s_provider, "openai") == 0) return "api.openai.com";
-    return "api.anthropic.com";
+    if (strcmp(s_provider, "avian") == 0) return MIMI_AVIAN_API_HOST;
+    if (strcmp(s_provider, "openai") == 0) return MIMI_OPENAI_API_HOST;
+    return MIMI_ANTHROPIC_API_HOST;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/llm/llm_proxy.c` around lines 150 - 155, Replace the hardcoded host strings in llm_api_host() with macros defined in mimi_config.h: add MIMI_AVIAN_API_HOST, MIMI_OPENAI_API_HOST, and MIMI_ANTHROPIC_API_HOST (values "api.avian.io", "api.openai.com", "api.anthropic.com") alongside the existing URL macros, then update llm_api_host() to return those macros (e.g., return MIMI_AVIAN_API_HOST when s_provider == "avian") so host values stay consistent with MIMI_*_API_URL changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@main/llm/llm_proxy.c`:
- Around line 150-155: Replace the hardcoded host strings in llm_api_host() with
macros defined in mimi_config.h: add MIMI_AVIAN_API_HOST, MIMI_OPENAI_API_HOST,
and MIMI_ANTHROPIC_API_HOST (values "api.avian.io", "api.openai.com",
"api.anthropic.com") alongside the existing URL macros, then update
llm_api_host() to return those macros (e.g., return MIMI_AVIAN_API_HOST when
s_provider == "avian") so host values stay consistent with MIMI_*_API_URL
changes.
IRONICBo left a comment
Clean implementation that follows the existing openai/anthropic pattern well. The `provider_is_openai_compat()` helper is a good abstraction.

One minor inconsistency: in `llm_chat_tools`, `max_completion_tokens` is gated on `strcmp(s_provider, "openai") == 0` rather than `provider_is_openai_compat()`. If Avian supports `max_completion_tokens` like OpenAI, use the compat helper there too; if not, a short comment explaining the intentional difference would help.
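The distinction under discussion can be sketched as a small predicate. This is illustrative: `token_field_for` is a hypothetical helper name, but the field names and the per-provider behavior come from this thread (only real OpenAI takes the newer `max_completion_tokens`; Avian and Anthropic expect `max_tokens` even though Avian is otherwise OpenAI-compatible).

```c
#include <assert.h>
#include <string.h>

/* Pick the request-body field used for the token limit. Only the real
 * OpenAI endpoint uses max_completion_tokens; every other provider,
 * including the OpenAI-compatible Avian, gets max_tokens. */
static const char *token_field_for(const char *provider) {
    if (strcmp(provider, "openai") == 0)
        return "max_completion_tokens";
    return "max_tokens";
}
```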
Thanks for the review and the approval! Good catch on the inconsistency — the difference is intentional: only real OpenAI supports `max_completion_tokens`, while Avian (and Anthropic) expect `max_tokens`. I've added a comment in the code explaining this.
🧹 Nitpick comments (1)
main/llm/llm_proxy.c (1)
626-627: Potential truncation without null-termination in `strncpy` calls.

The `strncpy` calls copy up to `sizeof(...) - 1` bytes but don't explicitly null-terminate when the source string length equals or exceeds the destination size. While the `llm_tool_call_t` struct fields are likely zero-initialized via `memset(resp, 0, ...)` at line 509, relying on this is fragile. Consider using a safer pattern or explicitly null-terminating:

Suggested safer pattern

```diff
 if (id && cJSON_IsString(id)) {
-    strncpy(call->id, id->valuestring, sizeof(call->id) - 1);
+    strncpy(call->id, id->valuestring, sizeof(call->id) - 1);
+    call->id[sizeof(call->id) - 1] = '\0';
 }
```

Apply similarly to other `strncpy` calls at lines 632, 699, and 704.

Also applies to: 632-633, 699-700, 704-705
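The guaranteed-termination pattern suggested here can also be factored into one helper so each call site stays a single line. A sketch: `copy_bounded` is a hypothetical name, and the project may prefer `strlcpy` where available.

```c
#include <assert.h>
#include <string.h>

/* Bounded copy that always null-terminates. Bare strncpy leaves the
 * destination unterminated whenever the source fills the buffer. */
static void copy_bounded(char *dst, size_t dst_size, const char *src) {
    if (dst_size == 0) return;
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}
```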
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/llm/llm_proxy.c` around lines 626 - 627, The strncpy usage copying into call->id (and the other strncpy occurrences that populate fields of llm_tool_call_t) can leave the destination unterminated if the source is as long as the buffer; change each strncpy to a safe pattern: either use a bounded copy function that guarantees null-termination (e.g., strlcpy) or keep strncpy but explicitly set the last byte to '\0' after the copy (e.g., call->id[sizeof(call->id)-1] = '\0'), and apply the same fix to the other strncpy sites that fill llm_tool_call_t fields (the copies around the id/name/args areas referenced in the diff).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@main/llm/llm_proxy.c`:
- Around line 626-627: The strncpy usage copying into call->id (and the other
strncpy occurrences that populate fields of llm_tool_call_t) can leave the
destination unterminated if the source is as long as the buffer; change each
strncpy to a safe pattern: either use a bounded copy function that guarantees
null-termination (e.g., strlcpy) or keep strncpy but explicitly set the last
byte to '\0' after the copy (e.g., call->id[sizeof(call->id)-1] = '\0'), and
apply the same fix to the other strncpy sites that fill llm_tool_call_t fields
(the copies around the id/name/args areas referenced in the diff).
Friendly follow-up — this PR has been approved and is ready to merge. Would appreciate getting this landed when you get a chance! cc @crispyberry @IRONICBo |
This PR is approved — could a maintainer merge when ready? cc @crispyberry |
Add Avian (https://avian.io) as a named LLM provider alongside Anthropic and OpenAI. Avian exposes an OpenAI-compatible API at api.avian.io/v1, so it reuses the existing OpenAI chat-completions code path with its own endpoint URL.

Changes:
- Rename provider_is_openai() to provider_is_openai_compat() and include "avian" in the check so all OpenAI-format logic applies
- Add MIMI_AVIAN_API_URL to mimi_config.h
- Route avian provider to api.avian.io in llm_api_url/host helpers
- Update CLI help, secrets example, and all three READMEs
Avian's API expects max_tokens, not max_completion_tokens. Only real OpenAI should use max_completion_tokens; all other providers (including Avian) use max_tokens.
…r_is_openai_compat

Address reviewer feedback from @IRONICBo: add comment explaining that only real OpenAI supports max_completion_tokens, while Avian (and Anthropic) expect max_tokens despite being OpenAI-compatible for routing/auth purposes.
Force-pushed cc13efa to 1629eb6 (Compare)
Hi @IRONICBo — this is approved and CI is green. Could you merge when you get a chance? Thanks! |
Summary
Adds Avian as a named LLM provider, alongside Anthropic and OpenAI.
Avian exposes an OpenAI-compatible API at
https://api.avian.io/v1, so it reuses the existing OpenAI chat-completions code path (message format, Bearer auth, tool calling) with its own endpoint URL.

Available models: `deepseek/deepseek-v3.2`, `moonshotai/kimi-k2.5`, `z-ai/glm-5`, `minimax/minimax-m2.5`

Changes

- `main/llm/llm_proxy.c` — Rename `provider_is_openai()` to `provider_is_openai_compat()` and include `"avian"` so all OpenAI-format logic (Bearer auth, chat completions format, tool calling) applies. Add `"avian"` cases to `llm_api_url()` and `llm_api_host()` helpers.
- `main/mimi_config.h` — Add `MIMI_AVIAN_API_URL` constant.
- `main/cli/serial_cli.c` — Update `set_model_provider` help text to list `avian`.
- `main/mimi_secrets.h.example` — Update comment to list `avian` as a provider option.
- `README.md`, `README_CN.md`, `README_JA.md` — Add Avian to provider lists, quick start, and CLI docs.

Usage
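The collapsed snippet under Usage presumably selects the provider via the serial CLI or the secrets header; a hypothetical sketch follows. The macro names below are assumptions for illustration — only the `set_model_provider` command, the `mimi_secrets.h` file name, and the `avian` provider value appear in this PR.

```c
/* Hypothetical mimi_secrets.h entries; the real macro names may differ.
 * Alternatively, over the serial CLI (argument syntax assumed):
 *   set_model_provider avian
 */
#define MIMI_LLM_PROVIDER "avian"
#define MIMI_LLM_API_KEY  "your-avian-api-key"
```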
Or in mimi_secrets.h:

Test plan

- `provider_is_openai_compat()` returns true for both `"openai"` and `"avian"`, false for `"anthropic"`
- `llm_api_url()` returns correct URL for each provider
- `llm_api_host()` returns correct host for each provider
- `idf.py build` (requires ESP-IDF toolchain)

cc @crispyberry
Summary by CodeRabbit
New Features
Behavior
Documentation
CLI