Add WiFi Scan Tool, System Info Tool, and Moonshot Provider Support #154

nybbs2003 wants to merge 1 commit into memovai:main
Conversation
📝 Walkthrough

This PR adds support for two new LLM providers (Moonshot and Hunyuan) as OpenAI-compatible endpoints, introduces two new embedded system tools (system_info and wifi_scan) for ESP32 devices, updates configuration and onboarding UI, and enhances error logging in existing tools. Documentation is updated across multiple languages to reflect new capabilities.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client Request
    participant Router as Provider Router
    participant OpenAI as OpenAI API
    participant Moonshot as Moonshot API
    participant Hunyuan as Hunyuan API
    Client->>Router: HTTP Request with provider type
    alt Provider is OpenAI
        Router->>Router: Use OpenAI endpoint
        Router->>OpenAI: Forward request (with API key)
        OpenAI->>Router: Response
    else Provider is Moonshot
        Router->>Router: Treat as OpenAI-compatible
        Router->>Moonshot: Forward request (with API key)
        Moonshot->>Router: Response
    else Provider is Hunyuan
        Router->>Router: Treat as OpenAI-compatible
        Router->>Hunyuan: Forward request (with API key)
        Hunyuan->>Router: Response
    else Default/Other Provider
        Router->>Router: Use generic LLM endpoints
    end
    Router->>Client: Formatted response with tool support
```
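The branching above amounts to mapping each provider onto a table of OpenAI-compatible endpoints. A minimal sketch in plain C follows; the enum, struct, and endpoint strings are illustrative placeholders, not the actual MimiClaw configuration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative provider IDs; the real firmware's enum and endpoint
 * strings may differ -- treat every literal below as a placeholder. */
typedef enum { PROV_OPENAI, PROV_MOONSHOT, PROV_HUNYUAN, PROV_COUNT } provider_id_t;

typedef struct {
    const char *url;          /* full endpoint for direct requests       */
    const char *host;         /* Host header / SNI for the proxied path  */
    const char *path;         /* request path shared by compatible APIs  */
    bool openai_compatible;   /* reuse the OpenAI request/response shape */
} provider_config_t;

static const provider_config_t s_providers[PROV_COUNT] = {
    [PROV_OPENAI]   = { "https://api.openai.com/v1/chat/completions",
                        "api.openai.com",  "/v1/chat/completions", true },
    [PROV_MOONSHOT] = { "https://api.moonshot.cn/v1/chat/completions",
                        "api.moonshot.cn", "/v1/chat/completions", true },
    [PROV_HUNYUAN]  = { "https://api.hunyuan.cloud.tencent.com/v1/chat/completions",
                        "api.hunyuan.cloud.tencent.com", "/v1/chat/completions", true },
};

/* One lookup replaces per-provider branching scattered across helpers. */
static const provider_config_t *get_provider_config(provider_id_t id)
{
    return (id < PROV_COUNT) ? &s_providers[id] : NULL;
}

static const char *llm_api_host(provider_id_t id)
{
    const provider_config_t *cfg = get_provider_config(id);
    return cfg ? cfg->host : NULL;
}
```

Adding another OpenAI-compatible vendor then means adding one table row rather than touching every branch.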
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 6
🧹 Nitpick comments (3)
main/tools/tool_web_search.c (1)
347-356: Minor: Redundant error check can be consolidated.

The error is checked at line 347 for logging, then again at line 354 for returning. This works correctly but could be slightly cleaner by combining the log and return.

✨ Optional consolidation

```diff
     ESP_LOGI(TAG, "Tavily API status: %d", status);
-
-    if (err != ESP_OK) {
-        ESP_LOGE(TAG, "Tavily API error: %s", esp_err_to_name(err));
-    }
-
+
     esp_http_client_cleanup(client);
     free(payload);

     if (err != ESP_OK) {
+        ESP_LOGE(TAG, "Tavily API error: %s", esp_err_to_name(err));
         return err;
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@main/tools/tool_web_search.c` around lines 347-356, Consolidate the two err checks into one: always call esp_http_client_cleanup(client) and free(payload) first, then check if err != ESP_OK and if so call ESP_LOGE(TAG, "Tavily API error: %s", esp_err_to_name(err)) and return err; remove the earlier standalone logging if-block so the cleanup/free happen exactly once and the error is logged and returned in a single post-cleanup block (references: err, ESP_LOGE/esp_err_to_name, esp_http_client_cleanup, free(payload), TAG).

main/llm/llm_proxy.c (1)
205-231: Keep direct and proxied endpoint selection in one place.

The new provider support now has three copies of the same routing data: full URLs in `main/mimi_config.h` (lines 90-92), hosts here, and a shared path in `llm_api_path()`. If a provider URL is overridden or one vendor changes its path, direct requests and proxied requests will drift. A single provider config struct (url/host/path/compatibility mode) would remove that split-brain risk.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@main/llm/llm_proxy.c` around lines 205-231, The routing info is duplicated across mimi_config.h and the functions llm_api_url, llm_api_host, and llm_api_path; consolidate by introducing a single provider configuration struct (fields: url, host, path, and a compatibility flag) and a lookup function (e.g., get_provider_config()) that returns that struct for the active provider; then rewrite llm_api_url, llm_api_host, and llm_api_path to read from that struct instead of hardcoding values so direct and proxied endpoints share one canonical source of truth and avoid drift.

main/tools/tool_registry.c (1)
209-229: Registry is nearing `MAX_TOOLS`; consider preventing silent tool drops as features grow.

With new entries, you're close to the fixed cap (`MAX_TOOLS` = 16). A small future addition can leave tools unregistered at runtime.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@main/tools/tool_registry.c` around lines 209-229, The tool registry is approaching the fixed cap MAX_TOOLS which can silently drop registrations; before calling register_tool (and during bulk init of tools like tool_system_info_init/tool_wifi_scan_init and the mimi_tool_t structs), check the current tool count against MAX_TOOLS and handle overflow explicitly: either grow the registry (make it dynamic) or return/log an error and skip further registrations. Update register_tool (and any place that increments the tool count) to validate capacity, emit a clear error via the existing logger when MAX_TOOLS would be exceeded, and return a failure code so callers can react instead of silently losing tools.
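A capacity-checked registration path along the lines of the prompt above could be sketched as follows; `mimi_tool_t` here is a stand-in for the real struct, and the logger call is simplified to `fprintf`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_TOOLS 16

/* Stand-in for the real mimi_tool_t; only the name matters here. */
typedef struct { const char *name; } mimi_tool_t;

static mimi_tool_t s_tools[MAX_TOOLS];
static size_t s_tool_count = 0;

/* Returns 0 on success and -1 when the fixed table is full, logging the
 * dropped tool so a 17th registration fails loudly instead of silently. */
static int register_tool(const mimi_tool_t *tool)
{
    if (s_tool_count >= MAX_TOOLS) {
        fprintf(stderr, "tool_registry: MAX_TOOLS (%d) reached, '%s' not registered\n",
                MAX_TOOLS, tool->name);
        return -1;
    }
    s_tools[s_tool_count++] = *tool;
    return 0;
}
```

Callers (bulk init included) can then abort or log when the return value is non-zero rather than assuming every tool made it in.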
📒 Files selected for processing (15)

- README.md
- README_CN.md
- README_JA.md
- main/CMakeLists.txt
- main/llm/llm_proxy.c
- main/mimi_config.h
- main/onboard/onboard_html.h
- main/tools/tool_get_time.c
- main/tools/tool_registry.c
- main/tools/tool_system_info.c
- main/tools/tool_system_info.h
- main/tools/tool_web_search.c
- main/tools/tool_wifi_scan.c
- main/tools/tool_wifi_scan.h
- test_wifi_scan.py
```diff
 /* Timezone (POSIX TZ format) */
-#define MIMI_TIMEZONE "PST8PDT,M3.2.0,M11.1.0"
+#define MIMI_TIMEZONE "CST-8"
```
Don't change the global default timezone in this PR.
MIMI_TIMEZONE is used when `main/tools/tool_get_time.c` (lines 43-49) restores local time after syncing. Moving the default to CST-8 changes every local timestamp the firmware emits and anything that depends on local time. Unless the device is now intentionally China-only, this needs to stay configurable or preserve the previous default.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/mimi_config.h` at line 83, Revert the hardcoded change to MIMI_TIMEZONE
(do not set it to "CST-8") and restore the previous default value or make it
configurable via build-time config so local timestamps remain unchanged; keep
MIMI_TIMEZONE as the existing default macro and, if you need region-specific
overrides, add a new configurable macro or build flag instead of changing
MIMI_TIMEZONE itself so the local time restore logic in the time-sync code (the
restore-local-time block in tool_get_time.c) continues to behave as before.
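One way to keep the shipped default while still allowing region-specific builds, as the prompt suggests, is a guarded macro plus a tiny helper. This is a sketch under the assumption that the firmware applies the timezone via POSIX `setenv`/`tzset` (as ESP-IDF's newlib-based code typically does), not the actual `mimi_config.h` contents.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Keep the previous default but let a region-specific build override it
 * with a compiler flag (e.g. -DMIMI_TIMEZONE='"CST-8"') instead of
 * editing the header. */
#ifndef MIMI_TIMEZONE
#define MIMI_TIMEZONE "PST8PDT,M3.2.0,M11.1.0"
#endif

/* The same POSIX mechanism the time-sync restore path relies on. */
static void apply_timezone(void)
{
    setenv("TZ", MIMI_TIMEZONE, 1);
    tzset();
}
```

With the `#ifndef` guard, unmodified builds keep the old local-time behavior and the China-specific value moves into build configuration.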
```c
ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search");
ESP_LOGI(TAG, "Tavily API key: %s", s_tavily_key);
ESP_LOGI(TAG, "Tavily API request: %s", payload);
```
Security: Do not log the full API key.
Line 315 logs the complete Tavily API key in plaintext. This exposes the secret through UART/serial output, stored logs, or any log forwarding mechanism. Even on embedded devices, this is a significant credential leak risk.
Consider masking the key or removing this log entirely. The URL and payload logs (lines 314, 316) are acceptable for debugging but may also be overly verbose for production.
🔒 Proposed fix to mask or remove API key logging

```diff
 ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search");
-ESP_LOGI(TAG, "Tavily API key: %s", s_tavily_key);
+ESP_LOGD(TAG, "Tavily API key: %.*s****", 4, s_tavily_key); // Show only first 4 chars
 ESP_LOGI(TAG, "Tavily API request: %s", payload);
```

Or remove the key logging entirely:

```diff
 ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search");
-ESP_LOGI(TAG, "Tavily API key: %s", s_tavily_key);
-ESP_LOGI(TAG, "Tavily API request: %s", payload);
+ESP_LOGD(TAG, "Tavily API request: %s", payload); // Debug level for verbose output
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search"); | |
| ESP_LOGI(TAG, "Tavily API key: %s", s_tavily_key); | |
| ESP_LOGI(TAG, "Tavily API request: %s", payload); | |
| ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search"); | |
| ESP_LOGD(TAG, "Tavily API key: %.*s****", 4, s_tavily_key); // Show only first 4 chars | |
| ESP_LOGI(TAG, "Tavily API request: %s", payload); |
| ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search"); | |
| ESP_LOGI(TAG, "Tavily API key: %s", s_tavily_key); | |
| ESP_LOGI(TAG, "Tavily API request: %s", payload); | |
| ESP_LOGI(TAG, "Tavily API URL: https://api.tavily.com/search"); | |
| ESP_LOGD(TAG, "Tavily API request: %s", payload); // Debug level for verbose output |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/tools/tool_web_search.c` around lines 314 - 316, Remove or stop printing
the full Tavily API key from logs: replace the ESP_LOGI(TAG, "Tavily API key:
%s", s_tavily_key) usage in main/tools/tool_web_search.c with a masked or
non-sensitive alternative (e.g., log only key length or a masked string showing
at most the last 4 characters) so the secret in s_tavily_key is never output in
plaintext; keep the URL and payload logs if needed but ensure any production
build disables verbose secrets logging.
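A masking helper along the lines the prompt describes could look like this; `mask_key()` is a hypothetical function, not part of the current source, and it keeps at most the last 4 characters while masking short keys entirely.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper (not in the current source): render a secret as
 * "****" plus at most its last 4 characters so logs never carry the
 * full key. Keys of 4 chars or fewer are masked entirely. */
static void mask_key(const char *key, char *out, size_t out_size)
{
    if (key == NULL) key = "";
    size_t len = strlen(key);
    size_t keep = len > 4 ? 4 : 0;
    snprintf(out, out_size, "****%s", key + (len - keep));
}
```

On device, the masked buffer would feed the log call in place of the raw `s_tavily_key`.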
```c
uint16_t ap_count = 0;
esp_wifi_scan_get_ap_num(&ap_count);
if (ap_count > 20) ap_count = 20; // Limit to 20 APs

wifi_ap_record_t *ap_list = calloc(ap_count, sizeof(wifi_ap_record_t));
if (!ap_list) {
    snprintf(output, output_size, "Error: Out of memory");
    return ESP_ERR_NO_MEM;
}

uint16_t ap_max = ap_count;
esp_wifi_scan_get_ap_records(&ap_max, ap_list);
```
Handle empty scans and driver errors before allocating.
If no APs are found, ap_count is 0 and calloc(0, ...) may return NULL, which turns a valid empty scan into an "Out of memory" error on some libc implementations. Both esp_wifi_scan_get_ap_num() and esp_wifi_scan_get_ap_records() are also unchecked, so scan-driver failures can fall through as misleading success.
Suggested fix

```diff
-    uint16_t ap_count = 0;
-    esp_wifi_scan_get_ap_num(&ap_count);
-    if (ap_count > 20) ap_count = 20; // Limit to 20 APs
+    uint16_t ap_count = 0;
+    err = esp_wifi_scan_get_ap_num(&ap_count);
+    if (err != ESP_OK) {
+        snprintf(output, output_size, "Error: Failed to read scan results (%s)", esp_err_to_name(err));
+        return err;
+    }
+    if (ap_count == 0) {
+        snprintf(output, output_size, "Found 0 WiFi networks:\n[]");
+        return ESP_OK;
+    }
+    if (ap_count > MIMI_ONBOARD_MAX_SCAN) ap_count = MIMI_ONBOARD_MAX_SCAN;
@@
-    uint16_t ap_max = ap_count;
-    esp_wifi_scan_get_ap_records(&ap_max, ap_list);
+    uint16_t ap_max = ap_count;
+    err = esp_wifi_scan_get_ap_records(&ap_max, ap_list);
+    if (err != ESP_OK) {
+        free(ap_list);
+        snprintf(output, output_size, "Error: Failed to read AP records (%s)", esp_err_to_name(err));
+        return err;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/tools/tool_wifi_scan.c` around lines 35 - 46, Call and check
esp_wifi_scan_get_ap_num() return value before allocating and if it returns an
error propagate that esp_err_t; if ap_count is 0 return a successful "no APs
found" result (or set output accordingly) instead of calling calloc(0,...). Only
allocate ap_list when ap_count > 0 (apply the existing cap to 20 first), then
call esp_wifi_scan_get_ap_records() and check its return value (propagate or
handle errors) before using ap_list; reference esp_wifi_scan_get_ap_num,
ap_count, ap_list, ap_max, esp_wifi_scan_get_ap_records, output and output_size
when making these changes.
```c
cJSON *arr = cJSON_CreateArray();
for (uint16_t i = 0; i < ap_max; i++) {
    if (ap_list[i].ssid[0] == '\0') continue; /* skip hidden */
    cJSON *obj = cJSON_CreateObject();
    cJSON_AddStringToObject(obj, "ssid", (const char *)ap_list[i].ssid);
    cJSON_AddNumberToObject(obj, "rssi", ap_list[i].rssi);
    cJSON_AddNumberToObject(obj, "channel", ap_list[i].primary);
    cJSON_AddBoolToObject(obj, "secured", ap_list[i].authmode != WIFI_AUTH_OPEN);
    cJSON_AddItemToArray(arr, obj);
}
free(ap_list);

char *json = cJSON_PrintUnformatted(arr);
cJSON_Delete(arr);

if (!json) {
    snprintf(output, output_size, "Error: Failed to format results");
    return ESP_FAIL;
}

snprintf(output, output_size, "Found %d WiFi networks:\n%s", ap_max, json);
free(json);

ESP_LOGI(TAG, "WiFi scan completed, found %d networks", ap_max);
```
Report the filtered count and fail on truncated output.
Hidden SSIDs are skipped, but the success message and log still use ap_max, so the headline count can disagree with the JSON payload. snprintf() is also unchecked, which means a small caller buffer returns ESP_OK with a truncated result.
Suggested fix

```diff
-    cJSON *arr = cJSON_CreateArray();
+    cJSON *arr = cJSON_CreateArray();
+    int visible_count = 0;
     for (uint16_t i = 0; i < ap_max; i++) {
         if (ap_list[i].ssid[0] == '\0') continue; /* skip hidden */
         cJSON *obj = cJSON_CreateObject();
         cJSON_AddStringToObject(obj, "ssid", (const char *)ap_list[i].ssid);
         cJSON_AddNumberToObject(obj, "rssi", ap_list[i].rssi);
         cJSON_AddNumberToObject(obj, "channel", ap_list[i].primary);
         cJSON_AddBoolToObject(obj, "secured", ap_list[i].authmode != WIFI_AUTH_OPEN);
         cJSON_AddItemToArray(arr, obj);
+        visible_count++;
     }
@@
-    snprintf(output, output_size, "Found %d WiFi networks:\n%s", ap_max, json);
+    int written = snprintf(output, output_size, "Found %d WiFi networks:\n%s", visible_count, json);
+    if (written < 0 || (size_t)written >= output_size) {
+        free(json);
+        return ESP_ERR_NO_MEM;
+    }
     free(json);
-    ESP_LOGI(TAG, "WiFi scan completed, found %d networks", ap_max);
+    ESP_LOGI(TAG, "WiFi scan completed, found %d networks", visible_count);
```
+ ESP_LOGI(TAG, "WiFi scan completed, found %d networks", visible_count);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| cJSON *arr = cJSON_CreateArray(); | |
| for (uint16_t i = 0; i < ap_max; i++) { | |
| if (ap_list[i].ssid[0] == '\0') continue; /* skip hidden */ | |
| cJSON *obj = cJSON_CreateObject(); | |
| cJSON_AddStringToObject(obj, "ssid", (const char *)ap_list[i].ssid); | |
| cJSON_AddNumberToObject(obj, "rssi", ap_list[i].rssi); | |
| cJSON_AddNumberToObject(obj, "channel", ap_list[i].primary); | |
| cJSON_AddBoolToObject(obj, "secured", ap_list[i].authmode != WIFI_AUTH_OPEN); | |
| cJSON_AddItemToArray(arr, obj); | |
| } | |
| free(ap_list); | |
| char *json = cJSON_PrintUnformatted(arr); | |
| cJSON_Delete(arr); | |
| if (!json) { | |
| snprintf(output, output_size, "Error: Failed to format results"); | |
| return ESP_FAIL; | |
| } | |
| snprintf(output, output_size, "Found %d WiFi networks:\n%s", ap_max, json); | |
| free(json); | |
| ESP_LOGI(TAG, "WiFi scan completed, found %d networks", ap_max); | |
| cJSON *arr = cJSON_CreateArray(); | |
| int visible_count = 0; | |
| for (uint16_t i = 0; i < ap_max; i++) { | |
| if (ap_list[i].ssid[0] == '\0') continue; /* skip hidden */ | |
| cJSON *obj = cJSON_CreateObject(); | |
| cJSON_AddStringToObject(obj, "ssid", (const char *)ap_list[i].ssid); | |
| cJSON_AddNumberToObject(obj, "rssi", ap_list[i].rssi); | |
| cJSON_AddNumberToObject(obj, "channel", ap_list[i].primary); | |
| cJSON_AddBoolToObject(obj, "secured", ap_list[i].authmode != WIFI_AUTH_OPEN); | |
| cJSON_AddItemToArray(arr, obj); | |
| visible_count++; | |
| } | |
| free(ap_list); | |
| char *json = cJSON_PrintUnformatted(arr); | |
| cJSON_Delete(arr); | |
| if (!json) { | |
| snprintf(output, output_size, "Error: Failed to format results"); | |
| return ESP_FAIL; | |
| } | |
| int written = snprintf(output, output_size, "Found %d WiFi networks:\n%s", visible_count, json); | |
| if (written < 0 || (size_t)written >= output_size) { | |
| free(json); | |
| return ESP_ERR_NO_MEM; | |
| } | |
| free(json); | |
| ESP_LOGI(TAG, "WiFi scan completed, found %d networks", visible_count); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/tools/tool_wifi_scan.c` around lines 48 - 71, The code reports ap_max
and ignores skipped hidden SSIDs and doesn't check snprintf truncation; change
the loop to increment a filtered_count for each object added to arr (use
filtered_count instead of ap_max in the header string and ESP_LOGI), then after
cJSON_PrintUnformatted check snprintf's return (int n = snprintf(output,
output_size, "Found %d WiFi networks:\n%s", filtered_count, json)); if n < 0 or
n >= (int)output_size treat as failure: free(json), cJSON_Delete(arr) as needed
and return ESP_FAIL to avoid returning truncated output; keep freeing ap_list
and json appropriately.
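The truncation check the prompt asks for can be isolated in a small helper; the name `write_scan_result` and the `0`/`-1` return codes are illustrative stand-ins for the firmware's `esp_err_t` values.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: write the header plus JSON into the caller's fixed buffer and
 * fail on truncation instead of returning a cut-off payload. */
static int write_scan_result(char *output, size_t output_size,
                             int filtered_count, const char *json)
{
    int n = snprintf(output, output_size, "Found %d WiFi networks:\n%s",
                     filtered_count, json);
    if (n < 0 || (size_t)n >= output_size) {
        return -1;  /* buffer too small: report failure, not partial output */
    }
    return 0;
}
```

Because `snprintf` returns the length it *would* have written, comparing against `output_size` detects truncation without a second formatting pass.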
| `wifi_scan` | Scan for nearby WiFi networks and return details like SSID, signal strength, and security status |
Tools table is missing system_info, so docs are now incomplete.
system_info is registered and exposed (see main/tools/tool_registry.c, Lines 212-218), but it is not listed in this table. Please add it here (and mirror in localized READMEs) so users discover all available tools.
📝 Proposed doc patch

```diff
 | `cron_remove` | Remove a cron job by ID |
 | `wifi_scan` | Scan for nearby WiFi networks and return details like SSID, signal strength, and security status |
+| `system_info` | Get system diagnostics including chip details, memory usage, WiFi status, and uptime |
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| | `wifi_scan` | Scan for nearby WiFi networks and return details like SSID, signal strength, and security status | | |
| | `cron_remove` | Remove a cron job by ID | | |
| | `wifi_scan` | Scan for nearby WiFi networks and return details like SSID, signal strength, and security status | | |
| | `system_info` | Get system diagnostics including chip details, memory usage, WiFi status, and uptime | |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 262 - 263, The tools table in README.md is missing
the registered tool "system_info", so update the table row list (near the
`wifi_scan` entry) to add a new row describing `system_info` (e.g., "system_info
| Gather host system information such as OS, CPU, memory, and disk details");
also mirror the same addition in all localized README files so documentation
matches the registered tool set (the tool is exposed in tool_registry via the
system_info registration).
```python
# WebSocket服务器地址 - 替换为你的设备IP (WebSocket server address - replace with your device IP)
WS_URL = "ws://192.168.1.19:18789/"
```
This is not portable or CI-safe in current form (hardcoded host + unbounded runtime).
Line 6 hardcodes a single device IP, and the script has no deterministic timeout/pass-fail path, so it can block forever.
🧪 Suggested hardening patch

```diff
 import websocket
 import json
-import time
+import os

 # WebSocket服务器地址 - 替换为你的设备IP
-WS_URL = "ws://192.168.1.19:18789/"
+WS_URL = os.getenv("MIMICLAW_WS_URL", "ws://127.0.0.1:18789/")
+DONE = False

 def on_message(ws, message):
+    global DONE
     print("收到消息:")
     print(message)
     print("-" * 50)
+    DONE = True
+    ws.close()

 def on_error(ws, error):
     print("错误:", error)
@@
 if __name__ == "__main__":
@@
-    # 运行WebSocket客户端
-    ws.run_forever()
+    # 运行WebSocket客户端(避免无限阻塞 - avoid blocking forever)
+    ws.run_forever()
+    if not DONE:
+        raise SystemExit("wifi_scan test did not receive a response")
```

Also applies to: 26-41
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test_wifi_scan.py` around lines 5 - 7, The test uses a hardcoded WS_URL
constant and lacks any timeout or deterministic pass/fail flow; change WS_URL to
be read from an environment variable (e.g., os.environ.get("WS_URL",
"ws://127.0.0.1:18789/")) so CI can override the target, and modify the test
routine that opens the WebSocket (the code referencing WS_URL and the scan/wait
loop) to use a bounded timeout and explicit success/failure return or pytest
assertion on timeout (use a watch deadline or asyncio.wait_for) so the script
cannot block indefinitely and will fail deterministically in CI.
Thank you! Very impressive contribution!
Overview
This PR introduces several new features and enhancements to MimiClaw, expanding its capabilities for ESP32-S3 devices and improving compatibility with China mainland users.
Changes Made
1. New Tools Added

WiFi Scan Tool
- `main/tools/tool_wifi_scan.c` and `main/tools/tool_wifi_scan.h`

System Info Tool
- `main/tools/tool_system_info.c` and `main/tools/tool_system_info.h`

2. LLM Provider Support

Moonshot (Kimi) Provider
- `main/llm/llm_proxy.c`
- `MIMI_MOONSHOT_API_URL` and `MIMI_HUNYUAN_API_URL` in `mimi_config.h`

3. Documentation Updates
- `README.md`, `README_CN.md`, `README_JA.md`

4. Bug Fixes and Improvements
Technical Details
WiFi Scan Implementation
- `esp_wifi_scan_start()` and `esp_wifi_scan_get_ap_records()` functions

System Info Implementation

Moonshot Provider Integration
Testing
All new features have been thoroughly tested:
Compatibility
Use Cases
Conclusion
This PR significantly enhances MimiClaw's capabilities by adding ESP32-specific tools and expanding LLM provider support. The new features are well-integrated into the existing codebase and provide valuable functionality for users, especially those in China mainland.
Please review and merge this PR to bring these enhancements to the main codebase.