feat: add MQTT channel support for IoT and bot integration #123
tianrking wants to merge 3 commits into memovai:main from
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds an MQTT channel: new mqtt bot module (init/start/stop, publish/subscribe, NVS-backed config, dedupe, JSON parsing, chunked publish), build and config updates, CLI commands, message-bus identifier, and application startup/dispatch integration to route MQTT inbound/outbound messages.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant MQTT as MQTT Bot
    participant Broker as MQTT Broker
    participant Bus as Message Bus
    participant CLI as CLI
    App->>MQTT: mqtt_bot_init()
    MQTT->>MQTT: load config from NVS
    App->>MQTT: mqtt_bot_start()
    MQTT->>Broker: CONNECT(uri, client_id, creds)
    Broker-->>MQTT: CONNECTED
    MQTT->>Broker: SUBSCRIBE <topic>
    Broker-->>MQTT: DATA (topic, payload)
    MQTT->>MQTT: dedupe (FNV-1a) & parse JSON
    MQTT->>Bus: push message (chat_id from topic)
    CLI->>MQTT: set_mqtt_config(uri,id,user,pass)
    MQTT->>MQTT: save config to NVS
    App->>MQTT: mqtt_send_message(topic, text)
    MQTT->>Broker: PUBLISH (topic, payload chunks)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 1 failed (warning) | ✅ 2 passed
Actionable comments posted: 9
🧹 Nitpick comments (1)
main/bus/message_bus.h (1)
15-16: Use `MIMI_CHAN_NAME_MAX` for the actual buffers too.

Right now the new limit only exists as a macro. `mimi_msg_t.channel` here still uses `char[16]`, and `main/cron/cron_service.h:13-27` has another hard-coded `char channel[16]`, so the limit and the storage can silently drift later.

♻️ Suggested follow-up

```diff
 typedef struct {
-    char channel[16];                 /* "telegram", "websocket", "cli" */
+    char channel[MIMI_CHAN_NAME_MAX]; /* "telegram", "websocket", "cli" */
     char chat_id[96]; /* Telegram/Feishu chat_id, open_id, or WS client id */
     char *content;    /* Heap-allocated message text (caller must free) */
 } mimi_msg_t;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/bus/message_bus.h` around lines 15 - 16, Replace hard-coded char[16] buffers with the macro MIMI_CHAN_NAME_MAX: change the mimi_msg_t.channel field to use MIMI_CHAN_NAME_MAX instead of 16 and update the char channel array in cron_service (and any other occurrences) to char[MIMI_CHAN_NAME_MAX]; ensure includes see message_bus.h so the macro is available and rebuild to confirm no size mismatches.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main/channels/mqtt/mqtt_bot.c`:
- Around line 49-61: The dedup logic in dedup_check_and_record incorrectly
treats any identical topic+payload as duplicate and truncates long payloads into
combined[320] before hashing; update dedup_check_and_record to build the dedup
key from a real message identifier (e.g., MQTT packet identifier/client
id/sender id and optional timestamp) rather than raw topic+payload, and stop
using a fixed small stack buffer that truncates payloads—either stream the hash
(feed topic, separator, payload into the FNV hash via an incremental
fnv1a64_update) or allocate a buffer sized to hold the full concatenation before
hashing; ensure you still record/check keys in s_seen_msg_keys using the
computed uint64_t and keep rotation logic with s_seen_msg_idx and
MQTT_DEDUP_CACHE_SIZE.
- Around line 161-165: The code wrongly stores the full MQTT topic into
mimi_msg_t.chat_id (96 bytes) in mqtt_bot.c which truncates long topics and
causes replies sent from main/mimi.c (which reuses chat_id as publish topic) to
go to the wrong topic; fix by not stuffing the full topic into chat_id: either
store a short logical session identifier in mimi_msg_t.chat_id and keep the full
reply topic in a new field (e.g., add reply_topic char* or fixed buffer to
mimi_msg_t) or attach the full topic as a separate heap-allocated string on the
message that main/mimi.c will use when publishing; update creation in mqtt_bot.c
to populate the new reply-topic field and update main/mimi.c to prefer that new
field when publishing replies instead of using mimi_msg_t.chat_id.
- Around line 206-229: NVS-loaded MQTT strings are being truncated into
fixed-size runtime buffers (s_broker_uri, s_client_id, s_username, s_password,
s_subscribe_topic) via strncpy; instead, validate lengths before persisting and
refuse/clear oversized stored values during init: when saving (where nvs_set_str
is called for keys like MIMI_NVS_KEY_MQTT_CLIENT_ID/USERNAME/PASSWORD/SUB_TOPIC)
enforce max length <= sizeof(target_buffer)-1 and return an error to the caller;
and in the init path (where nvs_get_str is used) check the returned len from
nvs_get_str against sizeof(s_...); if len exceeds the target buffer capacity, do
NOT strncpy — log an error, clear the NVS key (nvs_erase_key) and skip loading
that value. Ensure all checks reference the exact buffers (s_client_id,
s_username, s_password, s_subscribe_topic, s_broker_uri) and their sizes to
avoid silent truncation.
- Around line 106-179: The MQTT_EVENT_DATA handler must assemble fragmented
payloads using event->current_data_offset, event->total_data_len and
event->data_len before running the existing JSON parsing, deduplication
(dedup_check_and_record) and message_bus_push_inbound logic; implement a small
per-topic/per-client fragment buffer (e.g., static or module-level struct keyed
by topic) that reallocates/accumulates chunks at offsets until
current_data_offset + data_len == total_data_len, then operate on the assembled
payload (call dedup_check_and_record(topic, assembled_payload), parse JSON into
content, create mimi_msg_t and call message_bus_push_inbound), and finally
free/reset the fragment buffer; ensure error handling for OOM and discard
incomplete fragments on errors or when total_data_len exceeds a safe max to
avoid unbounded allocation.
In `@main/channels/mqtt/mqtt_bot.h`:
- Around line 23-27: Update the stale doc comment in mqtt_bot.h for the send
API: remove the misleading reference to `chat_id` and clarify that the `topic`
parameter is the MQTT topic to publish to (not a chat_id). Edit the comment
above the send/publish function (the block that documents "Send a text message
to an MQTT topic." and the `@param topic` entry) so `@param topic` clearly
states "The MQTT topic to publish to" and leave `@param text` as the message
payload.
- Around line 1-4: Add a direct include of <stdbool.h> to make this header
self-contained: update the top of the mqtt_bot.h header to include the stdbool
definition so the mqtt_is_connected declaration (and any other bool uses) have a
known type; specifically ensure <stdbool.h> is included alongside the existing
includes so mqtt_is_connected and any future bool-typed symbols in this header
compile without relying on transitive includes.
In `@main/cli/serial_cli.c`:
- Around line 121-150: cmd_set_mqtt_config stores MQTT under the MIMI_NVS_MQTT
namespace but the generic config commands (the config_show/display routine and
the config_reset/clear routine) don't include that namespace, so MQTT state is
hidden and survives resets; update the generic handlers to include
MIMI_NVS_MQTT: modify the config_show/display logic to enumerate/read and print
the MIMI_NVS_MQTT namespace (same style as other namespaces) and modify
config_reset/clear logic to delete/erase the MIMI_NVS_MQTT namespace (same
deletion code path used for other namespaces) so that cmd_set_mqtt_config,
config_show and config_reset are consistent.
In `@main/mimi.c`:
- Around line 90-96: The MQTT reply code is publishing back to msg.chat_id (the
inbound request topic) causing the bot to republish to its subscribed request
topic; change mqtt_send_message to publish to a proper response topic instead:
detect when msg.channel == MIMI_CHAN_MQTT and if msg.chat_id ends with
"/request" construct a new topic by replacing that trailing "/request" with
"/response" (or fall back to MIMI_MQTT_DEFAULT_PUB_TOPIC if replacement is not
applicable), then call mqtt_send_message(pub_topic, msg.content) and log using
pub_topic; update references to msg.chat_id in the mqtt reply path accordingly.
- Line 171: Replace the fatal ESP_ERROR_CHECK(mqtt_bot_start()) call with
non-fatal error handling: call mqtt_bot_start() and check its esp_err_t return;
if it != ESP_OK log a warning including esp_err_to_name(err) (using the existing
logging TAG or process logger) and continue booting so other channels
(Telegram/Feishu/WebSocket) are not aborted; keep mqtt_bot_start() call location
and remove the ESP_ERROR_CHECK usage.
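The `/request` → `/response` rewrite described in the `main/mimi.c` prompt above could be sketched like this in plain C (`derive_pub_topic` is an illustrative name, and the fallback argument stands in for the review's `MIMI_MQTT_DEFAULT_PUB_TOPIC`; this is a sketch, not the merged code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: if chat_id ends with "/request", publish to the
 * matching ".../response" topic; otherwise fall back to a default
 * publish topic, as the review prompt suggests. */
static void derive_pub_topic(const char *chat_id, const char *fallback,
                             char *out, size_t out_len)
{
    const char *suffix = "/request";
    size_t id_len = strlen(chat_id);
    size_t suf_len = strlen(suffix);

    if (id_len >= suf_len &&
        strcmp(chat_id + id_len - suf_len, suffix) == 0) {
        /* Keep the prefix, swap the trailing segment. */
        snprintf(out, out_len, "%.*s/response",
                 (int)(id_len - suf_len), chat_id);
    } else {
        snprintf(out, out_len, "%s", fallback);
    }
}
```

The MQTT reply path in mimi.c would then call `mqtt_send_message(pub_topic, msg.content)` with the derived topic instead of `msg.chat_id`.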
---
Nitpick comments:
In `@main/bus/message_bus.h`:
- Around line 15-16: Replace hard-coded char[16] buffers with the macro
MIMI_CHAN_NAME_MAX: change the mimi_msg_t.channel field to use
MIMI_CHAN_NAME_MAX instead of 16 and update the char channel array in
cron_service (and any other occurrences) to char[MIMI_CHAN_NAME_MAX]; ensure
includes see message_bus.h so the macro is available and rebuild to confirm no
size mismatches.
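As a sketch of the nitpick's end state, both structs would take their channel size from the single macro (the macro value of 16 is an assumption matching today's `char[16]`; field comments copied from the review's diff):

```c
#include <assert.h>
#include <stddef.h>

#define MIMI_CHAN_NAME_MAX 16 /* assumed value, matching the current char[16] */

typedef struct {
    char channel[MIMI_CHAN_NAME_MAX]; /* "telegram", "websocket", "cli" */
    char chat_id[96];
    char *content;
} mimi_msg_t;

/* cron_service's entry uses the same macro, so the two sizes cannot drift. */
typedef struct {
    char channel[MIMI_CHAN_NAME_MAX];
} cron_entry_t;
```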
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f63d56d6-b132-4802-8430-7fa607a0ebf8
📒 Files selected for processing (9)
README.md, main/CMakeLists.txt, main/bus/message_bus.h, main/channels/mqtt/mqtt_bot.c, main/channels/mqtt/mqtt_bot.h, main/cli/serial_cli.c, main/mimi.c, main/mimi_config.h, main/mimi_secrets.h.example
```c
static bool dedup_check_and_record(const char *topic, const char *payload)
{
    /* Combine topic and payload for unique key */
    char combined[320];
    snprintf(combined, sizeof(combined), "%s:%s", topic, payload);
    uint64_t key = fnv1a64(combined);

    for (size_t i = 0; i < MQTT_DEDUP_CACHE_SIZE; i++) {
        if (s_seen_msg_keys[i] == key) return true;
    }
    s_seen_msg_keys[s_seen_msg_idx] = key;
    s_seen_msg_idx = (s_seen_msg_idx + 1) % MQTT_DEDUP_CACHE_SIZE;
    return false;
}
```
The current dedup cache will drop valid MQTT traffic.
The key is only topic + payload, so sending the same command twice on the same topic within the 64-entry window is treated as a duplicate, even when the second publish is intentional. It also builds that key in combined[320], so long payloads are truncated before hashing and distinct long messages can collapse to the same cache entry. Dedupe needs an explicit sender/message identifier or a short replay window, not raw topic+payload hashing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/channels/mqtt/mqtt_bot.c` around lines 49 - 61, The dedup logic in
dedup_check_and_record incorrectly treats any identical topic+payload as
duplicate and truncates long payloads into combined[320] before hashing; update
dedup_check_and_record to build the dedup key from a real message identifier
(e.g., MQTT packet identifier/client id/sender id and optional timestamp) rather
than raw topic+payload, and stop using a fixed small stack buffer that truncates
payloads—either stream the hash (feed topic, separator, payload into the FNV
hash via an incremental fnv1a64_update) or allocate a buffer sized to hold the
full concatenation before hashing; ensure you still record/check keys in
s_seen_msg_keys using the computed uint64_t and keep rotation logic with
s_seen_msg_idx and MQTT_DEDUP_CACHE_SIZE.
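The streaming-hash option the prompt mentions could look like this (a sketch: `fnv1a64_update` is the incremental helper name the prompt itself proposes, not an existing function in the codebase):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FNV64_OFFSET 0xcbf29ce484222325ULL
#define FNV64_PRIME  0x100000001b3ULL

/* Feed an arbitrary byte range into a running FNV-1a state. */
static uint64_t fnv1a64_update(uint64_t h, const void *data, size_t len)
{
    const uint8_t *p = (const uint8_t *)data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= FNV64_PRIME;
    }
    return h;
}

/* Hash topic, a NUL separator, and the full payload with no intermediate
 * buffer, so long payloads are never truncated before hashing. */
static uint64_t dedup_key(const char *topic, const char *payload)
{
    uint64_t h = FNV64_OFFSET;
    h = fnv1a64_update(h, topic, strlen(topic));
    h = fnv1a64_update(h, "", 1); /* hash the '\0' as an unambiguous separator */
    h = fnv1a64_update(h, payload, strlen(payload));
    return h;
}
```

The existing ring-buffer check over `s_seen_msg_keys` would use this key unchanged; folding a sender/message id into the hash, as the prompt suggests, is one more `fnv1a64_update` call.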
```c
    case MQTT_EVENT_DATA:
    {
        /* Extract topic and data */
        char *topic = strndup(event->topic, event->topic_len);
        char *data = strndup(event->data, event->data_len);

        if (!topic || !data) {
            free(topic);
            free(data);
            break;
        }

        ESP_LOGI(TAG, "Received on topic [%.*s]: %.60s%s",
                 event->topic_len, event->topic,
                 data,
                 event->data_len > 60 ? "..." : "");

        /* Skip if this is our own message (deduplication) */
        if (dedup_check_and_record(topic, data)) {
            ESP_LOGD(TAG, "Duplicate message, skipping");
            free(topic);
            free(data);
            break;
        }

        /* Parse JSON payload if present, otherwise treat as plain text */
        char *content = NULL;
        cJSON *root = cJSON_Parse(data);
        if (root) {
            /* Try to extract "text" or "message" field from JSON */
            cJSON *text = cJSON_GetObjectItem(root, "text");
            cJSON *message = cJSON_GetObjectItem(root, "message");
            cJSON *payload_field = cJSON_GetObjectItem(root, "payload");

            const char *extracted = NULL;
            if (text && cJSON_IsString(text)) {
                extracted = text->valuestring;
            } else if (message && cJSON_IsString(message)) {
                extracted = message->valuestring;
            } else if (payload_field && cJSON_IsString(payload_field)) {
                extracted = payload_field->valuestring;
            }

            if (extracted && extracted[0]) {
                content = strdup(extracted);
            }
            cJSON_Delete(root);
        }

        /* If not JSON or no recognized field, use raw data */
        if (!content) {
            content = strdup(data);
        }

        if (content && content[0]) {
            /* Use topic as chat_id for session routing */
            mimi_msg_t msg = {0};
            strncpy(msg.channel, MIMI_CHAN_MQTT, sizeof(msg.channel) - 1);
            strncpy(msg.chat_id, topic, sizeof(msg.chat_id) - 1);
            msg.content = content;

            if (message_bus_push_inbound(&msg) != ESP_OK) {
                ESP_LOGW(TAG, "Inbound queue full, dropping MQTT message");
                free(msg.content);
            } else {
                ESP_LOGI(TAG, "Message pushed to inbound bus: %s", topic);
            }
        } else {
            free(content);
        }

        free(topic);
        free(data);
    }
```
🧩 Analysis chain

🏁 Scripts executed (against repository memovai/mimiclaw):

```shell
# First, find the event structure definition and related includes
find . -type f -name "*.h" | head -20

# Look for mqtt event related definitions
rg "esp_mqtt_event" --type c --type h -A 5

# Check the broader context of the mqtt_bot.c file to see event handling
fd mqtt_bot.c

# Search for current_data_offset or total_data_len in the codebase
rg "current_data_offset|total_data_len" --type c --type h

# Look for MQTT event structure definition
rg "struct esp_mqtt_event" --type h

# Check what fields are in the event structure
rg "typedef.*esp_mqtt_event|struct.*mqtt_event" --type h -A 20

# Find and read the full mqtt_bot.c file to see complete context
wc -l main/channels/mqtt/mqtt_bot.c

# Check for esp-idf or mqtt library headers
find . -type d -name mqtt

# Look for includes in mqtt_bot.c to understand which MQTT library is used
head -50 main/channels/mqtt/mqtt_bot.c

# Read the event handler section around lines 100-180
sed -n '90,190p' main/channels/mqtt/mqtt_bot.c

# Check if there are any other references to event fields
rg "event\->" main/channels/mqtt/mqtt_bot.c | head -20

# Look for any deduplication or buffering logic related to fragmentation
rg "offset|fragment|chunk|buffer" main/channels/mqtt/mqtt_bot.c
```
🌐 Web query: ESP-IDF esp_mqtt_event_t structure fields current_data_offset total_data_len

💡 Result:

In ESP-IDF's MQTT client, esp_mqtt_event_t (for MQTT_EVENT_DATA) uses these fields to handle fragmented incoming payloads:

- `total_data_len`: the total length (bytes) of the full MQTT PUBLISH payload being received. (docs.espressif.com)
- `current_data_offset`: the byte offset (from the start of the full payload) corresponding to `event->data` in this `MQTT_EVENT_DATA` callback, i.e., where this chunk belongs in the overall payload. (docs.espressif.com)

How they're used:

- If the message is larger than the internal buffer, the client will post multiple `MQTT_EVENT_DATA` events for the same publish, updating `current_data_offset` and keeping `total_data_len` constant so you can reassemble the payload. (docs.espressif.com)
- In that fragmented case, only the first `MQTT_EVENT_DATA` event contains the topic (`topic`/`topic_len`); subsequent events contain only payload chunks (`data`, `data_len`) with updated offsets. (docs.espressif.com)

Citation: https://docs.espressif.com/projects/esp-idf/en/release-v4.4/api-reference/protocols/mqtt.html
🌐 Web query: ESP-MQTT client library fragmented payload delivery MQTT_EVENT_DATA

💡 Result:

In ESP-IDF's ESP-MQTT (mqtt_client.h), a single incoming PUBLISH can be delivered in multiple MQTT_EVENT_DATA callbacks when the payload is larger than the client's internal buffer. In that case you must reassemble the payload using:

- `event->total_data_len` = total payload bytes for the whole PUBLISH
- `event->current_data_offset` = where this fragment belongs in the total payload
- `event->data_len` / `event->data` = bytes carried by this callback

Espressif explicitly documents this behavior: multiple MQTT_EVENT_DATA events are posted for oversized data, and the offset/total fields are updated to track fragmentation.

Minimal reassembly pattern (C):

```c
static uint8_t *rx_buf = NULL;
static int rx_total = 0;

case MQTT_EVENT_DATA: {
    const esp_mqtt_event_handle_t e = event_data;
    if (e->current_data_offset == 0) {
        // first chunk of a (possibly fragmented) message
        rx_total = e->total_data_len;
        free(rx_buf);
        rx_buf = malloc(rx_total);
        // (optional) also copy topic here; safest is to treat topic as valid per-event
    }
    if (rx_buf && (e->current_data_offset + e->data_len) <= rx_total) {
        memcpy(rx_buf + e->current_data_offset, e->data, e->data_len);
    }
    if (rx_buf && (e->current_data_offset + e->data_len) == rx_total) {
        // complete message assembled in rx_buf (length rx_total)
        // process(rx_buf, rx_total);
    }
    break;
}
```

Avoiding fragmentation (optional): if you want fewer/no fragments, increase the ESP-MQTT input/output buffer sizes via the client config's buffer.size / buffer.out_size (or the corresponding menuconfig defaults).

Sources: ESP-IDF ESP-MQTT events / fragmentation notes; esp_mqtt_client_config_t buffer sizing fields.
Citation: https://docs.espressif.com/projects/esp-idf/en/v5.1/esp32h2/api-reference/protocols/mqtt.html
Buffer fragmented MQTT payloads before parsing and deduplication.
ESP-IDF's MQTT client delivers large payloads across multiple MQTT_EVENT_DATA callbacks using current_data_offset and total_data_len. The current code treats each callback independently, causing partial payloads to be deduplicated, parsed, and pushed to the message bus separately. This breaks JSON parsing and creates duplicate incomplete messages. Accumulate payload chunks until current_data_offset + data_len == total_data_len, then run dedup and dispatch only once per complete message.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/channels/mqtt/mqtt_bot.c` around lines 106 - 179, The MQTT_EVENT_DATA
handler must assemble fragmented payloads using event->current_data_offset,
event->total_data_len and event->data_len before running the existing JSON
parsing, deduplication (dedup_check_and_record) and message_bus_push_inbound
logic; implement a small per-topic/per-client fragment buffer (e.g., static or
module-level struct keyed by topic) that reallocates/accumulates chunks at
offsets until current_data_offset + data_len == total_data_len, then operate on
the assembled payload (call dedup_check_and_record(topic, assembled_payload),
parse JSON into content, create mimi_msg_t and call message_bus_push_inbound),
and finally free/reset the fragment buffer; ensure error handling for OOM and
discard incomplete fragments on errors or when total_data_len exceeds a safe max
to avoid unbounded allocation.
```c
    len = sizeof(tmp);
    if (nvs_get_str(nvs, MIMI_NVS_KEY_MQTT_URI, tmp, &len) == ESP_OK && tmp[0]) {
        strncpy(s_broker_uri, tmp, sizeof(s_broker_uri) - 1);
    }

    len = sizeof(tmp);
    if (nvs_get_str(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID, tmp, &len) == ESP_OK && tmp[0]) {
        strncpy(s_client_id, tmp, sizeof(s_client_id) - 1);
    }

    len = sizeof(tmp);
    if (nvs_get_str(nvs, MIMI_NVS_KEY_MQTT_USERNAME, tmp, &len) == ESP_OK && tmp[0]) {
        strncpy(s_username, tmp, sizeof(s_username) - 1);
    }

    len = sizeof(tmp);
    if (nvs_get_str(nvs, MIMI_NVS_KEY_MQTT_PASSWORD, tmp, &len) == ESP_OK && tmp[0]) {
        strncpy(s_password, tmp, sizeof(s_password) - 1);
    }

    len = sizeof(tmp);
    if (nvs_get_str(nvs, MIMI_NVS_KEY_MQTT_SUB_TOPIC, tmp, &len) == ESP_OK && tmp[0]) {
        strncpy(s_subscribe_topic, tmp, sizeof(s_subscribe_topic) - 1);
    }
```
Reject overlong MQTT config values instead of truncating them silently.
These paths accept up to 255-byte NVS/CLI strings, but the runtime buffers are much smaller (s_client_id[64], s_username[128], s_password[128], s_subscribe_topic[128]). A long value is saved successfully and then truncated when copied into s_*, which can turn valid credentials or topic filters into broken ones with no clear error. Please validate lengths before persisting, and reject or clear oversized stored values during init.
Also applies to: 393-417
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/channels/mqtt/mqtt_bot.c` around lines 206 - 229, NVS-loaded MQTT
strings are being truncated into fixed-size runtime buffers (s_broker_uri,
s_client_id, s_username, s_password, s_subscribe_topic) via strncpy; instead,
validate lengths before persisting and refuse/clear oversized stored values
during init: when saving (where nvs_set_str is called for keys like
MIMI_NVS_KEY_MQTT_CLIENT_ID/USERNAME/PASSWORD/SUB_TOPIC) enforce max length <=
sizeof(target_buffer)-1 and return an error to the caller; and in the init path
(where nvs_get_str is used) check the returned len from nvs_get_str against
sizeof(s_...); if len exceeds the target buffer capacity, do NOT strncpy — log
an error, clear the NVS key (nvs_erase_key) and skip loading that value. Ensure
all checks reference the exact buffers (s_client_id, s_username, s_password,
s_subscribe_topic, s_broker_uri) and their sizes to avoid silent truncation.
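The save-side length check the prompt asks for can be reduced to one small predicate (a sketch; `fits_nvs_target` is a made-up name, and in `mqtt_set_config()` it would gate each `nvs_set_str()` call against the matching `sizeof(s_...)`):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* True when `value` plus its terminator fits a destination buffer of
 * `cap` bytes — checked BEFORE persisting, so nothing is ever saved
 * that would later be truncated by strncpy at load time. */
static bool fits_nvs_target(const char *value, size_t cap)
{
    return strlen(value) < cap;
}
```

For example, `if (!fits_nvs_target(client_id, sizeof(s_client_id))) return ESP_ERR_INVALID_SIZE;` before the `nvs_set_str()` for `MIMI_NVS_KEY_MQTT_CLIENT_ID`, plus the mirror-image check on the `len` returned by `nvs_get_str()` during init.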
```c
/**
 * Send a text message to an MQTT topic.
 * @param topic The topic to publish to (chat_id is used as topic)
 * @param text Message text to publish
 */
```
Fix the stale chat_id wording in the send API docs.
The comment says chat_id is used as the topic, which does not match this MQTT API and can confuse callers reading the header.
📝 Suggested doc fix

```diff
 /**
  * Send a text message to an MQTT topic.
- * @param topic The topic to publish to (chat_id is used as topic)
+ * @param topic MQTT topic to publish to
  * @param text Message text to publish
  */
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/channels/mqtt/mqtt_bot.h` around lines 23 - 27, Update the stale doc
comment in mqtt_bot.h for the send API: remove the misleading reference to
`chat_id` and clarify that the `topic` parameter is the MQTT topic to publish to
(not a chat_id). Edit the comment above the send/publish function (the block
that documents "Send a text message to an MQTT topic." and the `@param topic`
entry) so `@param topic` clearly states "The MQTT topic to publish to" and leave
`@param text` as the message payload.
```c
/* --- set_mqtt_config command --- */
static struct {
    struct arg_str *uri;
    struct arg_str *client_id;
    struct arg_str *username;
    struct arg_str *password;
    struct arg_end *end;
} mqtt_config_args;

static int cmd_set_mqtt_config(int argc, char **argv)
{
    int nerrors = arg_parse(argc, argv, (void **)&mqtt_config_args);
    if (nerrors != 0) {
        arg_print_errors(stderr, mqtt_config_args.end, argv[0]);
        return 1;
    }

    const char *uri = mqtt_config_args.uri->sval[0];
    const char *client_id = mqtt_config_args.client_id->count > 0 ? mqtt_config_args.client_id->sval[0] : NULL;
    const char *username = mqtt_config_args.username->count > 0 ? mqtt_config_args.username->sval[0] : NULL;
    const char *password = mqtt_config_args.password->count > 0 ? mqtt_config_args.password->sval[0] : NULL;

    esp_err_t err = mqtt_set_config(uri, client_id, username, password);
    if (err != ESP_OK) {
        printf("Failed to save MQTT config: %s\n", esp_err_to_name(err));
        return 1;
    }
    printf("MQTT config saved. Restart to apply.\n");
    return 0;
}
```
Please wire the new MQTT namespace into the generic config commands.
cmd_set_mqtt_config() persists MQTT settings, but Lines 593-607 still don't show them and Lines 611-625 still don't erase MIMI_NVS_MQTT. After this lands, config_show hides the live MQTT state and config_reset will claim all NVS config was cleared while the broker URI/credentials survive.
Also applies to: 903-915
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/cli/serial_cli.c` around lines 121 - 150, cmd_set_mqtt_config stores
MQTT under the MIMI_NVS_MQTT namespace but the generic config commands (the
config_show/display routine and the config_reset/clear routine) don't include
that namespace, so MQTT state is hidden and survives resets; update the generic
handlers to include MIMI_NVS_MQTT: modify the config_show/display logic to
enumerate/read and print the MIMI_NVS_MQTT namespace (same style as other
namespaces) and modify config_reset/clear logic to delete/erase the
MIMI_NVS_MQTT namespace (same deletion code path used for other namespaces) so
that cmd_set_mqtt_config, config_show and config_reset are consistent.
main/mimi.c
Outdated
```c
ESP_ERROR_CHECK(agent_loop_start());
ESP_ERROR_CHECK(telegram_bot_start());
ESP_ERROR_CHECK(feishu_bot_start());
ESP_ERROR_CHECK(mqtt_bot_start());
```
Treat MQTT startup as optional here.
mqtt_bot_start() can fail because of a bad broker URI or client init/start failure, but ESP_ERROR_CHECK will abort the whole firmware and take Telegram/Feishu/WebSocket down with it. Since MQTT is optional, log the failure and continue booting the other channels.
🛠️ Minimal fix

```diff
- ESP_ERROR_CHECK(mqtt_bot_start());
+ esp_err_t mqtt_err = mqtt_bot_start();
+ if (mqtt_err != ESP_OK) {
+     ESP_LOGW(TAG, "MQTT start failed: %s", esp_err_to_name(mqtt_err));
+ }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```c
esp_err_t mqtt_err = mqtt_bot_start();
if (mqtt_err != ESP_OK) {
    ESP_LOGW(TAG, "MQTT start failed: %s", esp_err_to_name(mqtt_err));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/mimi.c` at line 171, Replace the fatal ESP_ERROR_CHECK(mqtt_bot_start())
call with non-fatal error handling: call mqtt_bot_start() and check its
esp_err_t return; if it != ESP_OK log a warning including esp_err_to_name(err)
(using the existing logging TAG or process logger) and continue booting so other
channels (Telegram/Feishu/WebSocket) are not aborted; keep mqtt_bot_start() call
location and remove the ESP_ERROR_CHECK usage.
@tianrking Very useful feature, I will test it.

I'll check it first and do some local testing.
Add a new MQTT communication channel following the existing channel pattern (init/start/send_message lifecycle). This enables MimiClaw to integrate with any MQTT broker for IoT applications and chat bots.

Changes:
- main/channels/mqtt/mqtt_bot.h: MQTT channel public API
- main/channels/mqtt/mqtt_bot.c: Full MQTT client implementation with message deduplication, JSON payload parsing, and message bus integration
- main/mimi_config.h: MQTT configuration constants and NVS keys
- main/bus/message_bus.h: Add MIMI_CHAN_MQTT constant
- main/mimi.c: Initialize and start MQTT bot, handle outbound dispatch
- main/CMakeLists.txt: Add mqtt_bot.c source and mqtt dependency
- main/cli/serial_cli.c: Add set_mqtt_config, mqtt_send, mqtt_status CLI commands
- main/mimi_secrets.h.example: Add MQTT configuration examples
- README.md: Document MQTT channel usage and CLI commands
- MQTT: Send responses to /response topic instead of /request
- MQTT: Load build-time secrets (MIMI_SECRET_MQTT_*) as defaults
- LLM: Add GLM (Zhipu) provider support with OpenAI-compatible format
- LLM: GLM uses convert_tools_openai() for proper tool formatting
- LLM: GLM uses OpenAI response parsing (choices[].message.content)
- Add stdbool.h include to mqtt_bot.h for bool type
- Update mimi_secrets.h.example with GLM configuration options
ping @IRONICBo @crispyberry done
Actionable comments posted: 3
♻️ Duplicate comments (6)
main/mimi.c (1)
171-171: ⚠️ Potential issue | 🟠 Major

Keep MQTT startup non-fatal.

`mqtt_bot_start()` is optional, but `ESP_ERROR_CHECK()` here aborts the whole firmware on broker/config errors and takes the other channels down with it. Please warn and continue in the surviving startup block.

🛠️ Minimal fix

```diff
- ESP_ERROR_CHECK(mqtt_bot_start());
+ esp_err_t mqtt_err = mqtt_bot_start();
+ if (mqtt_err != ESP_OK) {
+     ESP_LOGW(TAG, "MQTT start failed: %s", esp_err_to_name(mqtt_err));
+ }
```

Also applies to: 206-206
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@main/mimi.c` at line 171, the call ESP_ERROR_CHECK(mqtt_bot_start()) makes MQTT startup fatal; change it to invoke mqtt_bot_start(), capture the returned esp_err_t (e.g., esp_err_t err = mqtt_bot_start()), and if err != ESP_OK log a non-fatal warning (ESP_LOGW or equivalent) that includes the error value and continue without aborting; do the same replacement for the other occurrence of ESP_ERROR_CHECK(mqtt_bot_start()) so both startup sites treat broker/config errors as warnings rather than terminating the firmware.

main/channels/mqtt/mqtt_bot.c (4)
160-165: ⚠️ Potential issue | 🟠 Major: Don't store the full MQTT topic in `mimi_msg_t.chat_id`.

`main/bus/message_bus.h` caps `chat_id` at 96 bytes, and `main/mimi.c` later republishes MQTT responses using that value as the topic. Longer topics are truncated here, so replies can go to the wrong destination.
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 160 - 165, The code stores the entire MQTT topic into mimi_msg_t.chat_id which is limited (96 bytes) and gets truncated causing replies to the wrong topic; change mqtt_bot.c so instead of copying the full topic into msg.chat_id you extract a stable, short identifier (for example the last topic segment after the final '/' or another agreed client ID segment) and copy that into msg.chat_id using a safe copy that guarantees null-termination (e.g. snprintf/strlcpy) to fit sizeof(msg.chat_id)-1; update the strncpy usage around the msg.chat_id assignment (and any code that expects topic format) to use the extracted segment so replies republished from mimi.c go to the correct destination.
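A minimal sketch of the extraction this comment asks for, assuming the 96-byte `chat_id` limit from `message_bus.h`; `topic_to_chat_id` and `CHAT_ID_MAX` are illustrative names, not actual project symbols:

```c
#include <stdio.h>
#include <string.h>

#define CHAT_ID_MAX 96  /* mirrors mimi_msg_t.chat_id */

/* Copy only the text after the final '/' (or the whole topic if there is
 * no '/') into out, always NUL-terminated. Returns 0 on success, -1 if
 * the segment would not fit. */
static int topic_to_chat_id(const char *topic, char *out, size_t out_size)
{
    const char *seg = strrchr(topic, '/');
    seg = seg ? seg + 1 : topic;
    int n = snprintf(out, out_size, "%s", seg);
    return (n >= 0 && (size_t)n < out_size) ? 0 : -1;
}
```

Whether the last segment is actually a stable device identifier depends on the agreed topic scheme; the alternative in the review (a dedicated reply-topic field on the message) avoids that assumption entirely.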
106-179: ⚠️ Potential issue | 🔴 Critical: Reassemble fragmented MQTT payloads before parsing or deduping.

ESP-MQTT can split one publish across multiple `MQTT_EVENT_DATA` callbacks. This handler treats each callback as a complete message, so large JSON payloads are parsed/deduped/pushed as partial chunks, and continuation chunks may not even carry `topic`.

Does ESP-IDF's `MQTT_EVENT_DATA` deliver large payloads across multiple callbacks using `current_data_offset` and `total_data_len`, and is `topic` only present on the first fragment?
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 106 - 179, The handler currently treats each MQTT_EVENT_DATA fragment as a full message; change it to reassemble fragments using event->current_data_offset and event->total_data_len (and note topic is only provided on the first fragment) before running dedup_check_and_record, JSON parsing, or calling message_bus_push_inbound; implement a per-topic (or per-client) fragment buffer (e.g., mqtt_fragment_buffer keyed by topic or client id) that appends event->data chunks until current_data_offset + event->data_len == total_data_len, then operate on the full assembled payload, free the buffer after use, and handle error/timeout cases to avoid memory leaks or stale fragments.
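A broker-agnostic sketch of the reassembly the comment describes, using the same offset/total fields ESP-MQTT exposes; `frag_buffer_t` and `frag_append` are hypothetical helpers, and a real handler would also key the buffer per topic and handle timeouts and stale fragments:

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    char  *buf;      /* assembled payload (+1 byte for NUL) */
    size_t total;    /* expected total_data_len */
    size_t received; /* bytes accumulated so far */
} frag_buffer_t;

/* Append one fragment. Returns 1 when the payload is complete and
 * NUL-terminated in fb->buf, 0 while more fragments are pending, and
 * -1 on allocation failure or inconsistent offsets. */
static int frag_append(frag_buffer_t *fb, const char *data, size_t len,
                       size_t offset, size_t total)
{
    if (offset == 0) {                 /* first fragment: (re)allocate */
        free(fb->buf);
        fb->buf = malloc(total + 1);
        if (!fb->buf) return -1;
        fb->total = total;
        fb->received = 0;
    }
    if (!fb->buf || offset + len > fb->total) return -1;
    memcpy(fb->buf + offset, data, len);
    fb->received += len;
    if (fb->received == fb->total) {
        fb->buf[fb->total] = '\0';     /* full payload, safe to parse */
        return 1;
    }
    return 0;
}
```

Only when `frag_append` returns 1 would the handler run dedup, JSON parsing, and the message-bus push on `fb->buf`, then free it.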
229-250: ⚠️ Potential issue | 🟠 Major: Reject overlong MQTT config values instead of truncating them.

`mqtt_set_config()` and `mqtt_set_subscribe_topic()` accept arbitrary strings, but the runtime state is fixed-size, especially `s_client_id[64]`, `s_username[128]`, `s_password[128]`, and `s_subscribe_topic[128]`. `mqtt_bot_init()` then loads them with `strncpy()`, so a saved value can silently turn into a broken client ID, credential, or topic filter on the next boot.

Also applies to: 417-482
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 229 - 250, The code currently allows saving arbitrarily long MQTT config strings which later get truncated into fixed-size runtime buffers (s_client_id, s_username, s_password, s_subscribe_topic) during mqtt_bot_init() via strncpy; update the setters (mqtt_set_config(), mqtt_set_subscribe_topic() and any related save paths) to validate input lengths against the target buffer sizes (64 for s_client_id, 128 for s_username/s_password/s_subscribe_topic) and return an error (or reject) when the provided string length is >= the buffer size, instead of saving/truncating; also remove reliance on silent truncation in mqtt_bot_init() (where strncpy is used) by assuming saved values are already validated.
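A minimal sketch of the length validation suggested above; the buffer sizes mirror the ones named in the comment, while `mqtt_config_lengths_ok` and its -1 error return are illustrative stand-ins for the project's setters and `ESP_ERR_INVALID_ARG`:

```c
#include <string.h>

#define CLIENT_ID_MAX 64   /* mirrors s_client_id[64] */
#define CRED_MAX      128  /* mirrors s_username[128] / s_password[128] */

/* Returns 0 if every provided value fits its runtime buffer (including
 * the NUL terminator), -1 otherwise. NULL means "leave this field
 * unchanged" and is always accepted. */
static int mqtt_config_lengths_ok(const char *client_id,
                                  const char *username,
                                  const char *password)
{
    if (client_id && strlen(client_id) >= CLIENT_ID_MAX) return -1;
    if (username  && strlen(username)  >= CRED_MAX)      return -1;
    if (password  && strlen(password)  >= CRED_MAX)      return -1;
    return 0;
}
```

Running this check before any `nvs_set_str()` call means nothing oversized is ever persisted, so the `strncpy()` loads in init can no longer truncate a valid saved value.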
49-60: ⚠️ Potential issue | 🟠 Major: This dedup cache still drops legitimate repeated commands.

Hashing only `topic + payload` means an intentional second publish with the same body is discarded within the 64-entry window, and `combined[320]` truncates long payloads before hashing, so distinct messages can collapse onto the same key.
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 49 - 60, The dedup routine dedup_check_and_record currently truncates long payloads into combined[320] (losing uniqueness) and also lacks a final return value; fix it by building the combined key without a fixed-size truncating buffer (allocate size = strlen(topic) + 1 + strlen(payload) + 1, concatenate topic, ':' and payload, call fnv1a64 on the full buffer, then free it), then use that uint64_t key to check/record in s_seen_msg_keys using s_seen_msg_idx and wrap by MQTT_DEDUP_CACHE_SIZE; ensure the function returns true for duplicates and false for newly recorded keys. Include references to dedup_check_and_record, combined (replace), fnv1a64, s_seen_msg_keys, s_seen_msg_idx, and MQTT_DEDUP_CACHE_SIZE.main/channels/mqtt/mqtt_bot.h (1)
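A sketch of the truncation fix using a streaming FNV-1a 64-bit hash, so the full topic and payload feed the key without any intermediate buffer. Note this keeps content-based dedup semantics, so the other concern (dropping intentional repeats) would still need a real message-ID field; names mirror the review's symbols but are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MQTT_DEDUP_CACHE_SIZE 64

static uint64_t s_seen_msg_keys[MQTT_DEDUP_CACHE_SIZE];
static size_t   s_seen_msg_idx;

/* Feed bytes into a running FNV-1a 64 hash (standard constants). */
static uint64_t fnv1a64_update(uint64_t h, const void *data, size_t len)
{
    const unsigned char *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;          /* FNV prime */
    }
    return h;
}

/* Returns true if (topic, payload) was seen recently, else records it. */
static bool dedup_check_and_record(const char *topic, const char *payload)
{
    uint64_t key = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    key = fnv1a64_update(key, topic, strlen(topic));
    key = fnv1a64_update(key, ":", 1);      /* separator avoids aliasing */
    key = fnv1a64_update(key, payload, strlen(payload));

    for (size_t i = 0; i < MQTT_DEDUP_CACHE_SIZE; i++) {
        if (s_seen_msg_keys[i] == key) return true;
    }
    s_seen_msg_keys[s_seen_msg_idx] = key;
    s_seen_msg_idx = (s_seen_msg_idx + 1) % MQTT_DEDUP_CACHE_SIZE;
    return false;
}
```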
24-27: ⚠️ Potential issue | 🟡 Minor: Fix the stale `chat_id` wording in the send API docs.

This comment still describes `topic` as a `chat_id`, which does not match the MQTT API and will mislead callers reading the header.

📝 Minimal doc fix

```diff
- * @param topic The topic to publish to (chat_id is used as topic)
+ * @param topic The MQTT topic to publish to
```
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.h` around lines 24 - 27, Update the header comment above the MQTT send function in mqtt_bot.h: remove the stale "chat_id is used as topic" wording and clarify that the parameter topic is an MQTT topic string (e.g., "topic — The MQTT topic to publish to") and not a chat_id; update the param description for topic in the doc block that documents the send/publish API (the comment block that lists `@param` topic and `@param` text) so it accurately describes MQTT topics.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main/channels/mqtt/mqtt_bot.c`:
- Around line 433-437: The setter currently only stores non-empty client_id and
never clears a previously stored value; update the branch that handles empty or
NULL client_id to remove the persisted key and reset the in-memory client id so
auto-generation can resume: call nvs_erase_key(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID)
(or equivalent NVS delete) when client_id is empty, and set s_client_id[0] =
'\0' (or use strncpy with an empty string) so any previously saved custom ID is
cleared; keep the existing nvs_set_str and strncpy logic in the non-empty
client_id path.
- Around line 357-367: The current logic uses strstr() and strncpy() which can
rewrite non-trailing "/request" occurrences and can overflow or leave
response_topic unsafe; change to check for a trailing suffix only by comparing
the end of topic to the literal "/request" (use topic_len = strlen(topic) and
suffix_len = strlen("/request") and strcmp(topic + topic_len - suffix_len,
"/request") == 0), and only then build the response by writing the prefix +
"/response" into response_topic with snprintf and check its return value against
sizeof(response_topic) to detect truncation and reject/return an error; for
non-matching topics either safely copy with snprintf (ensuring null-termination)
or reject when topic_len >= sizeof(response_topic) to avoid overflow. Ensure you
update logic around symbols response_topic, topic, and any error/return path to
handle rejection when output would be truncated.
In `@main/llm/llm_proxy.c`:
- Around line 197-209: Add a new helper provider_is_openai_compat() that returns
true for provider_is_openai() OR provider_is_glm(), then replace checks that
currently use provider_is_openai() for auth/compat behavior with
provider_is_openai_compat() — specifically update the Authorization header
branches in llm_http_direct() and llm_http_via_proxy() so GLM uses Bearer auth,
and update the OpenAI-style request payload branches in llm_chat_tools() (the
branches that currently gate OpenAI-style JSON) to use
provider_is_openai_compat() so GLM follows the same endpoint/payload/response
handling as OpenAI.
---
Duplicate comments:
In `@main/channels/mqtt/mqtt_bot.c`:
- Around line 160-165: The code stores the entire MQTT topic into
mimi_msg_t.chat_id which is limited (96 bytes) and gets truncated causing
replies to the wrong topic; change mqtt_bot.c so instead of copying the full
topic into msg.chat_id you extract a stable, short identifier (for example the
last topic segment after the final '/' or another agreed client ID segment) and
copy that into msg.chat_id using a safe copy that guarantees null-termination
(e.g. snprintf/strlcpy) to fit sizeof(msg.chat_id)-1; update the strncpy usage
around the msg.chat_id assignment (and any code that expects topic format) to
use the extracted segment so replies republished from mimi.c go to the correct
destination.
- Around line 106-179: The handler currently treats each MQTT_EVENT_DATA
fragment as a full message; change it to reassemble fragments using
event->current_data_offset and event->total_data_len (and note topic is only
provided on the first fragment) before running dedup_check_and_record, JSON
parsing, or calling message_bus_push_inbound; implement a per-topic (or
per-client) fragment buffer (e.g., mqtt_fragment_buffer keyed by topic or client
id) that appends event->data chunks until current_data_offset + event->data_len
== total_data_len, then operate on the full assembled payload, free the buffer
after use, and handle error/timeout cases to avoid memory leaks or stale
fragments.
- Around line 229-250: The code currently allows saving arbitrarily long MQTT
config strings which later get truncated into fixed-size runtime buffers
(s_client_id, s_username, s_password, s_subscribe_topic) during mqtt_bot_init()
via strncpy; update the setters (mqtt_set_config(), mqtt_set_subscribe_topic()
and any related save paths) to validate input lengths against the target buffer
sizes (64 for s_client_id, 128 for s_username/s_password/s_subscribe_topic) and
return an error (or reject) when the provided string length is >= the buffer
size, instead of saving/truncating; also remove reliance on silent truncation in
mqtt_bot_init() (where strncpy is used) by assuming saved values are already
validated.
- Around line 49-60: The dedup routine dedup_check_and_record currently
truncates long payloads into combined[320] (losing uniqueness) and also lacks a
final return value; fix it by building the combined key without a fixed-size
truncating buffer (allocate size = strlen(topic) + 1 + strlen(payload) + 1,
concatenate topic, ':' and payload, call fnv1a64 on the full buffer, then free
it), then use that uint64_t key to check/record in s_seen_msg_keys using
s_seen_msg_idx and wrap by MQTT_DEDUP_CACHE_SIZE; ensure the function returns
true for duplicates and false for newly recorded keys. Include references to
dedup_check_and_record, combined (replace), fnv1a64, s_seen_msg_keys,
s_seen_msg_idx, and MQTT_DEDUP_CACHE_SIZE.
In `@main/channels/mqtt/mqtt_bot.h`:
- Around line 24-27: Update the header comment above the MQTT send function in
mqtt_bot.h: remove the stale "chat_id is used as topic" wording and clarify that
the parameter topic is an MQTT topic string (e.g., "topic — The MQTT topic to
publish to") and not a chat_id; update the param description for topic in the
doc block that documents the send/publish API (the comment block that lists
`@param` topic and `@param` text) so it accurately describes MQTT topics.
In `@main/mimi.c`:
- Line 171: The call ESP_ERROR_CHECK(mqtt_bot_start()) makes MQTT startup fatal;
change it to invoke mqtt_bot_start(), capture the returned esp_err_t (e.g.,
esp_err_t err = mqtt_bot_start()), and if err != ESP_OK log a non-fatal warning
(ESP_LOGW or equivalent) that includes the error value and continue without
aborting; do the same replacement for the other occurrence of
ESP_ERROR_CHECK(mqtt_bot_start()) so both startup sites treat broker/config
errors as warnings rather than terminating the firmware.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b9298fc0-0e01-4e46-83be-0b94c2a5a43d
📒 Files selected for processing (5)
- main/channels/mqtt/mqtt_bot.c
- main/channels/mqtt/mqtt_bot.h
- main/llm/llm_proxy.c
- main/mimi.c
- main/mimi_secrets.h.example
```c
/* Convert request topic to response topic */
char response_topic[256];
if (strstr(topic, "/request")) {
    /* Replace /request with /response */
    size_t len = strlen(topic);
    size_t prefix_len = strstr(topic, "/request") - topic;
    snprintf(response_topic, sizeof(response_topic), "%.*s/response", (int)prefix_len, topic);
} else {
    /* No /request suffix, just use as-is */
    strncpy(response_topic, topic, sizeof(response_topic) - 1);
}
```
Only rewrite a trailing /request, and reject topics that overflow response_topic.
strstr() rewrites the first occurrence anywhere in the topic, so foo/request/bar becomes foo/response. The fallback strncpy() also leaves response_topic unsafe for oversized inputs.
🛠️ Safer topic rewrite
```diff
-    /* Convert request topic to response topic */
-    char response_topic[256];
-    if (strstr(topic, "/request")) {
-        /* Replace /request with /response */
-        size_t len = strlen(topic);
-        size_t prefix_len = strstr(topic, "/request") - topic;
-        snprintf(response_topic, sizeof(response_topic), "%.*s/response", (int)prefix_len, topic);
-    } else {
-        /* No /request suffix, just use as-is */
-        strncpy(response_topic, topic, sizeof(response_topic) - 1);
-    }
+    /* Convert a trailing /request suffix to /response */
+    char response_topic[256];
+    static const char *suffix = "/request";
+    size_t topic_len = strlen(topic);
+    size_t suffix_len = strlen(suffix);
+    int written;
+
+    if (topic_len >= suffix_len &&
+        strcmp(topic + topic_len - suffix_len, suffix) == 0) {
+        written = snprintf(response_topic, sizeof(response_topic),
+                           "%.*s/response",
+                           (int)(topic_len - suffix_len), topic);
+    } else {
+        written = snprintf(response_topic, sizeof(response_topic), "%s", topic);
+    }
+
+    if (written < 0 || written >= (int)sizeof(response_topic)) {
+        return ESP_ERR_INVALID_ARG;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/channels/mqtt/mqtt_bot.c` around lines 357 - 367, The current logic uses
strstr() and strncpy() which can rewrite non-trailing "/request" occurrences and
can overflow or leave response_topic unsafe; change to check for a trailing
suffix only by comparing the end of topic to the literal "/request" (use
topic_len = strlen(topic) and suffix_len = strlen("/request") and strcmp(topic +
topic_len - suffix_len, "/request") == 0), and only then build the response by
writing the prefix + "/response" into response_topic with snprintf and check its
return value against sizeof(response_topic) to detect truncation and
reject/return an error; for non-matching topics either safely copy with snprintf
(ensuring null-termination) or reject when topic_len >= sizeof(response_topic)
to avoid overflow. Ensure you update logic around symbols response_topic, topic,
and any error/return path to handle rejection when output would be truncated.
```c
/* Save client ID (optional) */
if (client_id && client_id[0]) {
    nvs_set_str(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID, client_id);
    strncpy(s_client_id, client_id, sizeof(s_client_id) - 1);
}
```
Empty client_id never switches back to auto-generated mode.
The public API says an empty client ID should auto-generate, but this setter only handles the non-empty case. After a custom ID is stored once, there is no way to clear the NVS key and return to automatic IDs.
🛠️ Minimal fix
```diff
     /* Save client ID (optional) */
     if (client_id && client_id[0]) {
         nvs_set_str(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID, client_id);
         strncpy(s_client_id, client_id, sizeof(s_client_id) - 1);
+    } else {
+        nvs_erase_key(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID);
+        s_client_id[0] = '\0';
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
/* Save client ID (optional) */
if (client_id && client_id[0]) {
    nvs_set_str(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID, client_id);
    strncpy(s_client_id, client_id, sizeof(s_client_id) - 1);
} else {
    nvs_erase_key(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID);
    s_client_id[0] = '\0';
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/channels/mqtt/mqtt_bot.c` around lines 433 - 437, The setter currently
only stores non-empty client_id and never clears a previously stored value;
update the branch that handles empty or NULL client_id to remove the persisted
key and reset the in-memory client id so auto-generation can resume: call
nvs_erase_key(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID) (or equivalent NVS delete) when
client_id is empty, and set s_client_id[0] = '\0' (or use strncpy with an empty
string) so any previously saved custom ID is cleared; keep the existing
nvs_set_str and strncpy logic in the non-empty client_id path.
Actionable comments posted: 1
♻️ Duplicate comments (9)
main/mimi.c (1)
187-192: ⚠️ Potential issue | 🟠 Major: Don't abort boot on optional MQTT startup failures.

`ESP_ERROR_CHECK(mqtt_bot_start())` turns a bad broker URI or client init failure into a full firmware abort even though Telegram, Feishu, and WebSocket can run independently.

🛠️ Minimal fix

```diff
-    ESP_ERROR_CHECK(mqtt_bot_start());
+    esp_err_t mqtt_err = mqtt_bot_start();
+    if (mqtt_err != ESP_OK) {
+        ESP_LOGW(TAG, "MQTT start failed: %s", esp_err_to_name(mqtt_err));
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/mimi.c` around lines 187 - 192, The startup currently calls ESP_ERROR_CHECK(mqtt_bot_start()) which aborts boot on any MQTT init failure; change this to call mqtt_bot_start() directly, capture its esp_err_t result, and if it is not ESP_OK log a warning/error (e.g., using ESP_LOGW/ESP_LOGE) but do not call ESP_ERROR_CHECK or abort so other services (telegram_bot_start, feishu_bot_start, cron_service_start, agent_loop_start) can continue; update any surrounding error handling/comments to reflect MQTT is optional and ensure mqtt_bot_start() is still called normally when it succeeds.

main/llm/llm_proxy.c (1)
190-209: ⚠️ Potential issue | 🔴 Critical: GLM routing is still using the Anthropic auth path.

Adding `provider_is_glm()` here is incomplete: `llm_http_direct()` on lines 276-285 and `llm_http_via_proxy()` on lines 304-323 still send Bearer auth only for `provider_is_openai()`. GLM requests therefore fall through to `x-api-key`/`anthropic-version` and fail authentication.

🛠️ Suggested direction

```diff
 static bool provider_is_glm(void) { return strcmp(s_provider, "glm") == 0; }
+
+static bool provider_is_openai_compat(void)
+{
+    return provider_is_openai() || provider_is_glm();
+}
```

Then use `provider_is_openai_compat()` for the auth/header branches and the OpenAI-compatible payload/response branches instead of mixing separate `provider_is_openai()` and `provider_is_glm()` checks.

Does Zhipu AI GLM's OpenAI-compatible chat/completions API at open.bigmodel.cn require `Authorization: Bearer <API_KEY>` headers?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/llm/llm_proxy.c` around lines 190 - 209, The GLM provider check was added for URL/path but not for authorization headers, causing GLM requests to use Anthropic-style x-api-key headers in llm_http_direct() and llm_http_via_proxy(); update those auth/header branches to use provider_is_openai_compat() (instead of provider_is_openai()) so GLM (provider_is_glm()) and OpenAI-compatible providers send Authorization: Bearer <key> and the OpenAI-compatible payload/response branches consistently use provider_is_openai_compat(); adjust header selection logic in llm_http_direct(), llm_http_via_proxy() and any other auth-related code that currently checks provider_is_openai() to reference provider_is_openai_compat() so GLM follows the Bearer auth path.

main/channels/mqtt/mqtt_bot.c (5)
357-367: ⚠️ Potential issue | 🟠 Major: Only rewrite a trailing `/request`, and reject overflow.

`strstr()` rewrites the first occurrence anywhere in the topic, so `foo/request/bar` becomes `foo/response`. The fallback `strncpy()` also silently truncates oversized topics.

🔒 Safer topic rewrite

```diff
-    /* Convert request topic to response topic */
-    char response_topic[256];
-    if (strstr(topic, "/request")) {
-        /* Replace /request with /response */
-        size_t len = strlen(topic);
-        size_t prefix_len = strstr(topic, "/request") - topic;
-        snprintf(response_topic, sizeof(response_topic), "%.*s/response", (int)prefix_len, topic);
-    } else {
-        /* No /request suffix, just use as-is */
-        strncpy(response_topic, topic, sizeof(response_topic) - 1);
-    }
+    char response_topic[256];
+    static const char *suffix = "/request";
+    size_t topic_len = strlen(topic);
+    size_t suffix_len = strlen(suffix);
+    int written;
+
+    if (topic_len >= suffix_len &&
+        strcmp(topic + topic_len - suffix_len, suffix) == 0) {
+        written = snprintf(response_topic, sizeof(response_topic),
+                           "%.*s/response",
+                           (int)(topic_len - suffix_len), topic);
+    } else {
+        written = snprintf(response_topic, sizeof(response_topic), "%s", topic);
+    }
+
+    if (written < 0 || written >= (int)sizeof(response_topic)) {
+        return ESP_ERR_INVALID_ARG;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 357 - 367, The current logic in the MQTT handler that builds response_topic incorrectly replaces the first "/request" anywhere and silently truncates long topics; change it to only rewrite a trailing "/request" by checking if topic length >= strlen("/request") and using strncmp(topic + len - 8, "/request", 8) (or equivalent) to detect a suffix, compute the new length, and ensure the resulting response_topic fits within sizeof(response_topic); if it doesn't fit, return or reject the message instead of truncating, and always NUL-terminate response_topic (references: response_topic, topic, the request->response rewrite logic in the MQTT handler).
160-165: ⚠️ Potential issue | 🟠 Major: Keep the full MQTT reply topic outside `chat_id[96]`.

`mimi_msg_t.chat_id` is only 96 bytes in `main/bus/message_bus.h` lines 20-21, but `main/mimi.c` lines 91-96 later reuses it as the publish topic. Long MQTT topics will be truncated here and replies can go to the wrong destination.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 160 - 165, The MQTT topic is being copied into mimi_msg_t.chat_id which is only 96 bytes and gets truncated, breaking reply routing; update the message model and relay usage so the full MQTT topic is preserved: add a new field (e.g., reply_topic or full_topic) to mimi_msg_t in main/bus/message_bus.h, populate that field in mqtt_bot.c instead of truncating into chat_id (or strdup the topic into the new field), and change main/mimi.c to publish replies using the new full_topic field; ensure all copies use the correct buffer size or dynamic allocation and keep chat_id as the short session identifier if needed.
49-61: ⚠️ Potential issue | 🟠 Major: Don't dedupe on raw `topic + payload`.

This drops legitimate repeated commands on the same topic, and `combined[320]` truncates long payloads before hashing, so different messages can collapse to the same cache entry.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 49 - 61, The current dedup routine in dedup_check_and_record builds a fixed-size combined buffer (combined[320]) which truncates long payloads and causes false positives; replace the buffer-based approach by computing a full 64-bit signature without concatenation: compute a deterministic hash by hashing the topic and then feeding the entire payload (not a truncated copy) into the same hash stream (or compute fnv1a64(topic) then update/mix with fnv1a64(payload) in a collision-resistant way) to produce the key used against s_seen_msg_keys; keep the existing cache write logic (s_seen_msg_keys[s_seen_msg_idx] and s_seen_msg_idx = (s_seen_msg_idx + 1) % MQTT_DEDUP_CACHE_SIZE) but ensure you never truncate payload bytes when calling fnv1a64 so different long messages cannot collapse to the same key.
417-458: ⚠️ Potential issue | 🟠 Major: Reject oversized MQTT settings and clear empty `client_id`.

These setters can persist values that are larger than `s_client_id[64]`, `s_username[128]`, `s_password[128]`, or `s_subscribe_topic[128]` and then silently truncate them on load/copy. Also, an empty `client_id` never erases the old NVS key, so the documented auto-generated mode cannot be restored once a custom ID has been saved.

🛠️ Expected shape of the fix

```diff
+    if (client_id && strlen(client_id) >= sizeof(s_client_id)) {
+        nvs_close(nvs);
+        return ESP_ERR_INVALID_ARG;
+    }
+    if (username && strlen(username) >= sizeof(s_username)) {
+        nvs_close(nvs);
+        return ESP_ERR_INVALID_ARG;
+    }
+    if (password && strlen(password) >= sizeof(s_password)) {
+        nvs_close(nvs);
+        return ESP_ERR_INVALID_ARG;
+    }
     /* Save client ID (optional) */
     if (client_id && client_id[0]) {
         nvs_set_str(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID, client_id);
         strncpy(s_client_id, client_id, sizeof(s_client_id) - 1);
+    } else {
+        nvs_erase_key(nvs, MIMI_NVS_KEY_MQTT_CLIENT_ID);
+        s_client_id[0] = '\0';
     }
```

The init path should also reject and clear oversized stored values instead of truncating them into the runtime buffers.
Also applies to: 466-483
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 417 - 458, The mqtt_set_config function currently accepts and silently truncates oversized inputs into runtime buffers (s_client_id, s_username, s_password, s_subscribe_topic) and never erases the saved client_id when an empty client_id is provided; update mqtt_set_config to validate lengths before writing to NVS: if any input exceeds its buffer capacity (e.g., s_client_id[64], s_username[128], s_password[128], s_subscribe_topic[128]) return an error and do not call nvs_set_str or copy, and when client_id is empty call nvs_erase_key(MIMI_NVS_KEY_MQTT_CLIENT_ID) and clear s_client_id[0] = '\0' so the auto-generated mode can be restored; also check nvs_set_str/nvs_erase_key return values and ensure any existing oversized stored values are cleared on init (where stored values are loaded) instead of being truncated into the runtime buffers.
106-179: ⚠️ Potential issue | 🔴 Critical: Assemble fragmented `MQTT_EVENT_DATA` before parsing.

ESP-MQTT can deliver one PUBLISH across multiple `MQTT_EVENT_DATA` callbacks via `current_data_offset`/`total_data_len`. Processing each callback independently means large JSON payloads get deduped, parsed, and pushed as partial messages; later fragments may also arrive without the topic at all.

In ESP-IDF ESP-MQTT, can a single incoming publish be delivered across multiple `MQTT_EVENT_DATA` callbacks using `current_data_offset` and `total_data_len`, with the topic only present on the first callback?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.c` around lines 106 - 179, The MQTT_EVENT_DATA handler currently parses and dedups each callback fragment independently, but ESP-MQTT may deliver one PUBLISH across multiple callbacks using event->current_data_offset and event->total_data_len (topic is often present only on the first fragment); change the handler so it assembles fragments into a temporary buffer keyed by the incoming packet (use topic or a per-connection assembly state) until event->current_data_offset + event->data_len == event->total_data_len, then perform dedup_check_and_record, JSON parsing (cJSON_Parse), strdup content, and message_bus_push_inbound on the fully assembled payload; ensure you capture the topic from the first fragment, manage allocation/free of the assembly buffer, handle errors (out-of-memory, queue full) exactly where full message processing happens, and keep symbols like MQTT_EVENT_DATA, event->current_data_offset, event->total_data_len, dedup_check_and_record, cJSON_Parse, and message_bus_push_inbound to locate the change.main/cli/serial_cli.c (1)
121-150: ⚠️ Potential issue | 🟠 Major: Wire MQTT into `config_show` and `config_reset`.

`cmd_set_mqtt_config()` stores state under `MIMI_NVS_MQTT`, but `cmd_config_show()` on lines 619-633 never prints it and `cmd_config_reset()` on lines 639-650 never erases it. Users will think reset succeeded while the broker URI/credentials survive.

🧩 Minimal follow-up

```diff
 static int cmd_config_show(int argc, char **argv)
 {
     printf("=== Current Configuration ===\n");
@@
     print_config("Search Key", MIMI_NVS_SEARCH, MIMI_NVS_KEY_API_KEY, MIMI_SECRET_SEARCH_KEY, true);
     print_config("Tavily Key", MIMI_NVS_SEARCH, MIMI_NVS_KEY_TAVILY_KEY, MIMI_SECRET_TAVILY_KEY, true);
+    print_config("MQTT URI", MIMI_NVS_MQTT, MIMI_NVS_KEY_MQTT_URI, MIMI_SECRET_MQTT_URI, false);
+    print_config("MQTT User", MIMI_NVS_MQTT, MIMI_NVS_KEY_MQTT_USERNAME, MIMI_SECRET_MQTT_USERNAME, true);
+    print_config("MQTT Pass", MIMI_NVS_MQTT, MIMI_NVS_KEY_MQTT_PASSWORD, MIMI_SECRET_MQTT_PASSWORD, true);
+    print_config("MQTT Topic", MIMI_NVS_MQTT, MIMI_NVS_KEY_MQTT_SUB_TOPIC, MIMI_MQTT_DEFAULT_SUB_TOPIC, false);
     printf("=============================\n");
     return 0;
 }

 static int cmd_config_reset(int argc, char **argv)
 {
     const char *namespaces[] = {
-        MIMI_NVS_WIFI, MIMI_NVS_TG, MIMI_NVS_LLM, MIMI_NVS_PROXY, MIMI_NVS_SEARCH
+        MIMI_NVS_WIFI, MIMI_NVS_TG, MIMI_NVS_LLM, MIMI_NVS_PROXY, MIMI_NVS_SEARCH, MIMI_NVS_MQTT
     };
-    for (int i = 0; i < 5; i++) {
+    for (int i = 0; i < 6; i++) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/cli/serial_cli.c` around lines 121 - 150, cmd_set_mqtt_config stores MQTT state under MIMI_NVS_MQTT but cmd_config_show and cmd_config_reset currently ignore it; update cmd_config_show to load the stored MQTT config (retrieve URI, client_id, username — e.g. via your mqtt_get_config or reading the NVS key MIMI_NVS_MQTT) and print those fields alongside the other config items, and update cmd_config_reset to remove the MQTT entry (erase the MIMI_NVS_MQTT key or call the mqtt erase/clear function and commit) so reset actually clears broker URI/credentials; changes should reference the same symbols: MIMI_NVS_MQTT, cmd_set_mqtt_config, cmd_config_show, and cmd_config_reset.main/channels/mqtt/mqtt_bot.h (1)
24-27: ⚠️ Potential issue | 🟡 Minor: Update the send API docs to say `topic`, not `chat_id`.

The current comment documents MQTT publishing in terms of `chat_id`, which does not match this API and will confuse callers reading the header.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/channels/mqtt/mqtt_bot.h` around lines 24 - 27, Update the doc comment in mqtt_bot.h for the send text API to consistently refer to the parameter name "topic" instead of "chat_id": locate the comment above the send/sendText declaration that currently reads "The topic to publish to (chat_id is used as topic)" and change the parenthetical and any other occurrences to reference "topic" (e.g., "The topic to publish to (topic is used as topic)" or simply "The topic to publish to"). Ensure the `@param` lines and any related description use the parameter identifier topic so the documentation matches the function signature.
🧹 Nitpick comments (1)
main/bus/message_bus.h (1)

15-16: Use `MIMI_CHAN_NAME_MAX` in `mimi_msg_t`.

The new limit is still duplicated as `char channel[16]` on Line 20, so the macro can drift from the actual buffer size.

♻️ Minimal cleanup

```diff
-    char channel[16];                 /* "telegram", "websocket", "cli" */
+    char channel[MIMI_CHAN_NAME_MAX]; /* "telegram", "websocket", "cli" */
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main/bus/message_bus.h` around lines 15 - 16, Replace the hardcoded channel buffer size in the mimi_msg_t definition with the MIMI_CHAN_NAME_MAX macro so the limit is not duplicated; locate the mimi_msg_t struct and change the field declared as char channel[16] to use char channel[MIMI_CHAN_NAME_MAX], ensuring the macro defined above is used consistently for buffer sizing and any related allocations or checks reference MIMI_CHAN_NAME_MAX as well.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main/mimi_secrets.h.example`:
- Around line 16-20: Update the example MQTT URI in the MQTT Configuration
section so it promotes secure transport: change the sample value for
MIMI_SECRET_MQTT_URI to use TLS (e.g., "mqtts://broker.example.com:8883") or add
an explicit comment that the plain "mqtt://...:1883" example is local-test-only;
update any adjacent explanatory comment text for MIMI_SECRET_MQTT_CLIENT_ID,
MIMI_SECRET_MQTT_USERNAME, and MIMI_SECRET_MQTT_PASSWORD if needed to reflect
that credentials over non-TLS transports are insecure.
---
Duplicate comments:
In `@main/channels/mqtt/mqtt_bot.c`:
- Around line 357-367: The current logic in the MQTT handler that builds
response_topic incorrectly replaces the first "/request" anywhere and silently
truncates long topics; change it to only rewrite a trailing "/request" by
checking if topic length >= strlen("/request") and using strncmp(topic + len -
8, "/request", 8) (or equivalent) to detect a suffix, compute the new length,
and ensure the resulting response_topic fits within sizeof(response_topic); if
it doesn't fit, return or reject the message instead of truncating, and always
NUL-terminate response_topic (references: response_topic, topic, the
request->response rewrite logic in the MQTT handler).
- Around line 160-165: The MQTT topic is being copied into mimi_msg_t.chat_id
which is only 96 bytes and gets truncated, breaking reply routing; update the
message model and relay usage so the full MQTT topic is preserved: add a new
field (e.g., reply_topic or full_topic) to mimi_msg_t in main/bus/message_bus.h,
populate that field in mqtt_bot.c instead of truncating into chat_id (or strdup
the topic into the new field), and change main/mimi.c to publish replies using
the new full_topic field; ensure all copies use the correct buffer size or
dynamic allocation and keep chat_id as the short session identifier if needed.
- Around line 49-61: The current dedup routine in dedup_check_and_record builds
a fixed-size combined buffer (combined[320]) which truncates long payloads and
causes false positives; replace the buffer-based approach by computing a full
64-bit signature without concatenation: compute a deterministic hash by hashing
the topic and then feeding the entire payload (not a truncated copy) into the
same hash stream (or compute fnv1a64(topic) then update/mix with
fnv1a64(payload) in a collision-resistant way) to produce the key used against
s_seen_msg_keys; keep the existing cache write logic
(s_seen_msg_keys[s_seen_msg_idx] and s_seen_msg_idx = (s_seen_msg_idx + 1) %
MQTT_DEDUP_CACHE_SIZE) but ensure you never truncate payload bytes when calling
fnv1a64 so different long messages cannot collapse to the same key.
- Around line 417-458: The mqtt_set_config function currently accepts and
silently truncates oversized inputs into runtime buffers (s_client_id,
s_username, s_password, s_subscribe_topic) and never erases the saved client_id
when an empty client_id is provided; update mqtt_set_config to validate lengths
before writing to NVS: if any input exceeds its buffer capacity (e.g.,
s_client_id[64], s_username[128], s_password[128], s_subscribe_topic[128])
return an error and do not call nvs_set_str or copy, and when client_id is empty
call nvs_erase_key(MIMI_NVS_KEY_MQTT_CLIENT_ID) and clear s_client_id[0] = '\0'
so the auto-generated mode can be restored; also check nvs_set_str/nvs_erase_key
return values and ensure any existing oversized stored values are cleared on
init (where stored values are loaded) instead of being truncated into the
runtime buffers.
- Around line 106-179: The MQTT_EVENT_DATA handler currently parses and dedups
each callback fragment independently, but ESP-MQTT may deliver one PUBLISH
across multiple callbacks using event->current_data_offset and
event->total_data_len (topic is often present only on the first fragment);
change the handler so it assembles fragments into a temporary buffer keyed by
the incoming packet (use topic or a per-connection assembly state) until
event->current_data_offset + event->data_len == event->total_data_len, then
perform dedup_check_and_record, JSON parsing (cJSON_Parse), strdup content, and
message_bus_push_inbound on the fully assembled payload; ensure you capture the
topic from the first fragment, manage allocation/free of the assembly buffer,
handle errors (out-of-memory, queue full) exactly where full message processing
happens, and keep symbols like MQTT_EVENT_DATA, event->current_data_offset,
event->total_data_len, dedup_check_and_record, cJSON_Parse, and
message_bus_push_inbound to locate the change.
In `@main/channels/mqtt/mqtt_bot.h`:
- Around line 24-27: Update the doc comment in mqtt_bot.h for the send text API
to consistently refer to the parameter name "topic" instead of "chat_id": locate
the comment above the send/sendText declaration that currently describes the
destination in terms of chat_id and change it to read simply "The topic to
publish to". Ensure the `@param` lines and any related description use the
parameter identifier topic so the documentation matches the function signature.
In `@main/cli/serial_cli.c`:
- Around line 121-150: cmd_set_mqtt_config stores MQTT state under MIMI_NVS_MQTT
but cmd_config_show and cmd_config_reset currently ignore it; update
cmd_config_show to load the stored MQTT config (retrieve URI, client_id,
username — e.g. via your mqtt_get_config or reading the NVS key MIMI_NVS_MQTT)
and print those fields alongside the other config items, and update
cmd_config_reset to remove the MQTT entry (erase the MIMI_NVS_MQTT key or call
the mqtt erase/clear function and commit) so reset actually clears broker
URI/credentials; changes should reference the same symbols: MIMI_NVS_MQTT,
cmd_set_mqtt_config, cmd_config_show, and cmd_config_reset.
In `@main/llm/llm_proxy.c`:
- Around line 190-209: The GLM provider check was added for URL/path but not for
authorization headers, causing GLM requests to use Anthropic-style x-api-key
headers in llm_http_direct() and llm_http_via_proxy(); update those auth/header
branches to use provider_is_openai_compat() (instead of provider_is_openai()) so
GLM (provider_is_glm()) and OpenAI-compatible providers send Authorization:
Bearer <key> and the OpenAI-compatible payload/response branches consistently
use provider_is_openai_compat(); adjust header selection logic in
llm_http_direct(), llm_http_via_proxy() and any other auth-related code that
currently checks provider_is_openai() to reference provider_is_openai_compat()
so GLM follows the Bearer auth path.
In `@main/mimi.c`:
- Around line 187-192: The startup currently calls
ESP_ERROR_CHECK(mqtt_bot_start()) which aborts boot on any MQTT init failure;
change this to call mqtt_bot_start() directly, capture its esp_err_t result, and
if it is not ESP_OK log a warning/error (e.g., using ESP_LOGW/ESP_LOGE) but do
not call ESP_ERROR_CHECK or abort so other services (telegram_bot_start,
feishu_bot_start, cron_service_start, agent_loop_start) can continue; update any
surrounding error handling/comments to reflect MQTT is optional and ensure
mqtt_bot_start() is still called normally when it succeeds.
---
Nitpick comments:
In `@main/bus/message_bus.h`:
- Around line 15-16: Replace the hardcoded channel buffer size in the mimi_msg_t
definition with the MIMI_CHAN_NAME_MAX macro so the limit is not duplicated;
locate the mimi_msg_t struct and change the field declared as char channel[16]
to use char channel[MIMI_CHAN_NAME_MAX], ensuring the macro defined above is
used consistently for buffer sizing and any related allocations or checks
reference MIMI_CHAN_NAME_MAX as well.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 78cb48ee-8bea-4338-9bf6-c288779fd8c8
📒 Files selected for processing (10)
- README.md
- main/CMakeLists.txt
- main/bus/message_bus.h
- main/channels/mqtt/mqtt_bot.c
- main/channels/mqtt/mqtt_bot.h
- main/cli/serial_cli.c
- main/llm/llm_proxy.c
- main/mimi.c
- main/mimi_config.h
- main/mimi_secrets.h.example
🚧 Files skipped from review as they are similar to previous changes (1)
- main/CMakeLists.txt
```c
/* MQTT Configuration */
#define MIMI_SECRET_MQTT_URI       ""   /* e.g. "mqtt://broker.hivemq.com:1883" */
#define MIMI_SECRET_MQTT_CLIENT_ID ""   /* optional: auto-generated if empty */
#define MIMI_SECRET_MQTT_USERNAME  ""   /* optional */
#define MIMI_SECRET_MQTT_PASSWORD  ""   /* optional */
```
Use a TLS MQTT example here.
Line 17’s only sample URI is plaintext mqtt://...:1883, which nudges users toward an insecure transport for bot traffic and optional credentials. Please switch the example to mqtts://...:8883 or explicitly label mqtt:// as local-test-only.
Suggested tweak
```diff
-#define MIMI_SECRET_MQTT_URI "" /* e.g. "mqtt://broker.hivemq.com:1883" */
+#define MIMI_SECRET_MQTT_URI "" /* e.g. "mqtts://broker.example.com:8883" */
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
/* MQTT Configuration */
#define MIMI_SECRET_MQTT_URI       ""   /* e.g. "mqtts://broker.example.com:8883" */
#define MIMI_SECRET_MQTT_CLIENT_ID ""   /* optional: auto-generated if empty */
#define MIMI_SECRET_MQTT_USERNAME  ""   /* optional */
#define MIMI_SECRET_MQTT_PASSWORD  ""   /* optional */
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@main/mimi_secrets.h.example` around lines 16 - 20, Update the example MQTT
URI in the MQTT Configuration section so it promotes secure transport: change
the sample value for MIMI_SECRET_MQTT_URI to use TLS (e.g.,
"mqtts://broker.example.com:8883") or add an explicit comment that the plain
"mqtt://...:1883" example is local-test-only; update any adjacent explanatory
comment text for MIMI_SECRET_MQTT_CLIENT_ID, MIMI_SECRET_MQTT_USERNAME, and
MIMI_SECRET_MQTT_PASSWORD if needed to reflect that credentials over non-TLS
transports are insecure.
Add a new MQTT communication channel following the existing channel pattern (init/start/send_message lifecycle). This enables MimiClaw to integrate with any MQTT broker for IoT applications and chat bots.
Changes:
Summary by CodeRabbit
New Features
Improvements
Documentation