Description
Running `:n`, loading a thread from `:t`, or pressing Escape in normal mode while the LLM is streaming can cause a race condition. The streaming goroutine continues sending `LLMResponseMsg`/`LLMDoneMsg` after the chat state has been reset, which can:

- Append stale response chunks to the new/loaded conversation's `chatHistory`
- Persist the assistant message under the wrong (new or empty) `threadID`
- Race on `model.messages` (the goroutine writes on line 30 of `chat_stream.go`)
Proposed solution
Add a per-stream cancellable context and a streaming flag to `ChatModel`:

- `streamCtx, streamCancel := context.WithCancel(model.ctx)` created in `startLLMStream()`
- A `streaming bool` set `true` in `startLLMStream()` and `false` in `handleLLMDoneMsg`/`handleLLMErrorMsg`/the cancel handler
- A `cancelStream()` method that calls `streamCancel()`, clears `currentResponse`, and sets `streaming = false`
Cancel trigger points
- `Escape` in normal mode while streaming → cancel stream
- `:n` (`newChat()`) → cancel stream, then reset state
- `loadThread()` (selecting a thread from the `:t` list) → cancel stream, then load thread
`:t` itself does not cancel; it just opens the list overlay. The stream can continue underneath until a thread is actually selected.
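The trigger points above can be sketched as a small dispatch table. This is illustrative only: `streamer` and `handleCommand` are hypothetical stand-ins for the real key/command handling in the model's `Update`, not code from the repo:

```go
package main

import "fmt"

// streamer is a minimal stand-in for the part of ChatModel that can
// cancel a stream; cancelled records whether cancelStream ran.
type streamer struct {
	streaming bool
	cancelled bool
}

func (s *streamer) cancelStream() {
	s.cancelled = true
	s.streaming = false
}

// handleCommand sketches the trigger points: Escape, :n, and loadThread
// cancel an in-flight stream; :t only opens the thread-list overlay and
// lets the stream keep running underneath.
func (s *streamer) handleCommand(cmd string) string {
	switch cmd {
	case "escape":
		if s.streaming {
			s.cancelStream()
			return "cancelled"
		}
		return "noop"
	case ":n":
		s.cancelStream() // cancel first, then reset chat state
		return "new chat"
	case "loadThread":
		s.cancelStream() // cancel first, then load the selected thread
		return "thread loaded"
	case ":t":
		return "overlay open" // no cancellation here
	}
	return "unknown"
}

func main() {
	s := &streamer{streaming: true}
	fmt.Println(s.handleCommand(":t"), s.streaming)     // overlay open true
	fmt.Println(s.handleCommand("escape"), s.streaming) // cancelled false
}
```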
Post-cancel cleanup
- `handleLLMResponseMsg` should early-return if `!model.streaming`
- Distinguish an intentional cancel from errors (e.g. `LLMCancelledMsg` vs `LLMErrorMsg`) so cancelled streams don't show error messages
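A sketch of both cleanup points, assuming the message types named above (`LLMCancelledMsg` is the proposed new type; the struct fields and `handleStreamEnd` helper are invented for the example):

```go
package main

import "fmt"

// Messages the stream goroutine can emit.
type LLMResponseMsg struct{ Chunk string }
type LLMCancelledMsg struct{} // proposed: intentional cancellation
type LLMErrorMsg struct{ Err error }

type ChatModel struct {
	streaming       bool
	currentResponse string
	errorShown      bool
}

// handleLLMResponseMsg drops chunks that arrive after cancellation, so a
// stale goroutine can no longer append to the new conversation.
func (m *ChatModel) handleLLMResponseMsg(msg LLMResponseMsg) {
	if !m.streaming {
		return // stale chunk from a cancelled stream
	}
	m.currentResponse += msg.Chunk
}

// handleStreamEnd distinguishes an intentional cancel from a real error,
// so cancelled streams never surface an error message.
func (m *ChatModel) handleStreamEnd(msg interface{}) {
	m.streaming = false
	switch msg.(type) {
	case LLMCancelledMsg:
		// silent: the user asked for the cancel
	case LLMErrorMsg:
		m.errorShown = true
	}
}

func main() {
	m := &ChatModel{streaming: false} // stream already cancelled
	m.handleLLMResponseMsg(LLMResponseMsg{Chunk: "stale"})
	fmt.Println(m.currentResponse == "") // true: stale chunk dropped
	m.handleStreamEnd(LLMCancelledMsg{})
	fmt.Println(m.errorShown) // false: cancel shows no error
}
```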
Split from #247.