This repository was archived by the owner on Jul 16, 2025. It is now read-only.
I am trying to migrate a project from direct API calls to llm-chain, and I can't find documentation anywhere on how to use it for a multi-turn conversation.
I have this specific example: the LLM is tasked with generating something and hits the maximum response length. With direct API calls I can detect that via `stop_reason`, append "continue" as the next message, and call the API again. How should I do this with llm-chain? There is no `stop_reason` in the response, and I also cannot append the last message to the MessageBag because tool messages are abstracted away there.
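For context, the continuation loop I currently have with direct API calls looks roughly like this. It is only a sketch: `callApi()` stands in for the real HTTP request, and the response shape assumes an OpenAI-style Chat Completions payload where `finish_reason === 'length'` signals a truncated answer.

```php
<?php
// Sketch of the continuation loop with direct API calls.
// callApi() is a placeholder for the actual HTTP request; the
// response shape follows the OpenAI-style chat completions API.

function generateFull(array $messages): string
{
    $full = '';
    do {
        $response = callApi($messages);        // hypothetical HTTP helper
        $choice   = $response['choices'][0];
        $full    .= $choice['message']['content'];

        // 'length' means the model hit the max-token limit mid-answer.
        $truncated = $choice['finish_reason'] === 'length';
        if ($truncated) {
            // Re-send the history plus the partial answer,
            // then ask the model to continue where it stopped.
            $messages[] = $choice['message'];
            $messages[] = ['role' => 'user', 'content' => 'continue'];
        }
    } while ($truncated);

    return $full;
}
```

This is the behaviour I am trying to reproduce, and I don't see how to express the `finish_reason` check or the message append with llm-chain's abstractions.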
Is there a way to continue a chain of conversation that I am missing? Or is my general approach wrong, and should this type of task be handled through tools somehow?
A similar issue arises with a feedback loop, where I want to check the result (for example, run a syntax check) and send it back for fixes, though that should probably be done with mandatory tools rather than multi-turn chat.
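To make the feedback-loop case concrete, this is the shape I have in mind, written as a hopeful sketch rather than working code: I am assuming `$chain->call($messages)` returns a response whose text is available via `getContent()`, that `MessageBag` has an immutable `with()` that appends a message, and `lintPhp()` is a hypothetical syntax checker returning `null` on success.

```php
<?php
// Sketch of the feedback loop, under the assumptions stated above.
// None of the llm-chain calls here are confirmed API; this is the
// pattern I would like to express, not a working implementation.

function generateValidCode($chain, $messages, int $maxRounds = 3): string
{
    for ($round = 0; $round < $maxRounds; $round++) {
        $code  = $chain->call($messages)->getContent();
        $error = lintPhp($code);               // hypothetical checker, null = OK
        if ($error === null) {
            return $code;
        }
        // Feed the failed attempt and the checker output back in,
        // then ask the model to fix it.
        $messages = $messages
            ->with(Message::ofAssistant($code))
            ->with(Message::ofUser("Fix this error: $error"));
    }
    throw new RuntimeException('No valid code after retries');
}
```

If this kind of loop is meant to be modelled as a tool the chain must call, a pointer to an example would already help a lot.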
Thank you for any help. I really like how llm-chain works (and that it even exists for PHP), but I just cannot wrap my head around this specific problem.