fix(smolagents): Reuse AnyLLM client instance to avoid event loop errors#902
Merged
Conversation
Force-pushed from cd07ac7 to 8e961f1
HareeshBahuleyan (Contributor) approved these changes on Feb 4, 2026, with the comment:

Tested and confirmed working with a sample script using smolagents and the openai/gpt-4.1 model.
Force-pushed from 0dcfde2 to 9bcb046
Create an AnyLLM client instance once via AnyLLM.create() instead of returning the any_llm module from create_client(). This fixes "Event loop is closed" errors that occurred when making multiple completion calls, as the functional API created per-call async resources tied to an event loop that could be closed between calls.

Changes:
- Parse model_id using AnyLLM.split_model_provider() to extract the provider
- Store provider config separately from completion kwargs
- Create the AnyLLM instance in create_client() for reuse by ApiModel
- Add error handling for malformed model_id with a clear message
- Add unit tests for the AnyLLMModel class, including a regression test

Fixes #824
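The failure mode described above is general to asyncio, not specific to any_llm: a resource created inside one `asyncio.run()` call is bound to that run's event loop, which is closed when the run returns, so reusing the resource from a later call fails. A minimal, self-contained reproduction of the pattern (the `LoopBoundResource` class is a hypothetical stand-in for the per-call async resources the functional API created):

```python
import asyncio


class LoopBoundResource:
    """Stand-in for an async resource (e.g. an HTTP session) pinned to the
    event loop that was running when it was created."""

    def __init__(self):
        self.loop = asyncio.get_running_loop()

    async def use(self) -> str:
        # If the creating loop is closed or no longer the running loop, the
        # resource is stale; this mirrors the "Event loop is closed" errors.
        if self.loop.is_closed() or self.loop is not asyncio.get_running_loop():
            raise RuntimeError("Event loop is closed")
        return "ok"


async def make_resource() -> LoopBoundResource:
    return LoopBoundResource()


# Each asyncio.run() spins up a fresh loop and closes it on exit, so a
# resource cached from one run cannot be used in the next.
res = asyncio.run(make_resource())
try:
    asyncio.run(res.use())
    reused_ok = True
except RuntimeError:
    reused_ok = False


# Creating and using the resource within the same run works fine, which is
# why per-call creation masks the bug until calls are split across loops.
async def create_and_use() -> str:
    return await LoopBoundResource().use()


same_loop_result = asyncio.run(create_and_use())
```

Reusing a single long-lived client, as this PR does, sidesteps the problem because the client manages its own loop-safe state instead of inheriting whichever loop happened to be running at call time.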
- Remove unnecessary create_client() call in test (already called by ApiModel.__init__)
- Remove redundant comment about simulating completion calls
Force-pushed from 9bcb046 to 5d87019
Summary

Create an AnyLLM client instance once via AnyLLM.create() instead of returning the any_llm module.

Changes

- Parse model_id using AnyLLM.split_model_provider() to extract the provider and model
- Create the AnyLLM instance in create_client() for reuse by the ApiModel base class
- Pass allow_running_loop=True to completion calls to permit the sync API in async contexts
- Add error handling for malformed model_id with a clear error message
- Add unit tests for the AnyLLMModel class, including a regression test for issue #824 ("smolagents AnyLLM client should be created only once and reused for completions")

Test plan

- test_malformed_model_id_raises_clear_error
- test_parses_model_id_correctly
- test_create_client_creates_anyllm_instance
- test_client_reused_across_multiple_calls (regression test)

Fixes #824
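The behaviors the test plan above covers can be sketched with a small stub, assuming the shape of the real classes: `StubAnyLLM` is a hypothetical stand-in for the real AnyLLM client, and `split_model_provider` is a local helper mirroring what the PR describes `AnyLLM.split_model_provider()` doing (the actual any_llm signatures are not shown here).

```python
class StubAnyLLM:
    """Hypothetical stand-in for the AnyLLM client; illustrates reuse only."""

    def __init__(self, provider: str):
        self.provider = provider
        self.completions_served = 0  # counts calls, proving one instance is reused

    def completion(self, model: str, messages: list) -> str:
        self.completions_served += 1
        return f"[{self.provider}/{model}] reply #{self.completions_served}"


def split_model_provider(model_id: str) -> tuple[str, str]:
    """Parse 'provider/model', raising a clear error for malformed input."""
    provider, sep, model = model_id.partition("/")
    if not sep or not provider or not model:
        raise ValueError(
            f"Malformed model_id {model_id!r}: expected 'provider/model', "
            "e.g. 'openai/gpt-4.1'"
        )
    return provider, model


class AnyLLMModel:
    """Sketch of the fixed design: the client is created once, then reused."""

    def __init__(self, model_id: str):
        self.provider, self.model = split_model_provider(model_id)
        self.client = self.create_client()  # created exactly once

    def create_client(self) -> StubAnyLLM:
        # Before the fix, each completion call went through the module-level
        # functional API, creating fresh async resources tied to an event
        # loop that could already be closed. Returning one instance avoids that.
        return StubAnyLLM(self.provider)

    def generate(self, messages: list) -> str:
        return self.client.completion(self.model, messages)
```

A regression check in the spirit of `test_client_reused_across_multiple_calls` would issue two `generate()` calls on one model and assert that both went through the same client instance.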