Enhance LLM integration and configuration #114
base: main
Conversation
- Added support for multiple LLM providers (OpenAI, Anthropic, LiteLLM) through a unified client interface.
- Updated environment configuration to include new API keys and options for LLM providers.
- Refactored analysis and tagging functions to use the new LLM client, improving modularity and maintainability.
- Enhanced test coverage for LLM-related functionality, ensuring robust integration and error handling.

This change streamlines the integration of various LLM services and improves the overall architecture of the analysis components.
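The "unified client interface" described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the names `LLMConfig`, `LLMResponse`, and `detect_vendor` mirror the identifiers mentioned in the PR summary, but their fields and the prefix rules are assumptions.

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str          # "openai", "anthropic", or "litellm"
    model: str
    timeout: float = 30.0  # per-request timeout in seconds
    max_retries: int = 3   # retries handled by the client, not the SDK

@dataclass
class LLMResponse:
    content: str
    provider: str
    model: str

def detect_vendor(model: str) -> str:
    """Infer the provider from a model-name prefix (illustrative rules)."""
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith(("gpt", "o1", "o3")):
        return "openai"
    return "litellm"  # fall back to LiteLLM's own routing
```

A design like this lets callers in `links/analyze` depend on one interface while the vendor-specific wiring stays inside `lib/llm_client.py`.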
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```python
assert response.content is not None
assert "4" in response.content
assert response.provider == "anthropic"
```
Integration tests use invalid Anthropic model names
Medium Severity
The Anthropic integration tests use model names claude-sonnet-4-5-20250514 and claude-haiku-4-5-20250514 which don't follow Anthropic's standard naming convention (claude-{version}-{variant}-{date} like claude-3-opus-20240229). The unit tests in the same codebase correctly use claude-3-opus-20240229, claude-3-sonnet-20240229, etc. These invalid names will cause API errors when running integration tests with a valid ANTHROPIC_API_KEY.
Additional Locations (1)
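The naming convention the review cites can be expressed as a simple check. This is an illustrative validator built only from the pattern described in the comment (`claude-{version}-{variant}-{date}`), not an official Anthropic rule or part of the PR.

```python
import re

# Pattern assumed from the review's examples, e.g. claude-3-opus-20240229
# and claude-3-5-sonnet-style names: version digits, a variant, an 8-digit date.
CONVENTION = re.compile(r"^claude-\d+(?:-\d+)?-(?:opus|sonnet|haiku)-\d{8}$")

def follows_convention(model: str) -> bool:
    """Return True if a model name matches the convention cited above."""
    return CONVENTION.fullmatch(model) is not None
```

A guard like this in the test suite would catch names such as `claude-sonnet-4-5-20250514`, which the review flags as not matching the cited format.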
```python
api_key=self.config.azure_api_key,
azure_endpoint=self.config.azure_api_base,
api_version=self.config.azure_api_version or "2024-02-15-preview",
)
```
Azure OpenAI client missing timeout and retry configuration
Medium Severity
The AzureOpenAI client initialization is missing timeout and max_retries=0 parameters that are present in the regular OpenAI client initialization (lines 165-169). This causes inconsistent behavior: Azure clients use default timeout/retry settings while regular OpenAI clients use the configured self.config.timeout and disable SDK retries to let tenacity handle them. This could cause Azure requests to timeout unexpectedly or trigger double retries (both SDK and tenacity).
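One way to keep the two client initializations consistent, as the review suggests, is to build the keyword arguments in one place. This is a sketch, not the PR's code: the `config` attribute names are modeled on the excerpt above, and the `2024-02-15-preview` default comes from it.

```python
from types import SimpleNamespace

def azure_client_kwargs(config) -> dict:
    """Build AzureOpenAI constructor kwargs aligned with the regular client."""
    return {
        "api_key": config.azure_api_key,
        "azure_endpoint": config.azure_api_base,
        "api_version": config.azure_api_version or "2024-02-15-preview",
        # Match the non-Azure client: use the configured timeout, and
        # disable SDK retries so tenacity owns them (no double retries).
        "timeout": config.timeout,
        "max_retries": 0,
    }

# Hypothetical config object for illustration only
cfg = SimpleNamespace(
    azure_api_key="test-key",
    azure_api_base="https://example.openai.azure.com",
    azure_api_version=None,
    timeout=15.0,
)
kwargs = azure_client_kwargs(cfg)
```

The dict would then be splatted into both `AzureOpenAI(**kwargs)` and, with the Azure-specific keys swapped out, the regular `OpenAI` constructor, so a future timeout change cannot drift between the two paths.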
The tests/storage directory was being confused with server/storage due to pythonpath settings in pytest.ini. This conftest.py ensures proper module discovery in the Docker test environment.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Move test_llm_client.py and test_llm_client_integration.py from server/lib/tests/ to tests/lib/ to follow the project test structure
- Update imports to use `lib.llm_client` (not `server.lib.llm_client`) since pytest.ini adds `server` to pythonpath
- Update tests/conftest.py to properly set up sys.path for both the project root and the server directory
- Add `testpaths = tests` to pytest.ini to limit test discovery

This fixes the CI import errors where pytest was confused by multiple `tests` directories in the project.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
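The sys.path setup the commit describes might look roughly like this in `tests/conftest.py`. The directory layout (a `server/` package next to `tests/` under the project root) is an assumption drawn from the commit message, not the repository's verified structure.

```python
# tests/conftest.py (sketch): make both the project root and server/
# importable so `lib.llm_client` resolves regardless of how pytest is invoked.
import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parent.parent  # project root (assumed layout)
SERVER = ROOT / "server"

for path in (ROOT, SERVER):
    entry = str(path)
    if entry not in sys.path:
        # Prepend so these directories win over any same-named packages
        # elsewhere on the path (the tests/storage vs server/storage clash).
        sys.path.insert(0, entry)
```

Prepending rather than appending matters here: it ensures the intended `tests` and `lib` packages shadow any look-alike directories pytest might otherwise discover first.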
- Use the modern `ENV key=value` format instead of the legacy `ENV key value`
- Remove the undefined `$PYTHONPATH` variable reference (not needed in a fresh container)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
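The two changes above amount to a one-line Dockerfile edit of this shape. The variable value shown (`/app/server`) is a hypothetical path for illustration; only the format change is taken from the commit message.

```dockerfile
# Legacy space-separated form (deprecated by Docker), referencing a
# $PYTHONPATH that is undefined in a fresh container:
#   ENV PYTHONPATH $PYTHONPATH:/app/server

# Modern key=value form, with no reference to the undefined variable:
ENV PYTHONPATH=/app/server
```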
Note
Adds a unified LLM client with retry/tracking and migrates analysis links to it for multi-provider support.
- `lib/llm_client.py` abstracts OpenAI, Anthropic, and LiteLLM with standardized `LLMConfig`, `LLMResponse`, retries, usage tracking, and vendor detection
- Migrates `links/analyze` and `links/analyze_and_label` to use the new client, updating metrics, vendor metadata, and JSON response handling
- Uses `env.test` with fallback to `env.test.example`
- Updates dependencies (`anthropic`, `litellm`, related deps) and docs (OpenAI config wording, how-to guides); adds `env.test.example`, `.gitignore` and lockfile updates

Written by Cursor Bugbot for commit 59f64a5.