Conversation

@howethomas howethomas commented Jan 22, 2026

  • Added support for multiple LLM providers (OpenAI, Anthropic, LiteLLM) through a unified client interface.
  • Updated environment configuration to include new API keys and options for LLM providers.
  • Refactored the analysis and tagging functions to use the new LLM client, improving modularity and maintainability.
  • Enhanced test coverage for LLM-related functionality, ensuring robust integration and error handling.

This change streamlines the integration of various LLM services and improves the overall architecture of the analysis components.


Note

Adds a unified LLM client with retry/tracking and migrates analysis links to it for multi-provider support.

  • New lib/llm_client.py abstracts OpenAI, Anthropic, and LiteLLM with standardized LLMConfig, LLMResponse, retries, usage tracking, and vendor detection
  • Refactors links/analyze and links/analyze_and_label to use the new client, updating metrics, vendor metadata, and JSON response handling
  • Extensive unit and optional integration tests for the client and links; test env loader prefers .env.test with fallback to env.test.example
  • Updates dependencies (anthropic, litellm, related deps) and docs (OpenAI config wording, how-to guides); adds env.test.example
  • Minor repo hygiene: broadened .gitignore and lockfile updates
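A minimal sketch of what the unified client surface described above could look like. All names, fields, and defaults here are assumptions inferred from this summary, not the actual lib/llm_client.py:

```python
# Hypothetical sketch of a unified LLM client surface: a standardized
# config, a standardized response, and prefix-based vendor detection.
# Field names and defaults are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LLMConfig:
    provider: str = "openai"      # "openai" | "anthropic" | "litellm"
    model: str = "gpt-4o-mini"
    timeout: float = 30.0         # request timeout in seconds
    max_retries: int = 3          # retries handled by tenacity, not the SDK

@dataclass
class LLMResponse:
    content: str
    provider: str
    usage: dict = field(default_factory=dict)  # token counts for usage tracking

def detect_vendor(model: str) -> str:
    """Guess the provider from the model-name prefix."""
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith(("gpt-", "o1", "o3")):
        return "openai"
    return "litellm"  # fall back to LiteLLM's own routing
```

With a shape like this, callers in links/analyze only deal with LLMConfig in and LLMResponse out, regardless of which provider serves the request.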

Written by Cursor Bugbot for commit 59f64a5.


@cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


assert response.content is not None
assert "4" in response.content
assert response.provider == "anthropic"


Integration tests use invalid Anthropic model names

Medium Severity

The Anthropic integration tests use model names claude-sonnet-4-5-20250514 and claude-haiku-4-5-20250514 which don't follow Anthropic's standard naming convention (claude-{version}-{variant}-{date} like claude-3-opus-20240229). The unit tests in the same codebase correctly use claude-3-opus-20240229, claude-3-sonnet-20240229, etc. These invalid names will cause API errors when running integration tests with a valid ANTHROPIC_API_KEY.
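One way to catch this class of mistake in the test suite is a naming-convention check. The regex below encodes the claude-{version}-{variant}-{date} convention cited above; it is a sketch, not an authoritative list of valid Anthropic model ids:

```python
import re

# Matches the claude-{version}-{variant}-{date} convention cited in the
# review, e.g. claude-3-opus-20240229 or claude-3-5-sonnet-20240620.
# A sketch only: not an exhaustive catalog of Anthropic model ids.
MODEL_ID = re.compile(r"^claude-3(?:-5)?-(?:opus|sonnet|haiku)-\d{8}$")

assert MODEL_ID.match("claude-3-opus-20240229")
assert MODEL_ID.match("claude-3-5-sonnet-20240620")
# The names flagged by the review do not fit the convention:
assert not MODEL_ID.match("claude-sonnet-4-5-20250514")
assert not MODEL_ID.match("claude-haiku-4-5-20250514")
```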

Additional Locations (1)


api_key=self.config.azure_api_key,
azure_endpoint=self.config.azure_api_base,
api_version=self.config.azure_api_version or "2024-02-15-preview",
)

Azure OpenAI client missing timeout and retry configuration

Medium Severity

The AzureOpenAI client initialization is missing timeout and max_retries=0 parameters that are present in the regular OpenAI client initialization (lines 165-169). This causes inconsistent behavior: Azure clients use default timeout/retry settings while regular OpenAI clients use the configured self.config.timeout and disable SDK retries to let tenacity handle them. This could cause Azure requests to timeout unexpectedly or trigger double retries (both SDK and tenacity).
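One hedged sketch of a fix: build the shared timeout/retry kwargs once and reuse them for both constructors, so the two code paths cannot drift apart. The config keys here are illustrative, not the actual LLMConfig fields:

```python
# Sketch: share timeout/retry settings between the Azure and regular
# OpenAI client constructors. Config keys are illustrative assumptions.
def client_kwargs(config: dict, azure: bool) -> dict:
    kwargs = {
        "timeout": config["timeout"],  # honor the configured timeout
        "max_retries": 0,              # disable SDK retries; tenacity retries instead
    }
    if azure:
        kwargs.update(
            api_key=config["azure_api_key"],
            azure_endpoint=config["azure_api_base"],
            api_version=config.get("azure_api_version") or "2024-02-15-preview",
        )
    else:
        kwargs["api_key"] = config["api_key"]
    return kwargs
```

With this in place, AzureOpenAI(**client_kwargs(cfg, azure=True)) and OpenAI(**client_kwargs(cfg, azure=False)) get identical timeout and retry behavior by construction.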


howethomas and others added 3 commits January 22, 2026 14:28
The tests/storage directory was being confused with server/storage
due to pythonpath settings in pytest.ini. This conftest.py ensures
proper module discovery in the Docker test environment.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Move test_llm_client.py and test_llm_client_integration.py from
  server/lib/tests/ to tests/lib/ to follow project test structure
- Update imports to use 'lib.llm_client' (not 'server.lib.llm_client')
  since pytest.ini adds 'server' to pythonpath
- Update tests/conftest.py to properly set up sys.path for both
  project root and server directory
- Add testpaths = tests to pytest.ini to limit test discovery

This fixes the CI import errors where pytest was confused by
multiple 'tests' directories in the project.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
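The path setup this commit describes could be sketched as a hypothetical tests/conftest.py; the exact directory layout is assumed from the message above:

```python
# Hypothetical tests/conftest.py sketch: put both the project root and
# the server/ directory on sys.path so tests can import 'lib.llm_client'
# instead of 'server.lib.llm_client'. Layout assumed from the commit message.
import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parent.parent  # project root, assuming tests/ sits under it

for path in (ROOT, ROOT / "server"):
    p = str(path)
    if p not in sys.path:
        sys.path.insert(0, p)  # prepend so these win over stale entries
```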
- Use modern ENV key=value format instead of legacy ENV key value
- Remove undefined $PYTHONPATH variable reference (not needed in fresh container)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
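For reference, the two ENV forms look like this (keys are illustrative; Docker deprecates the space-separated form):

```dockerfile
# Legacy space-separated form (deprecated):
#   ENV PYTHONUNBUFFERED 1
# Modern key=value form used by this commit:
ENV PYTHONUNBUFFERED=1
```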