feat: Add perplexity search tool #5
esafwan wants to merge 9 commits into legacy from add-perplexity-search-tool
Commits
- Add litellm>=1.0.0 dependency to pyproject.toml
- Create unified litellm.py provider supporting 100+ LLM providers
- Update run.py to route existing providers (OpenAI, Anthropic, Google, OpenRouter) to LiteLLM
- Maintain 100% backward compatibility:
  * Existing API keys work as-is
  * Existing model names auto-normalized (e.g., 'gpt-4-turbo' → 'openai/gpt-4-turbo'; sketched below)
  * Existing tools work as-is
  * No DocType changes needed
- Add LITELLM_MIGRATION.md documentation

Benefits:
- Single codebase (~287 lines vs ~800+ lines across 4 providers)
- Built-in retry logic, cost tracking, error handling
- Support for 100+ providers automatically
- Easy to add new providers via DocType without code changes
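The auto-normalization mentioned above could look roughly like the following; the prefix tuple, the model-to-provider mapping, and the `normalize_model_name` helper are illustrative assumptions, not the PR's actual code.

```python
# Hypothetical sketch of model-name normalization for LiteLLM routing.
# The prefix list and model->provider mapping are assumptions.

LITELLM_PREFIXES = ("openai/", "anthropic/", "gemini/", "openrouter/")

PROVIDER_BY_MODEL = {
    "gpt-4-turbo": "openai",
    "gpt-4o": "openai",
    "claude-3-5-sonnet-20241022": "anthropic",
    "gemini-1.5-pro": "gemini",
}

def normalize_model_name(model: str) -> str:
    """Map bare model names to LiteLLM's 'provider/model' form,
    e.g. 'gpt-4-turbo' -> 'openai/gpt-4-turbo'."""
    if model.startswith(LITELLM_PREFIXES):
        return model  # already prefixed, pass through unchanged
    provider = PROVIDER_BY_MODEL.get(model)
    return f"{provider}/{model}" if provider else model
```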
- Add after_install hook to check for the litellm package
- Improve error message with clear installation steps
- Update README with bench setup requirements step
- Provide helpful guidance when litellm is missing

The litellm package is already in pyproject.toml dependencies, so 'bench setup requirements' will install it automatically.
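A minimal sketch of such a hook, assuming the app is named `myapp` and the hook is wired up in hooks.py; the module path and message wording are illustrative, not the PR's exact code.

```python
# hooks.py (assumed wiring):  after_install = "myapp.install.after_install"
import frappe

def after_install():
    """Fail installation with a clear message if litellm is missing."""
    try:
        import litellm  # noqa: F401
    except ImportError:
        frappe.throw(
            "The 'litellm' package is not installed. It is listed in "
            "pyproject.toml, so run 'bench setup requirements' and then "
            "restart bench."
        )
```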
- Remove debug print statements from agent_integration.py
- Add specific exception handling for LiteLLM errors (see the sketch below):
  * InternalServerError (transient server errors)
  * RateLimitError (rate limit exceeded)
  * APIError (other API errors)
- Provide clearer error messages with model name context
- Model 'gpt-oss-120b' is valid; errors are now properly categorized
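The categorized handling might look like this sketch; `call_model` is a hypothetical wrapper, and the exceptions are LiteLLM's OpenAI-style classes. The more specific `InternalServerError` and `RateLimitError` must be caught before the general `APIError`.

```python
import litellm
from litellm.exceptions import APIError, InternalServerError, RateLimitError

def call_model(model: str, messages: list) -> str:
    """Hypothetical wrapper that surfaces categorized LiteLLM errors."""
    try:
        response = litellm.completion(model=model, messages=messages)
        return response.choices[0].message.content
    except InternalServerError as e:
        # Transient server-side failure; usually safe to retry later.
        raise RuntimeError(f"Transient server error for '{model}': {e}") from e
    except RateLimitError as e:
        raise RuntimeError(f"Rate limit exceeded for '{model}': {e}") from e
    except APIError as e:
        # Catch-all for other API failures; must come after the
        # more specific subclasses above.
        raise RuntimeError(f"API error for '{model}': {e}") from e
```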
- Fix temperature/top_p to read from agent.model_settings (from the Agent DocType)
- Previously tried to read from agent.temperature, which doesn't exist
- Add litellm.drop_params = True to handle models with restricted params (e.g., gpt-5 only supports temperature=1)
- Add debug logging when temperature is not found in model_settings
- Pass agent_name in context for better error debugging

Fixes an issue where the Agent DocType temperature (1.0) was being ignored and the default (0.7) was used instead.
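`drop_params` is a module-level LiteLLM switch; the call below is an illustrative example (with an assumed model name), not the provider's actual invocation.

```python
import litellm

# With drop_params enabled, LiteLLM drops parameters a model does not
# support instead of raising an error (e.g. a restricted temperature).
litellm.drop_params = True

response = litellm.completion(
    model="openai/gpt-4o",  # assumed model, for illustration only
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.7,  # dropped automatically if the model rejects it
    top_p=0.9,
)
```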
- Load the Agent DocType directly in the LiteLLM provider to get temperature/top_p
- Priority order: Agent DocType > agent.model_settings > fallback
- Add comprehensive debug logging to troubleshoot parameter access
- Each agent has its own temperature/prompt from the Agent DocType
- Fixes issue where the Agent DocType temperature (1.0) was not being used

This ensures the runner correctly gets each agent's individual settings from the Agent DocType, as documented in AGENTS.md. A resolution sketch follows.
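The priority order could be implemented along these lines; the `Agent` DocType name and the field names are assumptions taken from the commit messages, and the defaults mirror the values mentioned above.

```python
import frappe

DEFAULT_TEMPERATURE = 0.7
DEFAULT_TOP_P = 1.0

def resolve_sampling_params(agent):
    """Resolve temperature/top_p: Agent DocType > model_settings > defaults."""
    # 1. Prefer the Agent DocType, which stores each agent's own settings.
    try:
        doc = frappe.get_doc("Agent", agent.name)
        if doc.get("temperature") is not None:
            return doc.temperature, doc.get("top_p") or DEFAULT_TOP_P
    except frappe.DoesNotExistError:
        pass
    # 2. Fall back to model_settings carried on the in-memory agent.
    settings = getattr(agent, "model_settings", None) or {}
    # 3. Final fallback: hard-coded defaults.
    return (
        settings.get("temperature", DEFAULT_TEMPERATURE),
        settings.get("top_p", DEFAULT_TOP_P),
    )
```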
- Remove all debug logging statements from the LiteLLM provider
- Simplify temperature/top_p access logic (Agent DocType > model_settings > default)
- Update comments to be production-ready
- Remove temporary migration/testing documentation files:
  * LITELLM_MIGRATION.md
  * TESTING_INSTRUCTIONS.md
- Standardize code comments and error handling

Code is now production-ready with a clean, maintainable implementation.
- Document unified LiteLLM provider architecture
- Add details about model name normalization
- Document agent settings access (temperature/top_p from the Agent DocType)
- Add RunProvider and litellm.py documentation
- Update provider and model documentation with LiteLLM details
- Remove references to the old implementation; focus on the current architecture
- Add security considerations for the LiteLLM dependency
- Add Perplexity Search to Agent Tool Function types
- Implement handle_perplexity_search function (sketched below)
- Read API key from site_config (perplexity_api_key)
- Use the LiteLLM search provider for Perplexity AI
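Based on this commit message, the handler might look like the following sketch; the `perplexity/sonar` model name and the response handling are assumptions.

```python
import frappe
import litellm

def handle_perplexity_search(query: str) -> str:
    """Run a search query through Perplexity AI via LiteLLM."""
    # The key lives in site_config.json under "perplexity_api_key".
    api_key = frappe.conf.get("perplexity_api_key")
    if not api_key:
        frappe.throw("perplexity_api_key is not set in site_config.json")

    response = litellm.completion(
        model="perplexity/sonar",  # assumed Perplexity model name
        messages=[{"role": "user", "content": query}],
        api_key=api_key,
    )
    return response.choices[0].message.content
```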
Important: Merge after merging the LiteLLM PR.
Add Perplexity Search Tool
Adds Perplexity Search as a tool for agents via LiteLLM.
What it does
- Registers "Perplexity Search" as a new Agent Tool Function type
- Implements a handle_perplexity_search function that runs searches through Perplexity AI via LiteLLM
- Reads the API key from site_config (perplexity_api_key)
Configuration
Add your key to site_config.json:

```json
{ "perplexity_api_key": "pplx-..." }
```

Get API Key
Get your API key from: https://www.perplexity.ai/settings/api
Usage
Create an Agent Tool Function with type "Perplexity Search". This follows the same pattern as the Google Search tool implementation.