@esafwan (Contributor) commented Nov 3, 2025

Important: Merge after merging the LiteLLM PR.

Add Perplexity Search Tool

Adds Perplexity Search as a tool for agents via LiteLLM.

What it does

  • Adds "Perplexity Search" to Agent Tool Function types
  • Allows agents to search using Perplexity AI
  • Uses LiteLLM's search provider for Perplexity integration

Configuration

site_config.json:

{
  "perplexity_api_key": "pplx-..."
}

Get API Key

Get your API key from: https://www.perplexity.ai/settings/api
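A minimal sketch of how the tool might read this key. In a running Frappe site the key is typically available via `frappe.conf.get("perplexity_api_key")`; the standalone file read below is for illustration only, and the function name is an assumption:

```python
# Illustrative sketch only: reads perplexity_api_key from a site_config.json
# file directly. In a running Frappe site you would normally use
# frappe.conf.get("perplexity_api_key") instead.
import json

def get_perplexity_api_key(site_config_path: str) -> str:
    with open(site_config_path) as f:
        config = json.load(f)
    key = config.get("perplexity_api_key")
    if not key:
        raise ValueError("perplexity_api_key is not set in site_config.json")
    return key
```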

Usage

  1. Create an Agent Tool Function with type "Perplexity Search"
  2. Add it to your Agent's tool list
  3. The agent can use it to perform searches via Perplexity AI

This follows the same pattern as the Google Search tool implementation.

- Add litellm>=1.0.0 dependency to pyproject.toml
- Create unified litellm.py provider supporting 100+ LLM providers
- Update run.py to route existing providers (OpenAI, Anthropic, Google, OpenRouter) to LiteLLM
- Maintain 100% backward compatibility:
  * Existing API keys work as-is
  * Existing model names auto-normalized (e.g., 'gpt-4-turbo' → 'openai/gpt-4-turbo')
  * Existing tools work as-is
  * No DocType changes needed
- Add LITELLM_MIGRATION.md documentation
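The model-name normalization mentioned above might look roughly like this sketch. The prefix list and routing rules here are assumptions for illustration, not the actual provider code:

```python
# Hypothetical sketch of auto-normalizing bare model names into LiteLLM's
# provider/model form (e.g. 'gpt-4-turbo' -> 'openai/gpt-4-turbo').
KNOWN_PREFIXES = ("openai/", "anthropic/", "gemini/", "openrouter/")

def normalize_model_name(model: str) -> str:
    """Prefix bare model names with a provider so LiteLLM can route them."""
    if model.startswith(KNOWN_PREFIXES):
        return model  # already in provider/model form
    if model.startswith("gpt-"):
        return f"openai/{model}"
    if model.startswith("claude-"):
        return f"anthropic/{model}"
    if model.startswith("gemini-"):
        return f"gemini/{model}"
    return model  # leave unrecognized names untouched
```

Because unrecognized names pass through unchanged, existing configurations keep working even if a model is not covered by the rules.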

Benefits:
- Single codebase (~287 lines vs ~800+ lines across 4 providers)
- Built-in retry logic, cost tracking, error handling
- Support for 100+ providers automatically
- Easy to add new providers via DocType without code changes

- Add after_install hook to check for litellm package
- Improve error message with clear installation steps
- Update README with bench setup requirements step
- Provide helpful guidance when litellm is missing

The litellm package is already in pyproject.toml dependencies,
so 'bench setup requirements' will install it automatically.
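The after_install check might be sketched as below; the function names are assumptions, not the hook's actual code:

```python
# Hypothetical sketch of the after_install hook: verify that litellm is
# importable and fail with actionable installation steps if it is missing.
import importlib.util

def package_available(name: str) -> bool:
    """Return True if the named package can be imported."""
    return importlib.util.find_spec(name) is not None

def check_litellm_installed() -> None:
    """after_install hook body: raise with clear guidance when litellm is absent."""
    if not package_available("litellm"):
        raise RuntimeError(
            "The 'litellm' package is required but not installed.\n"
            "Run: bench setup requirements\n"
            "Or:  pip install 'litellm>=1.0.0'"
        )
```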

- Remove debug print statements from agent_integration.py
- Add specific exception handling for LiteLLM errors:
  * InternalServerError (transient server errors)
  * RateLimitError (rate limit exceeded)
  * APIError (other API errors)
- Provide clearer error messages with model name context
- Model 'gpt-oss-120b' is valid; errors are now properly categorized
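The categorized error handling could be sketched as follows. LiteLLM raises typed exceptions with these names; stand-in classes are defined here so the example is self-contained, and the message format with model/agent context is an assumption:

```python
# Illustrative sketch of categorizing LiteLLM errors into clearer messages.
# Stand-in exception classes; the real ones come from the litellm package.
class InternalServerError(Exception): ...
class RateLimitError(Exception): ...
class APIError(Exception): ...

def describe_error(exc: Exception, model: str, agent_name: str) -> str:
    """Map a typed LiteLLM error to a message with model/agent context."""
    if isinstance(exc, InternalServerError):
        return f"[{agent_name}] Transient server error for model '{model}'; retry later."
    if isinstance(exc, RateLimitError):
        return f"[{agent_name}] Rate limit exceeded for model '{model}'."
    if isinstance(exc, APIError):
        return f"[{agent_name}] API error for model '{model}': {exc}"
    raise exc  # unknown errors propagate unchanged
```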

- Fix temperature/top_p to read from agent.model_settings (on the Agent DocType)
- Previously the code read agent.temperature, which does not exist
- Add litellm.drop_params = True to handle models with restricted params (e.g., gpt-5 only supports temperature=1)
- Add debug logging when temperature not found in model_settings
- Pass agent_name in context for better error debugging

Fixes issue where Agent DocType temperature (1.0) was being ignored and default (0.7) was used instead.

- Load Agent DocType directly in LiteLLM provider to get temperature/top_p
- Priority order: Agent DocType > agent.model_settings > fallback
- Add comprehensive debug logging to troubleshoot parameter access
- Each agent has its own temperature/prompt from Agent DocType
- Fixes issue where Agent DocType temperature (1.0) was not being used

This ensures the runner correctly gets each agent's individual settings
from the Agent DocType as documented in AGENTS.md.
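The priority order described above (Agent DocType > agent.model_settings > fallback) can be sketched as a small resolver; the field names and the 0.7 default are taken from this description, everything else is an assumption:

```python
# Hypothetical sketch of the settings-priority logic:
# Agent DocType > agent.model_settings > hard-coded fallback.
DEFAULT_TEMPERATURE = 0.7

def resolve_temperature(agent_doc: dict, model_settings: dict) -> float:
    """Pick temperature from the Agent DocType first, then model_settings."""
    if agent_doc.get("temperature") is not None:
        return agent_doc["temperature"]
    if model_settings.get("temperature") is not None:
        return model_settings["temperature"]
    return DEFAULT_TEMPERATURE
```

With this ordering, an Agent DocType temperature of 1.0 wins over any other source, which is exactly the bug the fix addresses.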

- Remove all debug logging statements from LiteLLM provider
- Simplify temperature/top_p access logic (Agent DocType > model_settings > default)
- Update comments to be production-ready
- Remove temporary migration/testing documentation files:
  * LITELLM_MIGRATION.md
  * TESTING_INSTRUCTIONS.md
- Standardize code comments and error handling

Code is now production-ready with clean, maintainable implementation.

- Document unified LiteLLM provider architecture
- Add details about model name normalization
- Document agent settings access (temperature/top_p from Agent DocType)
- Add RunProvider and litellm.py documentation
- Update provider and model documentation with LiteLLM details
- Remove references to old implementation, focus on current architecture
- Add security considerations for LiteLLM dependency

- Add Perplexity Search to Agent Tool Function types
- Implement handle_perplexity_search function
- Read API key from site_config (perplexity_api_key)
- Uses LiteLLM search provider for Perplexity AI
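A rough sketch of what handle_perplexity_search might assemble. The model name "perplexity/sonar" and the helper name are assumptions, not the actual tool code; the request-building part is shown as a pure function, with the LiteLLM call indicated in comments since it needs the package and a live key:

```python
# Hypothetical sketch: build the kwargs for a LiteLLM completion call that
# routes a search query to Perplexity. Model name is an assumption.
def build_perplexity_request(query: str, api_key: str) -> dict:
    """Assemble kwargs for litellm.completion() to run a Perplexity search."""
    return {
        "model": "perplexity/sonar",
        "messages": [{"role": "user", "content": query}],
        "api_key": api_key,
    }

# The tool handler would then call (requires litellm and a valid pplx- key):
#   import litellm
#   response = litellm.completion(**build_perplexity_request(query, api_key))
#   answer = response.choices[0].message.content
```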
