Releases: TianyiPeng/LLM_batch_helper
Release v0.4.0: Google Gemini Provider Support
What's New in v0.4.0
🚀 Major Feature Addition: Google Gemini Provider Support
Added
- New Provider Support: Google Gemini integration with direct API access
- Support for Gemini models: gemini-1.5-pro, gemini-1.5-flash, gemini-1.0-pro
- Gemini-specific error handling for safety filter blocks
- Environment variable support for both GEMINI_API_KEY and GOOGLE_API_KEY
- Comprehensive Gemini documentation and examples in README
- Test script for validating Gemini provider functionality
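The dual environment-variable lookup can be sketched as follows. This is an illustrative stand-in, not the library's actual code, and `resolve_gemini_api_key` is a hypothetical helper name:

```python
import os

def resolve_gemini_api_key() -> str:
    """Return the Gemini API key, preferring GEMINI_API_KEY over GOOGLE_API_KEY."""
    for var in ("GEMINI_API_KEY", "GOOGLE_API_KEY"):
        key = os.environ.get(var)
        if key:
            return key
    raise EnvironmentError(
        "Set GEMINI_API_KEY or GOOGLE_API_KEY to use the Gemini provider"
    )
```

Checking both names means the package works whether you followed Google's AI Studio conventions (`GEMINI_API_KEY`) or older Google Cloud conventions (`GOOGLE_API_KEY`).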
Changed
- Updated provider documentation to include Gemini across all examples
- Enhanced supported models section with Gemini model descriptions
- Added google-generativeai dependency to pyproject.toml
Technical Details
- Async support for Gemini API calls
- Proper error handling for content safety blocks
- Token usage tracking for Gemini responses
- Integration with existing caching and retry mechanisms
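At a high level, caching integration of this kind keys responses by provider, model, and prompt so that repeated requests (including retried batches) skip the API. A minimal sketch, not the library's actual implementation — `cached_call` is a hypothetical name:

```python
import hashlib
import json

_cache: dict = {}

def _cache_key(provider: str, model: str, prompt: str) -> str:
    """Stable key so identical requests hit the cache across runs."""
    payload = json.dumps({"provider": provider, "model": model, "prompt": prompt})
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_call(provider: str, model: str, prompt: str, call_fn) -> str:
    """Return a cached response if present, otherwise call the API and store it."""
    key = _cache_key(provider, model, prompt)
    if key not in _cache:
        _cache[key] = call_fn(prompt)
    return _cache[key]
```

Because the key hashes the full request identity, switching models or providers never returns a stale response for the same prompt.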
Full Changelog: v0.3.3...v0.4.0
v0.2.0 - OpenRouter Provider Support
🚀 Major Feature Release: OpenRouter Provider Support
🎯 What's New
OpenRouter Integration
- 100+ AI Models: Access to OpenAI, Anthropic, Meta, Google, Mistral, and more through one unified API
- DeepSeek Support: Full support for `deepseek/deepseek-v3.1-base` and `deepseek/deepseek-chat`
- Cost Effective: Often cheaper than going directly to providers
- Unified API: Single integration for multiple model providers
Enhanced Configuration
- Added `**kwargs` support to `LLMConfig` for provider-specific parameters
- Maintains full backward compatibility with existing code
- Enhanced error handling and retry logic
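The `**kwargs` pass-through pattern can be illustrated with a stripped-down stand-in (this is not the library's real `LLMConfig` class, just a sketch of the technique):

```python
class LLMConfig:
    """Minimal sketch of a config object that forwards provider-specific
    parameters via **kwargs. Illustrative only."""

    def __init__(self, model_name: str, temperature: float = 0.7,
                 max_completion_tokens: int = 500, **kwargs):
        self.model_name = model_name
        self.temperature = temperature
        self.max_completion_tokens = max_completion_tokens
        # Anything the core config does not recognize is kept and can be
        # passed through to the provider's request payload unchanged.
        self.extra_params = kwargs
```

For example, `LLMConfig("deepseek/deepseek-chat", top_p=0.9)` would keep `top_p` in `extra_params` without the core config needing to know every provider's parameter set — which is why existing code that never passes extra kwargs keeps working unchanged.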
📖 Quick Start with OpenRouter
```bash
# Install the latest version
pip install llm-batch-helper==0.2.0

# Set up your API key
export OPENROUTER_API_KEY="your-openrouter-api-key"
```

```python
from llm_batch_helper import LLMConfig, process_prompts_batch

# Use any OpenRouter model
config = LLMConfig(
    model_name="deepseek/deepseek-v3.1-base",  # or openai/gpt-4o, anthropic/claude-3-5-sonnet, etc.
    temperature=0.7,
    max_completion_tokens=500
)

results = await process_prompts_batch(
    config=config,
    provider="openrouter",  # New provider!
    prompts=your_prompts
)
```
🌟 Supported Models (Examples)
DeepSeek Models
- `deepseek/deepseek-v3.1-base` - Latest base model
- `deepseek/deepseek-chat` - Chat-optimized version
OpenAI Models via OpenRouter
- `openai/gpt-4o`
- `openai/gpt-4o-mini`
- `openai/gpt-4-turbo`
Anthropic Models
- `anthropic/claude-3-5-sonnet`
- `anthropic/claude-3-haiku`
Meta Models
- `meta-llama/llama-3.1-405b-instruct`
- `meta-llama/llama-3.1-70b-instruct`
Google Models
- `google/gemini-pro-1.5`
- `google/gemini-flash-1.5`
🔧 Technical Improvements
- Full integration with existing caching and verification systems
- Proper retry logic with exponential backoff
- Token usage tracking and monitoring
- Credit-based rate limiting support
- Enhanced provider comparison documentation
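Retry with exponential backoff, as mentioned above, typically doubles the wait after each failed attempt and adds jitter so concurrent requests do not retry in lockstep. A minimal async sketch under those assumptions (`with_retries` is a hypothetical helper, not this package's API):

```python
import asyncio
import random

async def with_retries(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry an async call with exponential backoff plus jitter.

    Illustrative sketch only; the library's actual retry policy may differ.
    """
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the last error to the caller.
            # Delays grow 1x, 2x, 4x, ... of base_delay, with random jitter
            # to avoid many workers retrying at the same instant.
            await asyncio.sleep(base_delay * (2 ** attempt + random.random()))
```

The final attempt re-raises rather than swallowing the error, so batch-level code can record the failure instead of silently losing a prompt.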
📚 Documentation Updates
- Comprehensive OpenRouter setup guide
- Updated provider comparison table
- Model selection examples and best practices
- Enhanced API reference
🔄 Migration
No breaking changes! Existing code continues to work unchanged. Simply add OpenRouter as a new option when you need access to additional models.
🙏 Get Started
- Get your OpenRouter API key: https://openrouter.ai/settings/keys
- Install: `pip install llm-batch-helper==0.2.0`
- Follow the documentation: Provider Guide
🚦 Supported Providers
The package now supports:
- ✅ OpenAI (original)
- ✅ Together.ai (existing)
- ✅ OpenRouter (new!) - 100+ models including DeepSeek v3.1
📊 Performance
OpenRouter integration has been tested and verified with:
- Fast response times (~1.8s per request for DeepSeek)
- Reliable error handling and retry logic
- Proper token usage tracking
- Full compatibility with existing batch processing features
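Token usage tracking across a batch usually amounts to summing per-response usage counts. A sketch, assuming each response carries an OpenAI-style `usage` dict with `prompt_tokens` / `completion_tokens` / `total_tokens` (the exact response shape in this package may differ):

```python
from collections import Counter

def aggregate_usage(responses) -> Counter:
    """Sum token counts across a batch of response dicts.

    Responses without a 'usage' field are simply skipped.
    """
    totals = Counter()
    for resp in responses:
        totals.update(resp.get("usage", {}))
    return totals
```

A `Counter` keeps the aggregation one line per response and returns 0 for any field no response reported, which is convenient for cost accounting over large batches.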
Full Changelog: https://github.com/TianyiPeng/LLM_batch_helper/blob/main/CHANGELOG.md
PyPI Package: https://pypi.org/project/llm-batch-helper/0.2.0/
v0.1.6 - Environment Variable Handling Improvements
🔧 Breaking Changes
- Removed the `load_dotenv()` call from `providers.py`
✨ Improvements
Better Environment Variable Handling
- Removed Side Effects: No more automatic `.env` file loading on import
- User Control: Users now decide when/how to load environment variables
- Smaller Dependencies: `python-dotenv` is now optional (moved to dev dependencies)
- Library Best Practices: Follows Python packaging guidelines
Updated Documentation
- Clear instructions for both environment variable approaches
- Examples showing proper `.env` file usage
- Better code examples throughout
📦 Migration Guide
Before (v0.1.5)
```python
# .env file was automatically loaded
from llm_batch_helper import LLMConfig, process_prompts_batch
```

After (v0.1.6)

```bash
# Option 1: Use system environment variables
export OPENAI_API_KEY="your-key"
```

```python
# Option 2: Load .env file yourself (recommended for development)
from dotenv import load_dotenv
load_dotenv()  # You control when this happens

from llm_batch_helper import LLMConfig, process_prompts_batch
```

💡 Benefits
- No Import Side Effects: Cleaner library behavior
- Better Testing: Easier to test with controlled environments
- Reduced Dependencies: Smaller package footprint
- More Flexibility: Users control environment loading
📦 Installation
```bash
pip install llm_batch_helper==0.1.6

# Optional: Install dotenv support for development
pip install python-dotenv
```

Full Changelog: v0.1.5...v0.1.6
v0.1.5 - Together.ai Provider Support
🚀 New Features
- Together.ai Provider Support: Added support for Together.ai API as a new provider
- Support for various open-source models through Together.ai (Llama, Mixtral, etc.)
- Comprehensive documentation with Read the Docs integration
📝 Changes
- Added `_get_together_response_direct()` function for Together.ai API calls
- Updated provider selection logic to support multiple providers
- Enhanced error handling for Together.ai specific errors
- Updated docstrings and examples to reflect new provider options
📖 Documentation
- Complete Sphinx documentation structure
- Read the Docs configuration
- API reference, examples, and tutorials
- Provider comparison guide
🔧 Usage
```python
# New Together.ai provider usage
config = LLMConfig(
    model_name="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    temperature=0.7,
    max_completion_tokens=300
)

results = await process_prompts_batch(
    config=config,
    provider="together",  # New provider option
    prompts=your_prompts
)
```

📦 Installation

```bash
pip install llm_batch_helper==0.1.5
```

Full Changelog: v0.1.4...v0.1.5
Release v0.1.4
What's Changed
- Added backward compatibility for the `max_completion_tokens` parameter
- Relaxed httpx dependency constraint to `>=0.24.0,<2.0.0` for better compatibility
- Maintained support for the legacy `max_tokens` parameter
Technical Changes
- Updated OpenAI API calls to use `max_completion_tokens` instead of the deprecated `max_tokens`
- Added automatic fallback when only `max_tokens` is provided
- Expanded httpx version range to support more environments
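The fallback described above can be sketched as a small resolver; the helper name and the default of 512 are illustrative assumptions, not the library's actual values:

```python
def resolve_token_limit(max_completion_tokens=None, max_tokens=None, default=512):
    """Prefer the new max_completion_tokens parameter; fall back to the
    legacy max_tokens if that is all the caller provided.

    Illustrative sketch of the backward-compatibility shim.
    """
    if max_completion_tokens is not None:
        return max_completion_tokens
    if max_tokens is not None:
        return max_tokens
    return default
```

Code written against the old `max_tokens` name keeps working, while new code gets the parameter name the OpenAI API now expects.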
Full Changelog: v0.1.3...v0.1.4