requirements.txt → Core dependencies only (2 packages)
requirements-dev.txt → Development + all providers + testing tools
# Minimal install (just core)
pip install cascadeflow
# With specific provider
pip install cascadeflow[openai]
pip install cascadeflow[anthropic]
# With common providers (OpenAI + Anthropic + Groq)
pip install cascadeflow[providers]
# With everything
pip install cascadeflow[all]

# Method 1: Using requirements-dev.txt
pip install -r requirements-dev.txt
# Method 2: Using pyproject.toml extras (recommended)
pip install -e ".[dev]"

pydantic>=2.0.0 # Data validation
httpx>=0.25.0 # HTTP client
That's it! Just 2 core dependencies. All providers are optional extras.
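For context, extras like `[openai]` and `[providers]` are declared in pyproject.toml under `[project.optional-dependencies]` (PEP 621). A plausible sketch of that layout, based on the extras listed in this document; the project's actual pyproject.toml may differ:

```toml
[project]
name = "cascadeflow"
dependencies = [
    "pydantic>=2.0.0",  # data validation
    "httpx>=0.25.0",    # HTTP client
]

[project.optional-dependencies]
openai = ["openai"]
anthropic = ["anthropic"]
groq = ["groq"]
providers = ["openai", "anthropic", "groq"]
all = ["openai", "anthropic", "groq", "huggingface-hub", "together", "vllm"]
```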
✅ Core dependencies
✅ All provider SDKs (openai, anthropic, groq, huggingface, together, vllm)
✅ Testing tools (pytest, pytest-asyncio, pytest-cov, pytest-mock)
✅ Code quality tools (black, ruff, mypy, isort, pre-commit)
✅ Development utilities (rich for terminal output)
# No Python package needed!
# 1. Install Ollama from https://ollama.ai
curl -fsSL https://ollama.com/install.sh | sh
# 2. Pull a model
ollama pull llama3.2:1b
# 3. Use with cascadeflow
pip install cascadeflow  # Core only, no extras needed!

Cost: $0/month 💰
# Option 1: HTTP server (no Python package)
# Run vLLM server, connect via HTTP
# Option 2: Python package
pip install cascadeflow[vllm]

# Only add keys for providers you want to use
# OpenAI
OPENAI_API_KEY=sk-proj-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Groq (free tier available!)
GROQ_API_KEY=gsk_...
# HuggingFace
HF_TOKEN=hf_...
# Together.ai
TOGETHER_API_KEY=...
# Ollama - no API key needed! (local)
# vLLM - no API key needed! (local)

# Install with common providers
pip install cascadeflow[providers]
# Set API keys in .env
echo "OPENAI_API_KEY=sk-..." >> .env
echo "ANTHROPIC_API_KEY=sk-ant-..." >> .env
# Start using
python your_app.py

# Clone repo
git clone https://github.com/lemony-ai/cascadeflow.git
cd cascadeflow
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # Linux/Mac
# or: .venv\Scripts\activate # Windows
# Install in dev mode
pip install -e ".[dev]"
# Run tests
pytest

Use Ollama for free local testing:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a small model
ollama pull llama3.2:1b
# Install just core cascadeflow
pip install cascadeflow
# Test
python examples/multi_provider.py

| Package | Used By | Required? | Install Via |
|---|---|---|---|
| pydantic | Core validation | ✅ Always | requirements.txt |
| httpx | Core HTTP client | ✅ Always | requirements.txt |
| openai | OpenAIProvider | ❌ Optional | [openai] or [providers] or [all] |
| anthropic | AnthropicProvider | ❌ Optional | [anthropic] or [providers] or [all] |
| groq | GroqProvider | ❌ Optional | [groq] or [providers] or [all] |
| huggingface-hub | HuggingFaceProvider | ❌ Optional | [huggingface] or [all] |
| together | TogetherProvider | ❌ Optional | [together] or [all] |
| vllm | VLLMProvider | ❌ Optional | [vllm] or [all] |
| rich | Dev/Debug | ❌ Dev only | requirements-dev.txt |
| pytest | Testing | ❌ Dev only | requirements-dev.txt |
| black | Formatting | ❌ Dev only | requirements-dev.txt |
| ruff | Linting | ❌ Dev only | requirements-dev.txt |
| mypy | Type checking | ❌ Dev only | requirements-dev.txt |
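Because the provider SDKs in the table are optional, application code can probe for them at runtime before constructing a provider. A small stdlib-only sketch of that check (this helper is illustrative, not a cascadeflow API):

```python
from importlib.util import find_spec

def has_package(module_name: str) -> bool:
    """True if the module can be imported, without actually importing it."""
    return find_spec(module_name) is not None

# Core deps should always be present; provider SDKs only if the extra is installed
for mod in ("pydantic", "httpx", "openai", "anthropic"):
    status = "installed" if has_package(mod) else "missing (install the extra)"
    print(f"{mod}: {status}")
```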
- Start minimal: `pip install cascadeflow`
- Add providers as needed: `pip install cascadeflow[openai]`
- Use Ollama for free local inference
- Always use a virtual environment
- Install in editable mode: `pip install -e ".[dev]"`
- Run tests before committing: `pytest`
- Format code: `black . && isort .`
- Check types: `mypy cascadeflow/`
- Use Ollama for free testing (no API costs!)
- Mock API calls when testing without keys
- Use pytest fixtures for provider initialization
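Mocking a provider lets tests run without any API keys. A minimal sketch using `unittest.mock` (the `complete` method name is illustrative; cascadeflow's actual provider interface may differ):

```python
from unittest.mock import Mock

def make_mock_provider(reply: str) -> Mock:
    """Build a fake provider whose complete() returns a canned response."""
    provider = Mock()
    provider.complete.return_value = {"text": reply, "tokens": len(reply.split())}
    return provider

# In a pytest suite this factory would typically live in a fixture:
#     @pytest.fixture
#     def mock_provider():
#         return make_mock_provider("hello from the mock")
provider = make_mock_provider("hello from the mock")
result = provider.complete("any prompt")
print(result["text"])  # hello from the mock
provider.complete.assert_called_once_with("any prompt")
```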
# Install OpenAI extra
pip install cascadeflow[openai]

# Install Anthropic extra
pip install cascadeflow[anthropic]

# Make sure Ollama is running
ollama serve
# Check if running
curl http://localhost:11434/api/tags

# Create fresh virtual environment
python -m venv .venv
source .venv/bin/activate
pip install cascadeflow[providers]

# Sync your IDE with virtual environment
# In PyCharm/IntelliJ: File → Project Structure → SDK
# Select .venv/bin/python
# Or reinstall
pip install -r requirements-dev.txtpip install cascadeflow[openai]pip install cascadeflow[openai,anthropic]
# or
pip install cascadeflow[providers]pip install cascadeflow[all]pip install cascadeflow
# Then install Ollama from https://ollama.aigit clone https://github.com/lemony-ai/cascadeflow.git
cd cascadeflow
pip install -e ".[dev]"
pre-commit install
pytest

Test your installation:
# Core
python -c "import cascadeflow; print('✅ Core OK')"
# OpenAI (if installed)
python -c "import openai; print('✅ OpenAI OK')"
# Anthropic (if installed)
python -c "import anthropic; print('✅ Anthropic OK')"
# Full test
python -c "
from cascadeflow import CascadeAgent, ModelConfig
print('✅ All imports working!')
"

| Provider | Cost | Speed | Quality | Setup | API Key |
|---|---|---|---|---|---|
| OpenAI | $$$ | Medium | High | Easy | Yes |
| Anthropic | $$$ | Medium | High | Easy | Yes |
| Groq | $ | Fast | Medium | Easy | Yes |
| Ollama | Free | Fast | Medium | Medium | No |
| vLLM | Free | Very Fast | Medium | Hard | No |
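The hosted providers in the table need an API key supplied via an environment variable, while Ollama and vLLM need none. A stdlib-only sketch for checking at startup which hosted providers are configured (the variable names come from the .env section above; the helper itself is hypothetical, not a cascadeflow API):

```python
import os

# Env var expected by each hosted provider (local providers need none)
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "groq": "GROQ_API_KEY",
    "huggingface": "HF_TOKEN",
    "together": "TOGETHER_API_KEY",
}

def available_providers(env=os.environ):
    """Return hosted providers whose API key is set and non-empty."""
    return [name for name, var in PROVIDER_ENV_VARS.items() if env.get(var)]

print(available_providers({"GROQ_API_KEY": "gsk_test"}))  # ['groq']
```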
- pyproject.toml: See all available extras
- Documentation: https://github.com/lemony-ai/cascadeflow
- Repository: https://github.com/lemony-ai/cascadeflow
- Issues: https://github.com/lemony-ai/cascadeflow/issues
- tiktoken removed: Not used in current implementation
- Ollama: No Python package needed - uses HTTP directly
- Core is minimal: Only 2 dependencies for maximum flexibility
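The note that Ollama uses HTTP directly can be made concrete: Ollama's local server exposes `POST /api/generate` on port 11434. The sketch below only builds the request (actually sending it requires a running Ollama instance):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Return (url, JSON body) for Ollama's /api/generate endpoint."""
    body = {"model": model, "prompt": prompt, "stream": False}
    return OLLAMA_URL, json.dumps(body).encode("utf-8")

url, payload = build_generate_request("llama3.2:1b", "Say hi")
print(url)
print(payload.decode())
```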