The ultimate AI-powered developer cockpit for cost-optimized development.
Lodestar is a high-performance orchestration layer for LLM-based development. It intelligently routes between local FREE models (via Ollama) and premium APIs (Claude, OpenAI, Gemini) to maximize speed and minimize costs (target: 90%+ savings).
Current Release: v2.0.0-beta.2
Primary Integration Branch: develop
- 💰 90% Cost Savings - Default to FREE Ollama models (DeepSeek, Llama)
- 🌐 8 LLM Providers - Claude, OpenAI, Grok, Gemini + FREE models
- 🤖 Smart Routing - Automatic provider selection via LiteLLM
- 🔄 Never Lose Context - Git auto-commits preserve all changes
- 🎯 Zero Configuration - Works out of the box with sensible defaults
- 🧪 Automated Testing - Verify all providers in 60 seconds
- 🔒 Secure - API keys in environment, logs excluded from git
- Debian/Ubuntu Linux VM (2 CPU, 4GB RAM minimum)
- Ollama server with GPU (separate VM recommended)
- Python 3.11+
- Git
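A quick way to sanity-check the Python 3.11+ prerequisite before installing. `meets_minimum` is a helper invented for illustration, not part of Lodestar:

```python
import platform

# Hypothetical helper (not part of Lodestar): verify the Python 3.11+
# prerequisite by parsing a version string such as platform.python_version().
def meets_minimum(version: str, minimum: tuple[int, int] = (3, 11)) -> bool:
    """Return True if `version` (e.g. "3.12.1") satisfies `minimum`."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= minimum

print(meets_minimum(platform.python_version()))
```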
# Clone repository
git clone git@github.com:zebadee2kk/ProjectLodestar.git
cd ProjectLodestar
# Install dependencies
pip install --break-system-packages aider-chat litellm
# Configure API keys (optional - FREE models work without these)
nano ~/.bashrc
# Add:
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export XAI_API_KEY="xai-..."
export GEMINI_API_KEY="..."
source ~/.bashrc
# Start router
./scripts/start-router.sh
# Test everything
./scripts/test-providers-simple.sh

# Navigate to your project
cd ~/your-project
# Start coding with FREE AI
aider file.py
# Need more power? Switch to Claude
aider --model claude-sonnet file.py
# Switch models mid-session
/model gpt-4o-mini
/model gpt-3.5-turbo  # Back to FREE

| Tier | Provider | Model Alias | Cost | Status |
|------|----------|-------------|------|--------|
| 1 | DeepSeek | gpt-3.5-turbo | FREE | ✅ Working |
| 1 | Llama 3.1 | local-llama | FREE | ✅ Working |
| 2 | Claude Sonnet | claude-sonnet | $3/$15 per M | 💳 Needs Credits |
| 2 | Claude Opus | claude-opus | $15/$75 per M | 💳 Needs Credits |
| 3 | GPT-4o Mini | gpt-4o-mini | $0.15/$0.60 per M | 💳 Needs Credits |
| 3 | GPT-4o | gpt-4o | $2.50/$10 per M | 💳 Needs Credits |
| 4 | Grok Beta | grok-beta | $5/$15 per M | 💳 Needs Credits |
| 5 | Gemini | gemini-pro | $0.08/$0.30 per M | ⚙️ Config Needed |
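The per-million-token prices translate into tiny per-request costs. A back-of-the-envelope sketch (the 800/200 input/output token split is an assumption for illustration):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# GPT-4o Mini at $0.15 input / $0.60 output per million tokens,
# for a request with 800 input and 200 output tokens:
cost = request_cost(800, 200, 0.15, 0.60)
print(f"${cost:.6f}")  # $0.000240 -- a fraction of a cent
```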
┌─────────────────┐
│   Aider (CLI)   │  ← Your coding interface
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ LiteLLM Router  │  ← Smart routing layer (localhost:4000)
│  (OpenAI API)   │
└────────┬────────┘
         │
    ┌────┴──────────────────────┐
    │                           │
    ▼                           ▼
┌──────────────┐        ┌──────────────┐
│  FREE Models │        │  PAID APIs   │
│              │        │              │
│ • DeepSeek   │        │ • Claude     │
│ • Llama 3.1  │        │ • OpenAI     │
│  (T600 GPU)  │        │ • Grok       │
│              │        │ • Gemini     │
└──────────────┘        └──────────────┘
Flow:
- Aider sends requests to LiteLLM router (OpenAI-compatible)
- Router routes to appropriate backend based on model alias
- FREE models handle 90% of requests
- Premium APIs only used when explicitly requested
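The alias-to-backend decision amounts to a lookup. This toy sketch shows the idea only; Lodestar's actual routing is done by LiteLLM, and the alias sets below are copied from the provider table, not from Lodestar's config:

```python
# Toy illustration of alias-based routing -- the real work is done by LiteLLM.
FREE_ALIASES = {"gpt-3.5-turbo", "local-llama"}            # served by Ollama
PAID_ALIASES = {"claude-sonnet", "claude-opus", "gpt-4o",
                "gpt-4o-mini", "grok-beta", "gemini-pro"}  # premium APIs

def route(model_alias: str) -> str:
    """Return which backend bucket a model alias resolves to."""
    if model_alias in FREE_ALIASES:
        return "ollama"
    if model_alias in PAID_ALIASES:
        return "premium"
    raise ValueError(f"unknown model alias: {model_alias}")

print(route("gpt-3.5-turbo"))   # FREE DeepSeek via Ollama
print(route("claude-sonnet"))   # premium, only when explicitly requested
```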
- Branching Strategy - Git Flow and collaboration (NEW)
- Task Allocation - Workstreams and assignments (NEW)
- Developer Guide - File map and onboarding (NEW)
- Versioning - Release strategy (NEW)
- Architecture - System design and components
- Setup Guide - Detailed installation instructions
- Workflow - Day-to-day usage patterns
- Quick Reference - Daily commands cheat sheet
- Security - API key management best practices
- Contributing - Contribution guidelines
- ADRs - Architecture decision records
- Roadmap - Future enhancements (v2.1+)
# Start/Stop Router
./scripts/start-router.sh
./scripts/stop-router.sh
# Check Status
./scripts/status.sh
# Test All Providers
./scripts/test-providers-simple.sh
./scripts/test-all-providers.sh
# v2.0 Features
lodestar status # Check module health
lodestar costs --dashboard # Real-time cost TUI
lodestar route "fix bug" # Test routing decision
lodestar tournament "prompt" model1 model2 # Compare models
lodestar diff # Visual AI diff
lodestar run "python app.py" # Self-healing execution
lodestar cache # View cache stats
lodestar cache --clear # Clear response cache
# Test Infrastructure
./scripts/test-lodestar.sh
# Create ADR
./scripts/adr-new.sh "Decision Title"

Before Lodestar (Pure Claude):
- ~100 requests/day Γ 30 days = 3,000 requests/month
- Average ~1,000 tokens per request
- Cost: ~$9-15/month
- Usage limits block work
With Lodestar (90% FREE, 10% Claude):
- 2,700 requests via FREE DeepSeek (unlimited)
- 300 requests via Claude (complex tasks)
- Cost: ~$0.90-1.50/month
- Savings: 90%+ with unlimited usage
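The savings figure follows directly from the request split. A quick check of the arithmetic, using the numbers above:

```python
# Reproduce the estimate: 3,000 requests/month, 90% routed to FREE DeepSeek,
# 10% to Claude. Per-request Claude cost is derived from the pure-Claude
# baseline of ~$9-15/month.
total_requests = 3_000
free_share = 0.90
baseline_low, baseline_high = 9.0, 15.0

paid_requests = total_requests * (1 - free_share)      # 300 Claude requests
monthly_low = paid_requests * (baseline_low / total_requests)
monthly_high = paid_requests * (baseline_high / total_requests)
savings = 1 - monthly_low / baseline_low

print(f"${monthly_low:.2f}-${monthly_high:.2f}/month, {savings:.0%} savings")
```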
Perfect For:
- Solo developers seeking cost-effective AI coding
- Projects with budget constraints
- Learning AI-assisted development
- Long coding sessions (4-8 hours)
- Experimentation without usage anxiety
When to Upgrade to Paid:
- Complex architecture decisions
- Critical bug fixes requiring deep reasoning
- Code reviews of large PRs
- Documentation generation for complex systems
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Write/update tests
- Submit a pull request
See CONTRIBUTING.md for details.
MIT License - see LICENSE for details.
Built with:
- Aider - AI pair programming
- LiteLLM - Universal LLM proxy
- Ollama - Local LLM runtime
- DeepSeek Coder - FREE coding model
- GitHub Issues: Bug reports and feature requests
- Discussions: Questions and community chat
- Project: https://github.com/zebadee2kk/ProjectLodestar
Status: v2.0.0-beta.2 - Core features complete, 296 tests passing
Last Updated: February 10, 2026