Releases · acunningham-ship-it/trustlayer
TrustLayer v0.2.1 — Bug fixes
Bug fixes and a frontend rebuild. All 17 GET endpoints and 6 POST endpoints verified working. The CLI `workflow` command was removed; `proxy` and `digest` commands were added.
TrustLayer v0.2.0 — The Proxy Update
Major update: TrustLayer is now an invisible infrastructure layer, not just a dashboard.
New in v0.2
- OpenAI-Compatible Proxy — Set `OPENAI_BASE_URL=http://localhost:8000/v1` and all your tools (Claude Code, Cursor, Aider) automatically flow through TrustLayer
- Smart Routing — Classifies prompts and routes simple tasks to cheaper models. 4 default rules, fully customizable.
- Savings Dashboard — "TrustLayer saved you $X this week" counter on the dashboard
- AI Cross-Verification — Low trust scores trigger Ollama to fact-check claims
- Consistency Checker — Run the same prompt N times, detect which claims are consistent
- Cost Savings Analysis — See what local Ollama usage would cost on cloud providers
- Test All Providers — One-click test of every configured provider
- Agent Audit Log (placeholder) — Coming next: monitor what AI agents do on your filesystem
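The smart-routing idea behind the proxy can be illustrated with a minimal sketch. The rule patterns, model names, and default below are invented for the example, not TrustLayer's shipped rules:

```python
import re

# Hypothetical routing rules: the first matching pattern wins.
# Patterns and model names are illustrative, not TrustLayer's defaults.
RULES = [
    (re.compile(r"\b(translate|summarize|rephrase)\b", re.I), "local-small"),
    (re.compile(r"\b(prove|refactor|architecture)\b", re.I), "cloud-large"),
]
DEFAULT_MODEL = "cloud-large"

def route(prompt: str) -> str:
    """Return the model name a prompt should be routed to."""
    for pattern, model in RULES:
        if pattern.search(prompt):
            return model
    return DEFAULT_MODEL
```

A classifier like this sits in front of the real provider call: `route("Summarize this changelog")` picks the cheap local model, while anything unmatched falls through to the default.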
Removed
- Workflows (was stubbed, replaced with Audit Log)
Install
pip install trustlayer-ai==0.2.0
trustlayer server
TrustLayer v0.1.0 — Initial Release
TrustLayer v0.1.0
The universal AI trust layer. You bring the AI, we bring the trust.
Install
pip install trustlayer-ai
trustlayer server
Features
- Verification Engine — 0-100 trust scores on any AI output with hallucination detection
- Universal Connectors — Ollama (auto-detected), Claude Code, Gemini CLI
- Cost Tracker — real-time spending across all AI providers
- History — complete interaction log, searchable and filterable
- Profile — personalization that learns your patterns
- CLI — `verify`, `detect`, and `costs` commands from the terminal
- Web UI — React dashboard with dark mode
- 100% Local — data never leaves your machine
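As a rough illustration of how a 0-100 trust score might be assembled from hallucination-detection signals, here is a minimal sketch. The signal names and weights are invented for the example and are not the engine's actual formula:

```python
def trust_score(signals: dict) -> int:
    """Combine verification signals into a 0-100 trust score.

    `signals` maps signal name -> value in [0, 1], where higher means
    more trustworthy. Names and weights are illustrative only.
    """
    weights = {
        "source_overlap": 0.5,    # claims grounded in provided context
        "self_consistency": 0.3,  # agreement across repeated runs
        "confidence": 0.2,        # model's own calibration signal
    }
    # Weighted sum, clamped to [0, 1], scaled to 0-100.
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return round(100 * max(0.0, min(1.0, score)))
```

A missing signal simply contributes nothing, so an output with no grounding evidence scores low rather than erroring out.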