AI model health monitor for LLM apps – runtime checks for drift, hallucination risk, latency, and JSON/format quality on any OpenAI, Anthropic, or local client.
Updated Jan 1, 2026 · TypeScript
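
As a rough illustration of the kind of runtime checks this description names (latency and JSON/format quality on any client), here is a minimal sketch. The `ModelCall` type, `HealthReport` interface, and `checkedCall` function are hypothetical names chosen for the example, not part of this project's API; it assumes you can pass in any function that returns the model's text output.

```typescript
// Hypothetical sketch: ModelCall, HealthReport, and checkedCall are
// illustrative names, not the API of the project described above.

type ModelCall = (prompt: string) => Promise<string>;

interface HealthReport {
  latencyMs: number;    // wall-clock time of the call
  validJson: boolean;   // did the output parse as JSON?
  emptyOutput: boolean; // empty responses often signal a problem upstream
}

async function checkedCall(
  call: ModelCall,
  prompt: string,
  expectJson = true,
): Promise<{ output: string; health: HealthReport }> {
  const start = Date.now();
  const output = await call(prompt);
  const latencyMs = Date.now() - start;

  let validJson = false;
  if (expectJson) {
    try {
      JSON.parse(output);
      validJson = true;
    } catch {
      validJson = false;
    }
  }

  return {
    output,
    health: {
      latencyMs,
      validJson,
      emptyOutput: output.trim().length === 0,
    },
  };
}

// Usage: wrap any client (OpenAI, Anthropic, or a local server) by passing
// a function that returns the model's text output (client method is hypothetical):
// const { output, health } = await checkedCall(
//   (p) => myClient.complete(p),
//   'Return {"ok": true} as JSON',
// );
// if (!health.validJson || health.latencyMs > 5000) { /* alert or retry */ }
```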
🛡️ Verify AI outputs with llmverify for Node.js, ensuring safety and accuracy without sacrificing privacy.
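
The sketch below illustrates the general idea of verifying an output locally before it is used, which is why such checks need not sacrifice privacy. It is not the llmverify API; the `verifyOutput` function, its parameters, and its heuristics are assumptions made for illustration only.

```typescript
// Hypothetical sketch of local output verification; not the llmverify API.
// All checks run in-process, so no data leaves the machine.

interface VerifyResult {
  ok: boolean;
  issues: string[];
}

function verifyOutput(output: string, requiredKeys: string[]): VerifyResult {
  const issues: string[] = [];

  // Structural check: output should be a JSON object with the expected keys.
  let parsed: unknown = null;
  try {
    parsed = JSON.parse(output);
  } catch {
    issues.push("output is not valid JSON");
  }
  if (parsed && typeof parsed === "object") {
    for (const key of requiredKeys) {
      if (!(key in (parsed as Record<string, unknown>))) {
        issues.push(`missing expected key: ${key}`);
      }
    }
  }

  // Very rough safety heuristic: flag strings that look like leaked API keys.
  if (/sk-[A-Za-z0-9]{20,}/.test(output)) {
    issues.push("output appears to contain an API key");
  }

  return { ok: issues.length === 0, issues };
}

// const result = verifyOutput(modelText, ["answer", "confidence"]);
// if (!result.ok) console.warn(result.issues);
```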