A comprehensive, multi-service platform to monitor and secure LLM applications in production. Built with a focus on mitigating prompt injections, detecting PII leakage, calculating token costs, and providing high-performance observability insights.
The platform follows a microservices architecture to ensure scalability and separation of concerns.
- Proxy Service: A FastAPI-based proxy that handles real-time LLM requests. It performs synchronous safety checks (Prompt Injection Detection, PII Redaction) before forwarding valid requests to the underlying LLM (e.g., OpenAI). It also logs all interactions to Redis asynchronously.
- Analytics Worker: A Python worker process that subscribes to the Redis logging channel. It performs heavier asynchronous tasks such as token counting and cost calculation, and inserts structured logs into ClickHouse.
- BFF (Backend-For-Frontend) Service: A FastAPI service that serves the frontend dashboard by running analytical queries on ClickHouse data.
- Frontend Dashboard: A React/Vite-based modern UI that provides visualizations of metrics (Latency, Costs, Requests) and displays traces.
- ClickHouse: The core analytics database for high-performance log querying.
- Redis: Serves as an asynchronous message queue (Pub/Sub) between the Proxy Service and the Analytics Worker.
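To illustrate the kind of synchronous safety checks the Proxy Service runs before forwarding a request, here is a minimal sketch. The regex patterns, placeholder tokens, and injection phrases below are illustrative assumptions, not the platform's actual detection rules.

```python
import re

# Naive PII redactor and keyword-based prompt-injection heuristic.
# Patterns and marker phrases are assumptions for illustration only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
)

def redact_pii(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return SSN_RE.sub("[SSN]", text)

def looks_like_injection(text: str) -> bool:
    """Flag prompts containing well-known injection phrases."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

A production deployment would typically use model-based classifiers rather than static patterns, but the control flow (check, redact, then forward) is the same.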
- Docker and Docker Compose
- Required environment variables set in `.env` (check `.env.example`).
- Create a `.env` file by copying `.env.example` and filling in the required values (e.g., your test `LLM_API_KEY`):

  ```sh
  cp .env.example .env
  ```
- Start the entire platform using Docker Compose. It will build the individual services and initialize the database.
  ```sh
  docker-compose up --build
  ```
- The platform will be accessible at:
- Frontend Dashboard: http://localhost:3000
  - Proxy Service API: http://localhost:8000 (use for OpenAI-compatible chat completion requests)
  - BFF Service API: http://localhost:8001
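Once the stack is up, requests can be sent to the Proxy Service exactly as they would be to an OpenAI-style endpoint. The sketch below assumes the proxy exposes the conventional `/v1/chat/completions` path and a `Bearer` authorization header; check the proxy's README for the actual route.

```python
import json
import urllib.request

# Assumed endpoint path, mirroring OpenAI's API shape.
PROXY_URL = "http://localhost:8000/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble a standard chat-completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def send(prompt: str, api_key: str) -> dict:
    """POST the prompt through the proxy and return the parsed JSON reply."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the proxy is OpenAI-compatible, existing OpenAI client SDKs should also work by pointing their base URL at `http://localhost:8000`.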
Check the individual service directories for detailed per-service READMEs.

