Real-time monitoring dashboard for CLIProxy API usage - track requests, tokens, costs, and rate limits across all your AI models.
- Usage Analytics - Track requests, tokens, success rates over time
- Cost Estimation - Calculate estimated API costs per model
- Date Range Filters - View Today, Yesterday, 7 Days, 30 Days, or All Time
- Hourly Breakdown - See usage patterns throughout the day
- Rate Limit Tracking - Monitor remaining quotas for each provider
- Model Breakdown - Usage and cost per AI model
- Docker & Docker Compose
- Supabase account (free tier works)
- CLIProxy running with Management API enabled
```bash
# Create project directory
mkdir cliproxy-dashboard && cd cliproxy-dashboard

# Download docker-compose.yml
curl -O https://raw.githubusercontent.com/leolionart/CLIProxyAPI-Dashboard/main/docker-compose.yml

# Download environment template
curl -O https://raw.githubusercontent.com/leolionart/CLIProxyAPI-Dashboard/main/.env.example
cp .env.example .env
```

Edit `.env` with your credentials:
```bash
# Supabase Configuration
SUPABASE_URL=https://xxxxx.supabase.co
SUPABASE_SECRET_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# CLIProxy Connection
CLIPROXY_URL=http://host.docker.internal:8317
CLIPROXY_MANAGEMENT_KEY=your-management-secret-key

# Optional Settings
COLLECTOR_INTERVAL_SECONDS=300
TIMEZONE_OFFSET_HOURS=7
```

Start the stack:

```bash
docker compose up -d
```

Open your browser: http://localhost:8417

First data will appear after ~5 minutes (the first collection cycle).
Pull the latest images and restart:

```bash
docker compose pull
docker compose up -d
```

| Variable | Description | Default |
|---|---|---|
| `SUPABASE_URL` | Your Supabase project URL | Required |
| `SUPABASE_SECRET_KEY` | Supabase service role key | Required |
| `CLIPROXY_URL` | CLIProxy Management API URL | `http://host.docker.internal:8317` |
| `CLIPROXY_MANAGEMENT_KEY` | CLIProxy management secret | Required |
| `COLLECTOR_INTERVAL_SECONDS` | Polling interval in seconds | `300` (5 min) |
| `TIMEZONE_OFFSET_HOURS` | Your timezone offset from UTC | `7` |
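`TIMEZONE_OFFSET_HOURS` decides which local calendar day a snapshot is credited to when daily statistics are bucketed. A minimal sketch of that bucketing, assuming a simple fixed-offset shift (the helper name is illustrative, not taken from the collector source):

```python
from datetime import datetime, timedelta, timezone

def local_stat_date(collected_at: datetime, offset_hours: int = 7) -> str:
    """Shift a UTC timestamp by the configured offset and return the
    local calendar date (the day a snapshot would be credited to)."""
    local = collected_at.astimezone(timezone.utc) + timedelta(hours=offset_hours)
    return local.date().isoformat()

# 23:30 UTC with a +7 offset already belongs to the next local day:
ts = datetime(2024, 5, 1, 23, 30, tzinfo=timezone.utc)
print(local_stat_date(ts, 7))  # → 2024-05-02
```

With offset `0` the same timestamp stays on 2024-05-01, which is why a wrong offset shows yesterday's traffic under "Today".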
```bash
# All services
docker compose logs -f

# Collector only
docker compose logs -f collector

# Frontend only
docker compose logs -f frontend
```

Dashboard shows no data:

- Wait 5 minutes for the first data collection
- Check collector logs for connection errors
- Verify Supabase tables exist (see Initial Setup below)

Collector can't connect to CLIProxy:

- Ensure CLIProxy has `remote-management.allow-remote: true`
- Verify `CLIPROXY_MANAGEMENT_KEY` matches CLIProxy's secret
- Check CLIProxy is accessible from Docker (`host.docker.internal`)
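When debugging the connection, it helps to rule out plain network reachability before suspecting the management key. A hedged stdlib-only check that does a raw TCP connect to the host and port of `CLIPROXY_URL` (it does not touch CLIProxy's actual API routes):

```python
import socket
from urllib.parse import urlparse

def is_reachable(url: str, timeout: float = 2.0) -> bool:
    """Attempt a plain TCP connect to the host:port of a URL."""
    parsed = urlparse(url)
    host = parsed.hostname or "localhost"
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or hostname did not resolve
        return False

# From inside the collector container this should be True when CLIProxy is up:
print(is_reachable("http://host.docker.internal:8317"))
```

If this returns `False` from inside the container, the problem is Docker networking, not the secret.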
- Go to supabase.com and sign in
- Click New Project
- Choose your organization, name your project, set database password
- Wait for project creation (~2 minutes)
- In Supabase, go to SQL Editor (left sidebar)
- Click New Query
- Paste and run the following SQL:
```sql
-- ============================================
-- CLIProxy Dashboard Database Schema
-- ============================================

-- Table for storing raw usage snapshots
CREATE TABLE IF NOT EXISTS usage_snapshots (
    id BIGSERIAL PRIMARY KEY,
    collected_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    total_requests INTEGER NOT NULL DEFAULT 0,
    success_count INTEGER NOT NULL DEFAULT 0,
    failure_count INTEGER NOT NULL DEFAULT 0,
    total_tokens BIGINT NOT NULL DEFAULT 0,
    cumulative_cost_usd DECIMAL(10, 6) DEFAULT 0,
    raw_data JSONB
);

-- Table for storing per-model usage data
CREATE TABLE IF NOT EXISTS model_usage (
    id BIGSERIAL PRIMARY KEY,
    snapshot_id BIGINT REFERENCES usage_snapshots(id) ON DELETE CASCADE,
    api_endpoint VARCHAR(255) NOT NULL,
    model_name VARCHAR(255) NOT NULL,
    request_count INTEGER NOT NULL DEFAULT 0,
    input_tokens BIGINT NOT NULL DEFAULT 0,
    output_tokens BIGINT NOT NULL DEFAULT 0,
    total_tokens BIGINT NOT NULL DEFAULT 0,
    estimated_cost_usd DECIMAL(10, 6) DEFAULT 0,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Table for storing daily aggregated statistics
CREATE TABLE IF NOT EXISTS daily_stats (
    id BIGSERIAL PRIMARY KEY,
    stat_date DATE NOT NULL UNIQUE,
    total_requests INTEGER NOT NULL DEFAULT 0,
    success_count INTEGER NOT NULL DEFAULT 0,
    failure_count INTEGER NOT NULL DEFAULT 0,
    total_tokens BIGINT NOT NULL DEFAULT 0,
    estimated_cost_usd DECIMAL(10, 6) DEFAULT 0,
    breakdown JSONB DEFAULT '{}'::jsonb,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Table for model pricing configuration
CREATE TABLE IF NOT EXISTS model_pricing (
    id BIGSERIAL PRIMARY KEY,
    model_pattern VARCHAR(255) NOT NULL UNIQUE,
    input_price_per_million DECIMAL(10, 4) NOT NULL,
    output_price_per_million DECIMAL(10, 4) NOT NULL,
    provider VARCHAR(50) NOT NULL DEFAULT 'unknown',
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Table for rate limit configurations
CREATE TABLE IF NOT EXISTS rate_limit_configs (
    id BIGSERIAL PRIMARY KEY,
    provider VARCHAR(50) NOT NULL,
    tier_name VARCHAR(50) NOT NULL,
    model_pattern VARCHAR(255) NOT NULL,
    token_limit BIGINT,
    request_limit INTEGER,
    context_window INTEGER,
    window_minutes INTEGER NOT NULL DEFAULT 1440,
    reset_strategy VARCHAR(20) NOT NULL DEFAULT 'daily',
    reset_anchor_timestamp TIMESTAMPTZ,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Table for tracking current rate limit status
CREATE TABLE IF NOT EXISTS rate_limit_status (
    id BIGSERIAL PRIMARY KEY,
    config_id BIGINT REFERENCES rate_limit_configs(id) ON DELETE CASCADE,
    remaining_tokens BIGINT,
    remaining_requests INTEGER,
    used_tokens BIGINT DEFAULT 0,
    used_requests INTEGER DEFAULT 0,
    percentage INTEGER DEFAULT 0,
    status_label VARCHAR(50),
    window_start TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    next_reset TIMESTAMPTZ,
    last_updated TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    CONSTRAINT unique_status_per_config UNIQUE (config_id)
);

-- Create indexes for performance
CREATE INDEX IF NOT EXISTS idx_usage_snapshots_collected_at ON usage_snapshots(collected_at DESC);
CREATE INDEX IF NOT EXISTS idx_model_usage_snapshot_id ON model_usage(snapshot_id);
CREATE INDEX IF NOT EXISTS idx_model_usage_model_name ON model_usage(model_name);
CREATE INDEX IF NOT EXISTS idx_daily_stats_date ON daily_stats(stat_date DESC);

-- Enable Row Level Security
ALTER TABLE usage_snapshots ENABLE ROW LEVEL SECURITY;
ALTER TABLE model_usage ENABLE ROW LEVEL SECURITY;
ALTER TABLE daily_stats ENABLE ROW LEVEL SECURITY;
ALTER TABLE model_pricing ENABLE ROW LEVEL SECURITY;
ALTER TABLE rate_limit_configs ENABLE ROW LEVEL SECURITY;
ALTER TABLE rate_limit_status ENABLE ROW LEVEL SECURITY;

-- Create policies for read access
CREATE POLICY "Allow read access" ON usage_snapshots FOR SELECT USING (true);
CREATE POLICY "Allow read access" ON model_usage FOR SELECT USING (true);
CREATE POLICY "Allow read access" ON daily_stats FOR SELECT USING (true);
CREATE POLICY "Allow read access" ON model_pricing FOR SELECT USING (true);
CREATE POLICY "Allow read access" ON rate_limit_configs FOR SELECT USING (true);
CREATE POLICY "Allow read access" ON rate_limit_status FOR SELECT USING (true);

-- Create policies for service role (Collector)
CREATE POLICY "Allow service insert" ON usage_snapshots FOR INSERT WITH CHECK (true);
CREATE POLICY "Allow service update" ON usage_snapshots FOR UPDATE USING (true);
CREATE POLICY "Allow service insert" ON model_usage FOR INSERT WITH CHECK (true);
CREATE POLICY "Allow service upsert" ON daily_stats FOR ALL USING (true);
CREATE POLICY "Allow service insert" ON model_pricing FOR INSERT WITH CHECK (true);
CREATE POLICY "Allow service update" ON model_pricing FOR UPDATE USING (true);
CREATE POLICY "Allow service all" ON rate_limit_configs FOR ALL USING (true);
CREATE POLICY "Allow service all" ON rate_limit_status FOR ALL USING (true);
```

- Go to Settings > API in Supabase
- Copy these values:
  - Project URL: `https://xxxxx.supabase.co`
  - service_role key: under "Project API keys" > "service_role" (click the eye icon)
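The `rate_limit_status` columns (`percentage`, `next_reset`, `status_label`) can be derived from a `rate_limit_configs` row. A sketch of that arithmetic for the default `daily` reset strategy - the function, thresholds, and label strings are illustrative assumptions, only the field names mirror the tables:

```python
from datetime import datetime, timedelta, timezone

def daily_rate_limit_status(used_tokens: int, token_limit: int,
                            now: datetime) -> dict:
    """Compute usage percentage and the next daily reset boundary."""
    pct = min(100, round(100 * used_tokens / token_limit)) if token_limit else 0
    # 'daily' strategy: the window restarts at the next midnight
    window_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
    next_reset = window_start + timedelta(days=1)
    # Label thresholds are an assumption, not taken from the collector
    label = "exhausted" if pct >= 100 else "warning" if pct >= 80 else "ok"
    return {"percentage": pct, "next_reset": next_reset, "status_label": label}

now = datetime(2024, 5, 1, 18, 0, tzinfo=timezone.utc)
status = daily_rate_limit_status(900_000, 1_000_000, now)
print(status["percentage"], status["status_label"])  # → 90 warning
```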
Ensure your CLIProxy has Management API enabled:
```yaml
remote-management:
  allow-remote: true
  secret: "your-management-secret-key"
```

Note the secret value - use it as `CLIPROXY_MANAGEMENT_KEY` in your `.env`.
```bash
cd frontend
npm install
npm run dev
```

Access at http://localhost:5173 with hot reload.
```bash
cd collector
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

Build from source:
```bash
docker compose -f docker-compose.dev.yml build
docker compose -f docker-compose.dev.yml up -d
```

```
cliproxy-dashboard/
├── collector/            # Python data collector
│   ├── main.py           # Collector logic
│   ├── rate_limiter.py   # Rate limit tracking
│   ├── Dockerfile
│   └── requirements.txt
├── frontend/             # React dashboard
│   ├── src/
│   │   ├── App.jsx
│   │   ├── components/
│   │   └── lib/
│   ├── Dockerfile
│   └── package.json
├── docker-compose.yml    # Production (GHCR images)
├── .env.example
└── README.md
```
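The collector's cycle - poll CLIProxy's Management API, store a snapshot, sleep `COLLECTOR_INTERVAL_SECONDS` - can be sketched roughly like this. The `fetch_usage` and `store_snapshot` callables stand in for the real logic in `collector/main.py`; this is a shape sketch, not the actual implementation:

```python
import os
import time

def run_collector(fetch_usage, store_snapshot, max_cycles=None):
    """Poll on a fixed interval. A failed cycle is logged and retried
    on the next tick rather than killing the loop."""
    interval = float(os.environ.get("COLLECTOR_INTERVAL_SECONDS", "300"))
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        try:
            store_snapshot(fetch_usage())
        except Exception as exc:
            print(f"collection failed, retrying next cycle: {exc}")
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval)
    return cycles
```

This shape explains the "~5 minutes for first data" note: nothing is written until the first cycle completes.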
| Tab | Description |
|---|---|
| Today | Usage delta for current day only |
| Yesterday | Usage delta for previous day |
| 7 Days | Total usage over past week |
| 30 Days | Total usage over past month |
| This Year | Total usage for current year |
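Because `usage_snapshots` stores cumulative counters, the Today and Yesterday tabs show deltas between a day's first and last snapshots rather than raw sums. A sketch of that subtraction (the helper name and the clamp-at-zero behavior on counter resets are illustrative assumptions):

```python
def usage_delta(first: dict, last: dict) -> dict:
    """Difference between the first and last cumulative snapshots of a
    day, clamped at 0 in case counters were reset in between."""
    keys = ("total_requests", "total_tokens", "success_count", "failure_count")
    return {k: max(0, last.get(k, 0) - first.get(k, 0)) for k in keys}

first = {"total_requests": 120, "total_tokens": 50_000}
last = {"total_requests": 150, "total_tokens": 64_000}
print(usage_delta(first, last))
# → {'total_requests': 30, 'total_tokens': 14000, 'success_count': 0, 'failure_count': 0}
```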
- Stats Cards - Total requests, tokens, success rate
- Request Trends - Line chart of requests over time
- Token Usage Trends - Line chart of token consumption
- Cost Breakdown - Pie chart of costs by model
- Model Usage - Bar chart of requests per model
- API Keys - Usage breakdown by API key
- Rate Limits - Remaining quota for each provider
- Cost Details - Detailed cost table by model
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| GPT-4o | $2.50 | $10.00 |
| GPT-4o-mini | $0.15 | $0.60 |
| Claude 3.5 Sonnet | $3.00 | $15.00 |
| Claude 4 Sonnet | $3.00 | $15.00 |
| Gemini 2.5 Flash | $0.15 | $0.60 |
| Gemini 2.5 Pro | $1.25 | $10.00 |
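Prices are per million tokens (matching the `*_price_per_million` columns in `model_pricing`), so the cost of a call follows directly from the table. A worked example for 2,000 input and 500 output tokens on Claude 3.5 Sonnet - whether the collector uses exactly this formula is an assumption:

```python
def estimate_cost_usd(input_tokens, output_tokens, in_per_m, out_per_m):
    """Cost = tokens / 1M * per-million price, summed for both directions."""
    return (input_tokens / 1_000_000 * in_per_m
            + output_tokens / 1_000_000 * out_per_m)

# Claude 3.5 Sonnet: $3.00 input / $15.00 output per 1M tokens (table above)
print(round(estimate_cost_usd(2_000, 500, 3.00, 15.00), 6))  # → 0.0135
```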
```mermaid
graph LR
    User["User / AI Clients"] --> Proxy["CLIProxy Server"]
    Proxy --> Provider["AI Provider (OpenAI/Anthropic)"]

    subgraph "Data Pipeline"
        Proxy -- API --> Collector["Python Collector"]
        Collector -- Writes --> DB[("Supabase DB")]
    end

    subgraph "Visualization"
        DB -- Reads --> Dashboard["React Dashboard"]
        Dashboard --> Browser["User Browser"]
    end
```
MIT License - see LICENSE file for details
See CONTRIBUTING.md for guidelines.
If you find this project helpful, please give it a star!
For detailed setup, see the Initial Setup section above.
