# Local AI workstation dashboard for RTX setups
A local dashboard for managing AI/ML workstation workflows on RTX hardware. Monitors GPU and system resources in real time, tracks active training jobs across Kohya SS and Musubi Tuner, controls local AI services (ComfyUI, SwarmUI, Ollama), and aggregates model discovery from GitHub, HuggingFace, and CivitAI — all without sending data anywhere.
Built for setups where you run your own stack and need a single interface to see what is happening across the machine.
Screenshots: Command Center overview · GPU Live Monitor · Training Jobs · Community Hub
- Real-time GPU monitoring — VRAM, utilization, temperature, power draw with 30-second rolling charts
- Training job auto-detection — scans running processes, parses TOML configs, reads TensorBoard loss curves (Kohya SS, Musubi Tuner)
- Service control — health check, launch, and stop for ComfyUI, SwarmUI, and Ollama
- Community Hub — GitHub trending repos, HuggingFace models, and CivitAI checkpoints in a single view
- Script Package system — drag-and-drop `.zip` packages with `manifest.json`, AI-generated BAT scripts, SSE-streamed execution
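The GPU metrics listed above (VRAM, utilization, temperature, power draw) are all exposed by `nvidia-smi`'s query mode on any RTX card. A minimal polling sketch — the helper names and field layout are illustrative, not the dashboard's actual code:

```python
import subprocess

# Fields matching the dashboard's GPU panel: VRAM, utilization, temp, power.
QUERY = "memory.used,memory.total,utilization.gpu,temperature.gpu,power.draw"

def parse_gpu_csv(line: str) -> dict:
    """Parse one CSV row from `nvidia-smi --format=csv,noheader,nounits`."""
    used, total, util, temp, power = (float(v) for v in line.split(", "))
    return {
        "vram_used_mib": used,
        "vram_total_mib": total,
        "utilization_pct": util,
        "temperature_c": temp,
        "power_w": power,
    }

def poll_gpu() -> dict:
    """Take one sample; call repeatedly to feed a 30-second rolling chart."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return parse_gpu_csv(out.splitlines()[0])  # first GPU only
```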
| Layer | Status |
|---|---|
| Frontend | Complete — fully functional on mock data |
| Backend | ~20% stubbed — FastAPI routing scaffolded, endpoints ready to wire up |
| Desktop | Tauri wrapper scaffolded, inactive |
The frontend degrades gracefully through three tiers: live backend → direct browser API calls → mock data. It works out of the box without a running backend.
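The tier logic amounts to "try each data source in order, return the first that succeeds." A hedged sketch in Python — the tier functions below are hypothetical stand-ins, and the real frontend implements this in the browser:

```python
from typing import Callable, Sequence

def fetch_with_fallback(sources: Sequence[Callable[[], dict]]) -> dict:
    """Try each tier in order: live backend -> direct API -> mock data.
    The final tier (mock data) should never raise, so this normally returns."""
    last_error = None
    for source in sources:
        try:
            return source()
        except Exception as err:  # tier unavailable; fall through to the next
            last_error = err
    raise RuntimeError("all tiers failed") from last_error

# Hypothetical tiers for illustration only:
def live_backend() -> dict:
    raise ConnectionError("backend not running")

def direct_api() -> dict:
    raise ConnectionError("no API key configured")

def mock_data() -> dict:
    return {"gpu_util": 42, "source": "mock"}
```

With neither the backend nor API keys available, `fetch_with_fallback([live_backend, direct_api, mock_data])` falls through to the mock payload, which is why the UI works out of the box.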
```sh
pnpm install
cp .env.example .env   # fill in API keys (all optional)
pnpm dev
```

Runs at http://localhost:5173. The backend is not required for the UI to function.
To build the backend sidecar:
```sh
cd src/app/backend/fastapi
pip install -r requirements.txt
uvicorn main:app --host 127.0.0.1 --port 8420 --reload
```

See ATTRIBUTIONS.md.