
# AI Command Center

Local AI workstation dashboard for RTX setups



A local dashboard for managing AI/ML workstation workflows on RTX hardware. It monitors GPU and system resources in real time, tracks active training jobs across Kohya SS and Musubi Tuner, controls local AI services (ComfyUI, SwarmUI, Ollama), and aggregates model discovery from GitHub, HuggingFace, and CivitAI — all without sending your data anywhere.

Built for setups where you run your own stack and need a single interface to see what is happening across the machine.



Screenshots: Command Center overview · GPU Live Monitor · Training Jobs · Community Hub

## Features

- Real-time GPU monitoring — VRAM, utilization, temperature, power draw with 30-second rolling charts
- Training job auto-detection — scans running processes, parses TOML configs, reads TensorBoard loss curves (Kohya SS, Musubi Tuner)
- Service control — health check, launch, and stop for ComfyUI, SwarmUI, and Ollama
- Community Hub — GitHub trending repos, HuggingFace models, and CivitAI checkpoints in a single view
- Script Package system — drag-and-drop .zip packages with a manifest.json, AI-generated BAT scripts, SSE-streamed execution
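A script package's manifest.json might look like the following (illustrative only; the field names are assumptions, not taken from this repo):

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "description": "What the package does",
  "scripts": ["run.bat"]
}
```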

## Stack

| Layer | Technologies |
| --- | --- |
| Frontend | React, TypeScript, Vite, Tailwind CSS, Radix UI |
| Backend | FastAPI, Python, psutil, pynvml |
| Desktop | Tauri |

## Status

| Layer | Status |
| --- | --- |
| Frontend | Complete — fully functional on mock data |
| Backend | ~75% — GPU, system, services, AI proxy working; training detection and BAT runner partially stubbed |
| Desktop | Tauri v2 wrapper scaffolded with sidecar spawning |
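In Tauri v2, a sidecar binary is typically declared under `bundle.externalBin` in tauri.conf.json and spawned via the shell plugin. A minimal fragment (the binary path here is an assumption, not this project's actual config):

```json
{
  "bundle": {
    "externalBin": ["binaries/fastapi-server"]
  }
}
```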

The frontend degrades gracefully through three tiers: live backend → direct browser API calls → mock data. It works out of the box without a running backend.
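The three-tier fallback can be sketched as a chain that tries each data source in order and falls through on failure (a sketch only; the function and tier names are assumptions, not the app's actual code):

```python
def fetch_with_fallback(sources):
    """Return (tier_name, data) from the first source that succeeds.

    `sources` is an ordered list of (tier_name, fetch_fn) pairs, e.g.
    live backend -> direct browser API -> bundled mock data.
    """
    last_error = None
    for tier, fetch in sources:
        try:
            return tier, fetch()
        except Exception as exc:  # a failed tier falls through to the next
            last_error = exc
    raise RuntimeError("all data sources failed") from last_error


# Usage: the live tier is down, so the chain resolves to mock data.
def live():
    raise ConnectionError("backend offline")

tier, data = fetch_with_fallback([
    ("live", live),
    ("mock", lambda: {"gpu_util": 42}),
])
```

Because the mock tier can never fail, the chain always resolves, which is what lets the UI work with no backend running.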

## Setup

```sh
pnpm install
cp .env.example .env   # fill in API keys (all optional)
pnpm dev
```

The dev server runs at http://localhost:5173; the backend is not required for the UI to function.

To build the backend sidecar:

```sh
cd src/app/backend/fastapi
pip install -r requirements.txt
uvicorn main:app --host 127.0.0.1 --port 8000 --reload
```

## License

See ATTRIBUTIONS.md.