Real-time dashboard for multi-agent AI systems — built for the Claude Code + Hermes + OpenClaw stack. One command to run. No cloud. No API keys.
Most local AI setups are invisible. Agents run in separate terminals, routing decisions disappear into logs, memory health is hard to interpret, and operator state gets scattered across tools. This dashboard pulls that into one live surface.
- Permanent agent dock — Lead, Hermes, and IrisEye stay pinned with status, model, presence, runtime, and current task
- Cinematic mesh sphere — animated agent/service topology with active-node motion, routing emphasis, and memory-aware link states
- Ops strip + alerts — live service health, memory mode, route target, and highest-priority operator alert at a glance
- Operator utility block — cron, audit, queue, and memory alert counts without leaving the main view
- System panel — CPU, RAM, MLX, and local-use telemetry in the canvas HUD
- Bottom telemetry rail — quick system totals for memory, MLX load, CPU load, disk, and mesh availability
- Memory route intelligence — backend memory summary, cause ranking, and routing impact surfaced directly in the dashboard
- Clickable summaries — operator chips and service indicators focus the matching service or agent in the mesh
Premium dark-mode terminal aesthetic with a mesh-first operator layout. No cloud dependency, no tab hunting, no fake glassmorphism.
- Mesh-first composition — the sphere is the product center, not a decorative background
- Permanent operator dock — agent state stays visible instead of hiding behind modal panels or tabs
- Responsive HUD clearance — the sphere respects the top operator stack and side dock instead of overlapping them
- Unified accent language — warm gold, cyan, violet, amber, and red are used consistently for identity and state
- State-aware motion — node glow, reticles, and memory-path coloring respond to live routing and degradation causes
```shell
git clone https://github.com/iriseye931-ai/mission-control-dashboard
cd mission-control-dashboard
docker-compose up --build
```

- Dashboard: http://localhost:3000
- Backend API: http://localhost:8000
- WebSocket: `ws://localhost:8000/ws`
No config required — the backend polls your local services and shows live status immediately.
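Once the stack is up, watching `/ws` is the quickest sanity check: every message carries the full mesh state, so a client can diff consecutive snapshots to see what changed. A minimal, hypothetical diff helper (the snapshot schema is whatever your backend broadcasts):

```python
def changed_keys(prev: dict, curr: dict) -> set[str]:
    """Top-level keys whose values differ between two full-state snapshots."""
    return {k for k in prev.keys() | curr.keys() if prev.get(k) != curr.get(k)}

# Hypothetical snapshot shapes; real /ws payloads will have more fields.
prev = {"agents": ["hermes"], "memory_mode": "local"}
curr = {"agents": ["hermes", "iriseye"], "memory_mode": "local"}
print(changed_keys(prev, curr))  # {'agents'}
```

Feeding each incoming message through a diff like this lets a thin client react only to what moved instead of re-rendering everything.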
Backend:

```shell
cd backend
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

Mesh Doctor:

```shell
cd backend
./venv/bin/python mesh_doctor.py
```

Runs a local-first operational check over Mission Control, Hermes, AI Maestro, cron freshness, routing policy, and stale agents.
Frontend:

```shell
cd frontend
npm install
npm run dev
```

The backend polls your local services, normalizes operator state, and broadcasts it over WebSocket. Agents and local tools can push structured events straight into the dashboard through the REST API.
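As a rough sketch of what "normalizes operator state" can mean in practice (hypothetical field names and status vocabulary; the real backend may differ), heterogeneous poll results get mapped onto one shared status scale before broadcast:

```python
from typing import Any

# Hypothetical mapping from assorted service vocabularies to one scale.
STATUS_MAP = {
    "ok": "healthy", "up": "healthy", "running": "healthy",
    "degraded": "degraded", "slow": "degraded",
    "down": "offline", "error": "offline",
}

def normalize_service(name: str, raw: dict[str, Any]) -> dict[str, Any]:
    """Map a raw poll result onto the shared status vocabulary."""
    raw_status = str(raw.get("status", "down")).lower()
    return {
        "service": name,
        "status": STATUS_MAP.get(raw_status, "unknown"),
        "latency_ms": raw.get("latency_ms"),
    }

print(normalize_service("hermes", {"status": "OK", "latency_ms": 12}))
```

Normalizing at the backend keeps the frontend free of per-service special cases.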
Mission Control works best when each tool keeps its own runtime and authentication model:
- **Claude Code** and **Codex** as installed subscription-backed CLIs
- **Hermes** and **OpenClaw** as configured local agent runtimes
- **OpenViking**, **Memory MCP**, and **MLX** as local infrastructure/services
Mission Control does not replace those tools. It turns them into one operator surface.
- agent status and progress events
- service health and telemetry
- memory recall activity and memory monitor logs
- routing decisions and queue depth
- cron/scheduled task freshness
- local system pressure from the host machine
- permanent agent dock for Lead, Hermes, and IrisEye
- cinematic mesh with clickable service and routing focus
- top operator strip for service health, memory mode, and alerts
- memory cause and routing impact surfaced directly in the main view
From Python (send a direct agent message):

```python
import httpx

httpx.post("http://localhost:8000/api/agent-messages", json={
    "from_agent": "hermes",
    "to_agent": "atlas",
    "summary": "Memory route degraded",
    "details": "Gateway degraded. Recommend local fallback until Memory MCP recovers.",
    "files": ["backend/main.py"],
})
```

Via WebSocket (browser / any client):
```javascript
const ws = new WebSocket("ws://localhost:8000/ws");
ws.onmessage = (e) => console.log(JSON.parse(e.data)); // full mesh state on every update
```

Direct messaging payload:
| Field | Type | Use for |
|---|---|---|
| `from_agent` | string | Source agent/tool |
| `to_agent` | string | Target agent/tool |
| `summary` | string | Short operator-readable subject |
| `details` | string | Optional body text |
| `files` | string[] | Optional related file paths |
REST API:

- `GET /api/health` — health check
- `GET /api/agents` — active agents + status
- `GET /api/status` — full dashboard state snapshot
- `GET /api/routing` — current routing summary
- `GET /api/system` — CPU, RAM, MLX RAM, PID
- `GET /api/logs` — recent log buffer
- `GET /api/cron` — Hermes scheduled jobs
- `GET /api/memories` — recent memory recalls
- `GET /api/memory-events` — normalized memory event stream
- `GET /api/amp/messages` — AMP messages from AI Maestro
- `GET /api/amp/events` — live routing events from bridge logs
- `GET /api/agent-messages` — direct agent-to-agent message history
- `POST /api/agent-messages` — send a direct agent-to-agent message
- `POST /api/amp/send` — send AMP message to any agent
| Layer | Tech |
|---|---|
| Frontend | React 19, Vite, TypeScript, Tailwind CSS, Zustand |
| Backend | FastAPI, uvicorn, WebSockets, Pydantic |
| Inference | MLX (Qwen3.5 35B-A3B 4-bit, Apple Silicon) |
| Deploy | Docker Compose (one command) |
This dashboard was built alongside the iriseye local AI mesh, and works with any similar setup:
| Role | Examples |
|---|---|
| Lead agent | Claude Code (Atlas), any CLI agent |
| Task runner | Hermes, any cron-capable agent |
| File/web agent | OpenClaw / iriseye, browser-use |
| Memory store | OpenViking, mem0 |
| Local LLM | MLX (Apple Silicon), Ollama, llama.cpp |
The backend auto-detects running processes and polls local service endpoints — no instrumentation required to get a working dashboard.
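Conceptually, auto-detection reduces to matching a process listing against known service names. A simplified, hypothetical sketch (the service names and detection mechanism here are assumptions; the real backend may also poll HTTP endpoints):

```python
# Hypothetical set of process-name fragments the dashboard looks for.
KNOWN_SERVICES = {"hermes", "openclaw", "mlx_lm.server", "mesh_doctor"}

def detect_services(process_names: list[str]) -> dict[str, bool]:
    """Mark each known service as running if any process name contains it."""
    running = set()
    for name in process_names:
        for svc in KNOWN_SERVICES:
            if svc in name.lower():
                running.add(svc)
    return {svc: svc in running for svc in sorted(KNOWN_SERVICES)}

print(detect_services(["python hermes --serve", "node openclaw.js"]))
```

In a real deployment the process list would come from something like `psutil.process_iter()` rather than a hard-coded list.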
Want the full mesh setup? → iriseye repo
Mission Control now treats the mesh as local-first, premium-by-exception.
**Hermes** is the default workhorse for routine execution. **iriseye** handles interactive file and web tasks. **Atlas** is the lead premium role, served by Codex or Claude Code. **claude** is the premium backup when available.
Current enforced routing:
```
routine -> hermes
specialized -> iriseye
premium -> atlas (fallback: claude)
```
Premium capacity should be reserved for planning, ambiguous debugging, tricky refactors, and final review.
- the README screenshots are captured from the live local dashboard, not static mockups
- the public repo focuses on the dashboard, backend polling, memory/routing state, and operator UI
- local experimental agent wrapper scripts are intentionally not part of the committed dashboard product surface
Hermes is modeled as one local runner with multiple profiles, not as a pile of always-hot models.
Mission Control now also detects Hermes-native isolated profiles from `~/.hermes/profiles` and surfaces them beside the MLX-backed profile runners. That means the dashboard can distinguish:
- **MLX-backed execution profiles** for model/runtime selection
- **Hermes-native profiles** for isolated config, memory, sessions, and gateway state
- **Hermes session lineage** for latest titled session, resume target, per-profile session counts, and whether session search/FTS is available
- **Hermes background work** for launching deeper side tasks without blocking the main operator session
- **Hermes worktree launches and quick-command surfacing** for isolated repo jobs and profile-specific operator shortcuts
- **Hermes worktree lifecycle controls** for launch, stop, branch/path tracking, and cleanup of stale isolated workers
- **Hermes checkpoint awareness** for rollback readiness, snapshot depth, and per-profile rollback guidance before destructive changes
- **Hermes provider graph visibility** for primary model, fallback providers, smart routing, auxiliary providers, and delegation paths per profile
- **Hermes toolset visibility and richer background-task controls** for poll/log inspection directly from Mission Control
- **Hermes shared skill visibility** for local skill count, external skill directories, and mesh-skill connectivity per profile
| Profile | Model | Role |
|---|---|---|
| `workhorse` | Qwen3.5-35B-A3B-4bit | Default local execution profile |
| `sidecar` | Qwen2.5-7B-Instruct-4bit | Cheap auxiliary profile for summaries, routing, compression, and lightweight helper work |
| `code-specialist` | Qwen2.5-Coder-32B-Instruct-4bit | Planned on-demand profile for code-heavy implementation and local review |
| `reasoning-specialist` | DeepSeek-R1-Distill-Qwen-32B-4bit | Planned on-demand profile for harder local reasoning and second-pass debugging/review |
The design goal is simple:
- keep the always-on memory footprint small enough for a 48 GB Apple Silicon machine
- let Hermes handle most task volume locally
- only load heavier specialist profiles when a task actually justifies it
- avoid burning premium capacity on routine work
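Together those goals amount to a memory-budget check before loading a specialist profile. A hypothetical sketch with illustrative (not measured) resident sizes for 4-bit weights:

```python
# Illustrative resident sizes in GB for 4-bit weights; assumptions, not benchmarks.
PROFILE_GB = {
    "workhorse": 20.0,
    "sidecar": 4.5,
    "code-specialist": 18.0,
    "reasoning-specialist": 18.0,
}
BUDGET_GB = 48.0 * 0.75  # leave headroom for the OS and other processes

def can_load(requested: str, resident: list[str]) -> bool:
    """True if the requested profile fits alongside already-resident ones."""
    used = sum(PROFILE_GB[p] for p in resident)
    return used + PROFILE_GB[requested] <= BUDGET_GB

print(can_load("code-specialist", ["workhorse", "sidecar"]))
```

With numbers in this range, a specialist only fits if the workhorse or sidecar is unloaded first, which is exactly the "on-demand, not always-hot" behavior described above.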
MIT


