
Kumatastic

Meshtastic mesh network monitoring with Uptime Kuma.

Monitor your Meshtastic nodes from multiple gateways. If any gateway sees a node, it's UP.

  Meshtastic              Kumatastic             Uptime Kuma
 ┌───────────┐         ┌────────────────┐       ┌───────────┐
 │ Gateway A │──┐      │                │       │           │
 └───────────┘  ├─────►│  Collector(s)  │       │  Status   │
 ┌───────────┐  │      │       ↓        │       │  Page     │
 │ Gateway B │──┘      │  State Store   │──────►│           │
 └───────────┘         │       ↓        │       │  !node1 ✓ │
 ┌───────────┐         │   Pusher(s)    │       │  !node2 ✓ │
 │ Gateway C │────────►│                │       │  !node3 ✗ │
 │ (mmrelay) │         └────────────────┘       └───────────┘
 └───────────┘

How it works: Collectors connect to Meshtastic radios and record node sightings. Pushers read the sightings and report UP/DOWN status to Uptime Kuma. A shared node manifest (nodes.yaml) controls which nodes are tracked.
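The merge rule ("if any gateway sees a node, it's UP") can be sketched roughly as follows. The field names and the 30-minute threshold here are illustrative assumptions, not kumatastic's actual schema:

```python
import time

# Hypothetical sketch of the merge logic described above -- the sighting
# record fields and the threshold are assumptions, not kumatastic's schema.
OFFLINE_AFTER = 1800  # seconds without any sighting before a node counts as DOWN

def node_status(sightings, node_id, now=None):
    """A node is UP if ANY collector saw it recently enough."""
    now = now or time.time()
    last_seen = [s["last_seen"] for s in sightings if s["node_id"] == node_id]
    return "up" if last_seen and now - max(last_seen) < OFFLINE_AFTER else "down"

# Gateway A saw !aabbccdd five minutes ago; gateway B hasn't seen it for hours.
sightings = [
    {"collector": "gateway-a", "node_id": "!aabbccdd", "last_seen": time.time() - 300},
    {"collector": "gateway-b", "node_id": "!aabbccdd", "last_seen": time.time() - 7200},
]
print(node_status(sightings, "!aabbccdd"))  # any recent sighting wins -> up
```

The key property is that sightings from all collectors are pooled before the freshest one is compared against the threshold, so one healthy gateway is enough to keep a node UP.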

Status: Kumatastic has been tested on a single production deployment (3 collectors, 2 pushers, ~20 nodes across 2 hosts). It works, but wider testing is needed. Documentation and deployment simplification are ongoing; contributions and feedback are welcome.

Key features:

  • Multiple collectors merge visibility — no single point of observation
  • HTTP sighting forwarding between hosts for cross-gateway awareness
  • Push to multiple Kuma instances (internal dashboard, public status page, etc.)
  • Auto-reconnects on connection loss with exponential backoff
  • Scales from a single Raspberry Pi to a distributed multi-host setup
  • Works standalone or as a meshtastic-matrix-relay plugin

Quick Start

1. Install

pip install -e ".[all]"

Or install the minimal core with pip install -e . and add meshtastic or python-socketio[client] as needed.

2. Define your nodes

# nodes.yaml — only these nodes will be monitored
nodes:
  "!aabbccdd":
    name: "Base Station Alpha"
    tags: ["core"]
  "!11223344":
    name: "Hilltop Repeater"
    tags: ["infra"]

3. Configure

# kumatastic.yaml
collector:
  id: "gateway-1"
  meshtastic: "tcp:192.168.1.100:4403"
  state_path: "/var/lib/kumatastic/state.json"
  manifest_path: "nodes.yaml"

pusher:
  state_path: "/var/lib/kumatastic/state.json"
  manifest_path: "nodes.yaml"
  targets:
    - name: "kuma"
      url: "http://localhost:3001"
      username: "admin"
      password: "your-password"

See Configuration Reference for all options.

4. Run

# Create monitors on Kuma (once)
kumatastic init --target kuma

# Start collector and pusher
kumatastic collect &
kumatastic push &

# Check status
kumatastic status

5. Keep in sync

kumatastic sync

Creates monitors for new nodes, deletes orphans, and updates the status page. Run from cron every 30 minutes:

*/30 * * * * root /usr/local/bin/kumatastic sync --config /etc/kumatastic/kumatastic.yaml
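Conceptually, sync is a set reconciliation between the node manifest and the monitors that already exist in Kuma. A rough sketch (the node IDs are made up for illustration):

```python
# Illustrative sketch of the reconciliation `kumatastic sync` performs:
# compare manifest nodes against existing Kuma monitors, then create
# the missing ones and delete the orphans. IDs here are hypothetical.
manifest_nodes = {"!aabbccdd", "!11223344", "!55667788"}
kuma_monitors = {"!aabbccdd", "!deadbeef"}  # !deadbeef was removed from nodes.yaml

to_create = sorted(manifest_nodes - kuma_monitors)
to_delete = sorted(kuma_monitors - manifest_nodes)

print("create:", to_create)  # ['!11223344', '!55667788']
print("delete:", to_delete)  # ['!deadbeef']
```

Because both sides are treated as sets, running sync repeatedly is idempotent, which is what makes it safe to fire from cron.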

CLI

Command                         Description
kumatastic collect              Run collector daemon
kumatastic push                 Run pusher daemon
kumatastic push --once          One-shot push (for testing)
kumatastic status               Show current node state
kumatastic init --target NAME   Create monitors for all manifest nodes
kumatastic sync                 Sync monitors and status page

--config FILE is optional. If omitted, kumatastic searches ./kumatastic.yaml, ~/.config/kumatastic/kumatastic.yaml, and /etc/kumatastic/kumatastic.yaml, in that order.
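A minimal sketch of that search order, assuming nothing about kumatastic's internals beyond the documented path list:

```python
from pathlib import Path

# The documented search order when --config is omitted.
SEARCH_PATHS = [
    Path("./kumatastic.yaml"),
    Path.home() / ".config" / "kumatastic" / "kumatastic.yaml",
    Path("/etc/kumatastic/kumatastic.yaml"),
]

def find_config():
    """Return the first config file that exists, or None if no file is found."""
    for path in SEARCH_PATHS:
        if path.is_file():
            return path
    return None
```

The per-user path losing to a config in the current directory means a local kumatastic.yaml always wins, which is convenient for testing alternate configs.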

Scaling Up

The simplest setup is one collector and one pusher on the same host. For production, kumatastic supports a many-to-many topology where multiple collectors forward sightings to multiple pushers over HTTP:

  Host A                           Host B
 ┌────────────────────┐           ┌────────────────────┐
 │ Collector (gw 1)   │── POST ──►│ Pusher 2           │
 │ Collector (gw 2)   │ /sighting │   → Kuma A         │
 │ Pusher 1           │           │   → Kuma B         │
 │   → Kuma A         │           └────────────────────┘
 │   → Kuma B         │
 └────────────────────┘

In distributed mode (enabled by push_secret), all pushers derive identical per-node push tokens using HMAC-SHA256, so no coordination between hosts is needed. Only UP is ever pushed; Kuma's dead-man-switch timer marks a node DOWN when the pushes stop.
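A hedged sketch of such a derivation; the exact key/message layout and token length kumatastic uses are assumptions here:

```python
import hashlib
import hmac

# Sketch of deriving identical per-node push tokens from a shared secret
# with HMAC-SHA256. The key/message layout and the 32-character token
# length are assumptions, not kumatastic's actual scheme.
def push_token(push_secret: str, node_id: str) -> str:
    digest = hmac.new(push_secret.encode(), node_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:32]

# Any pusher holding the same secret derives the same token for a node,
# so every host can push to the same Kuma monitor without coordinating.
a = push_token("shared-secret", "!aabbccdd")
b = push_token("shared-secret", "!aabbccdd")
print(a == b)  # True
```

Because HMAC is deterministic in its key and message, the tokens agree across hosts, while anyone without the secret cannot forge a valid token for a node.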

See Architecture for details on topologies, data flow, and design decisions.

Deployment

See deploy/ for systemd units, example configs, cron jobs, and a step-by-step setup guide.

The mmrelay plugin can be used instead of a standalone collector if you're already running meshtastic-matrix-relay.

Documentation

Doc                       Description
Configuration Reference   All config options, env vars, manifest format
Architecture              Components, topologies, data flow, design decisions
Tuning Guide              Threshold tuning, common patterns, troubleshooting
Deployment Guide          systemd setup, secrets, multi-user permissions

Development

pip install -e ".[dev]"
pytest tests/ -v

Author

Designed and tested by Adam Roper for the Georgia Statewide Mesh Coalition.

License

MIT
