RTSPanda is a self-hosted RTSP camera platform for browser live view, recording, and detection workflows. It runs as a single Go backend with an embedded React frontend, a mediamtx relay process, and optional AI services based on deployment mode.
Project tags: rtsp video-surveillance golang react fastapi onnxruntime raspberry-pi docker-compose
This guide is the production operator walkthrough for RTSPanda v0.1.1:
- Choose the correct deployment mode
- Prepare the host safely
- Run one of the supported setup methods
- Validate health and stream readiness
- Operate, harden, and upgrade reliably
RTSPanda has three runtime modes. Pick one before setup.
| Mode | Target Hardware | AI Path | Real-time Detection | Typical Use |
|---|---|---|---|---|
| standard | x86 server, optional GPU | Local/remote YOLO worker | Yes | Full production stack |
| pi | Raspberry Pi / ARM | Snapshot AI (Claude/OpenAI) and/or remote YOLO worker | No (local) | Edge ingest/view + alerts |
| viewer | Desktop/server | None | No | Live view + optional recording only |
Important: Raspberry Pi is not a local real-time YOLO inference host. Use snapshot AI on Pi or route detection to a remote worker.
All supported setup paths are listed here:
| Method | Command Path | Best For |
|---|---|---|
| Standard full stack | `docker compose up --build -d` | Single-host production |
| Pi mode (viewer/snapshot AI) | `./scripts/pi-up.sh` | Pi edge node |
| Pi + remote AI worker | `AI_WORKER_URL=... ./scripts/pi-up.sh` | Pi ingest + server inference |
| Standalone AI worker | `PI_DEPLOYMENT_MODE=ai-worker ./scripts/pi-up.sh` (server only) | Dedicated inference host |
| Android no-Docker (2-node) | `AI_WORKER_URL=... ./scripts/android-up.sh` | Android/Termux + remote YOLO |
| Android no-Docker (3-node) | Android viewer + Pi `pi-up.sh` relay | Android hub + Pi detection offload |
| Viewer mode (binary) | `RTSPANDA_MODE=viewer ./rtspanda` | No-Docker monitoring |
| Viewer mode (Docker) | `RTSPANDA_MODE=viewer docker compose up --build -d rtspanda` | Lightweight container deployment |
| Source development setup | Run backend/frontend/worker directly | Local development and debugging |
- Docker Engine 24+ and the Docker Compose plugin
- Open ports: `8080` (app), `8888` (HLS, served via the app reverse path), `9997`/`9998` (internal mediamtx API/metrics)
- Stable LAN access to camera RTSP endpoints
- For Standard mode with multiple cameras: CPU headroom; a GPU is recommended
- For Pi mode with snapshot AI: an API key for Claude or OpenAI
```bash
git clone https://github.com/248Tech/RTSPanda.git
cd RTSPanda
```

Optional but recommended:

```bash
cp .env.example .env 2>/dev/null || true
```

Use one of the methods below in full.

Standard full stack:

```bash
docker compose up --build -d
```

What starts:

- `rtspanda` (backend + embedded React frontend)
- `ai-worker` (YOLO detector)
Validation:

```bash
curl -s http://127.0.0.1:8080/api/v1/health
curl -s http://127.0.0.1:8080/api/v1/health/ready
```
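For scripted rollouts it helps to gate on the readiness endpoint instead of sleeping for a fixed time. A minimal sketch, assuming bash and curl; `wait_ready` and `probe_app` are hypothetical helpers, not part of RTSPanda's tooling:

```shell
# Hypothetical helpers; not shipped with RTSPanda.
wait_ready() {
  # $1 = probe command, $2 = max attempts (2s apart)
  local probe="$1" tries="${2:-30}" i
  for ((i = 0; i < tries; i++)); do
    "$probe" >/dev/null 2>&1 && return 0
    sleep 2
  done
  return 1
}

probe_app() {
  # Succeeds once the readiness endpoint returns HTTP 2xx
  curl -fsS http://127.0.0.1:8080/api/v1/health/ready
}

# usage: wait_ready probe_app 30 && echo "app is ready"
```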
```bash
curl -s http://127.0.0.1:8080/api/v1/detections/health
```

Pi mode (viewer/snapshot AI):

```bash
chmod +x ./scripts/pi-*.sh
./scripts/pi-preflight.sh
./scripts/pi-up.sh
```

Enable snapshot AI:
```bash
export SNAPSHOT_AI_ENABLED=true
export SNAPSHOT_AI_PROVIDER=claude
export SNAPSHOT_AI_API_KEY=sk-ant-...
export SNAPSHOT_AI_INTERVAL_SECONDS=30
export SNAPSHOT_AI_THRESHOLD=medium
./scripts/pi-up.sh
```

Pi + remote AI worker. On the AI server:
```bash
docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
curl -s http://127.0.0.1:8090/health
```

On the Pi:

```bash
export AI_WORKER_URL=http://<ai-server-ip>:8090
./scripts/pi-up.sh
```

Standalone AI worker. This is only for dedicated inference nodes; do not run it on a Raspberry Pi:

```bash
export PI_DEPLOYMENT_MODE=ai-worker
./scripts/pi-up.sh
```

Android no-Docker: see docs/android-no-docker.md for the full walkthrough.
Quick start (2-node: Android + remote AI server).

On your AI server (Docker required):

```bash
docker compose -f docker-compose.yml -f docker-compose.standalone.yml \
  --profile ai-worker up -d --no-build ai-worker-standalone
```

On Android in Termux:

```bash
pkg install -y golang ffmpeg wget
git clone https://github.com/248Tech/RTSPanda.git ~/RTSPanda
cd ~/RTSPanda/backend && go build -o ../rtspanda ./cmd/rtspanda
# Download the mediamtx ARM64 binary; see docs/android-no-docker.md Step 4
export AI_WORKER_URL=http://<ai-server-ip>:8090
./scripts/android-up.sh
```

Validation:

```bash
curl -s http://127.0.0.1:8080/api/v1/health
curl -s http://127.0.0.1:8080/api/v1/detections/health
# ai_mode should be "remote"
```

Viewer mode: build or use a compiled binary, then run:

```bash
RTSPANDA_MODE=viewer DATA_DIR=./data ./rtspanda
```

Or with Docker:

```bash
RTSPANDA_MODE=viewer docker compose up --build -d rtspanda
```
Source development setup. Backend:

```bash
cd backend
go run ./cmd/rtspanda
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

AI worker:

```bash
cd ai_worker
python -m pip install -r requirements.txt
python -m uvicorn app.main:app --host 0.0.0.0 --port 8090
```

First-run validation:

- Open http://<host>:8080
- Add one camera with a known-good RTSP URL
- Confirm `/api/v1/cameras/:id/stream` returns `status=initializing` while startup completes, then `status=online` with a non-empty `hls_url` once the stream is playable
- Verify the dashboard card transitions to Live
- Trigger a manual stream reset if needed: `curl -X POST http://127.0.0.1:8080/api/v1/streams/reset`
- Confirm the detection health endpoint for your mode:
  - Standard: YOLO worker healthy
  - Pi: snapshot AI configured, or remote worker reachable
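When scripting the checks above, the stream-status JSON can be inspected without extra tooling. A rough sketch; `hls_url_of` is a hypothetical helper, and the sed pattern only handles simple flat JSON (prefer `jq` where available):

```shell
# Hypothetical helper: extract "hls_url" from flat stream-status JSON.
hls_url_of() {
  sed -n 's/.*"hls_url":"\([^"]*\)".*/\1/p'
}

# usage (replace <id> with a real camera id):
# curl -s http://127.0.0.1:8080/api/v1/cameras/<id>/stream | hls_url_of
```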
Core runtime:

| Variable | Default | Notes |
|---|---|---|
| `RTSPANDA_MODE` | auto | `standard`, `pi`, or `viewer` |
| `DATA_DIR` | ./data | SQLite, snapshots, recordings |
| `PORT` | 8080 | HTTP bind port |
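Since setup copies `.env.example` to `.env`, the core variables can also live there instead of the shell environment. A minimal sketch for a viewer-mode deployment; values are illustrative, not recommendations:

```env
# Illustrative .env for a viewer-mode deployment
RTSPANDA_MODE=viewer
PORT=8080
DATA_DIR=/srv/rtspanda/data
```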
Streaming (mediamtx/HLS):

| Variable | Default | Notes |
|---|---|---|
| `MEDIAMTX_HLS_ALWAYS_REMUX` | false | Keep low-latency profile |
| `MEDIAMTX_HLS_SEGMENT_COUNT` | 3 | Playlist segment count |
| `MEDIAMTX_HLS_SEGMENT_DURATION` | 2s | Segment size |
| `MEDIAMTX_HLS_PART_DURATION` | 200ms | LL-HLS part duration |
| `MEDIAMTX_SOURCE_ON_DEMAND` | false | Streams initialize proactively |
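The tuning trade-off in one example: shorter segments reduce glass-to-glass latency but raise playlist request rate and CPU load. Illustrative values only, not project recommendations:

```shell
# Illustrative lower-latency HLS tuning; shorter segments mean more
# frequent playlist churn and higher per-stream CPU.
export MEDIAMTX_HLS_SEGMENT_COUNT=3
export MEDIAMTX_HLS_SEGMENT_DURATION=1s
export MEDIAMTX_HLS_PART_DURATION=200ms
```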
AI routing:

| Variable | Default | Notes |
|---|---|---|
| `AI_MODE` | local | `local` or `remote` |
| `AI_WORKER_URL` | empty | Remote worker endpoint |
| `DETECTOR_URL` | empty | Direct detector override |
Thermal (Pi):

| Variable | Default | Notes |
|---|---|---|
| `THERMAL_MONITOR_ENABLED` | false (auto-on for arm64 + pi) | Force thermal monitor on/off |
| `THERMAL_AUTO_RESUME` | false | Keep manual recovery behavior after thermal events |
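On Pi-class hosts the two flags combine like this; values are illustrative, and auto-resume trades manual review of thermal events for uptime:

```shell
# Illustrative: force the monitor on and let streams resume
# automatically after a thermal event instead of waiting for
# a manual recovery step.
export THERMAL_MONITOR_ENABLED=true
export THERMAL_AUTO_RESUME=true
```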
Snapshot AI:

| Variable | Default | Notes |
|---|---|---|
| `SNAPSHOT_AI_ENABLED` | false | Enables the snapshot analysis loop |
| `SNAPSHOT_AI_PROVIDER` | claude | `claude` or `openai` |
| `SNAPSHOT_AI_API_KEY` | empty | Provider API key |
| `SNAPSHOT_AI_INTERVAL_SECONDS` | 30 | Capture interval |
| `SNAPSHOT_AI_PROMPT` | built-in | Scene interpretation prompt |
| `SNAPSHOT_AI_THRESHOLD` | medium | Alert sensitivity |
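The built-in prompt can be overridden via `SNAPSHOT_AI_PROMPT`; the prompt text below is purely illustrative:

```shell
# Illustrative custom scene-interpretation prompt.
export SNAPSHOT_AI_PROMPT='List any people, vehicles, or animals in this frame. Answer "none" if the scene is empty.'
```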
Auth:

| Variable | Default | Notes |
|---|---|---|
| `AUTH_ENABLED` | false | Enable API token auth |
| `AUTH_TOKEN` | empty | Required when auth is enabled |
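One way to generate a strong token before enabling auth; assumes `openssl` is installed (any cryptographically random value of 32+ bytes works):

```shell
# Generate a 64-hex-character random token and enable auth.
export AUTH_ENABLED=true
export AUTH_TOKEN="$(openssl rand -hex 32)"
```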
Recommended for long-running deployments:
- Put RTSPanda behind a reverse proxy with TLS
- Restrict app port access to trusted LAN/VPN
- Enable the auth token and rotate it periodically
- Mount `DATA_DIR` on durable storage
- Back up SQLite and recording metadata daily
- Watch container restart patterns and mediamtx logs
- Keep the host clock and timezone correct for event ordering
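The daily-backup recommendation can be scripted as a dated tarball of `DATA_DIR`. A sketch; `backup_data_dir` is a hypothetical helper and the paths in the usage line are illustrative:

```shell
# Hypothetical helper: archive the data dir into a dated tarball.
backup_data_dir() {
  # $1 = data dir, $2 = backup dir
  mkdir -p "$2"
  tar -czf "$2/rtspanda-data-$(date +%F).tar.gz" \
    -C "$(dirname "$1")" "$(basename "$1")"
}

# usage: backup_data_dir ./data /mnt/backup/rtspanda
```

Restores are then a matter of extracting the most recent tarball back over `DATA_DIR` while the container is stopped.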
Logs:

```bash
docker compose logs -f rtspanda
docker compose logs -f ai-worker
```

Pi profile logs:

```bash
docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi logs -f rtspanda-pi
```

Upgrade:

```bash
git pull
docker compose up --build -d
```

Pi profile upgrade:

```bash
git pull
./scripts/pi-up.sh
```

Rollback:
- Keep the previous image/tag available
- Back up `DATA_DIR` before upgrades
- If needed, redeploy the previous tag and restore the data backup
Camera not streaming:
- Verify the camera RTSP URL with VLC or ffplay
- Confirm camera credentials and codec support
- Check backend logs for mediamtx path-add errors
- Trigger the stream reset endpoint
- Validate that the camera and host are on a routable network

HLS not loading:
- Confirm `/api/v1/cameras/:id/stream` returns a non-empty `hls_url`
- Check whether the HLS playlist is reachable from the app host
- Review firewall rules between the app and camera

Detection not working:
- Standard mode: check `ai-worker` health and model load
- Pi mode: verify the snapshot AI provider/API key
- Remote mode: verify `AI_WORKER_URL` connectivity
Further documentation:

- Android no-Docker setup
- Raspberry Pi setup
- Cluster mode (Pi + remote AI + Android 3-node)
- Streaming tuning
- Testing strategy
- Release notes v0.1.1
Run tests:

```bash
cd backend && go test ./...
cd frontend && npm run test -- --config vitest.config.ts
cd ai_worker && python -m pytest -q
```

License: MIT (see LICENSE).