Commit 793a239

Fix Pi standalone startup and ONNX model pin
1 parent dcf5e7c commit 793a239

11 files changed: +75 −46 lines

README.md (8 additions, 5 deletions)

````diff
@@ -98,7 +98,8 @@ chmod +x ./scripts/pi-*.sh
 Equivalent manual command:

 ```bash
-docker compose --profile pi up --build -d rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi build rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi up -d --no-build rtspanda-pi
 ```

 Behavior:
@@ -122,7 +123,8 @@ Best for a distributed deployment where the Pi handles cameras and the second ma
 ```bash
 git clone https://github.com/248Tech/RTSPanda.git
 cd RTSPanda
-docker compose --profile ai-worker up --build -d ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
 ```

 Verify the worker:
@@ -144,7 +146,8 @@ export AI_WORKER_URL="http://192.168.1.50:8090"
 Equivalent manual command on the Pi:

 ```bash
-docker compose --profile pi up --build -d rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi build rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi up -d --no-build rtspanda-pi
 ```

 Verify from the Pi:
@@ -165,14 +168,14 @@ The Docker AI worker is ONNX-only and never exports or converts models at runtim

 ```bash
 export MODEL_SOURCE=remote
-export YOLO_MODEL_NAME=yolov8n
+export YOLO_MODEL_NAME=yolo11n
 export YOLO_MODEL_RELEASE=v8.3.0
 ```

 Optional mirror:

 ```bash
-export YOLO_MODEL_URL="https://your-mirror.example/yolov8n.onnx"
+export YOLO_MODEL_URL="https://your-mirror.example/yolo11n.onnx"
 ```

 ### Local prebuilt model
````
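The `YOLO_MODEL_NAME`/`YOLO_MODEL_RELEASE` pin in the README diff above resolves to a single Ultralytics release asset. A minimal sketch of the URL the build will fetch, assuming the documented defaults (the variable names mirror the compose build args; the commented `curl` check is my own suggestion, not part of the build):

```bash
# Build the same asset URL the Dockerfile's fallback path uses. With nothing
# exported, this yields the pinned yolo11n / v8.3.0 asset.
model_name="${YOLO_MODEL_NAME:-yolo11n}"
model_release="${YOLO_MODEL_RELEASE:-v8.3.0}"
model_url="https://github.com/ultralytics/assets/releases/download/${model_release}/${model_name}.onnx"
echo "$model_url"
# Optional reachability check before a long Docker build (needs network):
# curl -fsSLI "$model_url" >/dev/null && echo "asset reachable"
```

Running the check up front fails fast on a bad pin instead of partway through an image build.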

RELEASE_NOTES_v0.0.8.md (7 additions, 1 deletion)

````diff
@@ -9,24 +9,29 @@ RTSPanda v0.0.8 turns the project into a more serious edge-video platform: the d
 - Removed AI-worker export fallback from Docker builds. The worker now uses prebuilt ONNX models only.
 - Added additive Compose profiles for `full`, `pi`, and `ai-worker` without breaking the existing `docker compose up --build -d` flow.
 - Rewrote setup and deployment docs to clearly support Standard, Pi Standalone, and Pi + AI topologies.
+- Fixed Pi standalone startup so it no longer traverses the local AI-worker build path.
+- Repaired the default remote ONNX pin to a live Ultralytics asset and improved failure messages for invalid model sources.

 ## What Changed

 ### Platform and Deployment
 - `docker-compose.yml` now supports:
   - standard full-stack deployment
   - lightweight Pi deployment
   - standalone remote AI-worker deployment
+- New `docker-compose.standalone.yml` overlay isolates Pi-only and AI-worker-only deployments from the unprofiled full stack.
 - `scripts/pi-up.sh` now supports:
   - `PI_DEPLOYMENT_MODE=pi`
   - `PI_DEPLOYMENT_MODE=full`
   - `PI_DEPLOYMENT_MODE=ai-worker`
+- Standalone launch paths now use `build` followed by `up --no-build` so targeted deployments only build the intended service.
 - `scripts/pi-preflight.sh` now checks deployment mode and model-source expectations more accurately for Docker-first Pi workflows.

 ### AI Runtime
 - `ai_worker/Dockerfile` now resolves models deterministically:
   - local prebuilt ONNX file first
   - explicit `YOLO_MODEL_URL` second
   - named Ultralytics ONNX asset fallback last
+- Default remote model pin now targets `yolo11n` on Ultralytics `v8.3.0`, which is published and buildable today.
 - No PyTorch install path
 - No `YOLO(...).export(...)`
 - No runtime model conversion on ARM
@@ -60,7 +65,8 @@ docker compose up --build -d
 AI host:

 ```bash
-docker compose --profile ai-worker up --build -d ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
 ```

 Pi host:
````

ai_worker/Dockerfile (3 additions, 3 deletions)

````diff
@@ -13,7 +13,7 @@
 FROM python:3.12-slim AS model-resolver

 ARG MODEL_SOURCE=remote
-ARG YOLO_MODEL_NAME=yolov8n
+ARG YOLO_MODEL_NAME=yolo11n
 ARG YOLO_MODEL_RELEASE=v8.3.0
 ARG YOLO_MODEL_URL

@@ -41,13 +41,13 @@ RUN --mount=type=bind,source=.,target=/context,readonly \
       exit 1; \
     elif [ -n "${YOLO_MODEL_URL:-}" ]; then \
       curl -fsSL "${YOLO_MODEL_URL}" -o /model/model.onnx || { \
-        echo "failed to download model from YOLO_MODEL_URL=${YOLO_MODEL_URL}" >&2; exit 1; \
+        echo "failed to download model from YOLO_MODEL_URL=${YOLO_MODEL_URL}; provide a valid URL or use MODEL_SOURCE=local with ./model.onnx" >&2; exit 1; \
       }; \
     else \
       model_file="${YOLO_MODEL_NAME}.onnx"; \
       model_url="https://github.com/ultralytics/assets/releases/download/${YOLO_MODEL_RELEASE}/${model_file}"; \
       curl -fsSL "${model_url}" -o /model/model.onnx || { \
-        echo "failed to download prebuilt ONNX model ${model_file} from ${model_url}" >&2; exit 1; \
+        echo "failed to download prebuilt ONNX model ${model_file} from ${model_url}; set YOLO_MODEL_URL or provide ./model.onnx / ./ai_worker/model/model.onnx" >&2; exit 1; \
       }; \
     fi; \
     test -s /model/model.onnx
````
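The resolver's three-tier order can be sketched as a small shell function. This is a simplified illustration of the decision order only, not the Dockerfile's exact code: the function name, the `echo`-based return values, and the omission of the `MODEL_SOURCE` switch are all mine.

```bash
# Mirror the Dockerfile's resolution order: local prebuilt file first,
# explicit YOLO_MODEL_URL second, named Ultralytics release asset last.
resolve_model() {
  if [ -s ./model.onnx ]; then
    echo "local:./model.onnx"
  elif [ -s ./ai_worker/model/model.onnx ]; then
    echo "local:./ai_worker/model/model.onnx"
  elif [ -n "${YOLO_MODEL_URL:-}" ]; then
    echo "url:${YOLO_MODEL_URL}"
  else
    echo "release:https://github.com/ultralytics/assets/releases/download/${YOLO_MODEL_RELEASE:-v8.3.0}/${YOLO_MODEL_NAME:-yolo11n}.onnx"
  fi
}

resolve_model
```

Because each tier short-circuits the next, dropping a `model.onnx` into the build context always wins over any URL or pin, which keeps air-gapped builds deterministic.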

ai_worker/export_model.py (6 additions, 6 deletions)

````diff
@@ -1,15 +1,15 @@
 """
-Export a YOLOv8 .pt model to ONNX format for use with the RTSPanda AI worker.
+Export a YOLO .pt model to ONNX format for use with the RTSPanda AI worker.

 Usage (run once, requires ultralytics):

     pip install ultralytics
     python ai_worker/export_model.py
-    # produces ai_worker/yolov8n.onnx
+    # produces ai_worker/yolo11n.onnx

 Then point the AI worker at the file:

-    YOLO_MODEL_PATH=/path/to/yolov8n.onnx uvicorn app.main:app ...
+    YOLO_MODEL_PATH=/path/to/yolo11n.onnx uvicorn app.main:app ...

 For Docker users, export the model ahead of time and either:
 - place it at ./model.onnx before building the image, or
@@ -47,11 +47,11 @@ def export(model_name: str, output_dir: Path) -> Path:


 if __name__ == "__main__":
-    parser = argparse.ArgumentParser(description="Export YOLOv8 model to ONNX")
+    parser = argparse.ArgumentParser(description="Export YOLO model to ONNX")
     parser.add_argument(
         "--model",
-        default="yolov8n",
-        help="Model name without extension, e.g. yolov8n, yolov8s (default: yolov8n)",
+        default="yolo11n",
+        help="Model name without extension, e.g. yolo11n, yolo11s (default: yolo11n)",
     )
     parser.add_argument(
         "--output-dir",
````

docker-compose.standalone.yml (11 additions, 0 deletions)

````diff
@@ -0,0 +1,11 @@
+# Standalone-mode compose overlay.
+#
+# Use this file together with docker-compose.yml for Pi-only or AI-worker-only
+# deployments so unprofiled full-stack services do not remain active in the
+# combined project model.
+services:
+  ai-worker:
+    profiles: ["full"]
+
+  rtspanda:
+    profiles: ["full"]
````
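The overlay works by moving the otherwise-unprofiled `ai-worker` and `rtspanda` services behind the `full` profile, so a `--profile pi` or `--profile ai-worker` invocation no longer selects them. One way to confirm this (assumes Docker Compose v2; the `overlay` variable is my own shorthand, and the expected service lists follow from the diff above, not from running the commands here):

```bash
# The overlay must be passed on every compose invocation, or the unprofiled
# full-stack services reappear in the project model. Keeping the -f flags in
# one variable makes the pi and ai-worker commands consistent.
overlay="-f docker-compose.yml -f docker-compose.standalone.yml"
echo "docker compose $overlay --profile pi config --services"
echo "docker compose $overlay --profile ai-worker config --services"
# Run the printed commands in the repo root: each should list only the
# profile's own service, since the full-stack pair now needs --profile full.
```

`docker compose config --services` is a cheap way to audit profile selection before starting anything.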

docker-compose.yml (1 addition, 1 deletion)

````diff
@@ -4,7 +4,7 @@ x-ai-worker-common: &ai-worker-common
     dockerfile: ai_worker/Dockerfile
     args:
       MODEL_SOURCE: ${MODEL_SOURCE:-remote}
-      YOLO_MODEL_NAME: ${YOLO_MODEL_NAME:-yolov8n}
+      YOLO_MODEL_NAME: ${YOLO_MODEL_NAME:-yolo11n}
       YOLO_MODEL_RELEASE: ${YOLO_MODEL_RELEASE:-v8.3.0}
       YOLO_MODEL_URL: ${YOLO_MODEL_URL:-}
   image: rtspanda-ai-worker:latest
````
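The `${VAR:-default}` form in the compose args falls back to the pinned value whenever the variable is unset or empty. The same expansion works in plain shell, which makes the effective defaults easy to check before a build (a standalone illustration, not project code):

```bash
# With nothing exported, the compose args resolve to the pinned defaults.
unset YOLO_MODEL_NAME YOLO_MODEL_RELEASE
echo "${YOLO_MODEL_NAME:-yolo11n}"    # -> yolo11n
echo "${YOLO_MODEL_RELEASE:-v8.3.0}"  # -> v8.3.0

# Any exported non-empty value overrides the default.
export YOLO_MODEL_NAME=yolo11s
echo "${YOLO_MODEL_NAME:-yolo11n}"    # -> yolo11s
```

Note that `:-` also substitutes for an *empty* value, which is why `YOLO_MODEL_URL: ${YOLO_MODEL_URL:-}` safely passes an empty build arg when no mirror is configured.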

docs/ai-pi-compatibility.md (3 additions, 3 deletions)

````diff
@@ -37,7 +37,7 @@ The worker now includes explicit low-power controls:

 ## 3) CPU-Only Inference Expectations (Realistic)

-These are practical planning ranges for **YOLOv8n ONNX** on CPU-only Pi, assuming one worker and no GPU/NPU acceleration:
+These are practical planning ranges for a **nano-class ONNX detector such as `yolo11n`** on CPU-only Pi, assuming one worker and no GPU/NPU acceleration:

 - **Pi 5 (8 GB)**, 640 input: typically **~250-450 ms/inference** (about 2-4 FPS max sustained detector throughput).
 - **Pi 4 (4-8 GB)**, 640 input: typically **~600-1200 ms/inference** (about 0.8-1.6 FPS).
@@ -47,7 +47,7 @@ Operational guidance:

 - For backend sampling, plan around **1 frame every 1-3 seconds per actively tracked camera** on Pi 4.
 - Use **single detector concurrency** and keep queue depth low.
-- Prefer `yolov8n` for Pi; larger variants (`s/m/l/x`) are usually too slow for practical multi-camera tracking.
+- Prefer `yolo11n` for Pi; larger variants (`s/m/l/x`) are usually too slow for practical multi-camera tracking.

 ## 4) Frame Processing Limits and Tunables

@@ -106,7 +106,7 @@ This keeps API compatibility while disabling heavy inference entirely.

 ## 6) Recommended Model/Runtime Adjustments

-- Prefer **YOLOv8n ONNX**.
+- Prefer **YOLO11n ONNX**.
 - Keep backend detection sampling conservative (seconds, not sub-second) on Pi.
 - Keep detector concurrency at 1 where possible.
 - Avoid large source frames; downscale before inference.
````

docs/cluster-mode.md (10 additions, 6 deletions)

````diff
@@ -24,7 +24,8 @@ The transport stays simple:
 ```bash
 git clone https://github.com/248Tech/RTSPanda.git
 cd RTSPanda
-docker compose --profile ai-worker up --build -d ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
 ```

 Health check:
@@ -50,7 +51,8 @@ export AI_WORKER_URL="http://192.168.1.50:8090"
 Equivalent manual command:

 ```bash
-docker compose --profile pi up --build -d rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi build rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi up -d --no-build rtspanda-pi
 ```

 ## 3. Verify the connection
@@ -76,21 +78,23 @@ Expected:

 ```bash
 export MODEL_SOURCE=remote
-export YOLO_MODEL_NAME=yolov8n
-docker compose --profile ai-worker up --build -d ai-worker-standalone
+export YOLO_MODEL_NAME=yolo11n
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
 ```

 ### Use a local prebuilt model

 ```bash
 cp /path/to/model.onnx ./model.onnx
 export MODEL_SOURCE=local
-docker compose --profile ai-worker up --build -d ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
 ```

 You can also place the file at `./ai_worker/model/model.onnx`.

 ## Operational Notes
 - Keep the AI host and Pi on a stable LAN.
-- Prefer `yolov8n` for CPU-only deployments unless you have measured headroom.
+- Prefer `yolo11n` for CPU-only deployments unless you have measured headroom.
 - The default `docker compose up --build -d` workflow is unchanged and still runs the full local stack on a single machine.
````

docs/raspberry-pi-deployment.md (2 additions, 1 deletion)

````diff
@@ -33,7 +33,8 @@ PI_DEPLOYMENT_MODE=ai-worker ./scripts/pi-up.sh
 Or directly:

 ```bash
-docker compose --profile ai-worker up --build -d ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker build ai-worker-standalone
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile ai-worker up -d --no-build ai-worker-standalone
 ```

 ## Upgrades
````

docs/raspberry-pi.md (4 additions, 3 deletions)

````diff
@@ -27,7 +27,8 @@ chmod +x ./scripts/pi-*.sh
 This is equivalent to:

 ```bash
-docker compose --profile pi up --build -d rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi build rtspanda-pi
+docker compose -f docker-compose.yml -f docker-compose.standalone.yml --profile pi up -d --no-build rtspanda-pi
 ```

 If `AI_WORKER_URL` is not set, the Pi stays usable for RTSP ingestion, playback, and recording while detection health remains degraded.
@@ -65,14 +66,14 @@ Default behavior:

 ```bash
 export MODEL_SOURCE=remote
-export YOLO_MODEL_NAME=yolov8n
+export YOLO_MODEL_NAME=yolo11n
 export YOLO_MODEL_RELEASE=v8.3.0
 ```

 Optional explicit URL:

 ```bash
-export YOLO_MODEL_URL="https://your-mirror.example/yolov8n.onnx"
+export YOLO_MODEL_URL="https://your-mirror.example/yolo11n.onnx"
 ```

 ### Local model baked into the image
````
