-    echo "failed to download prebuilt ONNX model ${model_file} from ${model_url}" >&2; exit 1; \
+    echo "failed to download prebuilt ONNX model ${model_file} from ${model_url}; set YOLO_MODEL_URL or provide ./model.onnx / ./ai_worker/model/model.onnx" >&2; exit 1; \
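The improved error message above implies a three-way fallback: use a local `./model.onnx`, else `./ai_worker/model/model.onnx`, else download from `YOLO_MODEL_URL`. A minimal sketch of that pattern, assuming a hypothetical destination path and placeholder URL (neither is in the diff; the dummy local file exists only so the sketch exercises the local-copy branch):

```shell
model_file="/tmp/model.onnx"                                        # destination path is an assumption
model_url="${YOLO_MODEL_URL:-https://example.invalid/yolo11n.onnx}" # placeholder default URL

# create a dummy local model so this sketch takes the local-copy branch
printf 'dummy' > ./model.onnx

if [ -f ./model.onnx ]; then
  cp ./model.onnx "${model_file}"
elif [ -f ./ai_worker/model/model.onnx ]; then
  cp ./ai_worker/model/model.onnx "${model_file}"
elif ! curl -fsSL "${model_url}" -o "${model_file}"; then
  echo "failed to download prebuilt ONNX model ${model_file} from ${model_url}; set YOLO_MODEL_URL or provide ./model.onnx / ./ai_worker/model/model.onnx" >&2
  exit 1
fi
echo "model ready at ${model_file}"
```

`curl -f` makes HTTP errors fail the command instead of saving an error page as the model, which is what lets the `echo`/`exit 1` branch fire.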
docs/ai-pi-compatibility.md: 3 additions & 3 deletions
@@ -37,7 +37,7 @@ The worker now includes explicit low-power controls:
 
 ## 3) CPU-Only Inference Expectations (Realistic)
 
-These are practical planning ranges for **YOLOv8n ONNX** on CPU-only Pi, assuming one worker and no GPU/NPU acceleration:
+These are practical planning ranges for a **nano-class ONNX detector such as `yolo11n`** on CPU-only Pi, assuming one worker and no GPU/NPU acceleration:
 
 - **Pi 5 (8 GB)**, 640 input: typically **~250-450 ms/inference** (about 2-4 FPS max sustained detector throughput).
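The FPS figure quoted in that context line is just the reciprocal of per-inference latency (1000 / ms). A small sketch using the doc's own Pi 5 planning range, hardware-independent arithmetic only:

```python
def max_fps(latency_ms: float) -> float:
    """Upper bound on sustained single-worker detector throughput."""
    return 1000.0 / latency_ms

# Pi 5 (8 GB) planning range from the doc: ~250-450 ms/inference
print(round(max_fps(450), 1))  # slow end -> 2.2 FPS
print(round(max_fps(250), 1))  # fast end -> 4.0 FPS
```

This is a ceiling, not a target: pre/post-processing and any other load on the same cores only push sustained throughput below it.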