This project uses YOLOv8 for object detection and DeepSort for object tracking. It supports video file input or real-time camera processing, and is tested on both PC and Raspberry Pi (including Pi 5).
- Python 3.8+
- Virtual environment (`venv`)
- Required libraries (see below)
- For Raspberry Pi: Pi OS (Bookworm recommended), camera enabled, GStreamer installed
- Clone the repository:
  ```bash
  git clone https://github.com/NoGenryFord/Exaple-YOLO-model.git
  cd Exaple-YOLO-model
  ```
- Create and activate a virtual environment:
  ```bash
  python -m venv venv
  # Windows:
  .\venv\Scripts\activate
  # Linux/Mac:
  source venv/bin/activate
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Start the script:
  ```bash
  python main.py
  ```
- Convert the model to TFLite (if not already done):
  ```bash
  python src/convert_to_tflite/onnx2tf_converter.py
  ```
- Run the TFLite version:
  ```bash
  python main_tflite.py
  ```
- Test the TFLite model:
  ```bash
  python test_tflite.py
  ```
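The converted model can be driven with the lightweight `tflite_runtime` interpreter. The sketch below shows the general load/preprocess/invoke pattern; the `.tflite` path, the 240x240 input size, the float32 `[0, 1]` normalization, and the BGR channel order are assumptions here — check the converter's output and `main_tflite.py` for the real values:

```python
"""Minimal TFLite inference sketch (not this repo's main_tflite.py)."""
import numpy as np

def preprocess(frame_bgr, size=240):
    # Nearest-neighbour resize + scale to float32 NHWC in [0, 1].
    # (The real app likely uses cv2.resize; numpy keeps this self-contained.)
    h, w = frame_bgr.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = frame_bgr[ys][:, xs]
    return resized[None].astype(np.float32) / 255.0

def infer(interpreter, frame_bgr):
    # Standard tflite_runtime invoke cycle: set input, run, fetch output.
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], preprocess(frame_bgr))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Usage (model filename is hypothetical):
# from tflite_runtime.interpreter import Interpreter
# interpreter = Interpreter(model_path="weights/model_3_best.tflite")
# interpreter.allocate_tensors()
# raw_output = infer(interpreter, frame)
```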
```bash
# Install system dependencies
sudo apt update
sudo apt install python3-pip python3-venv libgstreamer1.0-0 gstreamer1.0-plugins-base

# Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install the TensorFlow Lite runtime (lighter than full TensorFlow)
pip install tflite-runtime

# Install the other dependencies
pip install opencv-python deep-sort-realtime ultralytics numpy

# Run the TFLite version
python3 main_tflite.py
```

Keyboard controls:
- ESC: Exit
- g: Toggle grayscale mode
- c: Switch to the default camera
- 1: Switch to the Raspberry Pi camera (GStreamer)
- v: Restart the video
- r: Reset the tracker
- r: Reset the selection (if implemented)
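The controls above are typically wired to `cv2.waitKey` in the display loop. The sketch below shows one way to structure that dispatch; the `state` keys and handler behaviour are illustrative, not the project's actual implementation:

```python
"""Illustrative key-dispatch for the viewer loop (keys from the controls list)."""

def handle_key(key, state):
    """Mutate `state` according to the pressed key; return False to exit."""
    if key == 27:              # ESC: exit
        return False
    if key == ord("g"):        # toggle grayscale mode
        state["grayscale"] = not state.get("grayscale", False)
    elif key == ord("c"):      # switch to the default camera
        state["source"] = "camera"
    elif key == ord("1"):      # switch to the Pi camera (GStreamer)
        state["source"] = "picam"
    elif key == ord("v"):      # restart the video
        state["restart"] = True
    elif key == ord("r"):      # reset the tracker
        state["reset_tracker"] = True
    return True

# In the loop:
#   if not handle_key(cv2.waitKey(1) & 0xFF, state):
#       break
```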
Functionality:
- Object Detection: YOLOv8 model detects objects in video/camera stream.
- Object Tracking: DeepSort tracks detected objects across frames.
- Interactive: Switch video sources, toggle modes, and view real-time FPS and confidence.
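The detect-then-track flow described above can be sketched as follows. This is a minimal outline, not the project's `main.py`; the file paths match the project tree below, while the DeepSort parameters and drawing code are illustrative:

```python
"""Minimal YOLOv8 + DeepSort loop (sketch)."""

def yolo_to_deepsort(xyxy, conf, cls):
    # deep-sort-realtime expects ([left, top, width, height], confidence, class).
    x1, y1, x2, y2 = xyxy
    return ([x1, y1, x2 - x1, y2 - y1], conf, cls)

def run(video="data/tank1.mp4", weights="weights/YOLO/model_3_best.pt"):
    import cv2
    from ultralytics import YOLO
    from deep_sort_realtime.deepsort_tracker import DeepSort

    model = YOLO(weights)
    tracker = DeepSort(max_age=30)  # max_age is illustrative
    cap = cv2.VideoCapture(video)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        detections = [
            yolo_to_deepsort(box.xyxy[0].tolist(), float(box.conf), int(box.cls))
            for box in result.boxes
        ]
        # DeepSort associates detections with existing tracks across frames.
        for track in tracker.update_tracks(detections, frame=frame):
            if not track.is_confirmed():
                continue
            l, t, r, b = map(int, track.to_ltrb())
            cv2.rectangle(frame, (l, t), (r, b), (0, 255, 0), 2)
            cv2.putText(frame, f"ID {track.track_id}", (l, t - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:  # ESC
            break
    cap.release()
    cv2.destroyAllWindows()

# run()  # uncomment to run against the example video
```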
```
Exaple-YOLO-model/
│
├── main.py              # Main script (all logic here)
├── weights/
│   └── YOLO/
│       └── model_3_best.pt  # YOLOv8 model weights
├── data/
│   └── tank1.mp4        # Example input video
├── venv/                # Virtual environment (not in Git)
├── requirements.txt     # Dependencies
└── README.md            # This file
```
Main libraries:
- ultralytics
- deep_sort_realtime
- opencv-python
- numpy
Install with:
```bash
pip install -r requirements.txt
```

Note: Ensure the model weights file (model_3_best.pt) is in the correct folder. For camera use, make sure the camera is connected and enabled on your device.
- Make sure your Pi OS is up to date and the camera is enabled (`libcamera-hello` should work).
- Install GStreamer and plugins:
  ```bash
  sudo apt update
  sudo apt install -y gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav
  sudo apt install -y gstreamer1.0-libcamera
  ```
- For best performance, use the GStreamer pipeline for the Pi camera: in the app, press `1` to switch to it (`v4l2src device=/dev/video0 ! videoconvert ! appsink`).
- For best speed on Raspberry Pi, use:
  - A lower video resolution (e.g., 240x240)
  - A lower FPS (e.g., 15)
- For maximum performance, use hardware acceleration (e.g., OpenVINO, Coral, or NPU if available on your Pi).
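The pipeline tip and the resolution/FPS advice above can be combined by building the GStreamer string yourself and handing it to OpenCV. This is a sketch assuming OpenCV was built with GStreamer support; the caps (width/height/framerate) extend the base pipeline shown above:

```python
"""Open the Pi camera through a GStreamer pipeline in OpenCV (sketch)."""

def gst_pipeline(device="/dev/video0", width=240, height=240, fps=15):
    # Low resolution and FPS per the Raspberry Pi performance advice above.
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! appsink"
    )

# Usage (requires OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```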
COMMERCIAL SOFTWARE
Copyright © 2025. All rights reserved.
This software is provided under a limited commercial license.
See LICENSE for details.
For licensing, contact:
- Email: your.contact@example.com
- Phone: +XX XXX XXX XXXX