A real-time drowning detection application using a YOLOv8-based deep learning model. This project processes video or image inputs to detect events related to drowning, swimming, or being out of water. When a drowning event is detected, the system draws bounding boxes around the detected area and plays an alert sound.
This project implements a real-time detection system that:
- Processes both images and videos to detect drowning events.
- Differentiates between `drowning`, `swimming`, and `out of water` events.
- Plays an alert sound (`sound/alarm.wav`) whenever a drowning event is detected.
The system leverages a YOLOv8 model previously trained on a dataset (e.g., from Roboflow) containing images of the three classes. The application uses OpenCV for video processing and the playsound module for audio alerts.
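For orientation, here is a minimal sketch of how such a pipeline typically fits together with the ultralytics `YOLO` API, OpenCV, and `playsound`. The file paths, loop structure, and class-name check are illustrative assumptions; the actual logic lives in `app.py`.

```python
# Minimal detection-loop sketch (assumed structure; app.py may differ).
import cv2
from playsound import playsound
from ultralytics import YOLO

model = YOLO("best.pt")                       # trained weights in the repo root
cap = cv2.VideoCapture("path/to/your/video.mp4")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    result = model(frame)[0]                  # run YOLOv8 inference on the frame
    annotated = result.plot()                 # draw bounding boxes and labels

    # Play the alarm if any detected box belongs to the "drowning" class.
    labels = [result.names[int(cls)] for cls in result.boxes.cls]
    if "drowning" in labels:
        playsound("sound/alarm.wav", block=False)  # non-blocking where supported

    cv2.imshow("Drowning Detection", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```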
- Real-Time Detection: Process live video streams or input files to detect events.
- Multi-Class Support: Identify and display bounding boxes for `drowning`, `swimming`, and `out of water` events.
- Audible Alerts: An alarm sound is played when a drowning event is detected.
- User-Friendly: Simple command-line interface to specify the input source (a minimal argument-parsing sketch follows).
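A flag like `--source` (used in the usage examples below) can be handled with standard `argparse`; the snippet below is a hedged sketch, not necessarily how `app.py` parses its arguments.

```python
import argparse

# Illustrative argument parsing for the --source flag.
parser = argparse.ArgumentParser(description="Real-time drowning detection")
parser.add_argument("--source", required=True,
                    help="Path to an input image or video file")
args = parser.parse_args()
print(f"Running detection on: {args.source}")
```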
- Python 3.7+
- Required Python packages: `opencv-python`, `ultralytics`, `playsound`
- Clone the Repository:

  ```bash
  git clone <repository_url>
  cd <repository_directory>
  ```
- Install Dependencies:

  ```bash
  pip install opencv-python ultralytics playsound
  ```
- Model and Sound Setup: Ensure that the trained YOLOv8 model file (`best.pt`) is placed in the same directory as `app.py`, and place the alert sound file (`alarm.wav`) inside a folder named `sound` (i.e., `sound/alarm.wav`). A quick file check is sketched below.
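As a convenience (not part of the project), a check like the following confirms both files are where the application expects them:

```python
# Sanity check for the expected file layout (illustrative only).
from pathlib import Path

for required in (Path("best.pt"), Path("sound/alarm.wav")):
    if not required.exists():
        raise FileNotFoundError(f"Missing required file: {required}")
print("Model weights and alarm sound found.")
```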
Run the application by specifying the input file (image or video):
- For an image:

  ```bash
  python app.py --source path/to/your/image.jpg
  ```
- For a video:

  ```bash
  python app.py --source path/to/your/video.mp4
  ```
When executed, the application will display the input with annotated bounding boxes. If a drowning event is detected, an audible alarm will be played.
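Since the application accepts both images and videos, the input is presumably routed to single-shot or frame-by-frame processing. The sketch below shows one way such dispatch could look; the extension list and function layout are assumptions, not the actual `app.py` implementation.

```python
# Hedged sketch of image-vs-video dispatch (assumed structure).
import os
import cv2
from ultralytics import YOLO

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def run(source: str) -> None:
    model = YOLO("best.pt")
    if os.path.splitext(source)[1].lower() in IMAGE_EXTS:
        # Single image: one prediction, then show the annotated result.
        cv2.imshow("Drowning Detection", model(source)[0].plot())
        cv2.waitKey(0)
    else:
        # Video: annotate frame by frame until the stream ends.
        cap = cv2.VideoCapture(source)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("Drowning Detection", model(frame)[0].plot())
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run("path/to/your/video.mp4")
```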
```
.
├── app.py            # Main application file
├── best.pt           # Trained YOLOv8 model weights
├── sound
│   └── alarm.wav     # Alert sound file
└── README.md         # Project documentation
```
The YOLOv8 model used in this application was trained on a dataset from Roboflow containing images labeled as `drowning`, `swimming`, and `out of water`. You can see the training code in `Drown_detect.ipynb`.
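For reference, a training run with the ultralytics trainer typically looks like the sketch below; the base checkpoint, dataset YAML path, and hyperparameters shown here are illustrative assumptions rather than the exact settings used in `Drown_detect.ipynb`.

```python
# Rough training sketch (assumed settings, not the notebook's exact values).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # start from a pretrained YOLOv8 checkpoint
model.train(
    data="data.yaml",               # Roboflow export describing the 3 classes
    epochs=50,
    imgsz=640,
)
# The best-performing weights are saved under runs/detect/train*/weights/best.pt
```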
This project is licensed under the MIT License.