This project implements a real-time mask detection system that identifies individuals wearing or not wearing masks. If a person without a mask is detected, their picture is captured and saved in a designated folder. This solution is ideal for use in environments like hospitals, pharmacies, or public spaces to ensure compliance with mask-wearing policies.
- Model: YOLOv8
- Task: Detect whether individuals are wearing masks and save images of those who are not.
- Application: Useful for monitoring mask-wearing in public places.
To start using the project, clone this repository:
```bash
git clone https://github.com/MohammadG4/Mask-Detection-Using-YOLOv8
cd Mask-Detection-Using-YOLOv8
```
Create and activate a virtual environment:
```bash
python -m venv yolo_env
source yolo_env/bin/activate  # Windows: yolo_env\Scripts\activate
```
Install dependencies:
```bash
pip install -r requirements.txt
```
Ensure the trained model weights (best.pt) are available at the expected path. They can be found in the repository under runs/detect/train14/weights/best.pt.
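As a quick sanity check, you can verify the weights exist before loading them; a minimal sketch, assuming the default location from the repository's training run:

```python
from pathlib import Path

# Assumed default location of the trained weights in this repository
weights = Path("runs/detect/train14/weights/best.pt")
if not weights.exists():
    raise FileNotFoundError(f"Model weights not found at {weights}")
```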
The dataset used in this project includes mask-wearing and non-mask-wearing images. You can change the dataset location in the data.yaml file if you wish to retrain the model with new data. The base dataset is available here: https://universe.roboflow.com/joseph-nelson/mask-wearing/dataset/18.
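For reference, a data.yaml for a two-class mask dataset typically looks like the sketch below. The paths and class names here are assumptions; match them to your actual dataset export.

```yaml
# Hypothetical layout -- adjust paths and class names to your export
train: ../train/images
val: ../valid/images

nc: 2
names: ["mask", "no-mask"]
```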
Use the provided script to run the mask detection model on a video feed:
```python
from ultralytics import YOLO
import cv2

model = YOLO("best.pt")  # Load the trained model
cap = cv2.VideoCapture("path_to_video.mp4")

# Process the video frame by frame
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # persist=True keeps tracker IDs consistent across frames
    results = model.track(frame, conf=0.6, persist=True)
    # Additional code for saving images of non-mask wearers...

cap.release()
```

The system captures and saves pictures of detected individuals without masks into a designated folder. These images are labeled with unique tracker IDs.
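The repository's own saving logic is elided above, but a minimal sketch of the idea might look like the following. The class name "no-mask" and the folder no_mask_captures/ are assumptions; check results[0].names for your model's actual label mapping.

```python
import os
import cv2

SAVE_DIR = "no_mask_captures"  # assumed output folder
os.makedirs(SAVE_DIR, exist_ok=True)
saved_ids = set()  # remember who has already been photographed

def save_no_mask_crops(results, frame):
    """Crop and save each tracked person detected without a mask."""
    boxes = results[0].boxes
    if boxes.id is None:  # tracker may not assign IDs on every frame
        return
    for box, cls, track_id in zip(boxes.xyxy, boxes.cls, boxes.id):
        label = results[0].names[int(cls)]
        if label == "no-mask" and int(track_id) not in saved_ids:  # assumed class name
            x1, y1, x2, y2 = map(int, box)
            crop = frame[y1:y2, x1:x2]
            cv2.imwrite(os.path.join(SAVE_DIR, f"person_{int(track_id)}.jpg"), crop)
            saved_ids.add(int(track_id))
```

Calling save_no_mask_crops(results, frame) once per loop iteration, right after model.track, saves one snapshot per unique track ID.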
- You can adjust the confidence threshold, detection speed, and model path in the provided Python script.
Several metrics were used to evaluate the model's performance. Together they demonstrate the model's ability to balance precision and recall when identifying individuals with and without masks.
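To reproduce these metrics on your own split, Ultralytics provides a built-in validation mode; a minimal sketch, assuming the weights and data.yaml paths used elsewhere in this README:

```python
from ultralytics import YOLO

model = YOLO("runs/detect/train14/weights/best.pt")
metrics = model.val(data="data.yaml")  # evaluates on the val split from data.yaml
print(metrics.box.map50)  # mAP@0.5 across both classes
```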
To retrain the model on your own dataset:
- Modify the data.yaml file to point to your dataset.
- Use the following command to start training:
```python
from ultralytics import YOLO

model = YOLO("best.pt")  # start from the existing weights (or a base YOLOv8 checkpoint)
model.train(
    data="path_to_data.yaml",
    epochs=60,
    imgsz=640  # Image size
)
```

Training results will be saved under the runs/detect/train15 directory.
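Once training finishes, point the detection script at the new weights. The exact trainN folder depends on how many runs you have done; the path below assumes the run directory mentioned above:

```python
from ultralytics import YOLO

# Assumed path to the newly trained weights; adjust to your latest run
model = YOLO("runs/detect/train15/weights/best.pt")
```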
Sample output from the model detecting individuals in a video stream is available in the repository.
This project is licensed under the MIT License. See the LICENSE file for more details.