
Mask Detection Using YOLOv8

This project implements a real-time mask detection system that identifies individuals wearing or not wearing masks. If a person without a mask is detected, their picture is captured and saved in a designated folder. This solution is ideal for use in environments like hospitals, pharmacies, or public spaces to ensure compliance with mask-wearing policies.

Project Overview

  • Model: YOLOv8
  • Task: Detect mask-wearing individuals, save images of those without masks.
  • Application: Useful for monitoring mask-wearing in public places.

Installation

Clone the Repository

To start using the project, clone this repository:

git clone https://github.com/MohammadG4/Mask-Detection-Using-YOLOv8
cd Mask-Detection-Using-YOLOv8

Set up Virtual Environment

  1. Create and activate a virtual environment:

    python -m venv yolo_env
    source yolo_env/bin/activate  # Windows: yolo_env\Scripts\activate
  2. Install dependencies:

    pip install -r requirements.txt

Download the Model

Ensure the trained YOLO weights (best.pt) are available at the path the script expects. The repository includes them under runs/detect/train14/weights/best.pt.
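As a quick sanity check before running detection, you can verify that the weights file exists at the expected location. This is a minimal sketch; the helper name is illustrative, and the default path is the one from this repository:

```python
from pathlib import Path

def find_weights(path="runs/detect/train14/weights/best.pt"):
    """Return the weights path if the file exists, otherwise None."""
    p = Path(path)
    return p if p.is_file() else None
```

If this returns None, either re-download the repository or point the script at wherever your best.pt actually lives.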

Dataset

The dataset used in this project includes mask-wearing and non-mask-wearing images. To retrain the model with new data, update the dataset location in the data.yaml file. The base dataset is available on Roboflow: https://universe.roboflow.com/joseph-nelson/mask-wearing/dataset/18

Usage

Running Mask Detection on Video

Use the provided script to run the mask detection model on a video feed:

from ultralytics import YOLO
import cv2

model = YOLO("best.pt")  # load the trained model weights
cap = cv2.VideoCapture("path_to_video.mp4")

# Process the video frame by frame
while True:
    ret, frame = cap.read()
    if not ret:  # end of stream or read error
        break

    results = model.track(frame, conf=0.6)  # track detections above a 0.6 confidence threshold
    # Additional code for saving images of non-mask wearers...

cap.release()

Saving Images of People Without Masks

The system captures and saves pictures of detected individuals without masks into a specific folder. These images are labeled with unique IDs.
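One way to make the "unique IDs" behavior concrete is to record which tracker IDs have already been saved, so each person is captured at most once. The sketch below is an assumption about how this bookkeeping could look; the helper name and the class label "no-mask" are hypothetical and must match the names in your data.yaml:

```python
def new_unmasked_ids(detections, already_saved):
    """detections: iterable of (track_id, class_name) pairs from one frame.
    already_saved: set of track IDs whose picture was already written.

    Returns the track IDs of non-mask wearers seen for the first time,
    and marks them as saved."""
    fresh = []
    for track_id, class_name in detections:
        # "no-mask" is an assumed label; check your dataset's class names
        if class_name == "no-mask" and track_id not in already_saved:
            fresh.append(track_id)
            already_saved.add(track_id)
    return fresh
```

Each returned ID can then be used to crop the corresponding bounding box from the frame and write it out (e.g. with cv2.imwrite), embedding the ID in the filename so repeat detections of the same person are skipped.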

Configuration

  • You can adjust the confidence threshold, detection speed, and model path in the provided Python script.
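In practice these settings are easiest to tweak when gathered at the top of the script. The constants below are a hypothetical sketch: the names and the output folder are illustrative, while the values mirror the examples in this README:

```python
# Hypothetical configuration constants for the detection script.
MODEL_PATH = "best.pt"      # path to the trained weights
CONF_THRESHOLD = 0.6        # minimum detection confidence (matches conf=0.6 above)
FRAME_STRIDE = 1            # process every Nth frame; raise to trade accuracy for speed
OUTPUT_DIR = "unmasked"     # assumed folder for snapshots of people without masks
```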

Results and Evaluation

Several metrics were used to evaluate the model’s performance:

  • F1 score curve (F1_curve)
  • Precision-recall curve (PR_curve)
  • Confusion matrix (confusion_matrix)

These metrics demonstrate the model's ability to balance precision and recall when identifying individuals with and without masks.
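The F1 score summarized by that curve is simply the harmonic mean of precision and recall at each confidence threshold, which is why it captures the balance between the two:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```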

Example saved images (labeled with their track IDs) are included in the repository.

Training

To retrain the model on your own dataset:

  1. Modify the data.yaml file to point to your dataset.
  2. Use the following commands to start training:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # a pre-trained checkpoint to fine-tune from
model.train(
    data="path_to_data.yaml",
    epochs=60,
    imgsz=640  # image size
)

Training results are saved under a new subdirectory of runs/detect (e.g., runs/detect/train15), since Ultralytics auto-increments the run name.

Sample Output

Sample output from the model detecting individuals in a video stream is shown below:

  • Annotated Video

License

This project is licensed under the MIT License. See the LICENSE file for more details.
