Saisankarwork123/Thermal-Fire-Detection

YOLOX + DeepFace Emotion Analysis

This project integrates YOLOX (face detection), ByteTrack (face tracking), and DeepFace (emotion recognition) to perform real-time emotion analysis on video.

It detects faces with a TensorRT-accelerated YOLOX model, tracks them across frames with ByteTrack, and classifies each detected face's emotion with DeepFace, maintaining a smoothed, stable emotion prediction over time.
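That per-frame flow can be sketched as below. The stub functions are hypothetical stand-ins for the real YOLOX TensorRT predictor, ByteTrack tracker, and DeepFace emotion call, and the box values are placeholders; only the shape of the loop reflects the script.

```python
# Hypothetical stubs standing in for the real components, for illustration only:
def detect_faces(frame):            # YOLOX TensorRT predictor
    return [(10, 10, 50, 50)]       # face boxes as (x1, y1, x2, y2)

def update_tracks(boxes):           # ByteTrack: assigns a stable id per face
    return [(1, box) for box in boxes]

def classify_emotion(frame, box):   # DeepFace emotion analysis on the face crop
    return "happy"

def process_frame(frame):
    """Detect, track, and classify every face in one frame."""
    return [(track_id, box, classify_emotion(frame, box))
            for track_id, box in update_tracks(detect_faces(frame))]
```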

Features

1. GPU-Accelerated YOLOX Inference using TensorRT

2. Emotion Recognition with DeepFace (happy, sad, angry, surprise, etc.)

3. Consistent Face Tracking powered by ByteTrack

4. Temporal Emotion Smoothing for stable predictions

5. Annotated Output Video with bounding boxes and emotions

6. Supports TensorRT .engine Models for ultra-fast inference
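The temporal emotion smoothing listed above could, for example, take a majority vote over each track's recent labels so that a single noisy frame cannot flip the displayed emotion. This is a sketch of one possible approach, not the exact logic in emotion_analysis.py:

```python
from collections import deque, Counter

class EmotionSmoother:
    """Majority-vote smoothing over a sliding window of per-track emotion labels."""

    def __init__(self, window: int = 15):
        self.window = window
        self.history = {}  # track_id -> deque of recent labels

    def update(self, track_id: int, emotion: str) -> str:
        buf = self.history.setdefault(track_id, deque(maxlen=self.window))
        buf.append(emotion)
        # Report the most frequent recent label for this track.
        return Counter(buf).most_common(1)[0][0]
```

Per frame, the raw DeepFace label would be passed through `smoother.update(track_id, label)` and the returned value drawn on the video.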

Requirements

You can recreate the original environment with:

pip install -r requirements.txt


This requirements.txt was generated from the full development environment (pip freeze).
Major frameworks and versions include:

Library	Version
YOLOX	0.3.0
ByteTrack	0.3.2
DeepFace	Custom commit (Serengil repo)
PyTorch	1.13.1
Torchvision	0.14.1
TensorRT	10.9.0.34
OpenCV	4.11.0.86
Shapely	2.0.7
InsightFace	0.7.3
Albumentations	2.0.6
NumPy	1.26.0

Directory Structure

EmotionAnalysis/
│
├── emotion_analysis.py                # Main YOLOX–DeepFace script
├── utils/
│   └── yolox_predictor.py             # Custom YOLOX inference helper
│
├── model_trt_face_det.engine          # TensorRT model
├── model_trt_face_det.pth             # YOLOX weights
├── yolox_voc_s_deepface.py            # YOLOX config file
├── Day4.mp4                           # Input video
└── requirements.txt                   # Environment dependencies

Configuration

Update the following paths in the script to match your setup:

video_path = "/path/to/input_video.mp4"
output_video_folder = "/path/to/output_folder"
model_engine_location = "/path/to/model_trt_face_det.engine"
model_config_location = "/path/to/yolox_voc_s_deepface.py"
model_location = "/path/to/model_trt_face_det.pth"
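Since all five paths must exist before the script can run, a small pre-flight check like the one below can fail fast with a clear message instead of partway through processing. This helper is not part of the repository; it is a hypothetical addition:

```python
from pathlib import Path

def check_paths(**paths: str) -> None:
    """Raise early if any configured file or folder is missing."""
    missing = {name: p for name, p in paths.items() if not Path(p).exists()}
    if missing:
        raise FileNotFoundError(f"Update these paths in the script: {missing}")
```

For example: `check_paths(video_path=video_path, model_engine_location=model_engine_location)`.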

How to Run

Download the model weights using the Google Drive links in the model_weight_gdrive_links.txt file and place them in the directory structure shown above.

Run the script:

python emotion_analysis.py


After processing, the output video will be saved to:

output_video_folder/emotion_<timestamp>.mp4
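The timestamped name can be built as below; the exact strftime format is an assumption, since the repository only specifies the emotion_<timestamp>.mp4 pattern:

```python
import os
from datetime import datetime

def build_output_path(output_video_folder: str) -> str:
    """Return output_video_folder/emotion_<timestamp>.mp4."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")  # assumed format
    return os.path.join(output_video_folder, f"emotion_{timestamp}.mp4")
```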


Each detected face is annotated with its dominant emotion and bounding box.

About

PoC for fire detection from thermal camera output
