This project implements a real-time face detection and expression classification system using OpenCV for face detection, EfficientNetV2 models for emotion classification, and deployment via Docker or Streamlit.
- Real-time face detection using ResNet SSD (Caffe-based)
- Real-time face detection using Haar Cascade Classifier
- Real-time object detection using a MobileNet classifier
- Smile classification using a fine-tuned MobileNetV2 deep learning model
- Smile & Emotion classification via EfficientNetB0
- Offline and real-time data augmentation to enhance model generalization
- Multi-threading optimized for efficient real-time video capture and processing
- Multiclass emotion support with bounding box + confidence overlay
- Dockerized environment for simplified deployment
- Single entry point (`app.py`) for ease of use
- TFLite script for high-speed real-time use
- Gradio UI for live webcam-based inference
- Streamlit Web App for live webcam-based inference
- Configurable UI: toggle bounding boxes, set confidence threshold
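The multi-threaded capture listed above typically follows a producer-consumer pattern: a background thread grabs frames while the main thread always processes the most recent one. A minimal sketch of that pattern using only the standard library (the `FrameGrabber` class and the stand-in numeric frame source are illustrative, not the project's actual code):

```python
import threading
import queue

class FrameGrabber:
    """Producer thread: pulls frames from a source, keeping only the newest one."""

    def __init__(self, source, maxsize=1):
        self.source = source          # any iterable of frames (e.g. a capture loop)
        self.frames = queue.Queue(maxsize=maxsize)
        self.thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        for frame in self.source:
            try:
                self.frames.put_nowait(frame)
            except queue.Full:
                # Drop the stale frame so the consumer always sees a recent one
                try:
                    self.frames.get_nowait()
                except queue.Empty:
                    pass
                self.frames.put_nowait(frame)

    def start(self):
        self.thread.start()
        return self

    def read(self, timeout=1.0):
        """Consumer side: block until a frame is available."""
        return self.frames.get(timeout=timeout)

# Stand-in source: numbered "frames" instead of webcam images
grabber = FrameGrabber(iter(range(5))).start()
first = grabber.read()
```

With a real webcam, the source would be a loop around `cv2.VideoCapture().read()`; the drop-stale-frames behavior is what keeps the displayed video responsive when classification is slower than capture.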
Before setting up the application, ensure you have:
- Python 3.11
- Git
- Docker
- Webcam access enabled
```bash
git clone https://github.com/YOUR_GITHUB_USERNAME/emotion_detection.git
cd emotion_detection
```

To build the Docker image, run:

```bash
docker build -t emotion_detection .
```

Since Docker is configured to run app_cli.py by default, execute the following command:

```bash
docker run -it --rm --device=/dev/video0 -p 8501:8501 emotion_detection
```

- The webcam will automatically activate, detecting and classifying faces in real time.
- Press 'q' to exit the application.
- Uncomment the corresponding line to run the Streamlit interface.
- Open http://localhost:8501 in your browser.
If needed, you can start an interactive shell in the container without automatically running the application:

```bash
docker run -it --rm emotion_detection /bin/bash
```

If you prefer to run the application without Docker, follow these steps:
```bash
python -m venv venv
source venv/bin/activate  # On Windows use: venv\Scripts\activate
pip install -r requirements.txt
```

Then launch one of the interfaces:

```bash
python app.py
streamlit run app.py
python ./src/deployment/gradio_ui.py
```

An alternative way to run the application is the runapp.sh script, which checks for a connected camera before launching the application in Docker.
```bash
./runapp.sh
```

- If a camera is connected, the application launches inside Docker.
- If no camera is detected, a warning message is displayed.
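That camera check can be sketched in shell as follows; the device path, function name, and messages here are illustrative assumptions, not necessarily what the project's runapp.sh actually contains:

```shell
#!/bin/sh
# Return success if a video device node exists at the given path.
camera_available() {
    [ -e "$1" ]
}

DEVICE="${1:-/dev/video0}"
if camera_available "$DEVICE"; then
    echo "Camera found at $DEVICE, launching container..."
    # docker run -it --rm --device="$DEVICE" -p 8501:8501 emotion_detection
else
    echo "Warning: no camera detected at $DEVICE" >&2
fi
```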
Make sure to give the script execution permission before running it:

```bash
chmod a+x runapp.sh
```

The app.py script serves as the central control point for various functionalities. Different options can be selected by the user via a menu.
- Capture Video Feed: The script uses `cv2.VideoCapture()` to access the webcam and retrieve real-time frames.
- Face Detection: For each frame, the available face/object detectors are used to detect faces.
- Draw Bounding Box: Once a face is detected, a rectangle is drawn around it using `cv2.rectangle()`.
- Live Tracking: The script continuously updates the bounding boxes as faces move within the frame.
- Dataset Management: The script allows capturing and splitting datasets for training, data augmentation, and model evaluation.
- Model Training and Fine-Tuning: The user can train the model using labeled data and fine-tune it for better accuracy.
- Model Evaluation: Performance is evaluated using accuracy, precision, recall, and confusion matrix visualization.
- Real-Time Detection and Classification: When a face is detected, the trained MobileNetV2 model classifies in real time whether the person is smiling, using the live webcam feed.
- Model Deployment via TFLite: The trained model is converted to TensorFlow Lite format and loaded using the tflite.Interpreter for fast, efficient inference on edge devices.
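The confidence overlay step above reduces to picking the top-scoring class from the model's raw output and comparing it against the configured confidence threshold. A minimal sketch in plain Python (the emotion label list and function names are hypothetical examples, not necessarily the project's class order or API):

```python
import math

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # hypothetical order

def softmax(logits):
    """Convert raw model outputs into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_prediction(logits, threshold=0.5):
    """Return (label, confidence) if the best class clears the threshold, else None."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None                       # suppress low-confidence overlays
    return EMOTIONS[best], probs[best]

# Example: logits strongly favoring the second class
result = top_prediction([0.1, 4.0, 1.2, 0.3, 0.5])
```

In the real pipeline the logits would come from the MobileNetV2/EfficientNet model (or its TFLite counterpart), and the returned label and confidence would be drawn next to the bounding box.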
emotion_detection/
├── data/ # Train, validation, and test datasets
├── dataset/ # Original collected images
├── model/ # Trained and fine-tuned model files
├── src/                # Python scripts
├── .gitignore # files and directories to be ignored by Git
├── app_cli.py # Main script
├── app.py # Streamlit Main script
├── Dockerfile # Docker container setup
├── playground.py # Run all implemented modules
├── README.md # Project documentation
├── requirements.txt # Project dependencies
└── runapp.sh # Bash script to check for camera and launch the app
- Python
- OpenCV
- TensorFlow/Keras
- TensorFlow Lite (TFLite)
- Docker
- Multithreading
- Multiprocessing
- Gradio UI
- Streamlit web app
- Nima Daryabar
This project is licensed under the Apache License 2.0.
See the LICENSE file for details.