A real-time facial expression analysis system using a 3-model ensemble approach with dlib's 68-point facial landmarks for enhanced accuracy.
- 3-Model Ensemble: Combines predictions from three HSEmotion models (enet_b2, vgaf, afew) with per-emotion weighted voting
- 68-Point Facial Landmarks: Uses dlib for precise facial feature analysis
- Smart Refinements: Targeted corrections for common confusion patterns (Fear/Surprise, Sad/Angry)
- Modern GUI: CustomTkinter-based interface with glassmorphism effects, bracket corners, and ghost transparency
- Live Webcam Support: Real-time emotion detection from webcam feed
- Head Pose Estimation: Detects if subject is facing camera or turned left/right
- Face Enumeration: Numbers faces in reading order (top-to-bottom, left-to-right)
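For illustration, reading-order numbering can be done by bucketing face boxes into rows by their top edge and sorting each row left to right. A minimal sketch (assuming `(x, y, w, h)` bounding-box tuples; not the project's actual code):

```python
# Illustrative sketch only -- assumes face boxes as (x, y, w, h) tuples.
def number_faces_reading_order(boxes, row_tolerance=0.5):
    """Sort face boxes top-to-bottom, then left-to-right within each row."""
    boxes = sorted(boxes, key=lambda b: b[1])  # order by vertical position first
    rows = []
    for box in boxes:
        x, y, w, h = box
        # Join the current row if this box's top is close to the row's top.
        if rows and abs(y - rows[-1][0][1]) < row_tolerance * h:
            rows[-1].append(box)
        else:
            rows.append([box])
    # Within each row, order left-to-right; flatten and number from 1.
    ordered = [b for row in rows for b in sorted(row, key=lambda b: b[0])]
    return {i + 1: b for i, b in enumerate(ordered)}
```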
| Emotion | Description |
|---|---|
| Happy | Happiness, joy, smiling |
| Sad | Sadness, melancholy |
| Angry | Anger, frustration |
| Fear | Fear, anxiety |
| Surprise | Surprise, astonishment |
| Disgust | Disgust, displeasure |
| Neutral | Calm, neutral state |
| Contempt | Contempt, disdain |
# Clone repository
git clone https://github.com/FueledByRedBull/CompVis.git
cd CompVis
# Create virtual environment
python -m venv venv
venv\Scripts\activate # Windows
# source venv/bin/activate # Linux/Mac
# Install dependencies
# Note: requirements.txt uses dlib-bin (pre-built, Windows-optimized)
# Linux/Mac users may need: pip install dlib (requires CMake + C++ compiler)
pip install -r requirements.txt
# Download shape predictor
# Get from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
# Extract and place in project root
# Run GUI
python gui_app.py
| Platform | Requirements |
|---|---|
| Windows | Python 3.10+, Visual Studio Build Tools, CMake |
| Linux | Python 3.10+, build-essential, cmake |
| macOS | Python 3.10+, Xcode Command Line Tools, cmake |
- Download Visual Studio Build Tools from: https://visualstudio.microsoft.com/visual-cpp-build-tools/
- Run the installer and select:
  - "Desktop development with C++" workload
  - Make sure Windows 10/11 SDK is checked
  - Make sure MSVC v143 (or latest) is checked
- Click Install and wait for completion (may take 10-20 minutes)
- Restart your computer after installation
Option A: Download installer (Recommended)
- Download from: https://cmake.org/download/
- Choose "Windows x64 Installer"
- During installation, select "Add CMake to the system PATH"
Option B: Using package managers
# Using Chocolatey
choco install cmake
# Using winget
winget install Kitware.CMake
Verify CMake installation:
cmake --version
# Should show: cmake version 3.x.x
# Navigate to project directory
cd CompVis
# Create virtual environment
python -m venv venv
# Activate it
venv\Scripts\activate
pip install -r requirements.txt
Note: dlib compilation may take 5-10 minutes. This is normal.
- Download from: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
- Extract the `.bz2` file (use 7-Zip or similar)
- Place `shape_predictor_68_face_landmarks.dat` in the project root directory
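Optionally, you can confirm the file is in the right place by loading it with dlib (a quick sanity check suggested here, assuming you run it from the project root):

```python
# Optional sanity check: dlib should load the landmark model without raising.
import dlib

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
print("Shape predictor loaded OK")
```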
# Ubuntu/Debian
sudo apt update
sudo apt install build-essential cmake python3-dev python3-venv
# Fedora
sudo dnf install gcc gcc-c++ cmake python3-devel
# Arch Linux
sudo pacman -S base-devel cmake python
cd CompVis
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Download and extract
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bunzip2 shape_predictor_68_face_landmarks.dat.bz2
xcode-select --install
# Using Homebrew (recommended)
brew install cmake
# Or download from cmake.org
cd CompVis
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Download and extract
curl -O http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bunzip2 shape_predictor_68_face_landmarks.dat.bz2
Windows:
- Ensure CMake is installed from cmake.org
- During installation, check "Add CMake to system PATH"
- Restart your terminal/command prompt
- If still failing, manually add CMake to PATH:
  - Open System Properties > Environment Variables
  - Add `C:\Program Files\CMake\bin` to PATH
Linux/Mac:
# Verify cmake is in PATH
which cmake
cmake --version
- Ensure you installed "Desktop development with C++" workload
- Restart your computer after installation
- Open a new command prompt after restart
- Try installing in Developer Command Prompt for VS:
  - Search for "Developer Command Prompt" in Start Menu
  - Run `pip install dlib` from there
- Install/repair Visual Studio Build Tools with C++ workload
- Make sure Windows SDK is included
- Restart computer and try again
This is normal - dlib compiles from source and may take 5-15 minutes depending on your system. The compilation only happens once.
If compilation fails, try a pre-built wheel:
pip install dlib-bin
Note: Pre-built wheels may not be available for all Python versions.
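Whichever variant you install, a quick smoke test (a suggestion, not part of the project) is to import dlib and build a detector:

```python
# Smoke test: if this runs without errors, dlib is installed and usable.
import dlib

detector = dlib.get_frontal_face_detector()
print("dlib import OK")
```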
Skip all the dlib compilation hassle by using Docker:
# Build the image (downloads shape predictor automatically)
docker build -t emotion-analyzer .
# Analyze images in a folder
docker run -v /path/to/images:/data emotion-analyzer python main.py /data
# Save results
docker run -v /path/to/images:/data -v /path/to/output:/output \
  emotion-analyzer python main.py /data --output /output --save-annotated
Note: GUI mode (`gui_app.py`) requires X11 forwarding and is not recommended in Docker.
python gui_app.py
Features:
- Click "Select Folder" to choose a directory of images
- Use "Start Webcam" for real-time analysis
- Navigate between images with Previous/Next buttons
- View detailed emotion breakdowns in the results panel
# Analyze a directory
python main.py ./photos
# Save results to output directory
python main.py ./photos --output ./results
# Save annotated images
python main.py ./photos --output ./results --save-annotated
# Analyze a single image
python main.py ./photo.jpg --single
# Quiet mode (less output)
python main.py ./photos --quiet
from emotion_analyzer import EmotionAnalyzer
import cv2
# Initialize analyzer
analyzer = EmotionAnalyzer()
# Load and analyze image
image = cv2.imread("photo.jpg")
results = analyzer.analyze_image(image)
# Process results
for face in results:
print(f"Face #{face['face_number']}: {face['dominant_emotion']} ({face['confidence']:.1f}%)")
print(f" Head pose: {face['head_pose']}")
print(f" Model agreement: {face['backend_agreement']:.0f}%")CompVis/
├── emotion_analyzer.py # Core analysis engine
├── gui_app.py # CustomTkinter GUI application
├── main.py # CLI interface
├── requirements.txt # Python dependencies
├── shape_predictor_68_face_landmarks.dat # dlib landmark model (download separately)
├── README.md # This file
├── report.md # Detailed project analysis
└── LICENSE # MIT License
The system uses three HSEmotion ONNX models:
| Model | Strengths |
|---|---|
| enet_b2 | Best overall, especially Happy/Neutral |
| vgaf | Natural expressions (Sad, Surprise) |
| afew | Acted expressions (Fear, Angry, Disgust) |
Each emotion uses optimized weights:
EMOTION_WEIGHTS = {
    'angry': {'enet_b2': 0.3, 'vgaf': 0.2, 'afew': 0.5},
    'happy': {'enet_b2': 0.5, 'vgaf': 0.3, 'afew': 0.2},
    'surprise': {'enet_b2': 0.3, 'vgaf': 0.5, 'afew': 0.2},
    # ... etc
}
Uses dlib's 68-point facial landmarks for:
- Mouth Opening: Detects open mouth for Surprise vs Fear
- Mouth Corners: Detects smile/frown for Happy vs Sad vs Angry
- Head Pose: Estimates yaw from eye centers and nose tip
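As a rough illustration of these cues (illustrative helpers, not the project's implementation), they can be computed from the standard dlib 68-point layout, assuming `pts` is a `(68, 2)` NumPy array of landmark coordinates:

```python
import numpy as np

# Indices follow the usual iBUG 300-W / dlib 68-point convention:
# mouth 48-67 (corners 48 and 54, inner-lip centers 62 and 66),
# eyes 36-47, nose tip 30. Normalizations here are examples only.

def mouth_opening_ratio(pts):
    """Vertical gap between the inner lips, normalized by mouth width."""
    gap = np.linalg.norm(pts[66] - pts[62])
    width = np.linalg.norm(pts[54] - pts[48])
    return gap / max(width, 1e-6)

def mouth_corner_drop(pts):
    """Positive when the corners sit below the lip center (frown-like);
    image y-coordinates increase downward."""
    corners_y = (pts[48][1] + pts[54][1]) / 2.0
    center_y = (pts[62][1] + pts[66][1]) / 2.0
    return corners_y - center_y

def approx_yaw(pts):
    """Rough left/right head turn from the nose tip vs. the midpoint of the eyes."""
    eye_a = pts[36:42].mean(axis=0)  # one eye's landmark centroid
    eye_b = pts[42:48].mean(axis=0)  # the other eye's centroid
    eyes_mid = (eye_a + eye_b) / 2.0
    eye_dist = np.linalg.norm(eye_b - eye_a)
    return (pts[30][0] - eyes_mid[0]) / max(eye_dist, 1e-6)  # ~0 means frontal
```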
Post-processing corrections for common confusions:
- Fear → Surprise: If mouth is open, bias toward Surprise (more common in photos)
- Sad → Angry: If mouth isn't downturned, bias toward Angry
- Majority Override: If 2/3 models agree, boost that emotion
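A simplified sketch of how per-emotion weighted voting and the majority override could fit together (a hypothetical helper, not the code in `emotion_analyzer.py`; `predictions` maps each model name to its per-emotion probabilities, weights follow the `EMOTION_WEIGHTS` structure above, and the boost factor is arbitrary):

```python
# Simplified illustration of per-emotion weighted voting with a majority boost.
def ensemble_vote(predictions, emotion_weights, majority_boost=1.2):
    emotions = next(iter(predictions.values())).keys()
    scores = {}
    for emotion in emotions:
        weights = emotion_weights.get(emotion, {})
        # Weighted sum of each model's probability for this emotion;
        # fall back to equal weighting if no weight is configured.
        scores[emotion] = sum(
            weights.get(model, 1.0 / len(predictions)) * probs.get(emotion, 0.0)
            for model, probs in predictions.items()
        )
    # Majority override: if at least 2 of the 3 models agree on their top
    # emotion, boost that emotion's combined score.
    top_picks = [max(probs, key=probs.get) for probs in predictions.values()]
    for emotion in set(top_picks):
        if top_picks.count(emotion) >= 2:
            scores[emotion] *= majority_boost
    return max(scores, key=scores.get), scores
```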
| Metric | Value |
|---|---|
| Basic Emotions Accuracy | ~83% |
| Face Detection | dlib HOG |
| Models Used | 3 (ensemble) |
| Landmark Points | 68 |
| GUI Framework | CustomTkinter |
opencv-python>=4.8.0
numpy>=1.24.0
Pillow>=10.0.0
hsemotion-onnx>=0.3
customtkinter>=5.2.0
dlib-bin>=19.24.0 # Windows pre-built; Linux/Mac: pip install dlib
Platform Note: The `requirements.txt` uses `dlib-bin`, which provides pre-built wheels for Windows. Linux/Mac users should replace it with `dlib` (requires CMake and a C++ compiler).
This project is licensed under the MIT License - see the LICENSE file for details.
- HSEmotion for the emotion recognition models
- dlib for facial landmark detection
- CustomTkinter for the modern GUI framework