😄 Facial Expression Generation Using GAN and Autoencoder


This project leverages deep learning to generate facial expressions from labeled data using an autoencoder architecture trained on the CelebA dataset. The model learns facial attributes (e.g., "Smiling") and can reconstruct or generate faces conditioned on these attributes.

๐Ÿ” Overview

Facial expression generation is a challenging problem in computer vision. This notebook uses the CelebA dataset, focusing on its labeled facial attributes, to generate or classify expressions with an autoencoder.

๐Ÿ“ Project Structure

Facial_Expression_Generation_Labeled.ipynb
README.md

🚀 Features

  • 📂 Loads and filters the CelebA facial attribute dataset
  • 🧠 Trains a custom encoder-decoder (autoencoder) network
  • 🎨 Generates expressions based on labeled attributes (e.g., "Smiling")
  • 📊 Tracks and visualizes accuracy, loss, and sample outputs
  • 🧪 Evaluates generation quality and classification accuracy
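The encoder-decoder network described above can be sketched in PyTorch. This is a minimal illustrative architecture, not the notebook's exact model: the layer sizes, 64×64 input resolution, and 128-dimensional latent space are assumptions.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for 64x64 RGB faces (illustrative sizes)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)          # latent representation
        return self.decoder(z), z    # reconstruction + latent code
```

Returning the latent code alongside the reconstruction makes it easy to attach an attribute probe later during evaluation.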

🧰 Technologies Used

  • Python 3.x
  • PyTorch
  • torchvision
  • Google Colab
  • Matplotlib, Pandas
  • PIL, NumPy

📊 Dataset

CelebA is a large-scale face dataset with 202,599 celebrity images, each annotated with 40 binary attribute labels (e.g., Smiling, Male, Eyeglasses).

This project focuses on the "Smiling" label to:

  • Train the encoder to learn a latent representation of smiling vs non-smiling faces
  • Use the decoder to generate corresponding facial images
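Extracting the "Smiling" labels from the attribute file can be done with pandas. A minimal sketch, assuming the standard `list_attr_celeba.txt` layout (line 1 is the image count, line 2 the attribute names, then one row per image with +1/-1 flags); the helper name `load_smiling_labels` is hypothetical:

```python
import pandas as pd

def load_smiling_labels(attr_path):
    """Return a 0/1 Series of Smiling labels indexed by image file name.

    Assumes the standard CelebA attribute file layout:
    line 1 = image count, line 2 = attribute names, then one row
    per image: filename followed by 40 values in {-1, +1}.
    """
    df = pd.read_csv(attr_path, sep=r"\s+", skiprows=1)
    # Map {-1, +1} -> {0, 1} for the "Smiling" attribute only.
    return (df["Smiling"] == 1).astype(int)
```

The resulting Series can be split into smiling / non-smiling subsets, or passed directly to a Dataset as per-image targets.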

🧪 Model Evaluation

The training process monitors:

  • Reconstruction Loss (MSE or BCE): Measures how well the decoder recreates the input.
  • Attribute Accuracy: A simple classifier is sometimes used on the latent representation to test if attributes like "Smiling" are preserved.
  • Loss & Accuracy Visualization: The notebook includes plots to visualize training dynamics.
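The two metrics above can be computed in one evaluation pass. A minimal sketch, assuming the autoencoder returns `(reconstruction, latent)` and that `probe` is a small linear classifier on the latent code (both names are illustrative, not the notebook's own):

```python
import torch
import torch.nn as nn

def evaluate(autoencoder, probe, loader, device="cpu"):
    """Return (mean reconstruction MSE per image, latent-probe attribute accuracy).

    Assumes autoencoder(x) -> (reconstruction, latent) and that probe maps
    the latent code to a single logit for a binary attribute like "Smiling".
    """
    mse = nn.MSELoss(reduction="sum")
    total_mse, correct, n = 0.0, 0, 0
    autoencoder.eval()
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            recon, z = autoencoder(x)
            total_mse += mse(recon, x).item()
            # Threshold the probe's sigmoid output at 0.5 for a hard prediction.
            pred = (torch.sigmoid(probe(z)).squeeze(1) > 0.5).long()
            correct += (pred == y).sum().item()
            n += x.size(0)
    return total_mse / n, correct / n
```

Logging these two numbers per epoch yields the loss and accuracy curves plotted in the notebook.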

โš™๏ธ How to Use

  1. Clone the repo or download the notebook.
  2. Upload the CelebA dataset and attribute file (img_align_celeba.zip and list_attr_celeba.txt) to your Google Drive.
  3. Run the notebook on Google Colab (recommended).
  4. All required packages will be installed at runtime.

📈 Output Examples

The notebook generates:

  • Reconstructed face images shown side by side with the originals
  • Images with modified expression attributes
  • Per-epoch loss and accuracy curves
  • A confusion matrix
  • Hyperparameter tuning results

📌 Notes

  • The model uses a custom CelebAZipDataset class to read images from a ZIP file without extracting.
  • Ensure you have enough runtime memory in Colab (recommended: GPU backend).
  • The model can be extended to other labels such as "Happy," "Sad," or "Surprised" by changing the attribute filter.
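The notebook's `CelebAZipDataset` class itself is not reproduced here, but the idea of serving images straight out of the ZIP archive can be sketched as follows. The internal folder name `img_align_celeba/` matches the standard archive layout; everything else is an assumption about how such a class might look:

```python
import io
import zipfile

from PIL import Image
from torch.utils.data import Dataset

class CelebAZipDataset(Dataset):
    """Serve CelebA images directly from img_align_celeba.zip, no extraction.

    Illustrative sketch; the notebook's actual class may differ.
    `labels` is expected to be a pandas Series of 0/1 attribute values
    indexed by image file name.
    """
    def __init__(self, zip_path, labels, transform=None):
        self.zip_path = zip_path
        self.names = list(labels.index)
        self.labels = labels.values
        self.transform = transform
        self._zf = None  # opened lazily so each DataLoader worker gets its own handle

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        if self._zf is None:
            self._zf = zipfile.ZipFile(self.zip_path)
        raw = self._zf.read("img_align_celeba/" + self.names[idx])
        img = Image.open(io.BytesIO(raw)).convert("RGB")
        if self.transform:
            img = self.transform(img)
        return img, int(self.labels[idx])
```

Opening the ZIP lazily in `__getitem__` rather than in `__init__` avoids sharing one file handle across `DataLoader` worker processes.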

✅ Future Work

  • Extend to full expression classification (multi-label)
  • Use a Conditional GAN for sharper results
  • Evaluate with metrics such as FID or Inception Score (IS)
  • Add a web demo using Streamlit or Gradio

๐Ÿ‘จโ€๐Ÿ’ป Author

Sanya Shresta Jathanna
