
Dynamic ASL Project

Problem Definition

In remote work environments, individuals who are deaf or hard of hearing often face challenges in communicating effectively during video conferences. To address this, we propose a solution that uses sign language recognition to remove this barrier and enable individuals with hearing impairments to communicate seamlessly.

Project Goal

This project focuses on developing a real-time sign language recognition model that converts sign language into spoken language, enabling seamless communication for individuals with hearing impairments. Beyond video conferencing, the same technology opens new opportunities: for instance, it allows deaf individuals to serve clients directly in sign language in face-to-face settings such as banking.

Key Features and Technologies

  1. Real-Time Sign Language Recognition
    Using Google MediaPipe, the system extracts 3D hand and pose landmarks in real time, computes key joint and finger angles, and feeds these features to the recognition model (a sketch of this pipeline follows this list).

  2. Dataset
    The project uses the 'WLASL-2000 Resized' dataset from Kaggle, which provides videos covering 2,000 sign language words as training data.

  3. Data Preprocessing

    • Extract frames from each video and compute joint angles per frame.
    • Construct data sequences in 20-frame units.
    • Balance the dataset using the SMOTE technique.
  4. Deep Learning Model

    • LSTM-based model for temporal learning of frame features.
    • Classification of 2,000 sign language gestures in the output layer.
  5. Training and Evaluation

    • Train-test split: 80%/20%.
    • Training for 20–30 epochs.
    • Achieved validation accuracy of 0.89 and top-3 accuracy of 0.97 (see the training sketch below).
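
For items 1 and 3, the sketch below shows one plausible way to turn MediaPipe hand landmarks into joint-angle features and cut them into 20-frame sequences. It is a minimal illustration, not the repository's actual code: the bone indices, the 30-feature layout, and the function names are our assumptions, and pose landmarks are omitted for brevity.

```python
# Illustrative sketch of the landmark-to-angle pipeline; names are hypothetical.
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

# MediaPipe hand topology: each bone is a (parent, child) landmark pair.
BONE_PARENTS = [0, 1, 2, 3, 0, 5, 6, 7, 0, 9, 10, 11, 0, 13, 14, 15, 0, 17, 18, 19]
BONE_CHILDREN = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
# Pairs of adjacent bones whose angle we measure (15 angles per hand).
ANGLE_A = [0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14, 16, 17, 18]
ANGLE_B = [1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 15, 17, 18, 19]

def joint_angles(hand_landmarks):
    """Angles (degrees) between adjacent finger bones of one detected hand."""
    pts = np.array([[lm.x, lm.y, lm.z] for lm in hand_landmarks.landmark])
    bones = pts[BONE_CHILDREN] - pts[BONE_PARENTS]
    bones /= np.linalg.norm(bones, axis=1, keepdims=True) + 1e-8
    cos = np.einsum("ij,ij->i", bones[ANGLE_A], bones[ANGLE_B])
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def video_to_sequences(path, seq_len=20, n_features=30):
    """Per-frame angle features for one video, cut into 20-frame windows."""
    cap = cv2.VideoCapture(path)
    frames = []
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            feat = np.zeros(n_features)  # 15 angles per hand, two hands
            if result.left_hand_landmarks:
                feat[:15] = joint_angles(result.left_hand_landmarks)
            if result.right_hand_landmarks:
                feat[15:] = joint_angles(result.right_hand_landmarks)
            frames.append(feat)
    cap.release()
    frames = np.array(frames)
    # Non-overlapping 20-frame windows; each becomes one training sequence.
    n_windows = len(frames) // seq_len
    return frames[: n_windows * seq_len].reshape(n_windows, seq_len, n_features)
```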

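The next sketch covers items 3 through 5: SMOTE balancing (imbalanced-learn's SMOTE operates on 2-D matrices, so sequences are flattened for resampling and reshaped afterward), an LSTM classifier over the 20-frame sequences, and training with a top-3 accuracy metric. Only the 20-frame input, the 2,000-class output, the 80/20 split, and the epoch range come from this README; the layer sizes, optimizer, and file names are illustrative assumptions.

```python
# Hypothetical training sketch; layer sizes and hyperparameters are assumptions.
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

SEQ_LEN, N_FEATURES, N_CLASSES = 20, 30, 2000

# X: (num_sequences, SEQ_LEN, N_FEATURES) angle sequences; y: integer labels.
X = np.load("sequences.npy")  # hypothetical file names
y = np.load("labels.npy")

# SMOTE expects 2-D input: flatten each sequence, resample, restore the shape.
X_flat, y_bal = SMOTE().fit_resample(X.reshape(len(X), -1), y)
X_bal = X_flat.reshape(-1, SEQ_LEN, N_FEATURES)

# 80/20 train-test split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.2, stratify=y_bal, random_state=42
)

# LSTM layers learn temporal patterns; softmax classifies 2,000 sign words.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=[
        "accuracy",
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top3_accuracy"),
    ],
)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=25)
```
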
Workflow Diagram

(Workflow diagram: Step 1, Model Training; Step 2, Real-time Inference.)
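
For Step 2, a plausible real-time loop keeps a sliding window of the most recent 20 frames of angle features and classifies the window with the trained model. This sketch reuses mp_holistic, joint_angles, and model from the sketches above; class_names, a list mapping class indices to sign glosses, is hypothetical, and the spoken-language output step is not shown.

```python
# Hypothetical real-time inference loop (Step 2). Reuses mp_holistic,
# joint_angles, and the trained `model` from the sketches above;
# `class_names` is a hypothetical list of sign glosses.
from collections import deque
import cv2
import numpy as np

window = deque(maxlen=20)  # sliding buffer of the last 20 frames of features
cap = cv2.VideoCapture(0)  # default webcam
with mp_holistic.Holistic(static_image_mode=False) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        feat = np.zeros(30)
        if result.left_hand_landmarks:
            feat[:15] = joint_angles(result.left_hand_landmarks)
        if result.right_hand_landmarks:
            feat[15:] = joint_angles(result.right_hand_landmarks)
        window.append(feat)
        if len(window) == window.maxlen:
            # Classify the current 20-frame window and overlay the top gloss.
            probs = model.predict(np.array(window)[None, ...], verbose=0)[0]
            gloss = class_names[int(probs.argmax())]
            cv2.putText(frame, gloss, (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("Dynamic ASL", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```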

Project File Structure

The repository contains the following files and folders:
