(Replace with an actual image or GIF of your app in action.)
HandTalk is a real-time American Sign Language (ASL) recognition application that helps users learn and practice ASL by detecting and classifying hand gestures through their webcam. It leverages computer vision and deep learning to provide instant feedback and track user progress interactively.
The project uses TensorFlow.js, HandPose, and Fingerpose for hand detection and gesture classification. The frontend is built with React.js and TypeScript, offering a seamless user experience with a structured, progress-based learning system.
- Real-time ASL Alphabet Detection: Uses a deep learning pipeline to classify ASL hand gestures dynamically (see the sketch after this list).
- Interactive Learning System: Step-by-step ASL alphabet training with feedback and animations.
- Gesture Validation System: Highlights correct gestures with a visual confirmation circle.
- Progress Tracking: Users can navigate through the ASL alphabet with a sidebar and progress bar.
- Web-based Application: No installation required; runs entirely in the browser using TensorFlow.js.
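The detection pipeline combines HandPose (landmark detection) with Fingerpose (gesture classification). The sketch below shows how such a pipeline can be wired together; the `classifyFrame` helper name and the letter-"A" gesture description are illustrative assumptions, not HandTalk's actual gesture data, and Fingerpose ships without TypeScript types, so a small `declare module "fingerpose";` shim may be needed.

```ts
// classifyFrame.ts - minimal sketch of a HandPose + Fingerpose pipeline (illustrative).
import "@tensorflow/tfjs";                        // registers the tf.js backends HandPose needs
import * as handpose from "@tensorflow-models/handpose";
import * as fp from "fingerpose";

// Illustrative description of the ASL letter "A" (fist with the thumb alongside):
// every finger fully curled except the thumb. The curl/direction values the
// real app uses may differ.
const letterA = new fp.GestureDescription("A");
letterA.addCurl(fp.Finger.Thumb, fp.FingerCurl.NoCurl, 1.0);
for (const finger of [fp.Finger.Index, fp.Finger.Middle, fp.Finger.Ring, fp.Finger.Pinky]) {
  letterA.addCurl(finger, fp.FingerCurl.FullCurl, 1.0);
}

const estimator = new fp.GestureEstimator([letterA]);
let model: handpose.HandPose | null = null;

// Returns the best-matching letter for the current video frame, or null.
export async function classifyFrame(video: HTMLVideoElement): Promise<string | null> {
  if (!model) model = await handpose.load();           // load the model once
  const hands = await model.estimateHands(video);      // 21 3-D landmarks per detected hand
  if (hands.length === 0) return null;

  const result = estimator.estimate(hands[0].landmarks, 8.5); // 8.5 = minimum confidence
  if (result.gestures.length === 0) return null;

  // Pick the highest-scoring match (older fingerpose builds call this field
  // `confidence` instead of `score`).
  const best = result.gestures.reduce((a, b) => (a.score > b.score ? a : b));
  return best.name;
}
```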
Tech stack:

- Frontend: React.js, TypeScript
- Machine Learning: TensorFlow.js, HandPose, Fingerpose
- Computer Vision: hand landmark detection, feature extraction, gesture classification
- State Management: React Hooks (useState, useEffect)
- UI/UX Development: Figma

Setup & Installation

To run HandTalk locally, follow these steps:
1. Clone the repository:
git clone https://github.com/yourusername/handtalk-asl.git
cd handtalk-asl
2. Install dependencies:
npm install
3. Start the development server:
npm start
How it works (a rough sketch of this loop follows the list):

- Enable Webcam: The app accesses the webcam to detect hand gestures.
- Gesture Recognition: HandPose detects hand landmarks, and Fingerpose classifies gestures.
- Real-time Feedback: If a gesture matches the target ASL letter, a green confirmation circle appears.
- Progress Navigation: Users move through the alphabet with a sidebar and progress tracker.
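Conceptually, the loop could look like the following React sketch. The `PracticeView` component, its props, and the `classifyFrame` helper (from the earlier sketch) are hypothetical names, and the circle-drawing details are illustrative rather than HandTalk's actual rendering code.

```tsx
import React, { useEffect, useRef } from "react";
import { classifyFrame } from "./classifyFrame"; // hypothetical helper from the sketch above

interface PracticeViewProps {
  targetLetter: string;   // the ASL letter the user is currently practicing
  onCorrect: () => void;  // callback to advance the progress tracker
}

export function PracticeView({ targetLetter, onCorrect }: PracticeViewProps) {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    let timer: number | undefined;

    // Ask for webcam access, then poll the classifier a few times per second.
    navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
      if (!videoRef.current) return;
      videoRef.current.srcObject = stream;

      timer = window.setInterval(async () => {
        const video = videoRef.current;
        const canvas = canvasRef.current;
        if (!video || !canvas || video.readyState < 2) return;

        const letter = await classifyFrame(video);
        const ctx = canvas.getContext("2d");
        if (!ctx) return;

        ctx.clearRect(0, 0, canvas.width, canvas.height);
        if (letter === targetLetter) {
          // Draw the green confirmation circle over the video feed.
          ctx.strokeStyle = "limegreen";
          ctx.lineWidth = 6;
          ctx.beginPath();
          ctx.arc(canvas.width / 2, canvas.height / 2, 80, 0, 2 * Math.PI);
          ctx.stroke();
          onCorrect(); // let the parent update the sidebar / progress bar
        }
      }, 200);
    });

    return () => {
      if (timer !== undefined) window.clearInterval(timer);
    };
  }, [targetLetter, onCorrect]);

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline muted width={640} height={480} />
      <canvas ref={canvasRef} width={640} height={480} />
    </div>
  );
}
```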