A speaking-glove project that translates sign language into readable, audible sequences for people with speech disabilities, based on the ATmega32 and the KNN algorithm
| Abdelrahman Atef | Hanin Sherif | Andrew Tanas |
|---|---|---|
- Introduction
- 🎥 Demo Video
- ✨ Features
- 🛠️ Dataset Preparation
- 🔌 Required Components
- ⚙️ System Architecture
- 🤝 Connect & Collaborate
The Speaking Glove is designed to improve the lives of people with speech disabilities. It translates sign language into a text sequence, displayed on an LCD screen, and into spoken words played through an audio speaker. The glove interprets hand gestures and converts them into the corresponding text and audio using embedded AI.
- Scalability 🌍: Unlike many similar projects that are limited to a small set of predefined words, this project is highly scalable. We've integrated AI with embedded systems and implemented a simple K-Nearest Neighbors (KNN) algorithm, which maps sensor readings to the closest matching word.
- Customizable Vocabulary 🗣️: To expand the vocabulary, simply add the sound files for new words to the M16P sound module and update the "sensor reads-to-word" dataset in the `get_word_sound` configuration file.
- Sentence Handling 📝: The project can interpret whole sentences by detecting a STOPPING_WORD or reaching a maximum sentence length. Once a sentence is complete, it is displayed and the buffer resets for the next one.
- Efficient Scheduling ⏳: Although it's not built on an RTOS, the project schedules tasks using timers and flags, so sensor reading and display updates run in parallel without interfering with each other (a minimal sketch of this pattern follows this list).
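A minimal sketch of the timer-and-flag pattern, assuming an ATmega32 at 8 MHz built with avr-gcc; the prescalers, periods, and flag names here are illustrative, not the project's actual code:

```c
/* Timer-and-flag scheduling sketch for an ATmega32 at 8 MHz (avr-gcc). */
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t read_due = 0;    /* set by Timer1: time to sample sensors  */
volatile uint8_t display_due = 0; /* set by Timer0: time to refresh the LCD */

ISR(TIMER0_OVF_vect)   { display_due = 1; }
ISR(TIMER1_COMPA_vect) { read_due = 1; }

int main(void)
{
    /* Timer0: normal mode, clk/1024 -> overflow paces display work */
    TCCR0 = (1 << CS02) | (1 << CS00);
    TIMSK |= (1 << TOIE0);

    /* Timer1: CTC mode, clk/64; compare value sets the sampling period */
    TCCR1B = (1 << WGM12) | (1 << CS11) | (1 << CS10);
    OCR1A  = 12500;               /* ~100 ms at 8 MHz / 64 */
    TIMSK |= (1 << OCIE1A);

    sei();                        /* global interrupt enable */

    while (1) {
        if (read_due)    { read_due = 0;    /* sample flex + tilt sensors */ }
        if (display_due) { display_due = 0; /* update LCD / trigger audio */ }
    }
}
```

Because the ISRs only set flags and the real work happens in the main loop, the two tasks never block each other, which is what stands in for RTOS-style scheduling here.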
To create or expand the dataset:
- Change the main configuration to "test mode," which displays sensor readings on the LCD.
- Perform the gesture for a new word and record the sensor readings.
- Add these readings to the configuration file, update the total word count, and the system is ready with the new vocabulary.
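For illustration, a dataset entry could look like the C table below; the struct layout, field names, and readings are hypothetical and only show one way the `get_word_sound` configuration file might be organized:

```c
#include <stdint.h>

#define SENSOR_COUNT 7   /* 5 flex sensors + 2 tilt sensors */
#define WORD_COUNT   3   /* update this total when adding new words */

/* One row per word: the readings recorded in test mode, the track
 * number of that word's sound file on the SD card, and the LCD text. */
typedef struct {
    uint16_t readings[SENSOR_COUNT]; /* values recorded in test mode */
    uint8_t  sound_track;            /* track number on the SD card  */
    const char *text;                /* text shown on the 16x2 LCD   */
} word_entry_t;

/* Illustrative values only; record your own readings in test mode. */
static const word_entry_t dataset[WORD_COUNT] = {
    { {512, 530, 498, 505, 520, 0, 1}, 1, "HELLO"  },
    { {300, 310, 295, 305, 300, 1, 0}, 2, "THANKS" },
    { {700, 690, 710, 705, 695, 0, 0}, 3, "YES"    },
};
```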
- ATmega32 microcontroller
- 5 Flex sensors
- 2 KY-020 tilt sensors
- M16P sound module
- SD card
- Speaker
- LED
- 16x2 LCD
- 1 kΩ resistors
- 8 MHz crystal oscillator
- 10 nF capacitors
- Wiring
- Standard Types 📏: Defines standard data types for consistency.
- Bit Math 🔢: Provides utilities for bit-level operations.
- DIO Driver: Manages digital input/output, used by LCD and LED.
- ADC (Analog-to-Digital Converter): Converts analog signals from flex sensors to digital values.
- Global Interrupt: Manages interrupt routines, used by timers and USART.
- Timers:
- Timer0 🕰️: Manages display-related tasks.
- Timer1 ⏲️: Manages sensor reading and conversion tasks.
- USART 🔗: Enables UART communication, used to send commands to the M16P sound module (a command-frame sketch appears after this list).
- LCD 📺: Controls the 16x2 LCD that displays the translated text.
- LED 💡: Provides visual feedback, toggling when a reading is taken.
- M16P Sound Module 🔊: Plays audio for corresponding words.
- Flex Sensor ✋: Reads and converts sensor data, indicating each finger's position.
- Tilt Sensor 🎛️: Detects hand angle to add contextual information.
- Get Glove Read 📜: Collects sensor readings in an array format.
- Get Word Sound 🎤: Uses KNN to map sensor readings to the closest matching word (see the KNN sketch after this list).
- Show 🖥️🔊: Manages display and sound output, both on the LCD and through audio.
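As a sketch of how Get Word Sound could perform the nearest-word lookup, the following k = 1 KNN search compares a live reading against the hypothetical `word_entry_t` table from the dataset section; all identifiers are illustrative:

```c
#include <stdint.h>

#define SENSOR_COUNT 7
#define WORD_COUNT   3

/* Same hypothetical layout as the dataset sketch above. */
typedef struct {
    uint16_t readings[SENSOR_COUNT];
    uint8_t  sound_track;
    const char *text;
} word_entry_t;

extern const word_entry_t dataset[WORD_COUNT];

/* Squared Euclidean distance between a live reading and a dataset row. */
static uint32_t distance_sq(const uint16_t *read, const uint16_t *ref)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
        int32_t d = (int32_t)read[i] - (int32_t)ref[i];
        sum += (uint32_t)(d * d);
    }
    return sum;
}

/* Return the index of the nearest word in the dataset (k = 1). */
static uint8_t get_word_index(const uint16_t *read)
{
    uint8_t best = 0;
    uint32_t best_dist = distance_sq(read, dataset[0].readings);
    for (uint8_t i = 1; i < WORD_COUNT; i++) {
        uint32_t d = distance_sq(read, dataset[i].readings);
        if (d < best_dist) {
            best_dist = d;
            best = i;
        }
    }
    return best;
}
```

Using k = 1 with squared distances avoids square roots and keeps the search cheap enough for the ATmega32; the matched entry's text goes to the LCD and its track number to the sound module.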
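And a sketch of the UART command path, assuming the M16P accepts the widely used DFPlayer-style 10-byte frame at 9600 baud; verify the framing against the module's datasheet before relying on it:

```c
#define F_CPU 8000000UL

#include <avr/io.h>
#include <stdint.h>

static void usart_init(void)
{
    uint16_t ubrr = F_CPU / 16 / 9600 - 1;    /* 9600 baud, 8N1 */
    UBRRH = (uint8_t)(ubrr >> 8);
    UBRRL = (uint8_t)ubrr;
    UCSRB = (1 << TXEN);                      /* transmitter only */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);
}

static void usart_send(uint8_t byte)
{
    while (!(UCSRA & (1 << UDRE)))            /* wait for empty buffer */
        ;
    UDR = byte;
}

/* Play the given track number from the SD card (command 0x03). */
static void m16p_play_track(uint16_t track)
{
    uint8_t frame[10] = { 0x7E, 0xFF, 0x06, 0x03, 0x00,
                          (uint8_t)(track >> 8), (uint8_t)track,
                          0x00, 0x00, 0xEF };
    uint16_t sum = 0;
    for (uint8_t i = 1; i <= 6; i++)          /* checksum over bytes 1..6 */
        sum += frame[i];
    sum = 0xFFFF - sum + 1;
    frame[7] = (uint8_t)(sum >> 8);
    frame[8] = (uint8_t)sum;

    for (uint8_t i = 0; i < 10; i++)
        usart_send(frame[i]);
}
```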
We’d love for you to connect and collaborate on this project! Whether you're interested in contributing to the codebase or expanding the dataset, your help is welcome.
👀 Feel free to reach out!