The Gesture Vocalizer project empowers individuals with speech and hearing impairments by translating hand gestures into speech or text output.
By combining flex sensors, accelerometers, and deep learning models, this wearable system bridges the communication gap, enabling real-time gesture-to-voice translation.
The design phase focuses on translating the identified problem into a functional, wearable prototype that captures, processes, and vocalizes gestures.
- The Arduino Mega acts as the computational hub, providing multiple I/O pins.
- Handles real-time sensor integration and data transmission.
- Five flex sensors attached to a glove detect finger bending through resistance changes.
- Provide continuous analog data reflecting hand posture.
- The MPU-6050 measures acceleration and angular velocity to capture spatial orientation.
- Enhances motion precision and dynamic gesture recognition.
- Sensor Integration: Connect flex sensors to Arduino analog pins.
- MPU-6050 Integration: Interface via I2C to capture acceleration and rotation.
- Power Supply: Powered through USB to ensure stable operation.
- Encapsulation: Mounted on a glove to maintain sensor stability and comfort.
- The recognition model implements a bidirectional LSTM (Bi-LSTM).
- Processes sequences of sensor readings to identify dynamic and static gestures.
- Trained using a custom dataset for enhanced accuracy.
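A minimal sketch of how such a Bi-LSTM classifier could be defined in TensorFlow/Keras is shown below; the window length, feature count, class count, and layer sizes are illustrative assumptions rather than the trained configuration.

```python
# Illustrative Bi-LSTM gesture classifier (TensorFlow/Keras).
# Assumptions: each sample is a window of 30 timesteps (3 s at ~10 Hz) with
# 11 features (5 flex readings + 3-axis acceleration + 3-axis rotation);
# NUM_CLASSES and layer sizes are placeholders, not the published configuration.
import tensorflow as tf

TIMESTEPS = 30
FEATURES = 11
NUM_CLASSES = 10  # hypothetical number of gesture classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```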
- The dataset captures 3-second intervals of sensor data per gesture.
- Combines flex sensor and MPU-6050 outputs into a single input vector.
- Stored in CSV format for efficient model training and testing.
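As an illustration of how such a CSV could be reshaped into training windows with Pandas and NumPy; the column names, file name, and window length below are assumptions, not the project's exact schema.

```python
# Illustrative loader: turns the logged CSV into (samples, timesteps, features)
# windows plus labels. Column names, file name, and window length are assumed.
import numpy as np
import pandas as pd

TIMESTEPS = 30  # assumed: 3-second window sampled at ~10 Hz
FEATURE_COLS = ["flex1", "flex2", "flex3", "flex4", "flex5",
                "ax", "ay", "az", "gx", "gy", "gz"]

df = pd.read_csv("gesture_data.csv")  # hypothetical file name

windows, labels = [], []
for (gesture, sample_id), group in df.groupby(["gesture", "sample_id"]):
    values = group[FEATURE_COLS].to_numpy()
    if len(values) >= TIMESTEPS:          # keep only complete windows
        windows.append(values[:TIMESTEPS])
        labels.append(gesture)

X = np.stack(windows)   # shape: (num_samples, TIMESTEPS, len(FEATURE_COLS))
y = np.array(labels)
print(X.shape, y.shape)
```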
- A simple Python-based UI displays recognized gestures in real time.
- Enables recording of new gesture data and live model inference.
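A rough sketch of what the live-inference path behind the UI could look like, assuming the Arduino streams tab-separated readings over serial and a trained Keras model has been saved; the port name, label list, and model path are placeholders.

```python
# Illustrative live-inference loop: buffers serial readings into a 3-second
# window and feeds it to the trained model. Port, labels, and model path are
# placeholders; the actual UI wiring is not shown.
import numpy as np
import serial
import tensorflow as tf

TIMESTEPS, FEATURES = 30, 11
LABELS = ["Hello", "Thank You", "Help"]  # placeholder label order

ser = serial.Serial("COM3", 9600)                        # hypothetical port
model = tf.keras.models.load_model("gesture_model.h5")   # hypothetical path

buffer = []
while True:
    line = ser.readline().decode(errors="ignore").strip()
    parts = line.split("\t")
    if len(parts) != FEATURES:
        continue                                  # skip malformed lines
    buffer.append([float(p) for p in parts])
    if len(buffer) == TIMESTEPS:
        window = np.array(buffer)[np.newaxis, ...]  # shape (1, TIMESTEPS, FEATURES)
        probs = model.predict(window, verbose=0)[0]
        print("Predicted:", LABELS[int(np.argmax(probs))])
        buffer = []                               # start the next window
```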
- Define analog pins for flex sensors (A0–A4).
- Initialize MPU6050 and set up I2C communication.
- Begin serial communication and initialize all sensors.
- Configure pin modes for flex sensors.
- Read analog flex sensor data.
- Capture accelerometer and gyroscope readings.
- Print all data to the serial monitor for real-time observation.
#include <Wire.h>
#include <MPU6050.h>

MPU6050 mpu;
int flexPins[5] = {A0, A1, A2, A3, A4};

void setup() {
  Serial.begin(9600);
  Wire.begin();
  mpu.initialize();
  for (int i = 0; i < 5; i++) pinMode(flexPins[i], INPUT);
}

void loop() {
  // Read and print the five flex sensor values
  for (int i = 0; i < 5; i++) {
    int flexVal = analogRead(flexPins[i]);
    Serial.print(flexVal);
    Serial.print("\t");
  }

  // Read the MPU-6050 and print acceleration and rotation values
  int16_t ax, ay, az, gx, gy, gz;
  mpu.getMotion6(&ax, &ay, &az, &gx, &gy, &gz);
  Serial.print(ax); Serial.print("\t");
  Serial.print(ay); Serial.print("\t");
  Serial.print(az); Serial.print("\t");
  Serial.print(gx); Serial.print("\t");
  Serial.print(gy); Serial.print("\t");
  Serial.println(gz);

  delay(100);
}

| Gesture | Flex1 | Flex2 | Flex3 | Flex4 | Flex5 | Ax | Ay | Az | Predicted Output |
|---|---|---|---|---|---|---|---|---|---|
| Hello | 612 | 589 | 578 | 560 | 533 | 112 | 85 | 75 | Hello |
| Thank You | 604 | 591 | 576 | 569 | 540 | 120 | 90 | 80 | Thank You |
| Help | 620 | 580 | 570 | 562 | 530 | 118 | 86 | 78 | Help |
| Category | Technologies |
|---|---|
| Hardware | Arduino Mega, Flex Sensors, MPU-6050 |
| Programming | Python, C++ |
| Machine Learning | TensorFlow, LSTM |
| Data Handling | Pandas, NumPy |
| UI / Visualization | HTML/CSS |
git clone https://github.com/username/gesture-vocalizer.git
cd gesture-vocalizer
pip install -r requirements.txt

Use the Arduino IDE to upload the .ino file to your Arduino Mega board.

python app.py
Perform gestures while wearing the glove; recognized gestures will appear in the UI and be vocalized.
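The vocalization step could be as simple as handing the predicted label to an offline text-to-speech engine; the sketch below uses pyttsx3, which is an assumption, since the project's actual TTS backend is not listed in the tech stack above.

```python
# Illustrative vocalization step: speak the recognized gesture label aloud.
# pyttsx3 is assumed here; the project's actual TTS backend may differ.
import pyttsx3

engine = pyttsx3.init()

def vocalize(label: str) -> None:
    """Speak the recognized gesture label."""
    engine.say(label)
    engine.runAndWait()

vocalize("Hello")
```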
| Metric | Value |
|---|---|
| Model Accuracy | 84.59% |
| Detection Latency | < 0.5 seconds |
| Dataset Scale | 3-second windows × 5 sensors × N gestures |
- Our research was published in the Journal of Electrical Systems, Vol. 20, No. 10s (2024).
- We obtained a copyright for our novel dataset.
- Mahek Upadhye
- Aasmi Thadhani
- Urav Dalal
- Shreya Shah





