NOTE: Due to their size, the CSV data files for training and testing must be added to the project folder manually.
Audible ON is an application that uses the camera on the client's device to record American Sign Language (ASL) gestures and translate them into legible text and/or audible speech, and vice versa (from either text or speech to ASL). Both hearing-impaired and hearing clients can use the application to communicate with one another in a variety of situations. Audible ON builds on several libraries and computing systems, such as Google's MediaPipe and a Convolutional Neural Network (CNN), to provide additional features that give users flexibility. These features are listed below.
Even as technology and medicine advance each year, communication between hearing-impaired and hearing individuals still needs improvement. Audible ON seeks to let both groups communicate with each other without an interpreter. Its goal is to help hearing-impaired people participate in society more easily by letting them communicate with the general public just as readily as the general public communicates among itself, with no live interpreter required.
- Gesture recognition is currently programmed for the left hand only
- Jupyter Notebook
- Google's MediaPipe
- Keras
- NumPy
- OpenCV
- Time
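MediaPipe's Hands solution reports 21 landmarks per detected hand, which must be turned into a fixed-length feature vector before a classifier can use them. The sketch below is one plausible preprocessing step, not the project's exact pipeline: it assumes 21 (x, y) landmark pairs and applies a simple translation/scale normalization (wrist at the origin, coordinates fit into [-1, 1]) so the features are position- and distance-invariant.

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Convert 21 MediaPipe hand landmarks, given as (x, y) pairs,
    into a normalized 42-element feature vector.
    The normalization scheme is an assumption for illustration."""
    pts = np.asarray(landmarks, dtype=float).reshape(21, 2)
    pts -= pts[0]                # translate so the wrist (landmark 0) is the origin
    scale = np.abs(pts).max()
    if scale > 0:
        pts /= scale             # scale coordinates into [-1, 1]
    return pts.flatten()
```

A vector like this could then be passed to the CNN (or any Keras classifier) for gesture prediction.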
https://google.github.io/mediapipe/solutions/hands.html
https://github.com/google/mediapipe/