This project runs an ASL sign detector locally. It streams webcam frames through a Flask app (`/video_feed`) and recognizes letters, along with a confidence score for each prediction.
Future improvement ideas include using the OpenAI API to construct complete words and adding support for numbers.
- `app.py` — Flask app that streams `/video_feed` and exposes `/current` for the corrected word.
- `keras_Model.h5` — your trained Keras model (place the real file here).
- `labels.txt` — newline-separated class labels (A-Z or custom labels).
- `templates/index.html` — frontend page that displays the video.
- Create and activate a Python virtual environment (recommended). Windows PowerShell example:

  ```powershell
  python -m venv .venv; .\.venv\Scripts\Activate.ps1
  ```

- Install the dependencies:

  ```powershell
  pip install -r requirements.txt
  ```

- Place your `keras_Model.h5` and `labels.txt` in the project root.
- (Optional) Configure OpenAI for spelling correction:

  ```powershell
  setx OPENAI_API_KEY "your_api_key_here"
  # Then restart your shell/IDE so the env var is visible to processes
  ```

- Start the Flask app:

  ```powershell
  python app.py
  ```

- Open http://localhost:5000/ in your browser. The video stream is at `/video_feed`.
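The optional correction step could look something like the sketch below. The model name, prompt wording, and `correct_word` helper are assumptions for illustration, not the project's actual code; the point is that the key is read from the environment and only the tentative word is sent.

```python
import os

def build_correction_prompt(tentative_word):
    """Only the tentative ASL word goes into the prompt; no other user text."""
    return ("Correct the spelling of this word and reply with the word only: "
            + tentative_word)

def correct_word(tentative_word):
    """Return a spell-corrected word via OpenAI, or the raw word if no key is set."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        # No key configured: fall back to the uncorrected word.
        return tentative_word
    # Imported lazily so the app still runs without the openai package.
    from openai import OpenAI
    client = OpenAI(api_key=api_key)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any small chat model would do
        messages=[{"role": "user",
                   "content": build_correction_prompt(tentative_word)}],
    )
    return resp.choices[0].message.content.strip()
```

Falling back to the raw word keeps the detector usable when the API is unconfigured or unreachable.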
- The app does not send any other user text to GPT; only the tentative ASL word is sent for correction.
- If you run into model import errors, ensure `tensorflow` is installed and your Python version is compatible.
- For production or remote camera use, modify the `cv2.VideoCapture()` source and secure the OpenAI key handling.
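One way to make the capture source configurable is via an environment variable; the `CAMERA_SOURCE` name and both helpers below are assumptions for illustration, not part of the project:

```python
import os

def parse_camera_source(value):
    """A bare integer selects a local device index; anything else
    (e.g. an RTSP URL) is passed through as a stream address."""
    return int(value) if value.isdigit() else value

def open_capture():
    """Open the capture device named by CAMERA_SOURCE (default: webcam 0)."""
    # Imported lazily so the parsing helper is testable without OpenCV.
    import cv2
    source = os.environ.get("CAMERA_SOURCE", "0")
    return cv2.VideoCapture(parse_camera_source(source))
```

For example, `CAMERA_SOURCE=rtsp://host/stream python app.py` would point the detector at a remote camera without code changes.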