A machine learning game that captures ASL. It uses the handPose model in ml5.js to perform pose estimation and determine which letter of the alphabet is being signed.
The demo below shows the main functionality of the game.
The game uses MQTT (a Mosquitto broker) for online connectivity.
This lets players see real-time data from other players (for example, another player's hand keypoints) as well as other data used to validate the game flow.
Additionally, it uses machine learning to detect the ASL alphabet.
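Below is a minimal sketch of how that MQTT layer could look using the MQTT.js client over WebSockets; the broker URL, topic names, and payload shape are illustrative assumptions, not the project's actual values.

// Minimal sketch: sharing hand keypoints in real time over MQTT (MQTT.js over WebSockets).
// The broker URL and the "asl-game/..." topics below are hypothetical.
import mqtt from "mqtt";

const client = mqtt.connect("ws://localhost:9001"); // WebSocket-enabled Mosquitto broker

client.on("connect", () => {
  // Receive other players' hand keypoints.
  client.subscribe("asl-game/players/+/handpoints");
});

client.on("message", (topic, payload) => {
  const data = JSON.parse(payload.toString());
  console.log(`Update on ${topic}:`, data);
});

// Publish this player's keypoints so opponents can render them in real time.
function publishHandPoints(playerId, keypoints) {
  client.publish(
    `asl-game/players/${playerId}/handpoints`,
    JSON.stringify({ keypoints, t: Date.now() })
  );
}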

Here are the requirements for using this project:
- Tauri – Desktop app framework
- p5.js – Creative coding library
- ml5.js – Machine learning for the web
- Supabase – Database and authentication
- Arduino – Hardware integration
Hardware requirements for this project, if you want to use an Arduino:
| Component |
|---|
| Arduino Nano 33 BLE Sense lite |
| TinyML Shield |
| Grove LCD 16x2 (White on Blue) |
| LCD Button |
| USB Webcam (PC) |
A step-by-step guide to get the development environment up and running for this project.
Once you have forked and cloned this repository to your desktop, open the project in VS Code and run the following:
$ cd ProjectASL
$ npm install
Copy the "config.example.json" file and rename the one you copied "config.json".
Then, replace "YOUR_SUPABASE_URL" and "YOUR_SUPABASE_ANON_KEY" with your own.
See Supabase Documentation for more information of setting your database.
{
  "supabase": {
    "url": "YOUR_SUPABASE_URL",
    "anonKey": "YOUR_SUPABASE_ANON_KEY"
  }
}
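As a rough sketch of how these values might be consumed in the app (assuming the supabase-js client and that config.json is imported at startup; the import path and the "scores"/"avg_letter_speed" names are hypothetical, not the project's actual schema):

// Sketch: create a Supabase client from config.json (supabase-js assumed).
import { createClient } from "@supabase/supabase-js";
import config from "./config.json"; // illustrative path

const supabase = createClient(config.supabase.url, config.supabase.anonKey);

// Example query against a hypothetical "scores" table.
const { data, error } = await supabase
  .from("scores")
  .select("*")
  .order("avg_letter_speed", { ascending: true });
if (error) console.error(error);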
See the hardware requirements above before adding the code.
In your Arduino IDE, copy and paste the code below and click "Upload":
#include <Wire.h>
#include "rgb_lcd.h"

rgb_lcd lcd;

// Player data
String playerName = "";
float avgLetterSpeed = 0.0;

// Button and LED pins
#define BUTTON_PIN A4          // Correct pin based on your testing
const int ledPin = 13;         // Built-in LED
int lastButtonValue = HIGH;    // For edge detection

// LCD update tracking
String lastPlayerName = "";
float lastSpeed = -1;

void setup() {
  Serial.begin(9600);
  while (!Serial) {}  // Wait for the USB serial connection

  // LCD setup
  lcd.begin(16, 2);
  lcd.setRGB(255, 255, 255);  // White backlight

  // Button and LED setup
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // Use internal pull-up for stability
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // --- Read Serial Data from PC ---
  if (Serial.available()) {
    String data = Serial.readStringUntil('\n');  // Expect "PlayerName,1.25"
    data.trim();                                 // Strip any trailing CR/whitespace
    int commaIndex = data.indexOf(',');
    if (commaIndex > 0 && commaIndex < data.length() - 1) {
      playerName = data.substring(0, commaIndex);
      avgLetterSpeed = data.substring(commaIndex + 1).toFloat();
    }
  }

  // --- Button Press Detection with Debounce ---
  int currentValue = digitalRead(BUTTON_PIN);

  // Detect press event (HIGH → LOW)
  if (currentValue == LOW && lastButtonValue == HIGH) {
    Serial.println("BUTTON PRESSED!");
    digitalWrite(ledPin, HIGH);
    delay(200);  // Blink effect
    digitalWrite(ledPin, LOW);
  }
  lastButtonValue = currentValue;
  delay(40);  // Debounce

  // --- Update LCD Display only if data changed ---
  if (playerName != lastPlayerName || avgLetterSpeed != lastSpeed) {
    lcd.clear();

    // Player name on left
    lcd.setCursor(0, 0);
    lcd.print(playerName);

    // Avg speed on right
    String speedText = String(avgLetterSpeed, 2) + "s";
    int pos = 16 - speedText.length();
    lcd.setCursor(pos, 0);
    lcd.print(speedText);

    // Progress bar on second line
    int barLength = constrain(map(avgLetterSpeed * 100, 0, 500, 0, 16), 0, 16);
    lcd.setCursor(0, 1);
    for (int i = 0; i < barLength; i++) {
      lcd.write(byte(255));  // Full block character
    }

    lastPlayerName = playerName;
    lastSpeed = avgLetterSpeed;
  }
}
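For reference, the Arduino expects the PC to send one line per update in the form "PlayerName,1.25". Here is a minimal sketch of what that sender could look like from the web side using the Web Serial API (whether the project actually uses Web Serial or a Tauri serial plugin is an assumption, and the function names are illustrative):

// Sketch: send "PlayerName,avgSpeed" lines to the Arduino via the Web Serial API.
let writer;

async function connectArduino() {
  // Must be triggered by a user gesture; the user selects the Arduino's port.
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 9600 }); // must match Serial.begin(9600) in the sketch above
  writer = port.writable.getWriter();
}

async function sendPlayerStats(playerName, avgLetterSpeed) {
  const line = `${playerName},${avgLetterSpeed.toFixed(2)}\n`;
  await writer.write(new TextEncoder().encode(line));
}

// Example usage: sendPlayerStats("Alex", 1.25);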
To run the Tauri app, use the following command:
$ npm run tauri dev
To build the Tauri app, use the following command:
$ npm run tauri build
This will produce an .exe file in the "src-tauri" folder (assuming you are on Windows).
The classifier is built using ml5.js and p5.js to perform pose estimation for each letter (A-Z, excluding "J" and "Z", which require motion). A total of 54,000 pose-estimation samples were collected (2,000 per letter, except 4,000 for "X" and 6,000 for "Y"). The settings used to train the model:
epoch = 1000;
batchSize = 128;
learningRate = 0.001;
hiddenUnits = 2048;
ml5.handPose({ runtime: "mediapipe" }, { flipped: true });
See the Source Code that was used to train the model.
Note: the model has been trained using the right hand only.
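As a rough illustration of how those settings plug into ml5.js, here is a simplified training sketch (it assumes ml5's handPose and neuralNetwork APIs; the data-collection helper and labeling flow are illustrative and not the project's exact training script):

// Sketch: train a letter classifier from handPose keypoints with ml5.js + p5.js.
let handPose, classifier;

function preload() {
  handPose = ml5.handPose({ flipped: true }); // mirrored to match the webcam view
}

function setup() {
  createCanvas(640, 480);
  classifier = ml5.neuralNetwork({
    task: "classification",
    learningRate: 0.001,
    hiddenUnits: 2048,
    debug: true,
  });
}

// For each collected frame, flatten the 21 hand keypoints into [x0, y0, x1, y1, ...]
// and label the sample with the letter being signed (hypothetical helper).
function addSample(hand, letter) {
  const inputs = hand.keypoints.flatMap((kp) => [kp.x, kp.y]);
  classifier.addData(inputs, { letter });
}

// Once all samples are collected, normalize and train with the settings above.
function trainModel() {
  classifier.normalizeData();
  classifier.train({ epochs: 1000, batchSize: 128 }, () => {
    console.log("Training complete");
    classifier.save(); // export the model files for use in the game
  });
}

At play time, the same keypoint flattening would presumably be passed to classifier.classify() and the top label compared with the letter the game asked for.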
- This project was inspired by The Coding Train: https://www.youtube.com/channel/UCvjgXvBlbQiydffZU7m1_aw.
- This project is intended only as a tool and is not meant to replace ASL interpreters. It detects letters only, not gestures.
- This project satisfies the requirements of the course "SEIS 744: IoT with Machine Learning".
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
This is under the MIT License.