The project aims to move beyond VR controllers that rely on monotonous actions and simple button inputs. Instead, it implements an interface that recognizes sequences of actions on VR devices by leveraging hand gesture/sign language recognition.
In the initial stages, a webcam was used as a starting point, with some features forked from https://github.com/dgovor/Sign-Language-Translator. These features were later expanded and refined to meet the project's objectives.
To create a sign language recognition model for Oculus devices, we developed a Unity package to capture hand-keypoint movements using the Oculus Quest 2. We also created a data-preprocessing method and a model-training process to achieve optimal results, and enabled real-time testing on the device.
- Data generation
- Data preprocessing
- Model training
- Model deployment/real-time testing
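The four steps above share one data contract: each recorded frame is a flat vector of hand-keypoint coordinates, and training consumes fixed-length windows of such frames. A minimal sketch of that shaping (the 21-joint and 30-frame numbers are assumptions, not the project's exact values):

```python
import numpy as np

N_JOINTS = 21  # assumed number of tracked joints per hand
N_COORDS = 3   # x, y, z per joint
SEQ_LEN = 30   # assumed number of frames per training sequence

def frames_to_sequences(frames: np.ndarray, seq_len: int = SEQ_LEN) -> np.ndarray:
    """Slide a fixed-length window over recorded frames.

    frames: (T, N_JOINTS * N_COORDS) array of flattened keypoints.
    Returns: (T - seq_len + 1, seq_len, N_JOINTS * N_COORDS).
    """
    windows = [frames[i:i + seq_len] for i in range(len(frames) - seq_len + 1)]
    return np.stack(windows)

# Example: 100 recorded frames -> 71 overlapping training sequences.
frames = np.random.rand(100, N_JOINTS * N_COORDS)
sequences = frames_to_sequences(frames)
print(sequences.shape)  # (71, 30, 63)
```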
The primary file structure of the project is as follows:
📂 LSTM-Sign-Language-Recognition-Model
│
├── 📂 oculus # Recognition model for an Oculus device
│ ├── 📂 python
│ │ ├── convert.py # Data Preprocessing
│ │ ├── getrange.py # Script for calculating data ranges (can be used for data preprocessing)
│ │ ├── main.py # Main script for processing
│ │ └── model.py # LSTM model implementation
│ ├── 📂 unity
│ │ └── handtracking.unitypackage # Package for collecting data and testing the model with Oculus
│ │
├── 📂 webcam # Recognition model for webcam
│ ├── data_collection.py # Script for collecting data
│ ├── delete.py # Script for deleting unwanted data
│ ├── main.py # Main script for webcam data processing
│ ├── model.py # LSTM model implementation
│ └── my_functions.py # Helper functions
│
└── README.md
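`model.py` (in both `webcam` and `oculus`) holds the actual LSTM implementation. As a self-contained illustration of the recurrence such a model is built on — not the project's code, and with arbitrary sizes — here is a single LSTM step in NumPy:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x: (input_dim,) current frame's features
    h, c: (hidden_dim,) previous hidden and cell state
    W: (4*hidden_dim, input_dim), U: (4*hidden_dim, hidden_dim), b: (4*hidden_dim,)
    Gate order in the stacked weights: input, forget, cell candidate, output.
    """
    z = W @ x + U @ h + b
    hd = h.shape[0]
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i = sig(z[:hd])              # input gate
    f = sig(z[hd:2 * hd])        # forget gate
    g = np.tanh(z[2 * hd:3 * hd])  # candidate cell state
    o = sig(z[3 * hd:])          # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a short sequence of 63-dim keypoint frames through the recurrence.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 63, 32
W = rng.normal(scale=0.1, size=(4 * hidden_dim, input_dim))
U = rng.normal(scale=0.1, size=(4 * hidden_dim, hidden_dim))
b = np.zeros(4 * hidden_dim)
h = c = np.zeros(hidden_dim)
for x in rng.normal(size=(30, input_dim)):  # 30 frames of features
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (32,)
```

The final hidden state `h` is what a classifier head would consume to predict the sign label.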
Collect data:

```
python webcam/data_collection.py
```

Train a model on the collected data:

```
python webcam/model.py
```

Test the trained model:

```
python webcam/main.py
```

Collect data with the hand-tracking Unity package (`oculus/unity/handtracking.unitypackage`):
- Turn on the "Timer" script
- Select the "Data Collect" script and fill in the sign name
- Run the package and perform the sign language with the Oculus headset on
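`convert.py` performs the preprocessing; its exact transform lives in that script, but a common normalization for hand keypoints (shown here purely as an assumption) re-expresses every joint relative to the wrist, so a sign is recognized regardless of where the hand sits in the tracking space:

```python
import numpy as np

def wrist_relative(keypoints: np.ndarray, wrist_index: int = 0) -> np.ndarray:
    """Translate hand keypoints so the wrist joint sits at the origin.

    keypoints: (n_joints, 3) array of x, y, z positions for one frame.
    wrist_index: which row is the wrist (index 0 is an assumption).
    """
    return keypoints - keypoints[wrist_index]

frame = np.array([[0.5, 1.0, 0.2],   # wrist
                  [0.6, 1.1, 0.2],   # a finger joint
                  [0.7, 1.2, 0.3]])  # another joint
out = wrist_relative(frame)
# The wrist becomes (0, 0, 0); other joints keep their offsets from it.
print(out[0])
```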
Preprocess the data:

```
python oculus/convert.py
```

Decide the threshold value with this:

```
python oculus/getrange.py
```

Train a model on the collected data:
```
python oculus/model.py
```

Test the trained model:

- Run the following command:

  ```
  python oculus/main.py
  ```

- Send real-time data through the Unity package (`oculus/unity/handtracking.unitypackage`) with the "Test Collect" script selected
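`getrange.py` reports per-feature data ranges that can guide the threshold choice. The script's own logic is not reproduced here, but the underlying idea — derive min/max ranges from the collected data and use them to scale incoming frames — can be sketched as:

```python
import numpy as np

def feature_ranges(data: np.ndarray):
    """Per-feature min and max over a dataset of flattened keypoint frames."""
    return data.min(axis=0), data.max(axis=0)

def minmax_scale(frame: np.ndarray, lo: np.ndarray, hi: np.ndarray, eps: float = 1e-8):
    """Scale one frame into [0, 1] using previously computed ranges."""
    return (frame - lo) / (hi - lo + eps)

data = np.array([[0.0, 2.0],
                 [1.0, 4.0],
                 [0.5, 3.0]])
lo, hi = feature_ranges(data)
scaled = minmax_scale(data[2], lo, hi)
print(lo, hi)   # per-feature minima and maxima
print(scaled)   # ~[0.5, 0.5]
```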
This project was successfully used to develop a Motion Recognition-Based Adventure Game in VR, which received the Meta Representative Award at the 2024 Metaverse Developer Contest. Learn more about the model and the game at https://www.modoogallery.online/studioon
- Chaeri Kim
- Jaewon Zhang
- Developed with advice from Professor Heesun Park
This project is licensed under the terms of the MIT license.