
WebEyeTrack

Created by Eduardo Davalos, Yike Zhang, Namrata Srivastava, Yashvitha Thatigotla, Jorge A. Salas, Sara McFadden, Sun-Joo Cho, Amanda Goodwin, Ashwin TS, and Gautam Biswas from Vanderbilt University, Trinity University, and St. Mary's University


Note: This is an enhanced fork of WebEyeTrack with professional-grade features, performance optimizations, and improved developer experience. See Attribution & Enhancements below for details.

WebEyeTrack is a framework that uses a lightweight CNN-based neural network to predict the (x, y) gaze point on the screen. The framework provides both Python and JavaScript/TypeScript (client-side) versions to support research/testing as well as browser deployment. It performs few-shot gaze estimation by collecting samples on-device to adapt the model to unseen persons.

Attribution & Enhancements

About This Fork

This repository is an enhanced fork of the original WebEyeTrack research implementation created by Eduardo Davalos, Yike Zhang, and collaborators at Vanderbilt University, Trinity University, and St. Mary's University.

Original WebEyeTrack Research:

Fork Enhancements

This fork adds substantial improvements to the original WebEyeTrack implementation:

Infrastructure & Build System:

  • ✅ Modern build pipeline with Rollup for ESM/CJS/UMD distribution
  • ✅ Multi-format support (CommonJS, ES Modules, UMD)
  • ✅ Optimized worker loading with flexible bundler support
  • ✅ NPM package improvements with proper entry points

Code Quality & Type Safety:

  • ✅ TypeScript strict mode enabled throughout
  • ✅ Comprehensive type definitions and interfaces
  • ✅ Removed all @ts-ignore comments
  • ✅ Type-safe API surface

Memory Management:

  • ✅ IDisposable interface for resource cleanup
  • ✅ MemoryMonitor utility for leak detection
  • ✅ Automatic tensor disposal in all components
  • ✅ Memory cleanup error boundaries for React
  • ✅ Fixed optimizer memory leaks
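
The disposal pattern behind these items can be sketched as follows. This is a minimal illustration only: the names (`IDisposable`, `TensorPool`, `track`) are assumptions for the sketch, and the fork's actual interface ties cleanup to TensorFlow.js tensor disposal rather than plain callbacks.

```typescript
// Sketch of an IDisposable-style cleanup contract (names are illustrative).
interface IDisposable {
  dispose(): void;
}

class TensorPool implements IDisposable {
  private disposed = false;
  private cleanups: Array<() => void> = [];

  // Register a cleanup action (e.g. tensor.dispose() in the real library).
  track(cleanup: () => void): void {
    if (this.disposed) throw new Error("pool already disposed");
    this.cleanups.push(cleanup);
  }

  get isDisposed(): boolean {
    return this.disposed;
  }

  // Release everything exactly once; repeated calls are safe no-ops.
  dispose(): void {
    if (this.disposed) return;
    this.disposed = true;
    for (const cleanup of this.cleanups) cleanup();
    this.cleanups = [];
  }
}
```

Making `dispose()` idempotent matters in React, where effect cleanups and error boundaries can both attempt to release the same resources.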

Performance Optimizations:

  • ✅ TensorFlow.js warmup for shader pre-compilation
  • ✅ Eliminated redundant perspective matrix inversions
  • ✅ Optimized eye patch extraction (bilinear resize instead of homography)
  • ✅ Canvas caching in WebcamClient
  • ✅ Performance test suite
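
To illustrate the bilinear-resize point above: when the eye patch is already axis-aligned, a plain bilinear resize avoids the per-pixel cost of a full homography warp. The sketch below is a generic, self-contained bilinear resampler for a row-major grayscale buffer, not the fork's actual extraction code.

```typescript
// Bilinear sampling on a grayscale image stored row-major in a Float32Array.
function bilinearSample(
  img: Float32Array, width: number, height: number,
  x: number, y: number
): number {
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const x1 = Math.min(x0 + 1, width - 1);
  const y1 = Math.min(y0 + 1, height - 1);
  const fx = x - x0, fy = y - y0;
  const at = (xx: number, yy: number) => img[yy * width + xx];
  const top = at(x0, y0) * (1 - fx) + at(x1, y0) * fx;
  const bot = at(x0, y1) * (1 - fx) + at(x1, y1) * fx;
  return top * (1 - fy) + bot * fy;
}

function bilinearResize(
  img: Float32Array, w: number, h: number,
  outW: number, outH: number
): Float32Array {
  const out = new Float32Array(outW * outH);
  for (let j = 0; j < outH; j++) {
    for (let i = 0; i < outW; i++) {
      // Map output pixel centers back into source coordinates.
      const sx = (i + 0.5) * (w / outW) - 0.5;
      const sy = (j + 0.5) * (h / outH) - 0.5;
      out[j * outW + i] = bilinearSample(
        img, w, h,
        Math.max(0, Math.min(sx, w - 1)),
        Math.max(0, Math.min(sy, h - 1))
      );
    }
  }
  return out;
}
```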

Calibration System:

  • ✅ Interactive 4-point calibration interface
  • ✅ Clickstream calibration with separate buffer architecture
  • ✅ Calibration point persistence (never evicted)
  • ✅ Parameters aligned with Python reference implementation
  • ✅ Comprehensive calibration documentation
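
The separate-buffer idea — calibration points kept permanently while clickstream samples are evicted oldest-first — can be sketched like this. The class, field names, and default capacity are assumptions for illustration, not the fork's API.

```typescript
type GazeSample = { x: number; y: number; gazeX: number; gazeY: number };

// Sketch of a two-buffer sample store: calibration samples persist,
// clickstream samples live in a bounded FIFO.
class SampleStore {
  private calibration: GazeSample[] = []; // never evicted
  private clickstream: GazeSample[] = []; // bounded FIFO

  constructor(private maxClickstream = 32) {}

  addCalibration(s: GazeSample): void {
    this.calibration.push(s);
  }

  addClickstream(s: GazeSample): void {
    this.clickstream.push(s);
    if (this.clickstream.length > this.maxClickstream) {
      this.clickstream.shift(); // evict oldest click sample only
    }
  }

  // A training batch always includes every calibration point.
  batch(): GazeSample[] {
    return [...this.calibration, ...this.clickstream];
  }
}
```

Keeping the two buffers separate guarantees that a burst of noisy click samples can never push the high-quality calibration points out of the adaptation set.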

Advanced Features:

  • ✅ Video-fixation synchronization
  • ✅ Gaze recording and analysis tools
  • ✅ Real-time visualization components
  • ✅ Analysis dashboard

Developer Experience:

  • ✅ Reorganized JavaScript-specific documentation
  • ✅ Worker configuration guides
  • ✅ Memory management documentation
  • ✅ Complete SDK implementation guide
  • ✅ Example applications with best practices

Package Installation

JavaScript/TypeScript (Enhanced Fork):

npm install @koyukan/webeyetrack

Python (Original):

pip install webeyetrack

For detailed usage instructions, see the respective README files:

Getting Started

Which version of WebEyeTrack to use depends on your purpose and target platform. The following table can help you decide:

| Feature | Python Version | JavaScript Version |
| --- | --- | --- |
| Purpose | Training, research, and testing | Deployment and production |
| Primary Use Case | Model development and experimentation | Real-time inference in the browser |
| Supported Devices | CPU & GPU (desktop/server) | CPU (web browser, mobile) |
| Model Access | Full access to model internals | Optimized for on-device inference and training |
| Extensibility | Highly customizable (e.g., few-shot learning, adaptation) | Minimal, focused on performance |
| Frameworks | TensorFlow / Keras | TensorFlow.js |
| Data Handling | Direct access to datasets and logs | Webcam stream, UI input |

Go to the README for the corresponding Python/JS version (links below) to get started with these packages.

Acknowledgements

The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grants R305A150199 and R305A210347 to Vanderbilt University. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.

Reference

If you use this work in your research, please cite us using the following:

@misc{davalos2025webeyetrack,
	title={WEBEYETRACK: Scalable Eye-Tracking for the Browser via On-Device Few-Shot Personalization},
	author={Eduardo Davalos and Yike Zhang and Namrata Srivastava and Yashvitha Thatigotla and Jorge A. Salas and Sara McFadden and Sun-Joo Cho and Amanda Goodwin and Ashwin TS and Gautam Biswas},
	year={2025},
	eprint={2508.19544},
	archivePrefix={arXiv},
	primaryClass={cs.CV},
	url={https://arxiv.org/abs/2508.19544}
}

License

WebEyeTrack is open-sourced under the MIT License, which permits personal, academic, and commercial use with proper attribution. Feel free to use, modify, and distribute the project.
