Welcome to the official results page for EmbeddedML-Benchmark, a benchmarking framework designed to evaluate machine learning models on embedded systems. The project assesses the performance of machine learning models across several tasks, including image classification, anomaly detection, keyword spotting, and emotion detection. By optimizing models for embedded devices, we examine how efficiently they can run in real-time applications with limited resources.
The EmbeddedML-Benchmark project focuses on deploying machine learning models onto embedded systems such as microcontrollers. We evaluate key performance metrics such as inference time, memory usage, CPU utilization, and energy consumption across a range of machine learning models.
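As an illustration of how such metrics can be collected, the sketch below times repeated inferences of a TensorFlow Lite model and samples process memory and CPU load with psutil. It is a minimal sketch, assuming a Python harness on an embedded Linux board; the model file name, input dtype, and iteration count are illustrative rather than the project's actual setup, and energy consumption is normally measured with external instrumentation rather than in software.

```python
# Minimal benchmarking sketch: average inference time, memory (RSS), and CPU
# utilization for a TFLite model. Model path and input dtype are illustrative.
import time
import numpy as np
import psutil
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2_quant.tflite")  # hypothetical file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

proc = psutil.Process()
psutil.cpu_percent(interval=None)  # prime the CPU usage counter
latencies = []

for _ in range(100):
    dummy = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
    interpreter.set_tensor(inp["index"], dummy)
    start = time.perf_counter()
    interpreter.invoke()  # one forward pass
    latencies.append(time.perf_counter() - start)
    _ = interpreter.get_tensor(out["index"])

print(f"avg inference time: {1000 * sum(latencies) / len(latencies):.2f} ms")
print(f"memory (RSS)      : {proc.memory_info().rss / 1024:.0f} KiB")
print(f"CPU utilization   : {psutil.cpu_percent(interval=None):.1f} %")
```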
The results presented on this website include performance benchmarks of models trained for tasks such as:
- Image Classification: Classifying images into categories using lightweight models such as MobileNetV2 that are suited to TinyML deployment.
- Anomaly Detection: Identifying unusual patterns in datasets.
- Keyword Spotting: Recognizing specific words or commands in audio.
- Emotion Detection: Detecting emotions from text using LSTM-based models.
We aim to highlight the feasibility of running complex machine learning models on resource-constrained devices.
On this website, you can explore detailed performance results for various models, including metrics like:
- Average execution time
- Average memory usage
- CPU usage during inference
- Quantized and dequantized model outputs
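The last item refers to the raw integer tensors produced by an integer-quantized model and the real-valued scores recovered from them. The recovery follows the standard affine dequantization rule real = scale * (quantized - zero_point); the sketch below illustrates it with made-up values, while in practice the scale and zero point would come from the interpreter's output details.

```python
# Dequantize an integer output tensor back to real-valued scores:
#   real_value = scale * (quantized_value - zero_point)
import numpy as np

def dequantize(q_output: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q_output.astype(np.float32) - zero_point)

# Illustrative values; a TFLite harness would read scale and zero_point from
# interpreter.get_output_details()[0]["quantization"].
q_scores = np.array([3, 250, 7], dtype=np.uint8)
print(dequantize(q_scores, scale=0.00390625, zero_point=0))
```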
You can browse these results to get a comprehensive picture of how well each model, once optimized for embedded devices, performs under different conditions.
To explore the benchmarking results and learn more about EmbeddedML-Benchmark, visit the site here:
This site is intended as a resource for understanding how machine learning models can be deployed and optimized on embedded systems, and we hope the benchmarking framework proves useful to researchers and developers working in edge AI and embedded systems.
Special thanks to Ali Salesi.