WASURE is a command-line toolkit that helps you benchmark WebAssembly runtimes with clarity. It lets you run benchmarks across multiple engines, manage runtime environments, and generate meaningful visualizations and exports for analysis.
- Run and manage WebAssembly benchmarks across various runtimes
- Install, update, and manage runtimes with simple commands
- Export results to CSV for analysis or create plots for quick visualization
- Designed for clarity, reproducibility, and extensibility
Install the latest released version from PyPI:
```
pip3 install wasure
```

**Warning**: Updating the pip package will delete all locally stored data, including benchmark results, installed runtimes, saved plots, and custom benchmarks. To avoid this, you can either:

- Use a custom directory for this data via the relevant flags, such as `--runtimes-folder` (see the example below)
- Back up any important data beforehand
- Run `wasure` from source, without installing it
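For instance, keeping runtimes in a custom directory might look like this (a sketch: `--runtimes-folder` is the flag named above, but its exact placement on the command line is an assumption):

```
# Assumed flag placement; keeps installed runtimes outside the package directory
wasure runtimes install wasmtime --runtimes-folder ~/wasure-runtimes
```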
Run the project from source, without installing it. This gives you the newest version:

```
git clone https://github.com/r-carissimi/wasure.git
cd wasure
pip3 install -r requirements.txt
python3 -m wasure
```

**Note**: While following the instructions below, be sure to replace `wasure` with `python3 -m wasure`.
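For example, listing the available benchmarks from a source checkout:

```
python3 -m wasure benchmarks list
```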
If you want to install the newest features or fixes, install directly from the repository:
```
git clone https://github.com/r-carissimi/wasure.git
cd wasure
pip install .
```

WASURE is structured as a command-line tool with modular subcommands to list, run, compare, and visualize WebAssembly benchmarks. Each subcommand has its own options, allowing you to start simple and scale up your experiments as needed.
```
wasure [OPTIONS] COMMAND
```

Use `--log-level DEBUG` to troubleshoot issues and `--help` under any subcommand for more information.
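Following the usage synopsis above (global options before the subcommand), a debug run might look like this (a sketch; the option placement is an assumption):

```
# Assumed: --log-level is a global option that precedes the subcommand
wasure --log-level DEBUG benchmarks list
```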
See what benchmarks are available:
```
wasure benchmarks list
```

Install, update, and manage supported runtimes:

```
# View available runtimes
wasure runtimes available
# Install a runtime
wasure runtimes install wasmtime
# Update or remove runtimes
wasure runtimes update wasmtime
wasure runtimes remove wasmtime
# List installed runtimes and their versions
wasure runtimes list
wasure runtimes version
```

Run benchmarks with your chosen runtimes:

```
# Run a single benchmark on one runtime
wasure run -b helloworld -r wasmtime
# Run multiple benchmarks on multiple runtimes
wasure run -b pystone dummy dhrystone/dhrystone10M -r wasmtime wasmedge wasmer --repeat 3
# Run a raw WebAssembly file directly
wasure run -b py2wasm/pystone/pystone.wasm -r wasmtime
# Run a benchmark tracking the memory consumption. Timings may increase.
wasure run -b helloworld -r wasmtime --memory
```

Useful flags (combined in the example below):

- `--repeat N`: Repeat each benchmark N times
- `--no-store-output`: Don't save output, just timings
- `--results-folder <path>`: Define a custom output directory
- `--memory`: Poll the memory consumption
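As a sketch, several of these flags can be combined in one invocation (flag names are from the list above; combining them this way is an assumption):

```
# Assumes the documented flags combine freely in a single run
wasure run -b pystone -r wasmtime wasmedge --repeat 5 --results-folder ./my-results
```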
Plot or export results with:

```
# Plot benchmark output
wasure plot /path/to/results/2025-05-06_10-56-21.json

# Export results to CSV
wasure export /path/to/results/2025-05-06_10-56-21.json
```

When you export benchmark results to CSV, each row contains the following columns:
| Column | Description |
|---|---|
| `benchmark` | Name of the benchmark or WebAssembly file |
| `runtime` | Name of the runtime used |
| `run_index` | Index of the run (for repeated benchmarks) |
| `elapsed_time_ns` | Execution time in nanoseconds |
| `score` | Benchmark-specific score (if applicable, else 0) |
| `return_code` | Process return code (0 means success) |
| `max_rss_bytes` | Maximum resident set size in bytes, if `--memory` is set |
| `max_vms_bytes` | Maximum virtual memory size in bytes, if `--memory` is set |
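As a quick sanity check, the mean elapsed time per benchmark/runtime pair can be computed straight from the exported CSV. A minimal sketch, assuming the columns appear in the order shown above and a file named `results.csv` (the filename is a placeholder):

```
# Columns assumed in table order: benchmark,runtime,run_index,elapsed_time_ns,...
# Averages elapsed_time_ns (column 4) per benchmark/runtime pair.
awk -F, 'NR > 1 { sum[$1 "," $2] += $4; n[$1 "," $2]++ }
         END { for (k in sum) printf "%s,%.0f\n", k, sum[k] / n[k] }' results.csv
```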
The `check` command lets you verify whether specific benchmarks run successfully on selected runtimes. It is particularly useful when combined with the `wasm-features` or `wasi-proposals` benchmark groups, which let you track which runtimes implement specific features or proposals.

```
# Check the wasm features support on all runtimes
wasure check wasm-features

# Check the wasi proposals implementation on wasmtime and wasmedge
wasure check wasi-proposals -r wasmtime wasmedge
```

Refer to the Replay Merger project for instructions on how to run WASI benchmarks on runtimes that do not support WASI.
Refer to the Benchmarks Management Documentation for detailed instructions on adding new benchmarks.
Refer to the Runtimes Management Documentation for detailed instructions on adding or editing runtimes.
- Platform Support: Linux and macOS are supported. Windows is currently not supported.
- Path Restrictions: Installers relying on `npm` (e.g., v8, jsc, spidermonkey) may fail if the runtimes path contains spaces.
- Non-ASCII Characters: JSC (JavaScriptCore) does not support payload paths with non-ASCII characters.
Contributions are welcome! Feel free to open issues or submit pull requests. Please ensure your code follows the PEP 8 style guide for Python to keep the codebase consistent.
This project is licensed under the GNU General Public License v3.0.
