Lightning-fast HTTP load testing with an interactive TUI: a Python interface on top of a high-performance C++ backend.
```
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Volt Load Tester ┃
┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┫
┃ URL: https://api.example.com/endpoint | Method: GET ┃
┃ ┃
┃ ┌──────────────────┬──────────────────┬──────────────────┐ ┃
┃ │ Requests/Second │ Total Requests │ Errors │ ┃
┃ │ │ │ │ ┃
┃ │ 24.7 │ 1,247 │ 3 │ ┃
┃ │ │ │ │ ┃
┃ └──────────────────┴──────────────────┴──────────────────┘ ┃
┃ ┃
┃ Concurrency: 15 threads ┃
┃ Min: 124.5ms | Avg: 405.3ms | Max: 1,245.7ms ┃
┃ ┃
┃ ┌─ Latency (ms) - Last 30 seconds ─────────────────────────────────┐ ┃
┃ │ │ ┃
┃ │ 600│ ▄▆█ │ ┃
┃ │ │ ▂▆██▇▅▃ │ ┃
┃ │ 400│ ▁████████▇▆▅▄▃▂ │ ┃
┃ │▂▄███████████████▇▆▅▄▃▂▁ │ ┃
┃ │ 200│██████████████████████████▇▆▅▄▃▂ │ ┃
┃ │ └────────────────────────────────────────────────────────────│ ┃
┃ └──────────────────────────────────────────────────────────────────┘ ┃
┃ ┃
┃ ┌─ Event Log ──────────────────────────────────────────────────────┐ ┃
┃ │ 14:32:15 ● Load test STARTED │ ┃
┃ │ 14:32:18 ● Concurrency increased to 10 threads │ ┃
┃ │ 14:32:25 ● Concurrency increased to 15 threads │ ┃
┃ │ 14:32:30 ● HTTP Error 503 detected │ ┃
┃ └──────────────────────────────────────────────────────────────────┘ ┃
┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┫
┃ s Start/Stop │ + Increase │ - Decrease │ q Quit ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
Adjust concurrency on the fly • Real-time metrics • Beautiful terminal UI
- Why Volt?
- Features
- Installation
- Quick Start
- Documentation
- Examples
- Performance
- Architecture
- Contributing
- License
| Feature | Volt | Other Tools |
|---|---|---|
| Interactive TUI | ✅ | ❌ |
| Dynamic Concurrency | ✅ | ❌ |
| C++ Performance | ✅ | ❌ |
| Python API | ✅ | ❌ |
| True Multi-threading | ✅ | ❌ |
| Lock-free Stats | ✅ | ❌ |
- C++ Backend - Native performance with libcurl through CPR
- Multi-threaded - True concurrency with GIL properly released
- Lock-free Statistics - Atomic operations for zero-overhead metrics
- Efficient - Minimal Python overhead, C++ does the heavy lifting
- Real-time Metrics - RPS, latency, errors updated every 100ms
- Sparkline Graphs - 30-second latency history visualization
- Event Log - Track test lifecycle and errors
- Dynamic Control - Adjust concurrency with `+`/`-` keys while running
- All HTTP Methods - GET, POST, PUT, DELETE support
- Custom Headers & Body - Full request control
- Python API - Simple programmatic interface
- CLI Tool - `volt-tui` command for quick testing
- Hot Reload - Change thread count without stopping
- Graceful Scaling - Threads added/removed safely
- Stats Preserved - No data loss during scaling
- Interactive - Perfect for finding optimal load levels
```bash
# Clone repository
git clone https://github.com/EpochSpace/volt.git
cd volt

# Install
pip install -e .
```

Requirements:

- Python >= 3.8
- CMake >= 3.16
- C++17 compatible compiler (GCC 7+, Clang 5+, MSVC 2017+)
- libcurl development headers (Linux only)
Linux (Debian/Ubuntu):

```bash
sudo apt-get install -y cmake libcurl4-openssl-dev
```

Linux (RHEL/CentOS/Fedora):

```bash
sudo yum install -y cmake libcurl-devel
```

macOS:

```bash
brew install cmake
```

Windows: download Visual Studio Build Tools.
If you modify C++ code, rebuild with:

```bash
pip uninstall -y volt && rm -rf _skbuild && pip install -e .
```

Quick start from the CLI:

```bash
# Basic usage
volt-tui https://httpbin.org/get

# With custom concurrency
volt-tui https://api.example.com/endpoint -c 20

# POST request
volt-tui https://api.example.com/users -c 10 -m POST
```

Keyboard controls:

- `s` - Start/stop load testing
- `+` - Increase concurrency (spawns new threads)
- `-` - Decrease concurrency (removes threads)
- `q` - Quit application
```python
import time
import tui_tester

# Create load generator
gen = tui_tester.LoadGenerator(
    url="https://httpbin.org/get",
    concurrency=10
)

# Start testing
gen.start()
time.sleep(5)

# Get real-time statistics
stats = gen.get_stats()
print(f"RPS: {stats.rps:.2f}")
print(f"Avg Latency: {stats.avg_latency_ms:.2f}ms")
print(f"Errors: {stats.error_count}")

# Stop
gen.stop()
```

Ramping concurrency dynamically:

```python
import time
import tui_tester

gen = tui_tester.LoadGenerator(url="https://httpbin.org/get", concurrency=5)
gen.start()

# Gradually ramp up load
for threads in [10, 20, 50, 100]:
    gen.set_concurrency(threads)
    time.sleep(5)
    stats = gen.get_stats()
    print(f"{threads} threads: {stats.rps:.2f} RPS")

gen.stop()
```

Constructor:

```python
LoadGenerator(
    url: str,                        # Target URL
    concurrency: int = 1,            # Number of worker threads
    method: HttpMethod = HttpMethod.GET,
    headers: dict = {},              # Custom headers
    body: str = ""                   # Request body
)
```

| Method | Description |
|---|---|
| `start()` | Start the load generator |
| `stop()` | Stop the load generator |
| `get_stats()` | Get current statistics (lock-free) |
| `is_running()` | Check whether the generator is running |
| `set_concurrency(n)` | Change the thread count dynamically |
| `get_concurrency()` | Get the current thread count |
```python
stats = gen.get_stats()

stats.request_count    # Total requests made
stats.error_count      # Failed requests
stats.rps              # Requests per second
stats.avg_latency_ms   # Average latency (ms)
stats.min_latency_ms   # Minimum latency (ms)
stats.max_latency_ms   # Maximum latency (ms)
```

POST request with custom headers and body:

```python
import tui_tester

gen = tui_tester.LoadGenerator(
    url="https://api.example.com/endpoint",
    concurrency=5,
    method=tui_tester.HttpMethod.POST,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_TOKEN"
    },
    body='{"key": "value"}'
)

gen.start()
# ... run test ...
gen.stop()
```

Explore the examples/ directory:
```bash
# Python API example
python examples/load_test_example.py

# TUI programmatic usage
python examples/tui_example.py

# Direct CLI
volt-tui https://httpbin.org/get -c 10
```

```
┌─────────────────────────────────────────────────────────┐
│ Python Layer │
│ ┌──────────────┐ ┌─────────────────┐ │
│ │ Textual TUI │ │ Python API │ │
│ └──────┬───────┘ └────────┬────────┘ │
│ │ │ │
│ └──────────┬───────────────┘ │
│ │ │
│ ┌─────▼─────┐ │
│ │ pybind11 │ (GIL Released) │
│ └─────┬─────┘ │
└────────────────────┼────────────────────────────────────┘
│
┌────────────────────▼─────────────────────────────────────┐
│ C++ Core │
│ ┌─────────────────────────────────────────────────┐ │
│ │ LoadGenerator │ │
│ │ ┌──────────────┐ ┌──────────────────────┐ │ │
│ │ │ Thread Pool │ │ Atomic Statistics │ │ │
│ │ │ (Workers) │ │ (Lock-free counters) │ │ │
│ │ └──────┬───────┘ └──────────────────────┘ │ │
│ │ │ │ │
│ │ ┌────▼────┐ │ │
│ │ │ CPR │ (libcurl wrapper) │ │
│ │ └────┬────┘ │ │
│ └─────────┼───────────────────────────────────────┘ │
│ │ │
│ ┌────▼─────┐ │
│ │ libcurl │ │
│ └──────────┘ │
└──────────────────────────────────────────────────────────┘
```
`start()` and `stop()` release the Python GIL, so C++ worker threads run while your Python code keeps executing:

```python
gen.start()  # GIL released, C++ threads run free

# Python code can run concurrently
for i in range(100):
    stats = gen.get_stats()  # Non-blocking
    print(f"RPS: {stats.rps}")
    time.sleep(0.1)
```

Lock-free statistics:

- All counters use `std::atomic<>` for zero-overhead updates
- No mutex contention in hot paths
- Worker threads never block each other
- `get_stats()` is always fast and non-blocking
- Add threads: New workers spawned immediately
- Remove threads: Graceful shutdown after current request
- Statistics preserved: No data loss during scaling
- Thread-safe: Mutex protects worker pool modifications
Run the included benchmarks:

```bash
# Quick benchmark (30 seconds)
python benchmark_quick.py

# Full benchmark (60 seconds, multiple concurrency levels)
python benchmark.py
```

Example output:

```
Threads    Requests    Errors    RPS       Avg Latency (ms)
1          47          0         9.40      106.38
5          234         0         46.80     106.91
10         452         0         90.40     110.56
20         743         0         148.60    134.54
```
- C++ Core: CPR (libcurl wrapper) for HTTP requests
- Thread Pool: Multi-threaded concurrent execution
- Python Bindings: pybind11 for seamless integration
- TUI Framework: Textual for beautiful terminal UI
- Build System: CMake + scikit-build-core
| Component | Mechanism | Purpose |
|---|---|---|
| `request_count_` | `std::atomic<uint64_t>` | Lock-free increment |
| `error_count_` | `std::atomic<uint64_t>` | Lock-free increment |
| `total_latency_ms_` | `std::atomic<uint64_t>` | Lock-free addition |
| `min/max_latency_ms_` | `std::atomic<double>` + CAS | Lock-free compare-and-swap |
| `start_time_` | `std::mutex` | Rare access, mutex acceptable |
| `threads_` vector | `std::mutex` | Worker pool modification |
```cpp
void LoadGenerator::set_concurrency(int new_concurrency) {
    int current = get_concurrency();
    if (new_concurrency > current) {
        // Add workers: spawn new threads
        add_workers(new_concurrency - current);
    } else {
        // Remove workers: graceful shutdown
        remove_workers(current - new_concurrency);
    }
    target_concurrency_ = new_concurrency;
}
```

The worker thread loop checks the concurrency target:

```cpp
void worker_thread(int thread_id) {
    while (!stop_requested_ &&
           thread_id < target_concurrency_.load()) {
        perform_request();  // Do work
    }
    // Thread exits once thread_id >= target_concurrency_
}
```

```bash
# All tests
python tests/test_load_generator.py

# Basic tests
python tests/test_basic.py

# Performance tests (short)
python tests/test_performance.py

# Stress tests (short only, long tests skipped by default)
python tests/test_stress.py

# Run long stress tests (5-10 minutes)
pytest tests/test_stress.py -m slow
```

Contributions are welcome! Here's how you can help:
- Report Bugs - Open an issue with details
- Suggest Features - Share your ideas
- Submit PRs - Fix bugs or add features
- Improve Docs - Help others understand Volt
```bash
# Clone repository
git clone https://github.com/EpochSpace/volt.git
cd volt

# Install in development mode
pip install -e .

# Run tests
python tests/test_load_generator.py

# After changing C++ code, rebuild:
pip uninstall -y volt && rm -rf _skbuild && pip install -e .
```

Project structure:

```
volt/
├── src/                    # C++ source code
│   ├── load_generator.h    # LoadGenerator class
│   ├── load_generator.cpp  # Implementation
│   └── bindings.cpp        # pybind11 bindings
├── tui_tester/             # Python package
│   ├── __init__.py         # Package exports
│   ├── tui.py              # Textual TUI
│   └── __main__.py         # CLI entry point
├── tests/                  # Test suite
├── examples/               # Usage examples
├── CMakeLists.txt          # C++ build config
└── pyproject.toml          # Python package config
```
MIT License - see LICENSE file for details.
- CPR - C++ Requests library built on libcurl
- pybind11 - Seamless Python/C++ interop
- Textual - Modern TUI framework for Python
- libcurl - The backbone of HTTP requests
Built by developers, for developers