A modern, GPU-accelerated reservoir computing library for PyTorch.
resdag brings the power of Echo State Networks (ESNs) and reservoir computing to PyTorch with a clean, modular API. Built for researchers and practitioners who need fast, flexible, and production-ready reservoir computing models.
- **GPU-Accelerated**: Full GPU support for training and inference
- **Pure PyTorch**: Native `nn.Module` components, TorchScript compatible
- **Modular Design**: Build complex architectures with simple building blocks
- **Multiple Topologies**: 15+ graph topologies for reservoir initialization
- **Algebraic Training**: Efficient ridge regression via Conjugate Gradient
- **Flexible API**: Compose models using `pytorch_symbolic`
- **HPO Ready**: Built-in Optuna integration for hyperparameter optimization
- **Production Ready**: Stateful layers, model persistence, GPU compilation
Installation:

```bash
pip install resdag

# With hyperparameter optimization support
pip install resdag[hpo]
```

Or install from source:

```bash
git clone https://github.com/El3ssar/resdag.git
cd resdag
pip install -e .

# Or using uv (faster)
uv sync
```

Quick start:

```python
import torch
import pytorch_symbolic as ps
from resdag import ESNModel, ReservoirLayer, CGReadoutLayer, ESNTrainer
# 1. Define the model architecture
inp = ps.Input((100, 3)) # (seq_len, features)
reservoir = ReservoirLayer(
reservoir_size=500,
feedback_size=3,
spectral_radius=0.9,
topology="erdos_renyi"
)(inp)
readout = CGReadoutLayer(500, 3, alpha=1e-6, name="output")(reservoir)
model = ESNModel(inp, readout)
# 2. Train the model (algebraic, not SGD!)
trainer = ESNTrainer(model)
trainer.fit(
warmup_inputs=(warmup_data,),
train_inputs=(train_data,),
targets={"output": train_targets}
)
# 3. Make predictions
predictions = model.forecast(forecast_warmup, horizon=1000)
```

Premade architectures are also available:

```python
from resdag.models import ott_esn
# Ott's ESN for chaotic systems (with state augmentation)
model = ott_esn(
reservoir_size=500,
feedback_size=3,
output_size=3,
spectral_radius=0.95,
)
# Train and forecast as above
```

The heart of an ESN is the reservoir, a stateful RNN layer with randomly initialized, fixed recurrent weights:

```python
from resdag.layers import ReservoirLayer
from resdag.init.topology import get_topology
reservoir = ReservoirLayer(
reservoir_size=500, # Number of neurons
feedback_size=3, # Dimension of feedback input
input_size=5, # Optional: dimension of driving inputs
spectral_radius=0.9, # Controls memory/stability
leak_rate=1.0, # Leaky integration (1.0 = no leak)
activation="tanh", # Activation function
topology=get_topology("watts_strogatz", k=4, p=0.3),
)
# Forward pass
states = reservoir(feedback) # Feedback only
states = reservoir(feedback, driving_input)  # With driving input
```
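For reference, such a layer computes the standard leaky-ESN state update (a sketch in the notation of the classical ESN literature; resdag's internal naming may differ):

$$
x_{t+1} = (1 - a)\,x_t + a\,\tanh\!\big(W x_t + W_{\mathrm{fb}}\,y_t + W_{\mathrm{in}}\,u_t\big)
$$

where $a$ is the `leak_rate`, $W$ is the fixed recurrent matrix rescaled to the requested `spectral_radius`, $y_t$ is the feedback signal, and $u_t$ is the optional driving input.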
Readouts are linear layers trained via ridge regression (not gradient descent):

```python
from resdag.layers.readouts import CGReadoutLayer
readout = CGReadoutLayer(
in_features=500, # Reservoir size
out_features=3, # Output dimension
alpha=1e-6, # Ridge regularization
name="output", # Name for multi-readout models
)
# Fit using conjugate gradient
readout.fit(reservoir_states, targets)
output = readout(reservoir_states)
```
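Conceptually, `fit` solves a ridge-regularized least-squares problem. Here is a minimal plain-PyTorch sketch of that computation (a dense solve for clarity, whereas `CGReadoutLayer` uses conjugate gradient; all variable names here are illustrative):

```python
import torch

def ridge_fit(X: torch.Tensor, Y: torch.Tensor, alpha: float = 1e-6) -> torch.Tensor:
    """Solve (X^T X + alpha * I) W = X^T Y for the readout weights W."""
    n_features = X.shape[1]
    gram = X.T @ X + alpha * torch.eye(n_features, dtype=X.dtype, device=X.device)
    return torch.linalg.solve(gram, X.T @ Y)

X = torch.randn(1000, 500)  # stand-in for collected reservoir states
Y = torch.randn(1000, 3)    # stand-in for training targets
W = ridge_fit(X, Y)
preds = X @ W               # the trained readout is then a plain linear map
```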
Build models using `pytorch_symbolic` for clean, functional composition:

```python
import pytorch_symbolic as ps
from resdag import ESNModel
from resdag.layers import ReservoirLayer, Concatenate
from resdag.layers.readouts import CGReadoutLayer
# Multi-input model with driving signal
feedback = ps.Input((100, 3))
driver = ps.Input((100, 5))
reservoir = ReservoirLayer(500, feedback_size=3, input_size=5)(feedback, driver)
readout = CGReadoutLayer(500, 3, name="output")(reservoir)
model = ESNModel([feedback, driver], readout)
```

Efficient algebraic training via `ESNTrainer`:

```python
from resdag.training import ESNTrainer
trainer = ESNTrainer(model)
# Two-phase training: warmup + fitting
trainer.fit(
warmup_inputs=(warmup_feedback, warmup_driver), # Synchronize states
train_inputs=(train_feedback, train_driver), # Fit readout
targets={"output": targets}, # One target per readout
)
```

Two-phase forecasting: teacher-forced warmup + autoregressive generation:

```python
# Simple forecast (feedback only)
predictions = model.forecast(warmup_data, horizon=1000)
# Input-driven forecast (with external signals)
predictions = model.forecast(
warmup_feedback,
warmup_driver,
horizon=1000,
forecast_drivers=(future_driver,), # Provide future driving inputs
)
# Include warmup in output
full_output = model.forecast(
warmup_data,
horizon=1000,
return_warmup=True,
)
```

resdag supports 15+ graph topologies for reservoir initialization:

```python
from resdag.init.topology import get_topology, show_topologies
# List all available topologies
show_topologies()
# Get details for a specific topology
show_topologies("erdos_renyi")
# Create topology initializer
topology = get_topology("watts_strogatz", k=6, p=0.3, seed=42)
# Use in reservoir
reservoir = ReservoirLayer(
reservoir_size=500,
feedback_size=3,
topology=topology,
spectral_radius=0.95,
)
```

Available topologies (see the sketch after this list):
- `erdos_renyi` - Random graphs with edge probability
- `watts_strogatz` - Small-world networks
- `barabasi_albert` - Scale-free networks
- `complete` - Fully connected
- `ring_chord` - Ring with chords
- `dendrocycle` - Dendritic cycles
- And many more!
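As a quick sketch of how a topology choice shapes the reservoir, here is the one topology whose keywords appear elsewhere in this README (`watts_strogatz`) at two rewiring probabilities; for any other topology, check its parameters with `show_topologies(...)` rather than guessing them:

```python
from resdag.init.topology import get_topology
from resdag.layers import ReservoirLayer

# Low rewiring probability: nearly a regular ring lattice.
regular_ish = get_topology("watts_strogatz", k=4, p=0.05, seed=42)
# High rewiring probability: close to a random graph.
random_ish = get_topology("watts_strogatz", k=4, p=0.9, seed=42)

res_a = ReservoirLayer(reservoir_size=500, feedback_size=3, topology=regular_ish)
res_b = ReservoirLayer(reservoir_size=500, feedback_size=3, topology=random_ish)
```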
Custom initialization strategies for input/feedback weights:

```python
from resdag.init.input_feedback import get_input_feedback
# List available initializers
from resdag.init.input_feedback import show_input_initializers
show_input_initializers()
# Use custom initializer
reservoir = ReservoirLayer(
reservoir_size=500,
feedback_size=3,
feedback_initializer=get_input_feedback("chebyshev"),
input_initializer=get_input_feedback("random", input_scaling=0.5),
)
```

Train models with multiple outputs:

```python
# Define multiple readouts
reservoir = ReservoirLayer(500, feedback_size=3)(inp)
readout1 = CGReadoutLayer(500, 3, name="position")(reservoir)
readout2 = CGReadoutLayer(500, 3, name="velocity")(reservoir)
model = ESNModel(inp, [readout1, readout2])
# Train both readouts
trainer.fit(
warmup_inputs=(warmup,),
train_inputs=(train,),
targets={
"position": position_targets,
"velocity": velocity_targets,
},
)
```

Built-in utilities for data loading and preparation:

```python
from resdag.utils.data import load_file, prepare_esn_data
# Load time series
data = load_file("lorenz.csv") # Auto-detects format
# Split into ESN training phases
warmup, train, target, f_warmup, val = prepare_esn_data(
data,
warmup_steps=100, # Reservoir synchronization
train_steps=500, # Readout training
val_steps=200, # Validation
normalize="minmax", # Normalization method
)
```

Built-in Optuna integration for HPO:

```python
from resdag.hpo import run_hpo
from resdag.models import ott_esn
def model_creator(reservoir_size, spectral_radius):
return ott_esn(
reservoir_size=reservoir_size,
feedback_size=3,
output_size=3,
spectral_radius=spectral_radius,
)
def search_space(trial):
return {
"reservoir_size": trial.suggest_int("reservoir_size", 100, 1000, step=100),
"spectral_radius": trial.suggest_float("spectral_radius", 0.5, 1.5),
}
def data_loader(trial):
return {
"warmup": warmup,
"train": train,
"target": target,
"f_warmup": f_warmup,
"val": val,
}
# Run optimization
study = run_hpo(
model_creator=model_creator,
search_space=search_space,
data_loader=data_loader,
n_trials=100,
loss="efh", # Expected Forecast Horizon (best for chaotic systems)
n_workers=4, # Parallel optimization
)
print(f"Best params: {study.best_params}")Available loss functions:
- `efh` - Expected Forecast Horizon (recommended for chaotic systems)
- `horizon` - Contiguous valid forecast steps
- `lyap` - Lyapunov-weighted loss
- `standard` - Mean geometric error
- `discounted` - Time-discounted RMSE
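Switching objectives is just the `loss` argument. For example, to score trials by the longest contiguous run of valid forecast steps, reusing the `model_creator`, `search_space`, and `data_loader` defined above:

```python
# Same setup as above, but optimize the contiguous-valid-steps objective.
study = run_hpo(
    model_creator=model_creator,
    search_space=search_space,
    data_loader=data_loader,
    n_trials=100,
    loss="horizon",
    n_workers=4,
)
```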
Comprehensive examples are available in the examples/ directory:
- `00_registry_system.py` - Working with topology and initializer registries
- `01_reservoir_with_topology.py` - Using different graph topologies
- `02_input_feedback_initializers.py` - Custom weight initialization
- `06_premade_models.py` - Using premade ESN architectures
- `07_save_load_models.py` - Model persistence and checkpointing
- `08_forecasting.py` - Time series forecasting examples
- `09_training.py` - Training workflows with ESNTrainer
- `10_hpo.py` - Hyperparameter optimization examples
Run any example:
```bash
python examples/08_forecasting.py
```

A complete workflow for chaotic time series:

```python
from resdag.models import ott_esn
from resdag.training import ESNTrainer
# Lorenz attractor prediction
model = ott_esn(reservoir_size=500, feedback_size=3, output_size=3)
trainer = ESNTrainer(model)
trainer.fit(warmup_inputs=(warmup,), train_inputs=(train,), targets={"output": target})
# Long-term prediction
predictions = model.forecast(forecast_warmup, horizon=5000)
```

The same workflow extends to systems with external forcing:

```python
# System with external forcing
feedback = ps.Input((100, 3))
driver = ps.Input((100, 2))
reservoir = ReservoirLayer(500, feedback_size=3, input_size=2)(feedback, driver)
readout = CGReadoutLayer(500, 3, name="output")(reservoir)
model = ESNModel([feedback, driver], readout)
# Forecast with future driving signals
predictions = model.forecast(
warmup_feedback,
warmup_driver,
horizon=1000,
forecast_drivers=(future_driver,),
)
```

Multiple readouts can target different horizons from the same reservoir:

```python
# Predict at multiple timescales
reservoir = ReservoirLayer(1000, feedback_size=3)(inp)
short_term = CGReadoutLayer(1000, 3, name="1step")(reservoir)
medium_term = CGReadoutLayer(1000, 3, name="10step")(reservoir)
long_term = CGReadoutLayer(1000, 3, name="100step")(reservoir)
model = ESNModel(inp, [short_term, medium_term, long_term])
```
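Training such a model follows the multi-readout pattern shown earlier: one entry in `targets` per readout name (the target tensors below are placeholders for appropriately shifted copies of the series):

```python
from resdag.training import ESNTrainer

trainer = ESNTrainer(model)
trainer.fit(
    warmup_inputs=(warmup,),
    train_inputs=(train,),
    targets={
        "1step": targets_1step,      # series shifted by 1 step
        "10step": targets_10step,    # series shifted by 10 steps
        "100step": targets_100step,  # series shifted by 100 steps
    },
)
```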
Full documentation is available in the docs/ directory:

- Topology System - Graph topologies for reservoirs
- Input/Feedback Initializers - Weight initialization strategies
- Model Composition - Building complex architectures
- Training Guide - ESN training workflows
- Hyperparameter Optimization - HPO best practices
- Save/Load Models - Model persistence (see the sketch after this list)
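As a minimal persistence sketch, assuming `ESNModel` follows standard `nn.Module` `state_dict` semantics (resdag's own save/load utilities are documented in the Save/Load guide and `examples/07_save_load_models.py`):

```python
import torch
from resdag.models import ott_esn

# Save: assuming the fixed reservoir matrices are registered as
# parameters/buffers, the state dict restores them exactly.
torch.save(model.state_dict(), "esn_lorenz.pt")

# Load: rebuild the same architecture, then restore the weights.
restored = ott_esn(reservoir_size=500, feedback_size=3, output_size=3)
restored.load_state_dict(torch.load("esn_lorenz.pt"))
```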
Generate API documentation using Sphinx:

```bash
cd docs/
sphinx-apidoc -o api/ ../src/resdag
make html
```

Run the test suite:

```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=resdag --cov-report=html
# Run specific test module
pytest tests/test_layers/test_reservoir.py
```

Current test coverage: 57% (240 tests passing).
Development setup:

```bash
# Clone repository
git clone https://github.com/El3ssar/resdag.git
cd resdag
# Install with development dependencies
uv sync --dev
# Or with pip
pip install -e ".[dev]"We use ruff for linting and black for formatting:
# Format code
black src/ tests/
# Lint code
ruff check src/ tests/
# Type checking
mypy src/resdag/
```
Project layout:

```
├── src/resdag/
│   ├── composition/        # Model composition (pytorch_symbolic)
│   ├── layers/             # Reservoir, Readout, custom layers
│   ├── init/               # Weight initialization
│   │   ├── topology/       # Graph topologies
│   │   ├── input_feedback/ # Input/feedback initializers
│   │   └── graphs/         # Graph generation functions
│   ├── training/           # ESNTrainer and training utilities
│   ├── models/             # Premade model architectures
│   ├── hpo/                # Hyperparameter optimization
│   └── utils/              # Data loading and utilities
├── tests/                  # Comprehensive test suite
├── examples/               # Example scripts
└── docs/                   # Documentation
```
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes with tests
- Ensure tests pass (`pytest`)
- Format code (`black src/ tests/`)
- Submit a pull request
See CONTRIBUTING.md for detailed guidelines.
If you use resdag in your research, please cite:
```bibtex
@software{resdag2026,
author = {Daniel Estevez-Moya},
title = {resdag: A PyTorch Library for Reservoir Computing},
year = {2026},
url = {https://github.com/El3ssar/resdag}
}
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Built on PyTorch and pytorch_symbolic
- Inspired by ReservoirPy and classical ESN literature
- Graph generation powered by NetworkX
- Model construction made easy and modular thanks to Pytorch-Symbolic
- Author: Daniel Estevez-Moya
- Email: kemossabee@gmail.com
- Issues: GitHub Issues
- Additional premade architectures (Liquid State Machines, Next-Gen RC)
- Online learning capabilities
- TorchScript export for production
- ONNX support
- Distributed training for large reservoirs
- Interactive visualization tools
- Benchmarking suite against other ESN libraries
⭐ Star us on GitHub if you find this useful!