This repository implements Local Timescale Gates (LT-Gate) for spiking neural networks, along with baseline algorithms HLOP and DSD-SNN. The codebase supports training and evaluation on temporally distinct datasets, with a focus on continual learning scenarios.
```
lt-gate/
├── configs/                  # Configuration files for different algorithms and experiments
│   ├── ltgate.yaml           # LT-Gate hyperparameters and training settings
│   ├── hlop.yaml             # HLOP baseline configuration
│   └── ablations/            # Ablation study configurations
├── data/                     # Dataset storage
│   ├── fast/                 # Fast temporal dataset (1 kHz)
│   └── slow/                 # Slow temporal dataset (50 Hz)
├── src/                      # Source code
│   ├── algorithms/           # Algorithm implementations
│   │   ├── ltgate.py         # LT-Gate implementation
│   │   └── hlop.py           # HLOP baseline implementation
│   ├── layers/               # Neural network layers
│   │   ├── dual_lif_neuron.py  # Dual-pathway LIF neuron
│   │   ├── dual_conv_lif.py    # Dual convolutional LIF layer
│   │   ├── lt_conv.py          # LT-Gate convolutional layer
│   │   └── hlop_subspace.py    # HLOP subspace projection layer
│   ├── data_loader.py        # Dataset loading utilities
│   ├── model.py              # SNN backbone architecture
│   └── evaluate.py           # Evaluation script
├── analysis/                 # Analysis and ablation scripts
│   ├── ablation_runner.py    # Automated ablation experiments
│   ├── plot_ablation.py      # Ablation result plotting
│   └── aggregate.py          # Result aggregation
├── tools/                    # Utility scripts
│   ├── run_akida.py          # Akida runtime measurements
│   └── convert_akida.py      # Model conversion for Akida
└── tests/                    # Unit and integration tests
    ├── test_ltgate.py        # LT-Gate unit tests
    ├── test_hlop.py          # HLOP unit tests
    └── test_model.py         # Model integration tests
```
- LT-Gate Algorithm: Implementation of Local Timescale Gates for adaptive temporal processing
- Baseline Algorithms: HLOP and DSD-SNN implementations for comparison
- Flexible SNN Backbone: Configurable backbone supporting all algorithms
- Multi-Platform Support:
  - CPU/GPU training via PyTorch
  - Loihi 2 neuromorphic deployment
  - Akida runtime/energy measurements
- Comprehensive Testing: Unit tests and integration tests for all components
- Experiment Tools:
  - Dataset generation and validation
  - Training pipeline with checkpointing
  - Evaluation metrics and logging
  - Statistical analysis and plotting
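To give a feel for the dual-pathway design, here is a toy sketch of a dual-timescale LIF update (this is an illustration only, not the code in `src/layers/dual_lif_neuron.py`): two pathways integrate the same input, one with the `tau_fast` and one with the `tau_slow` setting from `configs/ltgate.yaml`, and both use the configured subtract reset.

```python
import math

def lif_step(v, i_in, tau, dt=0.001, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire membrane with subtract reset."""
    v = v * math.exp(-dt / tau) + i_in  # leak, then integrate the input
    spike = v >= v_th
    if spike:
        v -= v_th  # reset_mechanism: subtract (see configs/ltgate.yaml)
    return v, spike

def dual_lif_step(state, i_in, tau_fast=0.005, tau_slow=0.1):
    """Advance fast and slow pathways that share the same input current."""
    v_fast, v_slow = state
    v_fast, s_fast = lif_step(v_fast, i_in, tau_fast)
    v_slow, s_slow = lif_step(v_slow, i_in, tau_slow)
    return (v_fast, v_slow), (s_fast, s_slow)

# Under a constant weak input, the slow pathway accumulates more potential.
state, spikes = (0.0, 0.0), (False, False)
for _ in range(10):
    state, spikes = dual_lif_step(state, 0.05)
print(state)
```

The slower leak lets the second pathway retain information over longer horizons, which is the intuition behind gating between the two timescales.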
- Python 3.8+
- PyTorch 2.0+
- CUDA 11.8+ (for GPU support)
- h5py
- numpy
- tqdm
- pyyaml
- matplotlib
- seaborn
- pandas
- pingouin (for statistical analysis)
Optional dependencies:
- lava-dl (for Loihi 2 deployment)
- akida (for Akida runtime measurements)
- After cloning the repository:

  ```
  cd lt-gate
  ```

- Create a virtual environment:

  ```
  python -m venv venv
  source venv/bin/activate    # Linux/Mac
  # OR
  .\venv\Scripts\activate     # Windows
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

Train LT-Gate model on fast dataset:

```
python train.py --config configs/ltgate.yaml --dataset fast --seed 42
```

Train LT-Gate model on slow dataset (continual learning):

```
python train.py --config configs/ltgate.yaml --dataset slow --seed 42
```

Train HLOP baseline:

```
python train.py --config configs/hlop.yaml --dataset fast --seed 42
```

Evaluate a trained checkpoint:

```
python src/evaluate.py --checkpoint logs/ltgate/seed_42/ckpt_task2.pt
```

The evaluation script will:
- Load the checkpoint and its configuration
- Create a compatible model architecture
- Run evaluation on the test dataset
- Save results to a JSON file
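The saved JSON can be loaded directly for downstream analysis. The keys below are hypothetical examples; the actual schema is defined by `src/evaluate.py`:

```python
import json

# Hypothetical result payload; the real keys come from src/evaluate.py.
raw = '{"checkpoint": "ckpt_task2.pt", "task1_acc": 0.93, "task2_acc": 0.88}'
results = json.loads(raw)

# Average accuracy across the two tasks of a continual-learning run.
mean_acc = (results["task1_acc"] + results["task2_acc"]) / 2
print(round(mean_acc, 3))
```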
Run a specific ablation variant:

```
python analysis/ablation_runner.py --variant tau20
```

Run all ablation variants:

```
python analysis/ablation_runner.py --variant all
```

Available ablation variants:

- `tau10`, `tau20`, `tau100`: Different time constant settings
- `nofast`, `noslow`: Ablating fast/slow pathways
- `gamma_fixed`: Fixed gamma parameter
- `eta_half`: Reduced learning rate
The ablation runner will:
- Train on fast dataset (Task 1)
- Continue training on slow dataset (Task 2)
- Run energy measurements on Akida hardware
- Save results to `logs/ablations/{variant}/seed_{seed}/`
Run energy measurements on Akida hardware:
```
python tools/run_akida.py logs/ablations/tau20/seed_0/ckpt_task2.pt data/slow/test.h5 logs/ablations/tau20/seed_0/energy.json
```

Configuration is hierarchical: ablation configs name a base config and override individual values.
A base configuration (e.g. `configs/ltgate.yaml`):

```yaml
alg: ltgate
batch_size: 32
epochs: 100
eta: 0.01
eta_v: 0.001
variance_lambda: 0.001
pre_decay: 0.8
tau_fast: 0.005
tau_slow: 0.1
dt: 0.001
reset_mechanism: subtract
conv_layers:
  - in_channels: 1
    out_channels: 16
    kernel_size: 3
    stride: 1
    padding: 1
  - in_channels: 16
    out_channels: 32
    kernel_size: 3
    stride: 2
    padding: 1
  - in_channels: 32
    out_channels: 64
    kernel_size: 3
    stride: 2
    padding: 1
fc_layers:
  - in_features: 3136
    out_features: 256
  - in_features: 256
    out_features: 10
calibration:
  enabled: true
  target_rate: 0.02
  batches: 10
  iters: 3
  tolerance: 0.5
  min_threshold: 0.05
  max_threshold: 2.0
```

An ablation configuration extends a base config:

```yaml
base_config: configs/ltgate.yaml
variant_tag: tau20
tau_fast: 0.02
seeds: [0, 1, 2]
```
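The hierarchical loading can be pictured as a shallow merge: the ablation file points at its `base_config`, and any keys it defines win over the base values. A minimal sketch of that idea (the repository's actual loader may differ):

```python
import copy

def merge_configs(base, override):
    """Shallow-merge an ablation config over its base config.

    Keys in `override` win; `base_config` itself is dropped since it only
    names the file to load. (Sketch only; the real loader may differ.)
    """
    merged = copy.deepcopy(base)
    for key, value in override.items():
        if key != "base_config":
            merged[key] = value
    return merged

base = {"alg": "ltgate", "tau_fast": 0.005, "tau_slow": 0.1}
variant = {"base_config": "configs/ltgate.yaml", "variant_tag": "tau20", "tau_fast": 0.02}
cfg = merge_configs(base, variant)
print(cfg["tau_fast"])  # the tau20 variant overrides the base value
```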
- Import Errors: The codebase uses relative imports; run scripts from the project root directory.
- Checkpoint Compatibility: Older checkpoints may use different model architectures; the evaluation script includes compatibility layers.
- Memory Issues: For large models, reduce the batch size or train on CPU.
- Calibration Convergence: If calibration doesn't converge, adjust `target_rate` or `tolerance` in the config.
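The `calibration` block in the config drives this behavior: thresholds are nudged until the observed spike rate falls within `tolerance` of `target_rate`, clipped to `[min_threshold, max_threshold]`. One plausible update rule is sketched below; parameter names mirror the config, but the rule itself is an assumption, not the repository's code:

```python
def calibrate_threshold(measure_rate, target_rate=0.02, iters=3, tolerance=0.5,
                        min_threshold=0.05, max_threshold=2.0, init=1.0):
    """Rescale a firing threshold toward a target spike rate.

    `measure_rate(thr)` stands for running the calibration batches and
    returning the mean spike rate at threshold `thr`. Defaults mirror the
    `calibration` block in configs/ltgate.yaml; the update rule is assumed.
    """
    thr = init
    for _ in range(iters):
        rate = measure_rate(thr)
        if abs(rate - target_rate) <= tolerance * target_rate:
            break  # converged within relative tolerance
        if rate == 0.0:
            thr /= 2.0  # silent network: lower the threshold
        else:
            # Raising the threshold lowers the rate, so scale by rate/target.
            thr = thr * rate / target_rate
        thr = min(max(thr, min_threshold), max_threshold)
    return thr

# Toy rate model where the rate is inversely proportional to the threshold:
thr = calibrate_threshold(lambda t: 0.04 / t)
print(thr)
```

If the loop exhausts `iters` without converging, loosening `tolerance` or moving `target_rate` closer to the network's natural rate gives it more room, which is why those are the knobs suggested above.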
Enable debug mode for detailed logging:

```
python train.py --config configs/ltgate.yaml --dataset fast --seed 42 --debug
```

- Fork the repository
- Create a feature branch
- Make your changes
- Run tests:

  ```
  python -m pytest tests/
  ```

- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
If you use this code in your research, please cite:
[Citation details will be added upon publication]