This repository is the official PyTorch implementation of "Understanding and Improving Laplacian Positional Encodings For Temporal GNNs", featuring temporal graph neural networks with trajectory-based supra-Laplacian positional encoding.
📄 Paper: Understanding and Improving Laplacian Positional Encodings For Temporal GNNs (ECML-PKDD 2025)
SLPE introduces novel positional encoding techniques for temporal graphs by leveraging trajectory information in supra-adjacency matrices. The framework extends traditional graph neural networks to handle dynamic graphs more effectively by incorporating temporal structural information through eigenvector trajectories.
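To make the core idea concrete, here is a minimal, self-contained sketch of Laplacian positional encodings on a single static graph (illustrative only; the helper name and setup are not taken from this repository): the `k` smallest non-trivial eigenvectors of the combinatorial Laplacian serve as per-node position features.

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return k non-trivial Laplacian eigenvectors as node positional encodings."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj  # combinatorial Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue 0), keep the next k.
    return eigvecs[:, 1:k + 1]

# Toy 4-node path graph: the PE matrix has shape (num_nodes, k).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)
print(pe.shape)  # (4, 2)
```

SLPE's contribution is to extend this static recipe to temporal graphs by computing the eigendecomposition on a supra-Laplacian and tracking eigenvector trajectories over time.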
- Temporal Positional Encoding: Novel approach using supra-Laplacian eigendecomposition with trajectory information
- Multiple Graph Representations: Support for Regular, Concatenated, and Supra graph representations
- Flexible Model Architecture: Integration with HTGN, SLATE, and other temporal GNN models
- Comprehensive Dataset Support: Compatible with TGBL, DyGLib, and HTGN datasets
- GPU Acceleration: CUDA support with CuPy for efficient eigendecomposition
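The GPU-accelerated eigendecomposition idea can be sketched as follows, assuming CuPy's NumPy-compatible API; the function name is illustrative and the code falls back to NumPy when no GPU/CuPy is available:

```python
import numpy as np

try:
    import cupy as cp  # GPU-accelerated, NumPy-compatible array library
    xp = cp
except ImportError:
    xp = np  # CPU fallback when CUDA/CuPy is unavailable

def eig_laplacian(adj):
    """Eigendecomposition of L = D - A, on GPU if CuPy is present, else CPU."""
    adj = xp.asarray(adj, dtype=xp.float64)
    lap = xp.diag(adj.sum(axis=1)) - adj
    vals, vecs = xp.linalg.eigh(lap)
    if xp is not np:
        # Move results back to host memory when computed on the GPU.
        vals, vecs = xp.asnumpy(vals), xp.asnumpy(vecs)
    return vals, vecs

vals, vecs = eig_laplacian([[0.0, 1.0], [1.0, 0.0]])
print(vals)  # eigenvalues of L in ascending order; the smallest is 0
```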
Create a conda environment using the provided configuration:
```bash
conda env create -f environment.yml
conda activate env
```

Run the basic training script:

```bash
python src/train.py
```

The framework supports extensive configuration through command-line arguments. Key parameters include:
- `--dataset`: Choose from TGBL datasets (tgbl-wiki, tgbl-review, tgbl-coin, etc.), DyGLib datasets (CanParl, USLegis, Flights, etc.), or HTGN datasets (enron10, dblp, as733, etc.)
- `--model`: Model type (default: HTGN)
- `--nhid`: Hidden embedding dimension (default: 16)
- `--nfeat`: Input feature dimension (default: 128)
- `--use_pes`: Enable positional encoding features
- `--pes_kind`: Type of PE (Laplacian, Random, Sinusoidal)
- `--graph_representation`: Graph representation method (Regular, Concatenated, Supra)
- `--length_of_pe`: Dimensionality of the positional encoding (default: 8)
- `--pes_window_size`: Temporal context window size (default: 3)
- `--use_traj`: Enable trajectory-based eigenvector computation
- `--circular`: Use circular temporal connections
- `--max_epoch`: Maximum training epochs (default: 500)
- `--lr`: Learning rate (default: 0.01)
- `--patience`: Early stopping patience (default: 50)
- `--repeat`: Number of experimental runs (default: 1)

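One ingredient behind `--use_traj` is keeping eigenvectors consistent across time steps, since eigensolvers return them with arbitrary sign and ordering. A hedged sketch of such an alignment step (a greedy inner-product matching; not necessarily the paper's exact procedure):

```python
import numpy as np

def align_trajectory(prev_vecs: np.ndarray, cur_vecs: np.ndarray) -> np.ndarray:
    """Reorder and sign-flip current eigenvectors so each continues the
    trajectory of its best-matching predecessor (greedy |inner product| match)."""
    sim = prev_vecs.T @ cur_vecs  # (k, k) pairwise inner products
    k = cur_vecs.shape[1]
    order = np.empty(k, dtype=int)
    taken = np.zeros(k, dtype=bool)
    for i in range(prev_vecs.shape[1]):
        # Pick the unmatched current eigenvector most similar to predecessor i.
        j = int(np.argmax(np.where(taken, -np.inf, np.abs(sim[i]))))
        order[i] = j
        taken[j] = True
    aligned = cur_vecs[:, order]
    # Flip signs so each column points the same way as its predecessor.
    signs = np.sign(np.sum(prev_vecs * aligned, axis=0))
    signs[signs == 0] = 1.0
    return aligned * signs

prev = np.eye(3)[:, :2]
cur = np.column_stack([-prev[:, 1], prev[:, 0]])  # swapped and sign-flipped
print(align_trajectory(prev, cur))  # recovers prev's order and signs
```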
Train HTGN with supra-Laplacian positional encoding:
```bash
python src/train.py --model=HTGN --dataset=enron10 --use_pes=True --pes_kind=Laplacian --graph_representation=Supra --use_traj=True
```

Train the SLATE model on a TGBL dataset:

```bash
python src/train.py --model=SLATE --dataset=tgbl-wiki --use_pes=True --length_of_pe=16 --pes_window_size=5
```

Supported datasets:

- TGBL: tgbl-wiki, tgbl-review, tgbl-coin, tgbl-flight, tgbl-comment
- DyGLib: CanParl, USLegis, Flights, UNtrade, UNvote, Contacts
- HTGN: enron10, dblp, as733, fbw, HepPh30
- TemporalPE: Handles temporal positional encoding computation
- EigenSolver: Manages eigendecomposition with multiple solver options
- Runner: Orchestrates training and evaluation procedures
- Model Implementations: HTGN, SLATE, and other temporal GNN variants
- Regular: Standard snapshot-based processing
- Concatenated: Feature concatenation across time windows
- Supra: Supra-adjacency matrix construction with inter-layer connections
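As an illustration, a supra-adjacency matrix over `T` snapshots can be assembled as a block diagonal of the snapshot adjacencies plus identity blocks linking consecutive time copies of each node. This is a sketch under those assumptions, not necessarily the repository's exact construction; the `circular` flag here mirrors the `--circular` option:

```python
import numpy as np

def supra_adjacency(snapshots, circular=False):
    """Stack T snapshot adjacencies block-diagonally and add identity
    inter-layer links between consecutive time copies of each node."""
    T = len(snapshots)
    n = snapshots[0].shape[0]
    supra = np.zeros((T * n, T * n))
    for t, adj in enumerate(snapshots):
        supra[t * n:(t + 1) * n, t * n:(t + 1) * n] = adj  # intra-layer edges
    eye = np.eye(n)
    for t in range(T - 1):  # inter-layer coupling t <-> t+1 (symmetric)
        supra[t * n:(t + 1) * n, (t + 1) * n:(t + 2) * n] = eye
        supra[(t + 1) * n:(t + 2) * n, t * n:(t + 1) * n] = eye
    if circular and T > 2:  # optionally link the last snapshot back to the first
        supra[(T - 1) * n:, :n] = eye
        supra[:n, (T - 1) * n:] = eye
    return supra

snaps = [np.array([[0, 1], [1, 0]], dtype=float)] * 3
S = supra_adjacency(snaps)
print(S.shape)  # (6, 6)
```

The supra-Laplacian used for the positional encodings is then the graph Laplacian of this enlarged matrix.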
The framework integrates with Weights & Biases for comprehensive experiment tracking. Results include:
- AUC and AP scores across multiple runs
- Training curves and performance metrics
- Hyperparameter configurations
- Computational profiling information
This project builds upon several existing frameworks:
- HTGN - Hyperbolic Temporal Graph Networks
If you use this code in your research, please cite:
```bibtex
@article{galron2025understanding,
  title={Understanding and Improving Laplacian Positional Encodings For Temporal GNNs},
  author={Galron, Yaniv and Frasca, Fabrizio and Maron, Haggai and Treister, Eran and Eliasof, Moshe},
  journal={arXiv preprint arXiv:2506.01596},
  year={2025}
}
```

Please refer to the individual component licenses and the original HTGN repository for licensing information.