LLoCa (Lorentz Local Canonicalization) is a general framework for extending arbitrary backbones (neural networks, transformers, etc.) to be Lorentz-equivariant. Rather than enforcing equivariance through specialized tensor operations or group-theoretic layers, LLoCa canonicalizes four-vectors into per-particle local frames and performs tensorial message passing between frames. This provides a lightweight and flexible alternative to dedicated Lorentz-equivariant architectures such as the Lorentz-Equivariant Geometric Algebra Transformer (L-GATr), while retaining exact Lorentz symmetry.
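The core wrapper pattern can be sketched in a few lines of PyTorch. This is a minimal illustration, not the repo's API: the `canonicalize`/`decanonicalize` helpers are hypothetical, and constructing valid frames from the four-momenta is the job of `FramesNet`.

```python
import torch

def canonicalize(L, x):
    """Map global-frame four-vectors into each particle's local frame: x_i -> L_i x_i.

    L: (..., N, 4, 4) per-particle Lorentz transformations (hypothetical helper)
    x: (..., N, 4)    four-vectors in the global frame
    """
    return torch.einsum("...nij,...nj->...ni", L, x)

def decanonicalize(L_inv, y):
    """Map local-frame backbone outputs back to the global frame: y_i -> L_i^{-1} y_i."""
    return torch.einsum("...nij,...nj->...ni", L_inv, y)
```

Equivariance comes from the frames co-transforming with the data: under a global Lorentz transformation Λ, the momenta transform as p → Λp while the frames transform as L_i → L_i Λ⁻¹, so the canonicalized inputs L_i p_i are invariant, and the decanonicalized outputs L_i⁻¹ y_i pick up exactly one factor of Λ.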
This repo contains a PyTorch implementation of LLoCa applied to a Transformer backbone, including:
- `FramesNet`: constructs per-particle local Lorentz frames (`L` and `L_inv`) from four-momenta.
- `LLoCaTransformer`: a transformer backbone that operates in the canonical frames; a baseline `Transformer` lives in `src/transformer.py` for comparison.
- Example script: `scripts/fit_efps.py` trains both models to regress a single Energy Flow Polynomial using `src/efp_dataset.py` (see the usage sketch below).
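A rough end-to-end usage sketch follows. Everything here is assumed for illustration: the module paths, constructor arguments, and call signatures are hypothetical, so check the files under `src/` for the real interfaces.

```python
import torch

from src.frames_net import FramesNet                # module path assumed
from src.lloca_transformer import LLoCaTransformer  # module path assumed

# A batch of 8 events with 16 particles each, four-momenta (E, px, py, pz).
fourmomenta = torch.randn(8, 16, 4)

# Per-particle frames; returning (L, L_inv), each of shape (8, 16, 4, 4), is assumed.
frames_net = FramesNet()
L, L_inv = frames_net(fourmomenta)

# Backbone operating in the canonical frames; call signature assumed.
model = LLoCaTransformer()
prediction = model(fourmomenta, L, L_inv)
```

For the actual experiment, `scripts/fit_efps.py` wires this up and trains both the LLoCa and baseline transformers on the EFP regression task.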
- Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant — Spinner et al., 2025. arXiv:2505.20280
- Lorentz-Equivariance without Limitations — Favaro et al., 2025. arXiv:2508.14898
- Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics — Spinner et al., 2024. arXiv:2405.14806