Innovation • Benchmarks • Installation • Quick Start • Architecture • Citation
- +7.2% over the best baseline
- 2.6x fewer parameters than the Transformer
- Quaternions naturally encode rotation, and QRF leverages this inductive bias
Quaternion Ruffle Field (QRF) is a novel neural network architecture where each neuron maintains both a spatial position and a rotational orientation represented as a unit quaternion. Unlike traditional scalar neurons, QRF neurons exist on a dynamic manifold (S³) and interact through geometric relationships.
**A traditional neuron:** just a number. No direction. No orientation memory.

**A QRF neuron:** a point in space that knows which way it's pointing!
Traditional neural networks treat rotation as just another pattern to learn. QRF builds rotation into the neuron itself: each neuron IS a rotation. This creates a powerful inductive bias for tasks involving 3D geometry, orientation, and rotational relationships.
- **Dual-State Neurons**: Position + Orientation
- **Field Thermodynamics**: Adaptive Temperature & Coherence
- **Sequence Memory**: Cross-Timestep Attention
- **SLERP Memory**: Spherical Interpolation
- **Ruffle Optimizer**: Energy-Based Exploration
| Feature | Traditional NN | QRF | Benefit |
|---|---|---|---|
| Neuron State | Scalar value | Position + Quaternion | Native rotation representation |
| Interactions | Matrix multiply | Hamilton products + Geodesics | Preserves rotational composition |
| Attention | Learned from scratch | Modulated by quaternion state | Geometry-aware weighting |
| Memory | Hidden states | SLERP on Sยณ manifold | Smooth orientation interpolation |
| Dynamics | Static weights | Temperature + Coherence evolution | Automatic regularization |
| Optimization | Gradient only | Gradient + Ruffle perturbations | Escapes local minima |
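To make the "Position + Quaternion" neuron state in the table above concrete, here is a minimal, illustrative sketch of how such a dual-state field can be parameterized. The class below is a toy example for intuition, not the package's internal implementation:

```python
import torch
import torch.nn as nn

class ToyDualStateField(nn.Module):
    """Toy illustration: every neuron carries a spatial position and a unit quaternion."""
    def __init__(self, n_neurons=64, space_dim=3):
        super().__init__()
        # Spatial position of each neuron
        self.coordinates = nn.Parameter(torch.randn(n_neurons, space_dim))
        # Rotational state of each neuron, projected onto S³ when used
        self.quaternions = nn.Parameter(torch.randn(n_neurons, 4))

    def unit_quaternions(self):
        # Normalize so every neuron encodes a valid rotation
        return self.quaternions / self.quaternions.norm(dim=-1, keepdim=True)

field = ToyDualStateField()
print(field.unit_quaternions().norm(dim=-1))  # all ones: each neuron lies on S³
```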
Given 64 noisy point correspondences (before/after rotation), predict the rotation quaternion.
| Model | Accuracy | Params | Time (s) |
|---|---|---|---|
| 🥇 QRF | 92.60% | 151,641 | 180.1 |
| 🥈 QRF_NoAttn | 87.40% | 131,077 | 163.5 |
| 🥉 Dense | 85.40% | 18,436 | 32.9 |
| Transformer | 78.60% | 398,212 | 502.0 |
| LSTM | 71.80% | 202,500 | 138.6 |
Why QRF excels: Quaternions naturally compose rotations. The sequence memory aggregates evidence from all 64 point pairs simultaneously.
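The repository's exact data pipeline is not reproduced here, but a task of this shape can be synthesized in a few lines. The helper below (function names, noise level, and feature layout are assumptions) builds one training example of 64 noisy point correspondences plus its target quaternion:

```python
import torch

def random_unit_quaternion():
    q = torch.randn(4)
    return q / q.norm()

def quaternion_to_matrix(q):
    # Standard rotation matrix for a unit quaternion q = (w, x, y, z)
    w, x, y, z = q
    return torch.stack([
        torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)]),
        torch.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)]),
        torch.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]),
    ])

def make_example(n_points=64, noise=0.05):
    q = random_unit_quaternion()
    before = torch.randn(n_points, 3)
    after = before @ quaternion_to_matrix(q).T + noise * torch.randn(n_points, 3)
    features = torch.cat([before, after], dim=-1)   # [64, 6] matches input_dim=6
    return features, q
```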
Remember a pattern from the start of a sequence and recall it after a noise-filled gap.
| Model | Accuracy | Params |
|---|---|---|
| Transformer | 100.0% | 413,828 |
| LSTM | 100.0% | 264,964 |
| QRF | 100.0% | 167,257 |
| QRF_NoAttn | 100.0% | 146,693 |
| Dense | 58.6% | 34,052 |
Note: QRF matches Transformer/LSTM with 2.5x fewer parameters.
Classify sequences based on embedded prototype patterns.
| Model | Accuracy | Params |
|---|---|---|
| All models | 100.0% | Various |
All models solve this task perfectly; it's included as a sanity check.
During training, QRF's field state evolves:
| Metric | Start | End | Meaning |
|---|---|---|---|
| Energy | ~1.3 | 0.93 | Field becoming more organized |
| Temperature | 1.0 | 0.51 | Cooling → sharper decisions |
| Coherence | 1.0 | 0.58 | Neurons aligning orientations |
These dynamics provide automatic regularization: the field self-organizes!
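Given a trained model, these quantities can be inspected directly. The snippet below is a minimal monitoring sketch using the field attributes and methods documented in the API reference; the exact contents of the summary dict, and the assumption that these values are scalar tensors, are not guaranteed by the source:

```python
# Minimal monitoring sketch; exact keys of the summary dict may differ.
summary = model.field.get_field_state_summary()
print(summary)

# The raw quantities are also exposed on the field itself
# (assumes they are scalar tensors):
energy = model.field.compute_folding_energy()
print(f"energy={float(energy):.3f}, "
      f"T={float(model.field.field_temperature):.3f}, "
      f"coherence={float(model.field.coherence_factor):.3f}")
```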
Requires Python 3.9+ and PyTorch 2.0+.
```bash
pip install quaternion-ruffle-field
```

Or install from source:

```bash
git clone https://github.com/reneemgagnon/quaternion-ruffle-field.git
cd quaternion-ruffle-field
pip install -e .
```

Dependencies:

```
torch>=2.0.0
numpy>=1.24.0
```

Basic usage:

```python
import torch
from quaternion_ruffle_field import QRFModel

# Create model for rotation prediction (output_dim=4 for quaternion)
model = QRFModel(
    input_dim=6,               # e.g., 3D point + rotated point
    hidden_dim=128,
    output_dim=4,              # quaternion output
    n_neurons=64,
    use_attention=True,
    use_sequence_memory=True   # NEW in v5.0!
)

# Forward pass
x = torch.randn(32, 64, 6)     # [batch, sequence, features]
quaternions = model(x)         # [batch, sequence, 4] - unit quaternions!

print(f"Output shape: {quaternions.shape}")
print(f"Is unit quaternion: {torch.allclose(quaternions.norm(dim=-1), torch.ones(32, 64))}")
```

Training with the ruffle optimizer:

```python
from quaternion_ruffle_field import QRFModel, QuaternionRuffleOptimizer

model = QRFModel(input_dim=6, hidden_dim=128, output_dim=4, n_neurons=64)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
qrf_optimizer = QuaternionRuffleOptimizer(model.field, fold_threshold=1.5)

for epoch in range(30):
    for batch in dataloader:
        optimizer.zero_grad()
        output = model(batch['input'])
        loss = quaternion_loss(output, batch['target'])  # see the loss definition below
        loss.backward()
        optimizer.step()

        # Apply quaternion ruffles (energy-based perturbations)
        stats = qrf_optimizer.step()

    print(f"Epoch {epoch}: Energy={stats['energy']:.3f}, T={stats['temperature']:.3f}")
```

Working with field memory across sequences:

```python
# Process first sequence
out1 = model(sequence_1)

# Reset field but preserve learned state
model.field.reset(preserve_memory=True)

# Process second sequence (can recall previous context)
out2 = model(sequence_2)

# Explicitly restore from memory with SLERP blending
model.field.restore_from_memory(blend_factor=0.5)
```
```
QUATERNION RUFFLE FIELD v5.0

  INPUT                   SEQUENCE MEMORY                FIELD PROCESSING

  [batch, seq, dim] ──▶  Cross-Timestep Attention  ──▶  Quaternion Field
                         (Q, K, V)                      (one unit quaternion q
                              │                          per neuron)
                              ▼                                │
                         Temperature-Modulated                 ▼
                         Softmax                        Hamilton Products
                              │                                │
                              ▼                                ▼
                         Gated Memory        ◀──  Quaternion Modulation
                         Integration               + Cached Attention
                              │                     + Skip Connections
                              ▼
                         OUTPUT [batch, seq, dim]


FIELD DYNAMICS (updated during training)

  Temperature ──▶ Controls attention sharpness  (lower = more focused)
  Coherence   ──▶ Measures neuron alignment     (higher = more organized)
  Energy      ──▶ Triggers ruffle perturbations (prevents stagnation)
```
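The "Temperature-Modulated Softmax" stage in the diagram is ordinary scaled dot-product attention with the field temperature folded into the softmax. A standalone sketch of that idea (not the library's exact implementation):

```python
import torch
import torch.nn.functional as F

def temperature_modulated_attention(Q, K, V, temperature):
    # Lower temperature -> sharper, more focused attention weights
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / (d_k ** 0.5)
    weights = F.softmax(scores / temperature, dim=-1)
    return weights @ V

Q = K = V = torch.randn(2, 64, 32)   # [batch, seq, dim]
focused = temperature_modulated_attention(Q, K, V, temperature=0.5)
diffuse = temperature_modulated_attention(Q, K, V, temperature=2.0)
```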
```
Quaternion Representation:
  q = w + xi + yj + zk   where   w² + x² + y² + z² = 1

Hamilton Product (Neuron Interaction):
  q₁ ⊗ q₂ = (w₁w₂ - x₁x₂ - y₁y₂ - z₁z₂)
          + (w₁x₂ + x₁w₂ + y₁z₂ - z₁y₂) i
          + (w₁y₂ - x₁z₂ + y₁w₂ + z₁x₂) j
          + (w₁z₂ + x₁y₂ - y₁x₂ + z₁w₂) k

SLERP (Memory Blending):
  slerp(q₁, q₂, t) = sin((1-t)θ)/sin(θ) · q₁ + sin(tθ)/sin(θ) · q₂
  where θ = arccos(q₁ · q₂)

Geodesic Distance:
  d(q₁, q₂) = arccos(|q₁ · q₂|)
```
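These formulas translate almost line-for-line into PyTorch. The helpers below are plain transcriptions for reference, not the package's internal functions (the SLERP version assumes the two quaternions are not nearly identical, otherwise sin(θ) approaches 0):

```python
import torch

def hamilton_product(q1, q2):
    # Composes two rotations; quaternions are stored as (w, x, y, z)
    w1, x1, y1, z1 = q1.unbind(-1)
    w2, x2, y2, z2 = q2.unbind(-1)
    return torch.stack([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ], dim=-1)

def slerp(q1, q2, t):
    # Spherical linear interpolation between unit quaternions
    dot = (q1 * q2).sum(-1, keepdim=True).clamp(-1.0, 1.0)
    theta = torch.acos(dot)
    return (torch.sin((1 - t) * theta) * q1 + torch.sin(t * theta) * q2) / torch.sin(theta)

def geodesic_distance(q1, q2):
    # Angle between orientations, ignoring the q / -q sign ambiguity
    dot = (q1 * q2).sum(-1).abs().clamp(max=1.0)
    return torch.acos(dot)
```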
Application areas:

- Pose estimation, motion planning, joint angle prediction
- Protein folding, molecular docking, conformational analysis
- Object pose estimation, camera calibration, SLAM
- Satellite orientation, flight dynamics, trajectory prediction
- Motion capture, skeletal animation, rotation interpolation
- Spin systems, quantum states, rotational dynamics
Core API:

```python
class QRFModel(nn.Module):
    """
    Complete QRF model for end-to-end training.

    Parameters
    ----------
    input_dim : int
        Input feature dimension
    hidden_dim : int
        Hidden representation dimension
    output_dim : int
        Output dimension (use 4 for quaternion output)
    n_neurons : int, default=64
        Number of neurons in the quaternion field
    use_attention : bool, default=True
        Enable quaternion-modulated attention
    use_sequence_memory : bool, default=True
        Enable cross-timestep sequence memory (NEW in v5.0)
    use_memory : bool, default=True
        Enable field state memory preservation
    """
```

```python
class QuaternionRuffleField(nn.Module):
    """
    Core quaternion field with dynamic neuron states.

    Attributes
    ----------
    coordinates : nn.Parameter
        Spatial positions [n_neurons, space_dim]
    quaternions : nn.Parameter
        Rotational states [n_neurons, 4] (unit quaternions)
    field_temperature : torch.Tensor
        Adaptive temperature parameter
    coherence_factor : torch.Tensor
        Inter-neuron coupling strength

    Methods
    -------
    forward(update_dynamics=True)
        Compute dynamic distance matrix
    reset(preserve_memory=True)
        Reset field state
    restore_from_memory(blend_factor=0.5)
        SLERP restoration from memory
    compute_folding_energy()
        Calculate total field energy
    get_field_state_summary()
        Get state statistics dict
    """
```

```python
class QuaternionRuffleOptimizer:
    """
    Energy-based optimizer with adaptive perturbations.

    Parameters
    ----------
    field : QuaternionRuffleField
        The field to optimize
    fold_threshold : float, default=1.5
        Energy threshold for perturbations
    ruffle_scale : float, default=0.05
        Perturbation magnitude
    warmup_steps : int, default=20
        Steps before applying ruffles

    Methods
    -------
    step() -> Dict
        Apply a ruffle step; returns a stats dict with:
        - energy: current field energy
        - temperature: field temperature
        - coherence: field coherence
        - applied_ruffle: whether a perturbation was applied
    """
```

Complete training example:

```python
import torch
from quaternion_ruffle_field import QRFModel, QuaternionRuffleOptimizer
# Model for rotation prediction
model = QRFModel(
    input_dim=6,     # [original_xyz, rotated_xyz]
    hidden_dim=128,
    output_dim=4,    # quaternion
    n_neurons=64,
    use_attention=True,
    use_sequence_memory=True
)

# Quaternion loss function
def quaternion_loss(pred, target):
    pred = pred / pred.norm(dim=-1, keepdim=True)
    target = target / target.norm(dim=-1, keepdim=True)
    dot = (pred * target).sum(dim=-1)
    return (1.0 - torch.abs(dot)).mean()

# Training
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
qrf_opt = QuaternionRuffleOptimizer(model.field)

for epoch in range(30):
    for points, rotations in dataloader:
        pred = model(points).mean(dim=1)   # Pool over sequence
        loss = quaternion_loss(pred, rotations)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        qrf_opt.step()
```

Visualizing field dynamics:

```python
from quaternion_ruffle_field import QuaternionMemoryTracer
import matplotlib.pyplot as plt
tracer = QuaternionMemoryTracer(model.field)

# Record during training
for epoch in range(100):
    tracer.record()
    # ... training step ...

# Plot evolution
history = tracer.export()
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].plot(history['energy'].numpy())
axes[0].set_title('Field Energy')

# Quaternion coherence over time
coherence = tracer.compute_memory_coherence()
print(f"Memory Coherence: {coherence:.4f}")
```

If you use Quaternion Ruffle Field in your research, please cite:
```bibtex
@article{author2025quaternion,
  title={Quaternion Ruffle Field: Neural Networks with Orientational State},
  author={Your Name},
  journal={arXiv preprint arXiv:2025.XXXXX},
  year={2025},
  note={Achieves 92.6\% accuracy on 3D rotation prediction,
        outperforming Transformers by 14\%}
}
```

We welcome contributions! Contact us at info@maplebrainhealth.com.
```bash
# Development setup
git clone https://github.com/reneemgagnon/QRFF/
cd quaternion-ruffle-field
pip install -e ".[dev]"
pytest tests/ -v
```

The QRFF reference implementation is licensed under the PolyForm Noncommercial 1.0.0 license. See LICENSE.
Commercial use requires a separate license. See COMMERCIAL_LICENSE.md.
If you use QRFF in academic work, please cite this repository (see CITATION.cff) and include the project name: "Quaternion Ruffle Field (QRFF)".
⭐ Star us on GitHub; it motivates us a lot!
Made with ❤️ and quaternions