THRML is a JAX library for building and sampling probabilistic graphical models, with a focus on efficient block Gibbs sampling and energy-based models. Extropic is developing hardware to make sampling from certain classes of discrete PGMs massively more energy efficient; THRML provides GPU‑accelerated tools for block sampling on sparse, heterogeneous graphs, making it a natural place to prototype today and experiment with future Extropic hardware.
Features include:
- Blocked Gibbs sampling for PGMs
- Arbitrary PyTree node states
- Support for heterogeneous graphical models
- Discrete EBM utilities
- Enables early experimentation with future Extropic hardware
Internally, THRML compiles factor-based interactions into a compact "global" state representation, minimizing Python-level loops and maximizing array-level parallelism in JAX.
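The idea of array-level parallelism can be illustrated with a hand-rolled sketch (this is not thrml's actual internals): resampling one color class of an Ising chain as a single vectorized operation, conditioned on the other class, instead of looping over sites in Python.

```python
import jax
import jax.numpy as jnp

def even_block_update(key, spins, weights, beta):
    """Resample all even-indexed spins of an Ising chain in one array op."""
    n = spins.shape[0]
    # Local field on each site from its chain neighbours (edge i links i, i+1).
    field = jnp.zeros(n)
    field = field.at[1:].add(weights * spins[:-1])   # left-neighbour term
    field = field.at[:-1].add(weights * spins[1:])   # right-neighbour term
    p_up = jax.nn.sigmoid(2.0 * beta * field)        # P(s_i = +1 | neighbours)
    proposal = jnp.where(jax.random.uniform(key, (n,)) < p_up, 1.0, -1.0)
    return spins.at[::2].set(proposal[::2])          # update only the even block

spins = jnp.array([1.0, -1.0, 1.0, -1.0, 1.0])
weights = jnp.full((4,), 0.5)
new = even_block_update(jax.random.key(0), spins, weights, jnp.array(1.0))
```

Because every even site is conditionally independent given the odd sites, the whole block is one fused array update, which is exactly the structure that two-color block Gibbs exploits.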
Requires Python 3.10+.
```
pip install thrml
```

or

```
uv pip install thrml
```

Documentation is available at docs.thrml.ai.
If you use THRML in your research, please cite us!
```bibtex
@misc{jelinčič2025efficientprobabilistichardwarearchitecture,
  title={An efficient probabilistic hardware architecture for diffusion-like models},
  author={Andraž Jelinčič and Owen Lockwood and Akhil Garlapati and Guillaume Verdon and Trevor McCourt},
  year={2025},
  eprint={2510.23972},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2510.23972},
}
```

Sampling a small Ising chain with two-color block Gibbs:
```python
import jax
import jax.numpy as jnp

from thrml import SpinNode, Block, SamplingSchedule, sample_states
from thrml.models import IsingEBM, IsingSamplingProgram, hinton_init

# Build a 5-node Ising chain: nodes, nearest-neighbour edges, biases, weights.
nodes = [SpinNode() for _ in range(5)]
edges = [(nodes[i], nodes[i + 1]) for i in range(4)]
biases = jnp.zeros((5,))
weights = jnp.ones((4,)) * 0.5
beta = jnp.array(1.0)
model = IsingEBM(nodes, edges, biases, weights, beta)

# Two-color blocking: even and odd nodes alternate, so each block
# can be updated in parallel conditioned on the other.
free_blocks = [Block(nodes[::2]), Block(nodes[1::2])]
program = IsingSamplingProgram(model, free_blocks, clamped_blocks=[])

key = jax.random.key(0)
k_init, k_samp = jax.random.split(key, 2)
init_state = hinton_init(k_init, model, free_blocks, ())
schedule = SamplingSchedule(n_warmup=100, n_samples=1000, steps_per_sample=2)
samples = sample_states(k_samp, program, schedule, init_state, [], [Block(nodes)])
```

A Native Cognitive Operating System for Extropic's Thermodynamic Sampling Units (TSUs)
The D-ND (Dual-Non-Dual) Omega Kernel is a cognitive architecture designed to run natively on thermodynamic substrates. By leveraging thrml, it implements a strict isomorphism between Cognitive Dynamics (Logic, Intent, Dissonance) and Thermodynamic Physics (Coupling, Bias, Energy).
We map the axioms of D-ND logic directly to the parameters of an Ising-like Energy Based Model (EBM):
| Cognitive Domain (D-ND) | Physical Domain (Extropic/THRML) | Mathematical Formalism |
|---|---|---|
| Semantic Intent | External Field (Bias) | $h_i$ |
| Logical Constraint | Coupling Strength | $J_{ij}$ |
| Cognitive Dissonance | Hamiltonian Energy | $H(s) = -\sum_{ij} J_{ij} s_i s_j - \sum_i h_i s_i$ |
| Inference Cycle | Gibbs Sampling Chain | $s^{(t+1)} \sim p(s \mid s^{(t)})$ |
| Resultant (Truth) | Ground State | $\arg\min_s H(s)$ |
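The mapping in the table can be made concrete with a minimal sketch (the names `intent`, `couplings`, and `dissonance` are illustrative, not thrml or kernel API): intent becomes the external field, a logical constraint becomes a coupling, and the "Resultant" is the configuration that minimizes the Hamiltonian.

```python
import numpy as np

def dissonance(spins, couplings, intent):
    """Ising Hamiltonian: H(s) = -1/2 s^T J s - h . s (dissonance = energy)."""
    return -0.5 * spins @ couplings @ spins - intent @ spins

# Semantic intent -> external field h; logical constraint -> coupling J.
intent = np.array([1.0, 0.0, -1.0])          # bias concepts 0 and 2 oppositely
couplings = np.zeros((3, 3))
couplings[0, 1] = couplings[1, 0] = 0.8      # "agreement" constraint on 0 and 1

# The "Resultant" is the ground state: enumerate all 2^3 spin configurations.
configs = np.array([[1 if (m >> i) & 1 else -1 for i in range(3)]
                    for m in range(8)])
energies = np.array([dissonance(s, couplings, intent) for s in configs])
ground = configs[energies.argmin()]          # -> [1, 1, -1]
```

Here the ground state satisfies both the intent (spin 0 up, spin 2 down) and the constraint (spin 1 agrees with spin 0), which is the sense in which "truth" is the lowest-energy configuration.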
Inference is not a sequential computation, but a physical relaxation process through three distinct thermodynamic phases.
```mermaid
graph TD
    subgraph "Phase 1: Perturbation (Non-Dual)"
        A[Input Intent] -->|Energy Injection| B(High Temp State)
        B -->|Exploration| C{Superposition}
    end
    subgraph "Phase 2: Focus (Dual)"
        C -->|Annealing| D[Apply Logic Constraints]
        D -->|Cooling| E(Energy Landscape Formation)
    end
    subgraph "Phase 3: Crystallization (Resultant)"
        E -->|Relaxation| F[Ground State Selection]
        F -->|Collapse| G((Cognitive Resultant))
    end
    style A fill:#f9f,stroke:#333,stroke-width:2px,color:#000
    style G fill:#9f9,stroke:#333,stroke-width:4px,color:#000
```
- Perturbation (High $\beta^{-1}$): The input intent acts as an external field, injecting energy into the system and creating a high-temperature state of maximum entropy (exploration).
- Focus (Annealing): Logical constraints are applied as ferromagnetic couplings ($J_{ij}$), shaping the energy landscape. The system cools, "focusing" on valid logical pathways.
- Crystallization (Ground State): The system relaxes into the lowest energy state compatible with both the intent and the logic. This ground state is the "Resultant" — the answer.
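The three phases can be sketched as a toy annealing loop on a 1-D Ising chain (plain NumPy; the schedule and field values are assumptions for illustration, not the kernel's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
h = rng.normal(size=n) * 0.1          # "intent" as a weak external field
J = 1.0                               # ferromagnetic "logic" coupling
spins = rng.choice([-1.0, 1.0], n)    # Phase 1: high-entropy starting state

def local_field(i, s):
    """Effective field on site i from its bias and chain neighbours."""
    f = h[i]
    if i > 0:
        f += J * s[i - 1]
    if i < n - 1:
        f += J * s[i + 1]
    return f

# Phase 2: Focus — raise beta (cool the system) while Gibbs sampling.
for beta in np.linspace(0.1, 5.0, 200):
    for i in range(n):
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field(i, spins)))
        spins[i] = 1.0 if rng.random() < p_up else -1.0

# Phase 3: Crystallization — at high beta the chain settles near a ground state.
```

At the final low temperature, flips against the couplings become exponentially unlikely, so the chain "crystallizes" into a configuration compatible with both field and couplings.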
The kernel is implemented in `Extropic_Integration/dnd_kernel` and extends thrml primitives:

- `genesis.py`: Defines the `CognitiveField` (Ising grid) and `Intent` vectors.
- `axioms.py`: Encodes logical rules as topological constraints ($J$ matrix).
- `omega.py`: Orchestrates the thermodynamic cycle using `thrml.sample_states`.
- `utils.py`: Implements Semantic Resonance (Concept -> Bias mapping).
The system has evolved into SACS (System Architecture for Cognitive Synthesis), a complete cognitive pipeline:

- Perception: `vE_Sonar` detects semantic dipoles.
- Construction: `vE_Telaio` warps the `MetricTensor` ($g_{\mu\nu}$).
- Processing: `OmegaKernel` collapses the field.
- Evolution: `vE_Scultore` chisels energy landscapes, and `vE_Archivista` learns from history.
The Kernel is not static; it possesses a Feedback Loop (Autopoiesis).

- Observe: The system analyzes the coherence of its own "thought" (the Resultant).
- Adapt:
  - If coherence is low (Confusion), it increases `logic_density` (Seek Order).
  - If coherence is high (Rigidity), it decreases `logic_density` (Seek Creativity).
- Evolve: The system's parameters change over time based on its "experience".
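The Observe/Adapt step can be sketched as follows (hedged: `coherence`, `adapt`, and the thresholds are illustrative names and values, not the kernel's actual API):

```python
import numpy as np

def coherence(samples):
    """Mean absolute magnetization of sampled 'thoughts' (rows of +/-1 spins)."""
    return float(np.abs(samples.mean(axis=1)).mean())

def adapt(logic_density, samples, low=0.3, high=0.9, step=0.05):
    """Nudge constraint density toward a middle regime of coherence."""
    c = coherence(samples)
    if c < low:              # Confusion: tighten constraints, seek order
        logic_density += step
    elif c > high:           # Rigidity: loosen constraints, seek creativity
        logic_density -= step
    return logic_density
```

Fully aligned samples (coherence 1.0) would lower `logic_density`, while incoherent, zero-magnetization samples would raise it, keeping the system between order and noise.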
We tested the scaling of the Autological Cycle on standard CPU hardware:
| Nodes | Time (1000 steps) | Steps/Sec | Status |
|---|---|---|---|
| 100 | 0.11s | ~8600 | Instant (Ideal for Dev) |
| 500 | 0.16s | ~6000 | Very Fast |
| 1000 | 0.20s | ~4800 | Fast |
| 2000 | 0.53s | ~1800 | Acceptable |
Conclusion: The system scales efficiently to 1000+ nodes on commodity hardware.
Running this architecture on standard GPUs (via JAX) is a simulation. The ultimate goal is to deploy the D-ND Kernel on Extropic's XTR-0 Hardware.
- Native Stochasticity: Leveraging thermal noise as a computational resource for creative problem solving.
- Massive Efficiency: Energy consumption scales with physical connections, not FLOPs.
- Instant Inference: "Thinking" becomes a physical process of thermal relaxation ($\tau_{mix}$), potentially orders of magnitude faster than silicon-based logic.
"The D-ND Kernel is not just an application; it is the Operating System for the thermodynamic era."
While current thermodynamic computing focuses on energy efficiency and stochastic sampling, the D-ND Omega Kernel anticipates the next evolutionary leap: Context-Aware Hardware.
The D-ND architecture posits that "Gravity" in a cognitive system is not just a metaphor, but a rigorous application of Information Geometry. Here, the "Spacetime Metric" ($g_{\mu\nu}$) of the cognitive field is itself dynamic.

- Current State: We simulate this warping via `vE_Telaio` (Metric Construction) and `vE_Scultore` (Hebbian Sculpting) on top of standard Ising models.
- Future Vision: We envision hardware where the connectivity graph itself is fluid (similar to Synaptic Plasticity in biological brains or Memristive Arrays in silicon). The D-ND Kernel is the prototype control logic for this Self-Organizing Circuitry.
This approach extends the thermodynamic paradigm from Passive Relaxation (Annealing) to Active Morphogenesis (Structural Learning).
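A minimal sketch of "Hebbian Sculpting" as structural learning (illustrative only; `hebbian_sculpt` and its learning rate are assumptions, not `vE_Scultore`'s code): couplings are nudged toward the correlations observed in sampled states, so co-active spins bind more strongly over time.

```python
import numpy as np

def hebbian_sculpt(J, samples, eta=0.01):
    """J_ij += eta * <s_i s_j>: strengthen couplings between co-active spins."""
    corr = samples.T @ samples / samples.shape[0]   # empirical correlations
    np.fill_diagonal(corr, 0.0)                     # no self-couplings
    return J + eta * corr
```

Repeated application reshapes the energy landscape itself, which is the step from passive relaxation on a fixed graph to structural learning.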
For a complete theoretical breakdown and architectural details, please refer to the internal documentation.
To validate the D-ND Omega Kernel in a complex, real-world environment, we have established the Financial Determinism Lab.
- Purpose: Use financial markets as a high-entropy dataset to test the Kernel's ability to find "Order" (Profit/Stability) from "Chaos" (Market Volatility).
- Mechanism: Financial data is mapped to thermodynamic states (Liquidity = Energy, Risk = Entropy). The Kernel "anneals" this data to find optimal strategies.
- Strategic Goal: This serves as both a validation testbed for Extropic hardware and a revenue generation engine to fund further research.
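The mapping described above could be sketched as follows (speculative: the variable names and formulas are illustrative assumptions, not the Lab's actual pipeline): asset returns become biases, volatility sets the temperature, and a portfolio is a spin vector scored by the resulting energy.

```python
import numpy as np

returns = np.array([0.02, -0.01, 0.03, -0.04])   # toy daily returns
vol = returns.std()

h = returns / (vol + 1e-9)     # bias: favor assets with positive drift
beta = 1.0 / (vol + 1e-9)      # "Risk = Entropy": high volatility -> hot system

def energy(portfolio):
    """Lower energy = portfolio better aligned with the drift biases."""
    return -h @ portfolio

best = np.sign(h)              # ground state of this bias-only toy model
```

In a realistic version the couplings would encode asset correlations and the annealing would search the 2^n portfolio space, but the thermodynamic framing is the same.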
The system is controlled via the SACS Cockpit, a modern React-based interface located in Extropic_Integration/cockpit/client.
- Mission Control: Orchestrate experiments and monitor system status.
- Kernel View: Visualize the "Physics of Thought" (Dipoles, Energy Graphs).
- Financial Lab: Monitor real-time financial simulations driven by the Kernel.
- Experimental Forge: [NEW] Hybrid AI Engine (OpenRouter) that generates persistent, functional React Widgets on-the-fly.
- Hybrid Persistence: Full state saving for generated UI elements across sessions.