This repository contains the official PyTorch reference implementations for concepts explained in the Neural Network Lexicon.
Each folder corresponds to a specific lexicon entry and provides:
- Clean, minimal, runnable PyTorch code
- Educational, readable implementations
- Direct mapping between theory and practice
- Supporting files for experimentation and learning
This repository is designed to answer one critical question:
"How does this neural network concept actually work in code?"
Read the full theoretical explanations at the [Neural Network Lexicon](https://neuralnetworklexicon.com).
Each lexicon entry links directly to its corresponding implementation in this repository.
```
neural-network-lexicon-code/
│
├── code/
│   ├── reward_modeling/
│   │   ├── reward_modeling_pairwise.py
│   │   └── README.md
│   │
│   ├── gradient_descent/
│   │   └── ...
│   │
│   ├── attention/
│   │   └── ...
│   │
│   └── ...
```
Each concept lives in its own folder.
This repository follows five core principles:

1. No unnecessary abstractions. The goal is understanding, not production engineering.
2. Everything runs on CPU. No GPU and no external datasets are required.
3. Each script demonstrates exactly one concept and avoids unnecessary complexity.
4. Code prioritizes readability, explicitness, and learning value over performance optimization.
5. Every script directly corresponds to a Neural Network Lexicon concept.
This repository will include implementations for:
**Training Concepts**
- Gradient Descent (see the sketch after this list)
- Backpropagation
- Weight Initialization
- Vanishing Gradients
- Optimization Algorithms
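As a taste of the intended style, here is a minimal, hypothetical gradient descent sketch (not the repository's actual code; the target function and learning rate are illustrative):

```python
import torch

# Hypothetical example: fit y = 2x + 1 with plain gradient descent.
x = torch.linspace(-1, 1, 100)
y = 2 * x + 1

w = torch.zeros(1, requires_grad=True)  # weight, initialized at zero
b = torch.zeros(1, requires_grad=True)  # bias, initialized at zero
lr = 0.1  # illustrative learning rate

for step in range(200):
    loss = ((w * x + b - y) ** 2).mean()  # mean squared error
    loss.backward()                       # backpropagation fills w.grad, b.grad
    with torch.no_grad():
        w -= lr * w.grad                  # the gradient descent update
        b -= lr * b.grad
        w.grad.zero_()                    # reset accumulated gradients
        b.grad.zero_()

print(w.item(), b.item())  # approaches 2.0 and 1.0
```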
**Architecture Concepts**
- Attention (sketch below)
- Transformer Blocks
- Residual Connections
- LayerNorm
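For attention, a minimal single-head sketch might look like this (a simplified stand-in assuming the standard scaled dot-product formulation, not the repository's actual code):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Single-head attention: softmax(QK^T / sqrt(d)) V."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # (batch, seq_q, seq_k)
    weights = torch.softmax(scores, dim=-1)          # attention weights
    return weights @ v                               # weighted sum of values

# Toy usage: one sequence of 4 tokens with 8-dimensional embeddings.
q = torch.randn(1, 4, 8)
k = torch.randn(1, 4, 8)
v = torch.randn(1, 4, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 4, 8])
```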
**Alignment Concepts**
- Reward Modeling (sketch after this list)
- Preference Learning
- Policy Optimization
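The directory tree above names `reward_modeling_pairwise.py`; a hypothetical pairwise (Bradley-Terry style) loss, with a stand-in linear reward model and random features, could be as small as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in reward model; a real one would score (prompt, response) pairs.
reward_model = nn.Linear(16, 1)

chosen = torch.randn(8, 16)    # features of preferred responses (illustrative)
rejected = torch.randn(8, 16)  # features of dispreferred responses

r_chosen = reward_model(chosen)      # scalar reward per preferred response
r_rejected = reward_model(rejected)  # scalar reward per dispreferred response

# -log sigmoid(r_chosen - r_rejected): small when chosen outscores rejected.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()  # gradients ready for an optimizer step
```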
**Evaluation Concepts**
- Calibration (sketch below)
- Overfitting
- Generalization
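For calibration, one common diagnostic is expected calibration error (ECE); a compact, hypothetical version (bin count and toy data are illustrative):

```python
import torch

def expected_calibration_error(probs, labels, n_bins=10):
    """Average |confidence - accuracy| gap, weighted by bin size."""
    conf, preds = probs.max(dim=1)            # confidence and predicted class
    correct = preds.eq(labels).float()
    bins = torch.linspace(0, 1, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)     # samples falling in this bin
        if mask.any():
            gap = (conf[mask].mean() - correct[mask].mean()).abs()
            ece += mask.float().mean() * gap  # weight the gap by bin proportion
    return float(ece)

# Toy usage with random logits over 3 classes.
probs = torch.softmax(torch.randn(100, 3), dim=1)
labels = torch.randint(0, 3, (100,))
print(expected_calibration_error(probs, labels))
```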
**Advanced Concepts**
- Gradient Flow in Transformers
- Sparse Neural Networks (sketch below)
- Mechanistic Interpretability

And many more.
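For sparse neural networks, a hypothetical magnitude-pruning sketch (the layer size and sparsity level are arbitrary, and this is one pruning strategy among many):

```python
import torch
import torch.nn as nn

layer = nn.Linear(16, 16)
sparsity = 0.8  # illustrative: zero out 80% of the weights

with torch.no_grad():
    flat = layer.weight.abs().flatten()
    k = int(sparsity * flat.numel())
    threshold = flat.kthvalue(k).values      # k-th smallest magnitude
    mask = layer.weight.abs() > threshold    # keep only the larger weights
    layer.weight.mul_(mask)                  # apply the sparse mask in place

print(f"density: {mask.float().mean().item():.2f}")  # roughly 0.20
```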