A brain-inspired AI architecture that integrates multiple cognitive mechanisms to create a more human-like learning agent.
The architecture combines:
- Hyperdimensional Computing (HDC) - For symbolic and distributed representation of sensory inputs
- Spiking Neural Networks (SNN) - For temporal processing and learning
- Cellular Automata (CA) - For emergent pattern formation and processing
- Memory Systems - For episodic and semantic memory
- Brain Systems - For biologically-inspired information processing
The system incorporates these key biological components:
- Thalamic Gating - Filters sensory inputs based on attention and relevance
- Basal Ganglia Loop - Action selection through direct (Go) and indirect (NoGo) pathways
- Cerebellar Error Correction - Motor command refinement and error prediction
- Autonomic System - Homeostatic regulation of drives and needs
- Combines multiple brain-inspired approaches into a unified cognitive architecture
- Learns from few examples using HDC semantic binding and SNN temporal dynamics
- Forms emergent patterns through cellular automata processing
- Stores and recalls episodic memories for experience replay
- Builds semantic knowledge through memory consolidation
- Implements biologically-plausible action selection and sensory filtering
- Corrects motor errors through cerebellar predictive processing
- Maintains homeostatic balance through autonomic regulation
```bash
# Clone the repository
git clone https://github.com/yourusername/brAin.git
cd brAin

# Install dependencies
pip install -r requirements.txt
```

```python
from brain.agent.hdc_snn_agent import HDCSNNAgent

# Create agent
agent = HDCSNNAgent(
    input_shape=(120, 160, 3),
    hd_dim=1000,
    snn_neurons=500,
    num_actions=5
)

# Train agent
for episode in range(1000):
    observation = environment.reset()
    done = False

    while not done:
        # Select action
        action = agent.act(observation)

        # Execute action in environment
        next_observation, reward, done, info = environment.step(action)

        # Learn from experience
        agent.learn(observation, action, reward, next_observation, done)

        observation = next_observation

    # Process episodic memory
    agent.replay_experience(batch_size=32)
```

The thalamic gating system implements:
- Attention-based filtering of sensory inputs
- Multi-channel sensory processing
- Salience detection for information flow control
- Dynamic working memory trace
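A minimal sketch of the idea, assuming a dict-of-channels interface (the names `thalamic_gate`, `sensory_channels`, and `attention` are illustrative, not the repository's actual API):

```python
import numpy as np

def thalamic_gate(sensory_channels, attention, threshold=0.2):
    """Pass a sensory channel through only if its salience, i.e. its
    bottom-up signal energy scaled by top-down attention, clears a
    threshold; suppressed channels are zeroed out."""
    gated = {}
    for name, signal in sensory_channels.items():
        salience = attention.get(name, 0.0) * np.abs(signal).mean()
        gated[name] = signal if salience > threshold else np.zeros_like(signal)
    return gated

channels = {"vision": np.random.rand(64), "health": np.random.rand(4)}
filtered = thalamic_gate(channels, attention={"vision": 0.9, "health": 0.1})
```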
The basal ganglia loop implements:
- Direct pathway (D1/Go) for action facilitation
- Indirect pathway (D2/NoGo) for action inhibition
- Dopamine-modulated learning through prediction errors
- Action selection through disinhibition mechanism
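The Go/NoGo competition can be sketched with two weight vectors per action and a dopamine-like reward prediction error (RPE); this toy class is a stand-in under those assumptions, not the repository's implementation:

```python
import numpy as np

class GoNoGoSelector:
    """Toy basal ganglia loop: each action has a facilitating (D1/Go)
    and an inhibiting (D2/NoGo) weight; the net drive picks the action,
    and the RPE shifts both pathways in opposite directions."""

    def __init__(self, num_actions, lr=0.1):
        self.go = np.zeros(num_actions)
        self.nogo = np.zeros(num_actions)
        self.lr = lr

    def select(self):
        net = self.go - self.nogo            # facilitation minus inhibition
        probs = np.exp(net - net.max())
        probs /= probs.sum()                 # softmax over actions
        return np.random.choice(len(probs), p=probs)

    def update(self, action, rpe):
        # Positive RPE strengthens Go and weakens NoGo for the chosen
        # action; negative RPE does the reverse (dopamine modulation)
        self.go[action] += self.lr * rpe
        self.nogo[action] -= self.lr * rpe

selector = GoNoGoSelector(num_actions=5)
a = selector.select()
selector.update(a, rpe=0.5)  # outcome was better than expected
```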
The cerebellar correction system implements:
- Predictive error correction for motor commands
- Massive granule cell expansion for pattern separation
- Purkinje cell-based error learning
- Context-based error memory system
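One way to picture this, assuming a fixed random expansion for the granule layer and a linear Purkinje-like readout (both simplifications, not the repository's actual design):

```python
import numpy as np

class CerebellarCorrector:
    """Toy forward-model corrector: a fixed random expansion stands in
    for the granule cell layer (pattern separation), and a learned
    linear readout stands in for Purkinje cells, predicting the motor
    error for the current context so it can be subtracted."""

    def __init__(self, context_dim, motor_dim, expansion=2000, lr=0.01):
        rng = np.random.default_rng(0)
        self.granule = rng.standard_normal((expansion, context_dim)) / np.sqrt(context_dim)
        self.readout = np.zeros((motor_dim, expansion))
        self.lr = lr

    def _expand(self, context):
        return np.maximum(self.granule @ context, 0.0)  # sparse granule code

    def correct(self, context, command):
        return command - self.readout @ self._expand(context)

    def learn(self, context, observed_error):
        # Climbing-fiber-like teaching signal: nudge the prediction
        # toward the error actually observed in this context
        g = self._expand(context)
        self.readout += self.lr * np.outer(observed_error - self.readout @ g, g)

cb = CerebellarCorrector(context_dim=16, motor_dim=4)
ctx, cmd = np.random.rand(16), np.random.rand(4)
refined = cb.correct(ctx, cmd)
cb.learn(ctx, observed_error=np.random.rand(4) * 0.1)
```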
The autonomic regulation system implements:
- Homeostatic drive regulation (energy, safety, curiosity, etc.)
- Neuromodulator level coordination (dopamine, serotonin, etc.)
- Drive-action mapping for intrinsic motivation
- Adaptive urgency detection
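A compact sketch of homeostatic drive regulation under assumed drive names and decay rates (the real system's drives and dynamics may differ):

```python
import numpy as np

class AutonomicSystem:
    """Toy homeostat: each drive level drifts away from its set point
    over time; the largest deviation defines the most urgent drive,
    which can then bias action selection (intrinsic motivation)."""

    def __init__(self):
        self.setpoints = {"energy": 1.0, "safety": 1.0, "curiosity": 0.5}
        self.levels = dict(self.setpoints)
        self.decay = {"energy": 0.01, "safety": 0.0, "curiosity": 0.02}

    def step(self):
        for drive, d in self.decay.items():
            self.levels[drive] = max(0.0, self.levels[drive] - d)

    def most_urgent(self, threshold=0.3):
        deviations = {d: self.setpoints[d] - self.levels[d] for d in self.levels}
        drive, dev = max(deviations.items(), key=lambda kv: kv[1])
        return drive if dev > threshold else None

auto = AutonomicSystem()
for _ in range(60):
    auto.step()
print(auto.most_urgent())  # -> "energy" once its deficit passes the threshold
```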
This project is licensed under the MIT License - see the LICENSE file for details.
```
brAin/
├── brain/                       # Core brain-inspired modules
│   ├── agent/                   # Agent implementations
│   │   ├── hdc_snn_agent.py     # Main agent class combining all components
│   │   └── trainer.py           # Training utilities for agents
│   ├── encoders/                # Perception modules
│   │   └── hdc_encoder.py       # Hyperdimensional Computing encoder
│   ├── memory/                  # Memory systems
│   │   ├── episodic.py          # Episodic memory (hippocampus-inspired)
│   │   └── semantic.py          # Semantic memory (neocortex-inspired)
│   ├── networks/                # Neural processing
│   │   ├── cellular_automata.py # Cellular automata for pattern processing
│   │   └── snn.py               # Spiking Neural Network implementation
│   ├── perception/              # Advanced perception components
│   │   └── yolo_detector.py     # YOLO-based object detection
│   └── utils/                   # Utility functions
├── config/                      # Configuration files
│   └── scenarios.py             # Scenario definitions for VizDoom
├── environment/                 # Environment wrappers
│   └── doom_environment.py      # VizDoom environment interface
├── evaluation/                  # Evaluation tools
│   └── metrics.py               # Performance metrics and visualization
├── results/                     # Training/testing results (generated)
├── main.py                      # Main script for training and testing
└── README.md                    # This file
```
To train an agent on the basic scenario:
```bash
python main.py train --scenario basic --episodes 1000 --output-dir results/basic
```

With YOLO object detection:

```bash
python main.py train --scenario basic --episodes 1000 --use-yolo --output-dir results/basic_yolo
```

Optional parameters:

- `--render`: Show the game window during training
- `--hd-dim 800`: Set the dimensionality of hyperdimensional vectors
- `--snn-neurons 500`: Set the number of neurons in the SNN
- `--learning-rate 0.01`: Set the learning rate
- `--model results/my_model`: Path to save the trained model
- `--use-yolo`: Enable YOLO object detection for enhanced perception
- `--show-yolo-detections`: Visualize YOLO detections during training/testing
To test a trained agent:
```bash
python main.py test --scenario basic --model results/basic/agent_final --render
```

With YOLO visualization:

```bash
python main.py test --scenario basic --model results/basic/agent_final --render --use-yolo --show-yolo-detections
```

To evaluate an agent's performance across multiple scenarios:

```bash
python main.py eval --model results/my_model --output-dir results/evaluation
```

To visualize the agent's internal representations:

```bash
python main.py visualize --scenario basic --model results/my_model --render --visualize-internals
```

With YOLO detection visualization:

```bash
python main.py visualize --scenario basic --model results/my_model --render --visualize-internals --use-yolo --show-yolo-detections
```

The agent can be trained and tested on various VizDoom scenarios, each requiring different skills:
- basic: Simple navigation and shooting
- defend_center: Defend against enemies coming from all sides
- deadly_corridor: Navigate corridor while eliminating enemies
- health_gathering: Find and collect health packs to survive
- defend_line: Defend a line against approaching enemies
**HDC Encoder** (`brain/encoders/hdc_encoder.py`): Implements a brain-inspired encoding mechanism for visual and state information, similar to the visual processing stream from V1 to V5 in the brain.
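Whatever the encoder's internals, the HDC primitives it builds on are standard: binding (elementwise multiply) associates a role with a filler, and bundling (sum, then sign) superimposes several bindings into one vector. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hv(dim=1000):
    """Random bipolar hypervector in {-1, +1}^dim."""
    return rng.choice([-1, 1], size=dim)

color, shape = random_hv(), random_hv()   # role vectors
red, circle = random_hv(), random_hv()    # filler vectors

# Bind each role to its filler, then bundle into one scene vector
scene = np.sign(color * red + shape * circle)

# Unbinding with a role recovers a noisy copy of its filler:
# scene * color is far more similar to `red` than to unrelated vectors
similarity = (scene * color) @ red / len(red)   # well above 0 (chance level)
```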
**Spiking Neural Network** (`brain/networks/snn.py`): A biologically plausible neural network with leaky integrate-and-fire dynamics, refractory periods, and spike-timing-dependent weight updates.
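The core update named here, leaky integrate-and-fire with a refractory period, fits in one function; this is a generic textbook version, not necessarily how `snn.py` parameterizes it:

```python
import numpy as np

def lif_step(v, input_current, refractory, *, tau=20.0, v_thresh=1.0,
             v_reset=0.0, refractory_steps=5, dt=1.0):
    """One Euler step for a population of LIF neurons: the membrane
    potential leaks toward rest and integrates input; crossing the
    threshold emits a spike, resets the potential, and starts a
    refractory period during which the neuron ignores input."""
    active = refractory == 0
    v = np.where(active, v + dt * (-v / tau + input_current), v)
    spikes = active & (v >= v_thresh)
    v = np.where(spikes, v_reset, v)
    refractory = np.where(spikes, refractory_steps,
                          np.maximum(refractory - 1, 0))
    return v, spikes, refractory

v, refr = np.zeros(500), np.zeros(500, dtype=int)
for _ in range(100):
    v, spikes, refr = lif_step(v, np.random.rand(500) * 0.2, refr)
```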
**Cellular Automata** (`brain/networks/cellular_automata.py`): Simulates cortical sheet dynamics with local inhibitory and excitatory interactions, spatial pattern formation, and self-organizing dynamics.
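The excitatory/inhibitory flavour of such a sheet can be sketched with a neighbour sum plus self-inhibition (the rule and constants here are illustrative, not the module's actual update):

```python
import numpy as np

def ca_step(grid, excite=1.0, inhibit=0.6, threshold=0.5):
    """One CA update on a toroidal grid: each cell is excited by its
    4-neighbourhood and inhibited by its own activity, which yields
    spreading but self-limiting activity patterns."""
    neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                  np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    drive = excite * neighbours / 4.0 - inhibit * grid
    return (drive > threshold).astype(float)

grid = (np.random.rand(32, 32) > 0.8).astype(float)  # sparse random seed
for _ in range(10):
    grid = ca_step(grid)
```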
**Episodic Memory** (`brain/memory/episodic.py`): Stores experiences in a way analogous to hippocampal function, with pattern completion and temporal sequence learning capabilities.
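Stripped of sequence learning, the retrieval half of such a store reduces to nearest-neighbour lookup over stored traces; this toy class assumes hypervector states and is not the repository's `episodic.py` API:

```python
import numpy as np

class EpisodicMemory:
    """Toy hippocampus-style store: episodes are kept as vectors, and a
    noisy or partial cue retrieves the most similar stored episode
    (pattern completion as nearest-neighbour recall)."""

    def __init__(self, capacity=1000):
        self.episodes = []   # list of (state_vector, action, reward)
        self.capacity = capacity

    def store(self, state_hv, action, reward):
        if len(self.episodes) >= self.capacity:
            self.episodes.pop(0)             # forget the oldest episode
        self.episodes.append((state_hv, action, reward))

    def recall(self, cue_hv):
        sims = [cue_hv @ s / (np.linalg.norm(cue_hv) * np.linalg.norm(s))
                for s, _, _ in self.episodes]
        return self.episodes[int(np.argmax(sims))]

mem = EpisodicMemory()
hv = np.sign(np.random.randn(1000))
mem.store(hv, action=2, reward=1.0)
state, action, reward = mem.recall(hv + 0.5 * np.random.randn(1000))
```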
**Semantic Memory** (`brain/memory/semantic.py`): Stores semantic knowledge and associations, similar to how the neocortex consolidates information from episodic memory.
**YOLO Detector** (`brain/perception/yolo_detector.py`): A YOLO-based object detector that mimics the ventral stream's object recognition capabilities in the visual cortex. It enhances perception by detecting objects such as enemies, health packs, weapons, and more.
The agent integrates YOLO (You Only Look Once) object detection to enhance its perception capabilities:
- Detects objects in the game environment (enemies, health packs, weapons, etc.)
- Creates attention maps focused on important objects
- Generates object-centric representations for improved decision making
- Visualizes detections during training/testing
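The attention-map step can be pictured as painting a confidence-weighted Gaussian bump per detection; the function below is a sketch under that assumption, not the detector's actual output format:

```python
import numpy as np

def detections_to_attention(detections, frame_shape, sigma=20.0):
    """Convert (x_center, y_center, confidence) detections into a soft
    spatial attention map over the frame."""
    h, w = frame_shape
    ys, xs = np.mgrid[0:h, 0:w]
    attn = np.zeros((h, w))
    for cx, cy, conf in detections:
        attn += conf * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                              / (2 * sigma ** 2))
    return attn / attn.max() if attn.max() > 0 else attn

# e.g. one enemy and one health pack detected in a 120x160 frame
attn = detections_to_attention([(40, 60, 0.9), (120, 80, 0.7)], (120, 160))
```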
The system now integrates with Brain-Cog, a spiking-neural-network-based, brain-inspired cognitive intelligence engine, providing several key enhancements:
- Biologically Plausible Neurons: Uses Brain-Cog's LIF (Leaky Integrate-and-Fire) neurons that more accurately model biological neuron behavior with membrane potentials and spiking mechanisms
- Advanced Basal Ganglia Model: Implements direct (D1) and indirect (D2) pathways for biologically accurate action selection and inhibition
- Sophisticated Learning Rules: Incorporates STDP (Spike-Timing-Dependent Plasticity) and dopamine-modulated reinforcement learning
- Multi-level Neuromodulation: Simulates dopamine, serotonin, noradrenaline, and acetylcholine effects on learning and behavior
- Performance Optimization: Leverages GPU acceleration and batch processing for efficient computation
To train an agent with Brain-Cog enhancements:

```bash
python main.py train --scenario basic --episodes 1000 --use-braincog --output-dir results/braincog_agent
```

To test a Brain-Cog enhanced agent:

```bash
python main.py test --scenario basic --model results/braincog_agent/agent_final --render --use-braincog
```

The Brain-Cog enhanced agent provides:
- More biologically accurate neural dynamics
- Improved decision-making through realistic basal ganglia modeling
- Enhanced learning with multiple biologically-inspired mechanisms
- Better scaling on GPU hardware for larger networks
The system implements a full basal ganglia circuit using Brain-Cog components:
- Direct pathway (D1/Go) for action facilitation
- Indirect pathway (D2/NoGo) for action inhibition
- STN-GPe-GPi circuit (subthalamic nucleus and external/internal globus pallidus) for selective disinhibition
- Dopamine-modulated learning with reward prediction error
The SNN implementation is enhanced with:
- Multiple encoding options (rate, temporal, and phase)
- Biologically plausible STDP learning
- Reinforcement learning with reward signals
- Spike-based information processing
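Two of the three encodings are easy to sketch (phase coding omitted); these helpers are generic illustrations, not the signatures used in `snn.py`:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, steps=50):
    """Rate coding: a value in [0, 1] becomes a Bernoulli spike train
    whose firing rate is proportional to the value."""
    return (rng.random((steps, x.size)) < x).astype(float)

def latency_encode(x, steps=50):
    """Temporal (latency) coding: stronger inputs spike earlier."""
    spikes = np.zeros((steps, x.size))
    t = np.clip(((1.0 - x) * (steps - 1)).astype(int), 0, steps - 1)
    spikes[t, np.arange(x.size)] = 1.0
    return spikes

x = np.array([0.1, 0.5, 0.9])
rate_train = rate_encode(x)        # roughly 5, 25, and 45 spikes in 50 steps
latency_train = latency_encode(x)  # single spikes at t = 44, 24, and 4
```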
The agent's performance is evaluated using various metrics:
- Reward Distribution: Statistical analysis of rewards across episodes
- Action Distribution: Analysis of action selection patterns
- Learning Curve: Progression of learning over time
- Generalization: How well the agent transfers to different scenarios
- Implement save/load functionality for agent models
- Add more sophisticated memory consolidation mechanisms
- Improve the integration between episodic and semantic memory
- Implement attention mechanisms for more effective visual processing
- Support for additional environments beyond VizDoom
- Create a Doom-specific dataset for YOLO fine-tuning
This project takes inspiration from various fields including:
- Hyperdimensional Computing
- Spiking Neural Networks
- Neuroscience and cognitive architecture
- Cellular Automata
- The VizDoom environment
- YOLO object detection
The brAin system now includes an optimized implementation that leverages GPU acceleration and other performance improvements:
- GPU Acceleration: Uses PyTorch tensors on CUDA for faster processing
- JIT Compilation: Critical operations are JIT-compiled for improved performance
- Batch Processing: Processes multiple inputs simultaneously where possible
- Efficient Memory Operations: Optimized vector operations for memory retrieval
- Improved HDC Operations: Faster hyperdimensional computing with vectorized operations
- Convolutional CA Updates: Cellular automata updated using efficient convolution operations
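The last item is worth unpacking: a CA's per-cell neighbour sum is exactly a convolution, so the whole grid (or a batch of grids) can be updated in one `conv2d` call. A PyTorch sketch mirroring the earlier toy rule (the repository's actual kernel and update rule may differ):

```python
import torch
import torch.nn.functional as F

def ca_step_conv(grid, inhibit=0.6, threshold=0.5):
    """Vectorized CA update: one conv2d computes every cell's
    4-neighbourhood sum simultaneously, which is GPU-friendly."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., 0., 1.],
                           [0., 1., 0.]], device=grid.device).view(1, 1, 3, 3)
    g = grid.view(1, 1, *grid.shape[-2:])
    neighbours = F.conv2d(g, kernel, padding=1) / 4.0
    return ((neighbours - inhibit * g) > threshold).float().view_as(grid)

device = "cuda" if torch.cuda.is_available() else "cpu"
grid = (torch.rand(32, 32, device=device) > 0.8).float()
grid = ca_step_conv(grid)
```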
The optimized implementation can provide significant speedups, especially for:
- Large memory capacity settings
- High-dimensional vectors
- Complex environments requiring more inference steps
- Systems with CUDA-compatible GPUs
To use the optimized version, add the `--use-optimized` flag to any command:

```bash
# Train with optimized implementation on GPU
python main.py train --scenario basic --use-optimized

# Specify a particular device
python main.py train --scenario basic --use-optimized --device cuda:0

# Test with optimizations but disable JIT compilation for debugging
python main.py test --scenario basic --use-optimized --disable-jit --model models/my_model
```

The following flags control optimization behavior:

- `--use-optimized`: Enables the optimized implementation
- `--device`: Sets the computation device (`cpu`, `cuda`, `cuda:0`, etc.)
- `--disable-jit`: Disables JIT compilation (useful for debugging)
- `--disable-batch`: Disables batch processing
- `--seed`: Sets the random seed for reproducibility
The optimized implementation requires:
- PyTorch 1.12.0 or higher
- CUDA toolkit (for GPU acceleration)
- Additional packages listed in requirements.txt