A scientifically rigorous Python toolbox for Bayesian reinforcement learning in foraging contexts, with an initial focus on multi-armed bandit problems.
- Bayesian RL Agents: Reward-maximization policies with belief-state tracking and uncertainty quantification
- Foraging Environments: Custom multi-armed bandit environments (extensible to other foraging paradigms)
- Scientific Validation: Parameter recovery and cross-validated model fitting
- Modular Design: Object-oriented architecture for easy extension
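The components above — belief-state tracking, uncertainty-aware action selection, and a bandit environment — can be sketched in a few lines. The following is an illustrative, self-contained example only (the library itself is not yet implemented, so none of these names reflect its API): a Thompson-sampling agent that maintains Beta-Bernoulli beliefs over a two-armed Bernoulli bandit.

```python
# Hypothetical sketch, not the foraging library's API: Thompson sampling
# with conjugate Beta-Bernoulli belief updates on a two-armed bandit.
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.3, 0.7]     # latent reward probabilities per arm (unknown to the agent)
alpha = np.ones(2)          # Beta-posterior success counts (uniform prior)
beta = np.ones(2)           # Beta-posterior failure counts

for _ in range(1000):
    samples = rng.beta(alpha, beta)        # draw one plausible rate per arm from the belief
    arm = int(np.argmax(samples))          # act greedily with respect to the sampled rates
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward                   # conjugate posterior update
    beta[arm] += 1 - reward

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # posterior means should concentrate near the true rates
```

Thompson sampling balances exploration and exploitation automatically: arms with uncertain beliefs occasionally produce high samples and get re-tried, while clearly inferior arms are sampled less and less often.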
🚧 Under Development 🚧
This project is currently in active development following a multi-agent test-driven development workflow.
```bash
# Clone the repository
git clone <repository-url>
cd foraging

# Install in development mode
pip install -e .
```

Coming soon - awaiting initial implementation
Coming soon
This project follows a structured multi-agent development workflow with emphasis on:
- Test-driven development (TDD)
- Scientific rigor (parameter recovery, validation)
- Modular, object-oriented design
- Comprehensive documentation
See `.agents/WORKFLOW.md` for detailed development protocols.
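As a concrete illustration of the parameter-recovery step, the sketch below (hypothetical code, independent of the library) simulates choices from a softmax decision model with a known inverse temperature and then refits that parameter by grid-search maximum likelihood; a sound fitting pipeline should recover a value close to the one that generated the data.

```python
# Hypothetical parameter-recovery check: simulate with a known parameter,
# refit it, and confirm the estimate lands near the ground truth.
import math
import random

def softmax_choice_probs(values, inv_temp):
    """Choice probabilities under a softmax over option values."""
    exps = [math.exp(inv_temp * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def simulate(values, inv_temp, n_trials, rng):
    """Sample binary choices from the softmax model."""
    probs = softmax_choice_probs(values, inv_temp)
    return [0 if rng.random() < probs[0] else 1 for _ in range(n_trials)]

def neg_log_lik(choices, values, inv_temp):
    probs = softmax_choice_probs(values, inv_temp)
    return -sum(math.log(probs[c]) for c in choices)

rng = random.Random(42)
values = [0.2, 0.8]          # fixed option values
true_inv_temp = 3.0          # ground-truth parameter to recover
choices = simulate(values, true_inv_temp, 2000, rng)

# Grid-search maximum likelihood over the inverse temperature
grid = [i * 0.1 for i in range(1, 101)]
recovered = min(grid, key=lambda b: neg_log_lik(choices, values, b))
print(recovered)  # expect a value near true_inv_temp
```

In practice the same pattern extends to richer agent models: simulate behavior across a range of generating parameters, fit each simulated dataset, and verify the recovered estimates track the true values before trusting fits to real data.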
```
foraging/
├── src/foraging/       # Core library code
│   ├── agents/         # Bayesian RL agents
│   ├── environments/   # Foraging environments
│   ├── belief/         # Belief state tracking
│   ├── policies/       # Reward maximization policies
│   └── utils/          # Utility functions
├── tests/              # Test suite
├── docs/               # Documentation
├── examples/           # Tutorial notebooks
└── scripts/            # Analysis scripts
```
This is a research project that follows strict development protocols; see `.agents/WORKFLOW.md` for contribution guidelines.
To be determined
Coming soon
Project maintainer information