This project demonstrates various algorithms for solving mazes, combining classical techniques with reinforcement learning (RL) approaches such as Q-learning and Sarsa. It features a menu-driven interface to generate mazes, solve them using different strategies, and visualize the results for deeper analysis.
## Features

- Maze generation and interactive visualization
- Wall Follower (classical) maze-solving algorithm
- Q-learning and Sarsa reinforcement learning agents
- Multi-agent Sarsa experiments
- Projectile-avoidance reinforcement learning scenarios
- Performance visualization using matplotlib
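The Wall Follower listed above is the classic right-hand rule: keep one hand on the wall and you will eventually reach the exit of any simply connected maze. As a rough illustration (a standalone sketch on a hard-coded grid, not using the repo's `Grid`, `Cell`, or `Agent` classes):

```python
# Hypothetical sketch of the right-hand-rule wall follower on a tiny grid.
# 0 = open cell, 1 = wall. This is not the project's implementation.

MAZE = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]
START, GOAL = (0, 0), (3, 3)

# Directions ordered clockwise: up, right, down, left.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def open_cell(r, c):
    return 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] == 0

def wall_follower(start, goal, max_steps=100):
    r, c = start
    heading = 2  # start facing "down"
    path = [start]
    for _ in range(max_steps):
        if (r, c) == goal:
            return path
        # Prefer turning right, then straight, then left, then back.
        for turn in (1, 0, 3, 2):
            d = (heading + turn) % 4
            nr, nc = r + DIRS[d][0], c + DIRS[d][1]
            if open_cell(nr, nc):
                heading, r, c = d, nr, nc
                path.append((r, c))
                break
    return None

print(wall_follower(START, GOAL))
```

The turn order `(1, 0, 3, 2)` encodes the right-hand rule: always try the direction to your right before going straight or turning left.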
## Requirements

- Python 3.9+
- numpy
- matplotlib

## Installation

Install the dependencies with pip:

    pip install numpy matplotlib

## Usage

Run the main program:

    python src/main.py

The menu offers the following options:

    1 : Create another maze layout
    2 : Wall Follower solving
    3 : Run Q-learning
    4 : Run Sarsa
    5 : Run multi-agent Sarsa
    6 : Run Q-learning with projectile
    7 : Run Sarsa with projectile
    8 : Run Sarsa with projectile (multi-agent)
    9 : Exit the application
## Project Structure

    src/
    ├── main.py     # Main entry point and experiment menu
    ├── Grid.py     # Maze grid and environment logic
    ├── Agent.py    # Agent behavior and learning
    ├── Cell.py     # Maze cell structure
    ├── Wall.py     # Maze wall logic
    └── Figures/    # Output images and visualizations
## How It Works

- The maze is procedurally generated and visualized.
- Classical and RL algorithms are used to solve the maze.
- RL agents learn optimal paths through repeated exploration episodes.
- Multi-agent and projectile-avoidance modes introduce advanced RL dynamics.
- All outcomes are visualized using matplotlib for performance and policy analysis.
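The core idea behind the RL agents is the tabular temporal-difference update. The sketch below shows plain Q-learning on a toy one-dimensional corridor rather than this project's maze environment; the environment, constants, and function names are illustrative assumptions, not the repo's API:

```python
import random

# Toy environment (an assumption for illustration): states 0..4 in a corridor,
# actions step left/right, reward 1.0 only on reaching the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def choose(s):
    # Epsilon-greedy action selection with random tie-breaking.
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: (Q[(s, a)], rng.random()))

for episode in range(200):
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: bootstrap on the best action in the next state.
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

Sarsa differs in one line: instead of bootstrapping on the maximum over next actions, it uses `Q[(s2, a2)]` for the action `a2` actually chosen in the next state, which makes the learned values sensitive to the exploration policy.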