OmkarChekuri/ReinforcementLearning

Maze Solving Using Reinforcement Learning

This project demonstrates various algorithms for solving mazes, combining classical techniques with reinforcement learning (RL) approaches such as Q-learning and Sarsa. It features a menu-driven interface to generate mazes, solve them using different strategies, and visualize the results for deeper analysis.


Features

  • Maze generation and interactive visualization
  • Wall Follower (classical) maze-solving algorithm
  • Q-learning and Sarsa reinforcement learning agents
  • Multi-agent Sarsa experiments
  • Projectile-avoidance reinforcement learning scenarios
  • Performance visualization using matplotlib
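
The Q-learning and Sarsa agents listed above differ only in their update rule: Q-learning bootstraps off-policy from the greedy action, while Sarsa bootstraps on-policy from the action actually taken. A minimal sketch of that distinction (the function names and the `alpha`/`gamma` defaults here are illustrative, not taken from this repository's code):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy update: bootstrap from the best action available in s_next."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy update: bootstrap from the action a_next actually taken in s_next."""
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
```

The practical consequence in a maze: Q-learning learns the optimal path even while exploring, whereas Sarsa's estimates account for the exploration policy itself, which tends to produce safer paths near hazards such as the projectiles in modes 6-8.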

Requirements

  • Python 3.9+
  • numpy
  • matplotlib

Install Dependencies

pip install numpy matplotlib

Usage

Run the main program:

python src/main.py

You’ll be presented with the following menu:

1 : Create another Maze layout
2 : Wall Follower Solving
3 : Run Q-learning
4 : Run Sarsa
5 : Run multi agent Sarsa
6 : Run Q-learning with projectile
7 : Run Sarsa with projectile
8 : Run Sarsa with projectile (multi-agent)
9 : Exit the application

Project Structure

src/
├── main.py     # Main entry point and experiment menu
├── Grid.py     # Maze grid and environment logic
├── Agent.py    # Agent behavior and learning
├── Cell.py     # Maze cell structure
├── Wall.py     # Maze wall logic
└── Figures/    # Output images and visualizations

How It Works

  • The maze is procedurally generated and visualized.

  • Classical and RL algorithms are used to solve the maze.

  • RL agents learn optimal paths through repeated exploration episodes.

  • Multi-agent and projectile-avoidance modes introduce advanced RL dynamics.

  • All outcomes are visualized using matplotlib for performance and policy analysis.
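
The episode-based learning described above can be sketched as a training loop. Everything in this example is an illustrative assumption rather than this project's actual API: the real code uses its own `Grid`/`Agent`/`Cell` classes, whereas here a toy 1-D corridor stands in for the maze, and the `reset`/`step` interface and hyperparameters are chosen for the sketch:

```python
import numpy as np

class Corridor:
    """Toy stand-in environment: reach the right end of a 1-D corridor.
    (Illustrative only -- not this repository's maze environment.)"""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action else -1)))
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else -0.01), done  # small step cost, goal reward

def epsilon_greedy(Q, s, eps, rng):
    """Explore with probability eps, otherwise act greedily."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))

def train_q_learning(env, episodes=200, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.length, 2))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = epsilon_greedy(Q, s, eps, rng)
            s_next, r, done = env.step(a)
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q

Q = train_q_learning(Corridor())
```

After training, the greedy policy `np.argmax(Q[s])` moves right in every non-terminal state, which is the learned optimal path; the project's maze runs follow the same loop structure with a 2-D grid and four actions.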

About

Q-learning, SARSA, and multi-agent SARSA with multithreading to solve mazes.
