Custom integration of OpenAI Gym and a Gazebo environment! This module lets you use standard Gym functions with a custom Gazebo environment, enabling you to train reinforcement learning agents for robot navigation tasks.
- Complete integration of OpenAI Gym with a Gazebo environment
- Support for higher-level RL libraries (such as PTAN and Tensorflow-RL) for robot navigation tasks
- Customizable Gazebo-X-Gym code for easy adaptation to your specific needs
- ROS (Robot Operating System)
- Gazebo
- OpenAI Gym
- Python 3.x
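To make the Gym-Gazebo contract concrete, here is a minimal sketch of the interface such an environment exposes. All names are illustrative (the actual class lives in GazeboEnv-V8.py); the real implementation would subclass `gym.Env` and drive Gazebo over ROS topics and services rather than simulating dynamics in Python:

```python
# Hedged sketch: a stand-in environment with the Gym-style
# reset()/step() interface. The real GazeboEnv-V8.py subclasses
# gym.Env and communicates with Gazebo via ROS; everything here
# is a self-contained illustration.
class SketchGazeboEnv:
    def __init__(self, goal=5.0):
        self.goal = goal   # target x-position for the robot (hypothetical)
        self.x = 0.0       # simulated x-position
        self.steps = 0

    def reset(self):
        """Reset the (stand-in) simulation and return the first observation."""
        self.x = 0.0
        self.steps = 0
        return self._observe()

    def step(self, action):
        """Apply an action (-1: back, 0: stay, 1: forward) and return
        (observation, reward, done, info), as Gym expects."""
        self.x += float(action) * 0.5
        self.steps += 1
        done = abs(self.goal - self.x) < 0.25 or self.steps >= 200
        reward = -abs(self.goal - self.x)  # dense reward: closer is better
        return self._observe(), reward, done, {}

    def _observe(self):
        # Observation: current position and remaining distance to goal
        return [self.x, self.goal - self.x]

env = SketchGazeboEnv()
obs = env.reset()                         # obs = [0.0, 5.0]
obs, reward, done, info = env.step(1)     # move forward one step
```

Any agent written against this interface (reset, step, the 4-tuple return) can be pointed at the Gazebo-backed environment unchanged.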
- Clone this repository:

  ```shell
  git clone https://github.com/your-username/rl-robot-navigation.git
  cd rl-robot-navigation
  ```

- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Start the Gazebo environment:

  ```shell
  rosrun rl_robot_navigation GazeboEnv-V8.py
  ```

- Open Jupyter:

  ```shell
  jupyter notebook
  ```

- Navigate to your notebook file and open it.
- Execute the cells in your notebook to run your reinforcement learning code.
That's it! Your reinforcement learning agent should now be interacting with the Gazebo environment.
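The interaction in your notebook follows the usual Gym loop. The sketch below uses a random policy and a dummy stand-in environment so it runs anywhere; in the real setup you would instead construct the environment class defined in GazeboEnv-V8.py:

```python
import random

# Stand-in environment so this loop is runnable on its own; replace it
# with the Gazebo-backed environment from GazeboEnv-V8.py.
class DummyEnv:
    def reset(self):
        self.t = 0
        return [0.0]

    def step(self, action):
        self.t += 1
        done = self.t >= 10          # fixed 10-step episode
        return [float(self.t)], 1.0, done, {}

env = DummyEnv()
obs = env.reset()
episode_return = 0.0
done = False
while not done:
    action = random.choice([-1, 0, 1])          # placeholder random policy
    obs, reward, done, info = env.step(action)  # standard Gym 4-tuple
    episode_return += reward
```

A learning agent (e.g. one built with PTAN) replaces only the `action = ...` line; the rest of the loop is identical.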
You can easily adapt the Gazebo-X-Gym code to fit your specific needs. Refer to the GazeboEnv-V8.py file for details on the environment setup and modify it as needed.
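One common adaptation is changing the reward shaping by subclassing the environment and overriding `step`. The sketch below is hypothetical: `BaseEnv` stands in for the class defined in GazeboEnv-V8.py, and the dynamics shown are illustrative only:

```python
# Stand-in for the environment class in GazeboEnv-V8.py (names and
# dynamics are illustrative, not the project's actual implementation).
class BaseEnv:
    def __init__(self):
        self.x, self.goal = 0.0, 5.0

    def step(self, action):
        self.x += float(action) * 0.5
        reward = -abs(self.goal - self.x)   # base reward: distance to goal
        return [self.x], reward, False, {}

class TimePenaltyEnv(BaseEnv):
    """Same dynamics, but every step also costs a small time penalty,
    encouraging the agent to reach the goal quickly."""
    def step(self, action):
        obs, reward, done, info = super().step(action)
        return obs, reward - 0.1, done, info

env = TimePenaltyEnv()
obs, reward, done, info = env.step(1)   # base reward -4.5, minus 0.1 penalty
```

Because only `step` is overridden, the rest of the environment (reset logic, Gazebo/ROS plumbing) is reused unchanged.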
Contributions are welcome! Please feel free to submit a Pull Request.