This repository contains several deep reinforcement learning (DRL) algorithms for training a drone to avoid obstacles in the AirSim simulation.
You can experiment with your own agents and rewards!
The AirSim environment is wrapped in a Gym environment, so you can interact with it just like any other Gym environment.
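The usual Gym-style training loop then applies. The sketch below uses a dummy stand-in environment (the observation shape, actions, and reward here are made up for illustration; the real wrapper talks to AirSim and exposes its own spaces):

```python
import random

class DummyAirSimEnv:
    """Stand-in illustrating the Gym-style interface of the wrapper.

    The real wrapper communicates with the AirSim simulation; the
    observation/action shapes and reward below are placeholders.
    """
    def __init__(self, max_steps=50):
        self.max_steps = max_steps
        self._t = 0

    def reset(self):
        self._t = 0
        return [0.0, 0.0, 0.0]  # hypothetical 3-dim observation

    def step(self, action):
        self._t += 1
        obs = [random.uniform(-1.0, 1.0) for _ in range(3)]
        reward = 0.1 if action != 0 else -1.0  # placeholder reward
        done = self._t >= self.max_steps
        return obs, reward, done, {}

# Standard Gym interaction loop with a random policy
env = DummyAirSimEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = random.choice([0, 1, 2])
    obs, reward, done, info = env.step(action)
    total_reward += reward
```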
Why use AirSim?
AirSim has ArduPilot and ROS support, which can be very helpful if you plan to run inference in the real world.
Source: https://imrclab.github.io/workshop-uav-sims-icra2023/papers/RS4UAVs_paper_10.pdf
Official repo: https://github.com/microsoft/AirSim
AirSim simulates drone physics well and renders the environment at sufficient quality:
- To launch the environment you need an `.exe` file with a packaged Unreal Engine environment (if you are on Windows), as well as UE4 installed.
- You can find zipped `.exe` environments here: https://github.com/microsoft/AirSim/releases
- Setup instructions for getting the AirSim API working can also be found in the official AirSim repo.
This repository uses a Docker-based setup for training a reinforcement learning (RL) agent in an Ubuntu container, while running the Unreal Engine 4 (UE4) simulation on the host machine.
These instructions are based on a Windows host, but can be adapted for Linux.
- Unreal Engine 4: run the AirSim environment (`.exe`) directly on the host machine.
- Development Environment: use the `.devcontainer` feature in VS Code to run the development container (configured with `--net=host`).
  - More info on dev containers: VS Code Dev Containers Documentation
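A devcontainer configuration that passes `--net=host` might look like the fragment below. This is a hypothetical minimal example, not the repo's actual file: the `name` and `image` values are placeholders, and only the `runArgs` entry matters for the networking setup described here (devcontainer.json is JSONC, so comments are allowed):

```json
{
  // .devcontainer/devcontainer.json -- hypothetical minimal example
  "name": "airsim-drl",
  "image": "ubuntu:22.04",
  "runArgs": ["--net=host"]
}
```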
To connect from the Docker container to the AirSim simulation running on the host:
- AirSim Server Info
  - Default port: `41451` (hardcoded in the `airsim` Python library — no need to configure it manually).
  - You'll need the IP address of the host that the AirSim server is running on.
- Networking Considerations (Windows-specific)
  - Docker on Windows runs inside WSL, not natively on Windows.
  - To enable proper networking between Windows → WSL → Docker, configure WSL as follows:
- Locate your `.wslconfig` file (typically at `C:\Users\<your-username>\.wslconfig`) and add the following line under the `[wsl2]` section:

  ```
  networkingMode=mirrored
  ```

- Open the project in VS Code using “Open Folder in Container”.
- Inside the container, run the connection test:

  ```
  python3 ./test_connection_docker.py
  ```
✅ If the script connects successfully, you're all set! 🎉
If not, please open an issue and include details about your network setup so we can help troubleshoot.
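As a quick sanity check before filing an issue, you can test whether the AirSim RPC port is even reachable from inside the container. This stdlib-only sketch is not the repo's `test_connection_docker.py`; it only checks TCP reachability of the default port, and the host IP you pass in depends on your own setup:

```python
import socket

def airsim_port_open(host, port=41451, timeout=2.0):
    """Return True if a TCP connection to the AirSim RPC port succeeds.

    41451 is AirSim's default RPC port. `host` should be the address of
    the machine running the simulation as seen from the container.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your host's IP):
# print(airsim_port_open("192.168.1.10"))
```

If this returns `False`, the problem is in the Windows → WSL → Docker networking layer rather than in the Python client.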
Currently, the repository contains code for training:
- Deep Deterministic Policy Gradient (with PER)
- Dueling Double Deep Q Network (with PER)
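The "(with PER)" in both algorithms refers to prioritized experience replay: transitions are sampled with probability proportional to their TD error, with importance-sampling weights correcting the resulting bias. The toy sketch below illustrates the idea with a plain list and O(n) sampling; it is not the repo's implementation, which would typically use a sum-tree for efficiency:

```python
import random

class PrioritizedReplayBuffer:
    """Toy proportional PER: p_i = (|TD error| + eps)^alpha."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha
        self.eps = eps
        self.data = []
        self.priorities = []
        self.pos = 0  # ring-buffer write position

    def add(self, transition, td_error=1.0):
        p = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        # Importance-sampling weights, normalized so max weight == 1
        n = len(self.data)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return [self.data[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors):
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

After each learning step, the new TD errors of the sampled batch are written back with `update_priorities`, so surprising transitions get replayed more often.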
To train an agent, set up a config in `configs/agents_conf` and simply run:

```
python3 main.py
```
