This repository, Real-Time-TurtleBot3-Virtual-Obstacle-Mapping-Localization, is the continuation of my previous project, TurtleBot3 Virtual Obstacle Mapping & Localization in Simulation. The earlier work focused on designing, testing, and validating the complete autonomous mapping pipeline entirely within a simulated environment. Before starting this project, it is strongly recommended to review the simulation repo for a full understanding of the mapping logic and system architecture: https://github.com/Ureed-Hussain/TurtleBot3_Virtual_Obstacle_Mapping_Localization
In this new repository, the same virtual obstacle mapping and localization framework is extended and implemented in real time on the actual TurtleBot3 robot, using live sensor data and onboard processing to achieve real-world autonomous navigation.
This is a very important step. I recommend following this tutorial: https://emanual.robotis.com/docs/en/platform/turtlebot3/appendix_raspi_cam/
Use a black-and-white checkerboard, usually 7×6 or 8×6 in size. Print it out, attach it to a rigid, flat surface, and measure the squares of your checkerboard accurately.
[Remote PC]
sudo apt update
sudo apt install ros-${ROS_DISTRO}-camera-calibration
source /opt/ros/${ROS_DISTRO}/setup.bash
First connect to the TurtleBot3 over SSH using its IP address, and then run the camera node for the camera package you installed.
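A typical connection looks like this (the IP address below is a placeholder for your robot's actual address; the username is typically ubuntu on the ROBOTIS SBC image):
ssh ubuntu@192.168.0.100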
- For camera_ros:
[TurtleBot3 SBC]
ros2 run camera_ros camera_node --ros-args -p format:='RGB888'
After running this command, verify that the camera topics are available.
[Remote PC]
ros2 topic list
Expected topics:
/camera/camera_info
/camera/image_raw
/camera/image_raw/compressed
/camera/image_raw/theora
Specify the size of the checkerboard and the size of the squares as execution arguments. The size of the checkerboard is the number of intersections.
[Remote PC]
ros2 run camera_calibration cameracalibrator \
--size 8x6 --square 0.023 \
image:=/camera/image_raw camera:=/camera
When the checkerboard is detected, its intersections are connected by colored lines. Move the checkerboard around until the green bar on the right is filled, which activates the CALIBRATE button.
Use these results to update the calibration yaml file you created when you installed the camera package.
[Result]
**** Calibrating ****
mono pinhole calibration...
D = [-0.24956645163546864, 0.04200151320596171, -0.0015198011708923138, -0.001999724710681652, 0.0]
K = [380.7469672813422, 0.0, 406.9705341838116, 0.0, 378.37382728825185, 281.3751983751762, 0.0, 0.0, 1.0]
R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P = [261.0140380859375, 0.0, 405.063803910336, 0.0, 0.0, 296.8274841308594, 265.35472820934956, 0.0, 0.0, 0.0, 1.0, 0.0]
None
# oST version 5.0 parameters
[image]
width
800
height
600
[narrow_stereo]
camera matrix
380.746967 0.000000 406.970534
0.000000 378.373827 281.375198
0.000000 0.000000 1.000000
distortion
-0.249566 0.042002 -0.001520 -0.002000 0.000000
rectification
1.000000 0.000000 0.000000
0.000000 1.000000 0.000000
0.000000 0.000000 1.000000
projection
261.014038 0.000000 405.063804 0.000000
0.000000 296.827484 265.354728 0.000000
0.000000 0.000000 1.000000 0.000000
- Create the calibration directory (if it doesn't exist) and open the camera info file:
[TurtleBot3 SBC]
mkdir -p /home/ubuntu/.ros/camera_info/
sudo nano /home/ubuntu/.ros/camera_info/<camera_name>.yaml
- Copy the yaml content below into the file.
Make sure the image_width, image_height, and camera_name values match your actual camera settings; in particular, the camera_name in the .yaml file must match the name reported by the camera node.
image_width: 800 # Update to your camera's actual resolution
image_height: 600 # Update to your camera's actual resolution
camera_name: ov5647__base_soc_i2c0mux_i2c_1_ov5647_36_800x600 # Replace with the actual camera name
frame_id: camera
camera_matrix:
rows: 3
cols: 3
data: [380.74, 0, 406.97, 0, 378.3738, 281.3751, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
rows: 1
cols: 5
data: [-0.249566, 0.042002, -0.001520, -0.002000, 0]
rectification_matrix:
rows: 3
cols: 3
data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
rows: 3
cols: 4
data: [261.014, 0, 405.0638, 0, 0, 296.82, 265.354, 0, 0, 0, 1, 0]
Intrinsic calibration focuses on correcting lens distortion and determining the camera’s internal properties, such as focal length and optical center. In real robots, this process is essential. For intrinsic calibration, the TurtleBot3 camera is first activated and its raw image stream is published on the /camera/image_raw topic. The intrinsic launch file then processes this stream using the image_proc pipeline: the DebayerNode converts the raw Bayer-pattern sensor data into a proper RGB image, and the RectifyNode prepares the image for calibration by correcting distortions using the camera model. The image_transport republisher converts compressed images to raw format so the calibration tool can receive clean, uncompressed frames. Together, these nodes supply the camera calibration tool with undistorted, correctly formatted images, enabling accurate computation of the camera's intrinsic parameters.
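The intrinsic launch file already wires these nodes together, so nothing extra needs to be run. For reference, a manual republisher of roughly this form converts a compressed stream back to raw (the topic names here are assumptions; match them to the remappings in your launch file):
ros2 run image_transport republish compressed raw --ros-args -r in/compressed:=/camera/image_raw/compressed -r out:=/camera/image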
[TurtleBot3 SBC]
ros2 run camera_ros camera_node --ros-args -p format:='RGB888'
[Remote PC]
Terminal 1:
ros2 launch turtlebot3_autorace_camera intrinsic_camera_calibration.launch.py
Check that you now see these topics:
/camera/image_rect
/camera/image_rect/compressed
/camera/image_rect/compressedDepth
/camera/image_rect/theora
/camera/image_rect_color/compressed
Terminal 2:
rqt
In rqt you should clearly see the rectified image:
Extrinsic calibration aligns the camera’s perspective with the robot’s coordinate system, ensuring that objects detected in the camera’s view correspond to their actual positions in the robot’s environment. I slightly modified image_projection.py in the turtlebot3_autorace_camera package: the red trapezoidal region never landed on the ground in my setup, so I added X and Y offsets to shift it.
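A minimal sketch of that idea, assuming the projection is built with cv2.getPerspectiveTransform (parameter names such as top_x, bottom_y, x_offset, and y_offset are illustrative, not the actual contents of image_projection.py):
import numpy as np
import cv2

# Sketch only: how X/Y offsets can shift the projection trapezoid so the
# red shape lands on the ground in front of the robot.
def build_homography(cx, cy, top_x, top_y, bottom_x, bottom_y,
                     x_offset=0, y_offset=0, out_w=1000, out_h=600):
    # Source trapezoid in the camera image, shifted by the offsets.
    src = np.float32([
        [cx - top_x + x_offset,    cy - top_y + y_offset],     # top-left
        [cx + top_x + x_offset,    cy - top_y + y_offset],     # top-right
        [cx + bottom_x + x_offset, cy + bottom_y + y_offset],  # bottom-right
        [cx - bottom_x + x_offset, cy + bottom_y + y_offset],  # bottom-left
    ])
    # Destination rectangle in the bird's-eye (projected) image.
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    return cv2.getPerspectiveTransform(src, dst)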
Once the intrinsic_camera_calibration is running, launch the extrinsic calibration process:
ros2 launch turtlebot3_autorace_camera extrinsic_camera_calibration.launch.py calibration_mode:=True
This will activate the nodes responsible for camera-to-ground projection and compensation.
- Execute rqt on the Remote PC:
rqt
- Navigate to Plugins > Visualization > Image view and create two image view windows.
- Select the /camera/image_extrinsic_calib topic in one window and /camera/image_projected in the other.
The first topic shows an image with a red trapezoidal shape, and the latter shows the ground-projected view (bird's-eye view).
/camera/image_extrinsic_calib (left) and /camera/image_projected (right)
- Navigate to Plugins > Configuration > Dynamic Reconfigure.
- Adjust the parameters in /camera/image_projection and /camera/image_compensation to tune the camera’s calibration.
Changing the /camera/image_projection values adjusts the /camera/image_extrinsic_calib view: they move the red trapezoid that defines the region warped into the bird's-eye perspective.
Adjust /camera/image_compensation to fine-tune the /camera/image_projected bird’s-eye view.
Once the best projection settings are found, the calibration data must be saved to ensure that the parameters persist across sessions. One way to save the extrinsic calibration data is by manually editing the YAML configuration files.
- Navigate to the directory where the calibration files are stored:
cd ~/turtlebot3_ws/src/turtlebot3_autorace/turtlebot3_autorace_camera/calibration/extrinsic_calibration/
- Open the relevant YAML file (e.g., projection.yaml) in a text editor:
gedit projection.yaml
- Modify the projection parameters to match the values obtained from dynamic reconfiguration.
This method ensures that the extrinsic calibration parameters are correctly saved for future runs.
After completing the calibration process, follow the instructions below on the Remote PC to verify the calibration results.
- Stop the current extrinsic calibration process.
If the extrinsic calibration was launched with calibration_mode:=True, stop it by closing the terminal or pressing Ctrl + C.
- Launch the extrinsic calibration node without calibration mode.
This ensures that the system applies the saved calibration parameters for verification.
ros2 launch turtlebot3_autorace_camera extrinsic_camera_calibration.launch.py
- Execute rqt and navigate to Plugins > Visualization > Image view.
rqt
- With successful calibration settings, the bird’s-eye view image should appear like below when the /camera/image_projected topic is selected.
Lane detection allows the TurtleBot3 to recognize lane markings and follow them autonomously. The system processes camera images from the real TurtleBot3, applies color filtering, and identifies lane boundaries.
This section explains how to launch the lane detection system, visualize the detected lane markings, and calibrate the parameters to ensure accurate tracking.
To begin, start the TurtleBot3 camera.
[TurtleBot3 SBC]
ros2 run camera_ros camera_node --ros-args -p format:='RGB888'
Next, run the camera calibration processes, which ensure that the detected lanes are accurately mapped to the robot’s perspective:
[Remote PC]
ros2 launch turtlebot3_autorace_camera intrinsic_camera_calibration.launch.py
ros2 launch turtlebot3_autorace_camera extrinsic_camera_calibration.launch.py
These steps activate intrinsic and extrinsic calibration to correct any distortions in the camera feed.
Finally, launch the lane detection node in calibration mode to begin detecting lanes:
[Remote PC]
ros2 launch turtlebot3_autorace_detect detect_lane.launch.py calibration_mode:=True
To inspect the detected lanes, open rqt on the Remote PC:
rqt
Then navigate to Plugins > Visualization > Image View and open three image viewers to display different lane detection results:
- /detect/image_lane/compressed : the lane detection result.
- /detect/image_yellow_lane_marker/compressed : a yellow-range color-filtered image.
- /detect/image_white_lane_marker/compressed : a white-range color-filtered image.
Navigate to Plugins > Configuration > Dynamic Reconfigure, select the lane detection node, and adjust the yellow and white line parameters.
For optimal accuracy, tuning detection parameters is necessary. Adjusting these parameters ensures the robot properly identifies lanes under different lighting and environmental conditions.
- Open the lane.yaml file located in turtlebot3_autorace_detect/param/lane/ and write your modified values to this file (an illustrative example is shown below). This ensures the lane detector uses the modified parameters for future launches.
cd ~/turtlebot3_ws/src/turtlebot3_autorace/turtlebot3_autorace_detect/param/lane
gedit lane.yaml
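The exact keys depend on the package version; the following excerpt only illustrates the kind of HSV thresholds the detector reads (the nesting and values here are assumptions, so verify them against your own lane.yaml):
# Illustrative only - compare with the lane.yaml shipped with turtlebot3_autorace_detect.
detect_lane:
  ros__parameters:
    detect:
      lane:
        yellow:
          hue_l: 10
          hue_h: 50
          saturation_l: 100
          saturation_h: 255
          lightness_l: 95
          lightness_h: 255
        white:
          hue_l: 0
          hue_h: 179
          saturation_l: 0
          saturation_h: 70
          lightness_l: 180
          lightness_h: 255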
Once calibration is complete, restart the lane detection node without the calibration option:
[Remote PC]
ros2 launch turtlebot3_autorace_detect detect_lane.launch.py
Make sure the yellow line is on the left side and the white line is on the right; otherwise the lane detection code will crash.
Finally, run the full pipeline so the robot follows the lane.
[TurtleBot3 SBC]
ros2 run camera_ros camera_node --ros-args -p format:='RGB888'
[Remote PC]
Terminal 1:
ros2 launch turtlebot3_autorace_camera intrinsic_camera_calibration.launch.py
Terminal 2:
ros2 launch turtlebot3_autorace_camera extrinsic_camera_calibration.launch.py
Terminal 3:
ros2 launch turtlebot3_autorace_detect detect_lane.launch.py
Terminal 4:
ros2 launch turtlebot3_autorace_mission control_lane.launch.py
Demo video: for_github.mp4
In this step we need accurate lane offsets for the robot; the offsets are the distances from the robot to the yellow and white lines.
The horizontal offset of a detected lane line from the robot’s camera center is computed from the centroid position of the segmented line. If the image width is W, the center is Cx=W/2, and the detected lane line is at pixel coordinate x:
- Left (yellow) line offset: Δx_left = Cₓ − x_yellow
- Right (white) line offset: Δx_right = x_white − Cₓ
Because the yellow line lies left of the image center and the white line lies right of it, this yields positive offsets for both sides.
During manual calibration, the robot is placed at a known physical distance D from the lane line.
Given the measured pixel offset Δx, the scale factor (meters per pixel) is:
s = D / Δx
Once the scale s is known, all pixel offsets are converted to meters:
d = s · Δx
The two lane boundaries are assumed to be parallel, so the lane width is simply the sum of the left and right distances:
W_lane = d_left + d_right
This represents the physical distance between the yellow and white lane lines.
If the robot is not perfectly centered inside the lane, its lateral shift relative to the lane’s geometric center is:
Δ = (d_right − d_left) / 2
A positive value means the robot is shifted left (closer to the yellow line); a negative value means it is shifted right.
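A minimal Python sketch of these formulas, assuming the pixel centroids of the segmented lines are already available (the function name and the example numbers are illustrative, not the actual logger code):
# Illustrative only: convert pixel lane offsets to metric distances.
def lane_offsets(image_width, x_yellow, x_white, known_distance_m, calib_pixel_offset):
    cx = image_width / 2.0                     # camera center column Cx = W/2
    dx_left = cx - x_yellow                    # left (yellow) pixel offset
    dx_right = x_white - cx                    # right (white) pixel offset
    s = known_distance_m / calib_pixel_offset  # meters per pixel from manual calibration
    d_left = s * dx_left                       # metric distance to the yellow line
    d_right = s * dx_right                     # metric distance to the white line
    lane_width = d_left + d_right
    lateral_shift = (d_right - d_left) / 2.0   # > 0: robot shifted left of lane center
    return d_left, d_right, lane_width, lateral_shift

# Example: 800 px wide image, yellow line at column 150, white line at column 640,
# calibrated with a known 0.013 m distance measured as a 90 px offset.
print(lane_offsets(800, 150, 640, 0.013, 90))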
First clone the "slam_lane_tracking_ros2" package.
Then start the camera node on the robot:
[TurtleBot3 SBC]
ros2 run camera_ros camera_node --ros-args -p format:='RGB888'
[Remote PC]
Then run the intrinsic and extrinsic calibration launch files and make sure the projected image topic is available, as seen in the extrinsic calibration step.
ros2 launch turtlebot3_autorace_camera intrinsic_camera_calibration.launch.py
ros2 launch turtlebot3_autorace_camera extrinsic_camera_calibration.launch.py
Next, measure the approximate distance (in meters) from the robot to the closest lane line, and then run:
ros2 run slam_lane_tracking_ros2 lane_offsets_logger --calibration-method manual --known-distance 0.013
Press Enter and the calibration will start.
The output visualizes the computed lane offsets so you can check their accuracy:
If you think the measurement is not correct, enter "y", move the robot a little, and press Enter again:
After this you will get the left and right distances to the lane lines, as shown in the image:
We run the SLAM node because we need map coordinates; later we convert this map into our virtual map using the lane offsets.
[TurtleBot3 SBC]
export TURTLEBOT3_MODEL=burger
ros2 launch turtlebot3_bringup robot.launch.py
[Remote PC]
ros2 launch turtlebot3_cartographer cartographer.launch.py use_sim_time:=True
RViz opens automatically and you will see the map being built:
After running this, save a rough map, because later we need the map's resolution to convert robot pose coordinates into map coordinates.
[Remote PC]
ros2 run nav2_map_server map_saver_cli -f ~/check2
We do this step only because we need the map's resolution value (resolution: 0.05 in the saved YAML); this value lets us convert world coordinates into map (pixel) coordinates.
If you want to store the origin, you can run the following command (introduced after the marker-pose math below) before building the virtual map.
Using camera calibration and known marker size, OpenCV solves a PnP problem:
t = [ x y z ]ᵀ
r ∈ ℝ³
- t : marker position w.r.t. camera (meters)
- r : rotation vector (axis–angle)
Distance to marker:
d = √(x² + y² + z²)
Rotation vector is converted to a rotation matrix using Rodrigues’ formula:
R = Rodrigues(r)
Then converted to a quaternion:
q = (qₓ, qᵧ, q_z, q_w)
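A minimal sketch of this computation with OpenCV and SciPy (the marker length and the detected corner pixels are placeholders, and the detection step is simplified compared to the actual aruco_detector node; the intrinsics are taken from the calibration result above):
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

# Camera intrinsics from the calibration result above.
K = np.array([[380.75, 0.0, 406.97],
              [0.0, 378.37, 281.38],
              [0.0, 0.0, 1.0]])
D = np.array([-0.2496, 0.0420, -0.0015, -0.0020, 0.0])

marker_len = 0.05  # marker side length in meters (placeholder)
half = marker_len / 2.0
# 3D marker corners in the marker's own frame (z = 0 plane).
obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                    [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)

# Detected marker corners in the image; dummy values here, in practice they
# come from cv2.aruco marker detection.
img_pts = np.array([[350, 250], [450, 250], [450, 350], [350, 350]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, D)    # t = marker position, r = axis-angle
x, y, z = tvec.flatten()
distance = np.sqrt(x**2 + y**2 + z**2)                   # d = sqrt(x^2 + y^2 + z^2)
R_mat, _ = cv2.Rodrigues(rvec)                           # Rodrigues: r -> rotation matrix
qx, qy, qz, qw = Rotation.from_matrix(R_mat).as_quat()   # rotation matrix -> quaternion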
At calibration time, the system stores:
- Robot pose (odometry): pᵣ⁰ , qᵣ⁰
- Marker pose (camera frame): pₘ⁰ , qₘ⁰
[Remote PC]
ros2 run slam_lane_tracking_ros2 aruco_detector --ros-args -p calibration:=True
The robot’s position (x, y, θ) is obtained from the TF transform map → base_link. To compute yaw from the quaternion q = (x, y, z, w), the standard equation is used:
θ = atan2(2(q_w·q_z + qₓ·qᵧ), 1 − 2(qᵧ² + q_z²))
This gives the robot’s heading angle in the 2D map.
The corridor boundaries are offset from the robot by fixed distances L (left) and R (right), perpendicular to the heading.
A direction perpendicular to the heading θ is θ ± π/2. Thus, the world-coordinate positions of the left and right boundary points are:
left: (x + L·cos(θ + π/2), y + L·sin(θ + π/2))
right: (x + R·cos(θ − π/2), y + R·sin(θ − π/2))
These points trace the corridor walls as the robot moves.
The occupancy grid map uses pixel coordinates with the origin at the top-left, while ROS 2 uses a metric coordinate frame (origin at the map origin). Using map resolution r (meters per pixel), origin (x₀, y₀), and image height H:
col = (x − x₀) / r
row = H − (y − y₀) / r
The Y-axis is flipped because image coordinates increase downward.
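A small sketch of this conversion (the function name and the example origin values are illustrative, not the actual virtual_map_builder code):
def world_to_pixel(x, y, origin_x, origin_y, resolution, height):
    # Convert a world-frame point (meters) to image pixel coordinates.
    col = int((x - origin_x) / resolution)
    row = int(height - (y - origin_y) / resolution)  # flip Y: image rows grow downward
    return col, row

# Example with the resolution from the saved map (0.05 m/px) and an assumed origin.
print(world_to_pixel(1.0, 0.5, -2.0, -2.0, 0.05, 200))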
The collected left boundary points and the reversed right boundary points form a closed polygon:
P = [l₁, …, l_N, r_N, …, r₁]
This polygon represents the drivable corridor around the robot.
OpenCV’s polygon fill marks the interior region as free (white) and the exterior as occupied (black):
An optional dilation enlarges the corridor by a structuring kernel (navigation safety margin).
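A minimal sketch of the fill-and-dilate step with OpenCV (image size, kernel size, and the dummy boundary points are assumptions):
import cv2
import numpy as np

height, width = 200, 200
corridor = np.zeros((height, width), dtype=np.uint8)    # start fully occupied (black)

# Dummy boundary points for illustration; in practice they come from world_to_pixel().
left_pixels = [(40, 180), (45, 120), (50, 60)]
right_pixels = [(120, 180), (125, 120), (130, 60)]

# Closed polygon: left boundary points followed by the reversed right boundary points.
polygon = np.array(left_pixels + right_pixels[::-1], dtype=np.int32)
cv2.fillPoly(corridor, [polygon], 255)                  # interior = free (white)

# Optional dilation enlarges the free corridor as a navigation safety margin.
kernel = np.ones((5, 5), np.uint8)
corridor = cv2.dilate(corridor, kernel, iterations=1)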
ROS 2 occupancy grids use the values:
- 0 = free
- 100 = occupied
Thus the conversion is: white pixels (255) → 0 (free), black pixels (0) → 100 (occupied).
Image to Occupancy Grid Mapping:
The image is vertically flipped before publishing, aligning image coordinates with ROS map coordinates.
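A sketch of this conversion into an OccupancyGrid message (the function name is illustrative; the actual node may differ):
import numpy as np
from nav_msgs.msg import OccupancyGrid

def image_to_occupancy(corridor, resolution, origin_x, origin_y):
    # White (255) = free -> 0, black (0) = occupied -> 100.
    occ = np.where(corridor > 127, 0, 100).astype(np.int8)
    occ = np.flipud(occ)                      # vertical flip: image rows grow downward
    grid = OccupancyGrid()
    grid.info.resolution = resolution
    grid.info.width = occ.shape[1]
    grid.info.height = occ.shape[0]
    grid.info.origin.position.x = origin_x
    grid.info.origin.position.y = origin_y
    grid.data = occ.flatten().tolist()
    return grid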
PGM map files use:
- 254 = free
- 0 = occupied
Thus: free pixels are written as 254 and occupied pixels as 0.
Complete Image to PGM Pipeline:
The YAML file stores resolution, origin, and thresholds for Nav2 map loading.
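A sketch of saving the map in this format (the file stem and the threshold values follow common Nav2 map_server defaults and are assumptions here):
import numpy as np

def save_pgm_and_yaml(corridor, resolution, origin, stem="my_virtual_map"):
    # corridor: uint8 image, 255 = free, 0 = occupied.
    pgm = np.where(corridor > 127, 254, 0).astype(np.uint8)
    h, w = pgm.shape
    with open(f"{stem}.pgm", "wb") as f:
        f.write(f"P5\n{w} {h}\n255\n".encode())   # binary PGM header
        f.write(pgm.tobytes())
    with open(f"{stem}.yaml", "w") as f:
        f.write(
            f"image: {stem}.pgm\n"
            f"resolution: {resolution}\n"
            f"origin: [{origin[0]}, {origin[1]}, 0.0]\n"
            "negate: 0\n"
            "occupied_thresh: 0.65\n"
            "free_thresh: 0.196\n"
        )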
Run the Following commands on each Terminal: [TurtleBot3 SBC]
export TURTLEBOT3_MODEL=burger
ros2 launch turtlebot3_bringup robot.launch.py
[TurtleBot3 SBC]
ros2 run camera_ros camera_node --ros-args -p format:='RGB888'
[Remote PC]
Terminal 1:
ros2 launch turtlebot3_autorace_camera intrinsic_camera_calibration.launch.py
Terminal 2:
ros2 launch turtlebot3_autorace_camera extrinsic_camera_calibration.launch.py
Terminal 3:
ros2 launch turtlebot3_autorace_detect detect_lane.launch.py
Terminal 4:
ros2 launch turtlebot3_cartographer cartographer.launch.py use_sim_time:=True
In the virtual_map_builder script, change the left and right offsets to the values calculated in the lane-offset step.
Then run:
Terminal 5:
ros2 run slam_lane_tracking_ros2 virtual_map_builder
Terminal 6:
ros2 launch turtlebot3_autorace_mission control_lane.launch.py
Demo video: robotics_part2_for_github1.mp4
When the robot has explored the full map, run the following command to save it:
Terminal 7:
ros2 run nav2_map_server map_saver_cli -f ~/my_virtual_map -t /virtual_map
After saving the map you get two files: my_virtual_map.pgm and my_virtual_map.yaml.
The default parameters of turtlebot3_navigation2 do not work well with this virtual map, so update the burger_cam.yaml file, which is located at:
~/ros2_ws/src/turtlebot3/turtlebot3_navigation2/param/humble
Reduce the robot radius and inflation radius:
local_costmap:
  robot_radius: 0.05
  inflation_radius: 0.5
global_costmap:
  robot_radius: 0.05
  inflation_radius: 0.1
Then run:
cd ~/turtlebot3_ws && colcon build --symlink-install
After the colcon build, bring up the robot:
[TurtleBot3 SBC]
export TURTLEBOT3_MODEL=burger
ros2 launch turtlebot3_bringup robot.launch.py
To verify that the robot is standing at the saved origin, the current robot and marker poses are compared with the stored calibration poses. Position error:
e_p = pᵣ − pᵣ⁰
Component-wise:
eₓ = x − x₀
eᵧ = y − y₀
e_z = z − z₀
Quaternion inverse:
q⁻¹ = (−x, −y, −z, w)
Orientation error:
q_err = q_current ⊗ q_reference⁻¹
Extract angle from quaternion:
θ = 2 · acos(q_w)
θ_deg = θ · (180 / π)
Marker position error:
eₘ = pₘ − pₘ⁰
Marker distance:
dₘ = |eₘ|
If all of these errors are below their thresholds → the robot is aligned with the saved origin.
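A condensed sketch of this check (the threshold values and variable names are illustrative, not the actual aruco_detector code):
import math

def is_aligned(p, p0, q, q0, pm, pm0,
               pos_tol=0.03, ang_tol_deg=5.0, marker_tol=0.03):
    # Position error e_p = p_r - p_r0 (component-wise).
    ex, ey, ez = (p[i] - p0[i] for i in range(3))
    # Orientation error: q_err = q_current * inverse(q_reference).
    x0, y0, z0, w0 = q0
    x2, y2, z2, w2 = -x0, -y0, -z0, w0             # quaternion inverse (unit quaternion)
    x1, y1, z1, w1 = q
    err_w = w1*w2 - x1*x2 - y1*y2 - z1*z2          # scalar part of the quaternion product
    theta_deg = math.degrees(2.0 * math.acos(max(-1.0, min(1.0, abs(err_w)))))
    # Marker position error and distance.
    dm = math.dist(pm, pm0)
    return (math.hypot(ex, ey, ez) < pos_tol
            and theta_deg < ang_tol_deg
            and dm < marker_tol)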
First, find the origin again. To do this, run:
[Remote PC]
ros2 run slam_lane_tracking_ros2 aruco_detector --ros-args -p calibration:=False
How to position the robot:
Demo video: Aruco.tag1.mp4
Now load the map and run the navigation:
[Remote PC]
ros2 launch turtlebot3_navigation2 navigation2.launch.py use_sim_time:=True map:=$HOME/my_virtual_map.yaml
First, estimate the pose, and then send the goal using Nav2 Goal:
Demo video: Loactization_git.mp4
In future work, a key improvement is to increase the reliability of navigation, as the robot occasionally drifts outside the intended driving corridor. A promising solution is to replace the standard global costmap with a lane-corridor mask generated directly from the lane-detection system. By converting the detected lane boundaries into an occupancy mask—where all pixels outside the lane are marked as obstacles (100) and all pixels inside the lane are marked as free space (0)—Nav2 will be constrained to plan only within the lane. This approach effectively transforms the TurtleBot3 into a lane-aware autonomous robot, similar to the behavior of real self-driving cars, and can significantly improve path-planning accuracy and stability.


