The Environment Model node receives multiple inputs from various modules, which are essential for building a comprehensive understanding of the car's surroundings. These inputs include the ego vehicle's position and velocity from the /odom topic, obstacle angles and ranges from the /obstacles topic, object detections such as pedestrians or vehicles from the /detection topic, and the aligned depth image from the /aligned_depth_to_color/image_raw topic.
The primary function of the Environment Model is to fuse data from these perception sources into an accurate, real-time representation of the driving environment. It processes odometry to understand the vehicle's dynamics and to localize detected objects.
The Environment Model node provides one primary output to other functional modules: /detected_objects_pos, which carries the real-world coordinates of the detected objects with respect to the v2x_frame in the Model Car.
1. Locate all detected objects with respect to the v2x_frame in the Model Car
*(Figures: activity diagram; detection and aligned-depth example images.)*
The activity diagram illustrates the complete workflow for locating detected objects with respect to the v2x_frame using the RGB and depth data from the Intel RealSense camera. The process begins by initializing the camera, aligning the depth image to the RGB frame, and receiving object detections. For each detected object, the bounding box (bbox) centre coordinates, width, and height are used to extract the depth values inside the bbox window, which are filtered (0.1 m < depth < 10 m) to retain only valid measurements. If valid depth data is available, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm clusters the depth values; the median depth of the largest cluster (or of the single cluster, if only one is found) is then used to compute the 3D coordinates of the object in the camera reference frame, which are transformed into the v2x_frame using tf2. The object's position, class, and confidence values are stored in an array. This loop continues for all detections, and once complete, the array of detected objects with their positions is published to the /detected_objects_pos topic.
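As a rough illustration of this per-detection depth processing, here is a minimal sketch (not the node's actual code) that extracts the bbox depth window, filters it, clusters it with DBSCAN, and back-projects the bbox centre through a pinhole model. The function name, the fall-back to the raw median when no cluster survives, and the intrinsics parameters `fx, fy, cx, cy` are assumptions for illustration; the tf2 transform into the v2x_frame is omitted here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def object_position_from_depth(depth_m, bbox, fx, fy, cx, cy):
    """Estimate an object's 3D position (camera frame) from an aligned depth image.

    depth_m : HxW float array of depths in metres (aligned to the RGB frame)
    bbox    : (u, v, w, h) -- bbox centre pixel plus width/height from /detection
    fx, fy, cx, cy : RGB camera intrinsics (assumed known)
    """
    u, v, w, h = bbox
    # Crop the depth window covered by the bounding box.
    x0, x1 = int(u - w / 2), int(u + w / 2)
    y0, y1 = int(v - h / 2), int(v + h / 2)
    window = depth_m[max(y0, 0):y1, max(x0, 0):x1].ravel()

    # Keep only valid measurements (0.1 m < depth < 10 m), as in the activity diagram.
    valid = window[(window > 0.1) & (window < 10.0)]
    if valid.size == 0:
        return None  # no usable depth for this detection

    # Cluster the 1-D depth values; eps and min_samples follow the design section.
    labels = DBSCAN(eps=0.1, min_samples=30).fit_predict(valid.reshape(-1, 1))
    clusters = labels[labels >= 0]  # drop noise points (label -1)
    if clusters.size == 0:
        depth = float(np.median(valid))  # assumed fallback: raw median
    else:
        largest = np.bincount(clusters).argmax()
        depth = float(np.median(valid[labels == largest]))

    # Back-project the bbox centre through the pinhole model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```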
2. Broadcast a new dynamic v2x_frame (located on the ground below the center of the front bumper’s starting point)
a. Create a new frame relative to base_link with the following translation and rotation values:
translation_x = 0.65 m, translation_y = 0.0 m, translation_z = -0.07 m
rotation based on roll = 0, pitch = 0, and yaw = -1 * (the dynamic yaw value calculated from the car's orientation given by the OptiTrack system).
b. This frame will have the same orientation as the map frame but will be translated based on the position of the car.
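A minimal broadcaster sketch for this frame, assuming the yaw is taken from the OptiTrack-fed /odom orientation (node and callback names are hypothetical):

```python
import math
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from nav_msgs.msg import Odometry
from tf2_ros import TransformBroadcaster

class V2xFrameBroadcaster(Node):
    """Sketch: broadcast v2x_frame relative to base_link on every odometry update."""

    def __init__(self):
        super().__init__('v2x_frame_broadcaster')
        self.broadcaster = TransformBroadcaster(self)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)

    def on_odom(self, msg: Odometry):
        q = msg.pose.pose.orientation
        # Yaw of the car in the map frame (from the OptiTrack-fed odometry).
        yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                         1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'base_link'
        t.child_frame_id = 'v2x_frame'
        # Translation from the design above.
        t.transform.translation.x = 0.65
        t.transform.translation.y = 0.0
        t.transform.translation.z = -0.07
        # roll = pitch = 0, yaw = -1 * (car yaw), so the frame keeps the map orientation.
        t.transform.rotation.z = math.sin(-yaw / 2.0)
        t.transform.rotation.w = math.cos(-yaw / 2.0)
        self.broadcaster.sendTransform(t)

def main():
    rclpy.init()
    rclpy.spin(V2xFrameBroadcaster())

if __name__ == '__main__':
    main()
```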
Density-Based Spatial Clustering of Applications with Noise (DBSCAN):
Necessity:
a. Center pixel depth is unreliable because the center of the bounding box may fall on the background instead of the object.
b. Objects often have non-uniform depth (e.g., person, potted plant), so a single depth value cannot represent the entire object accurately.
c. The number of depth clusters is unknown and varies per object and scene, so a fixed-cluster algorithm (e.g., K-Means) is unsuitable.
Design:
a. eps (radius of the neighborhood around a data point) = 0.1m
b. min_samples (minimum number of samples required within the eps radius to form a cluster) = 30
Evaluation:
Verified the algorithm by feeding it two distinct depth clusters in component testing (TC_OD005 - Test step 1); a synthetic version of this check is sketched below.
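The following toy example (illustrative values, not the actual TC_OD005 data) shows how two well-separated depth populations produce two DBSCAN clusters with these parameters, and how the median of the largest cluster is taken:

```python
import numpy as np
from sklearn.cluster import DBSCAN

np.random.seed(0)
# Two synthetic depth clusters, e.g. an object at ~1.2 m in front of a wall at ~3.0 m.
depths = np.concatenate([
    np.random.normal(1.2, 0.02, 80),   # object surface
    np.random.normal(3.0, 0.02, 40),   # background
]).reshape(-1, 1)

labels = DBSCAN(eps=0.1, min_samples=30).fit_predict(depths)
largest = np.bincount(labels[labels >= 0]).argmax()
print('clusters found:', labels.max() + 1)                              # 2
print('median depth of largest cluster: %.2f m'
      % np.median(depths[labels == largest]))                          # ~1.20 m
```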
```mermaid
graph LR
    subgraph Input topics
        EVSEAL["/odom"]:::grayEllipse
        OD["/obstacles"]:::grayEllipse
        ODM["/detection"]:::grayEllipse
        D["/aligned_depth_to_color/image_raw"]:::grayEllipse
    end
    EM["environment_model"]:::cyanEllipse
    EVSEAL --> EM
    OD --> EM
    ODM --> EM
    D --> EM
    EM --> DOP
    subgraph Output topics
        DOP["/detected_objects_pos"]:::grayEllipse
    end
    %% Rounded-rectangle shape class
    classDef soft_rectangle stroke:#FFFFFF,rx:20,ry:20;
    classDef component font-weight:bold,stroke-width:2px;
    %% Cyan for the environment model component
    classDef cyanEllipse fill:#00CED1, color:#000000;
    %% Gray for topics
    classDef grayEllipse fill:#D3D3D3, color:#000000;
    %% Apply the rounded shape to each topic node
    class EVSEAL soft_rectangle;
    class OD soft_rectangle;
    class ODM soft_rectangle;
    class D soft_rectangle;
    class DOP soft_rectangle;
    class EM component;
```
| Name | IO | Type | Description |
|---|---|---|---|
| /odom | Input | nav_msgs/msg/Odometry.msg | Provides position and velocity of the ego vehicle |
| /obstacles | Input | custom_msgs/msg/ObstacleDetectionArray.msg | Provides angles and ranges of detected obstacles |
| /detection | Input | vision_msgs/msg/Detection2DArray.msg | Array of 2D detections with class labels and confidence scores |
| /aligned_depth_to_color/image_raw | Input | sensor_msgs/msg/Image.msg | Aligned depth image from the Intel RealSense camera |
| /detected_objects_pos | Output | custom_msgs/msg/DetectedObjectsPositionArray.msg | Provides the real-world coordinates of the detected objects with respect to the v2x_frame in the Model Car |
| /lane_obstacle_info | Output | custom_msgs/msg/LaneObstacleArray.msg | Provides data related to lanes and obstacles with respect to base_link in the Model Car |
Message: DetectedObjectsPositionArray.msg
| Name | Type | Description |
|---|---|---|
| detection_time | float64 | The time when the objects were detected |
| array | DetectedObject[] | Information of the detected objects, as an array |
Message: DetectedObject.msg
| Name | Type | Description |
|---|---|---|
| id | uint8 | Incremental number of the detected object |
| class_id | string | Unique class id of the detected object |
| confidence | uint8 | Confidence score (0-100) of the detection |
| x | int32 | x coordinate (cm) of the detected object with respect to the v2x_frame |
| y | int32 | y coordinate (cm) of the detected object with respect to the v2x_frame |
| z | int32 | z coordinate (cm) of the detected object with respect to the v2x_frame |
- Create workspace, src and go to src

```bash
mkdir temp_ws
cd temp_ws
mkdir src
cd src
```

- Clone component repository

```bash
git clone https://github.com/surendrakoganti/environment_model.git
```

- Clone additional repositories

```bash
git clone https://github.com/surendrakoganti/custom_msgs.git
git clone https://git.hs-coburg.de/Autonomous_Driving/car_description.git
```

- Return to workspace and build the packages

```bash
cd ..
colcon build
```

- Source the setup files

```bash
source install/setup.bash
```

- Launch the car description

```bash
ros2 launch car_description visualize_model.launch.py car_id:=4
```

- Launch the environment model node

```bash
ros2 launch environment_model environment_model_launch.py
```

Licensed under the Apache 2.0 License. See LICENSE for details.


