- **19 Feb 2026**: Release of MineInsight v2.
- **31 Jan 2026**: MineInsight has been accepted for presentation at ICRA 2026 in Vienna!
- **11 Dec 2025**: Published in IEEE RA-L.
- **5 Jun 2025**: Initial dataset release on GitHub (v1).
- [1] Motivation
- [2] Experimental Setup
- [3] Environments and Sequences
- [4] Targets
- [5] Calibration
- [6] Data
- [7] Acknowledgments
- [8] Citation
- [9] License
- [10] Related Work
Landmines remain a persistent threat in conflict-affected regions, posing risks to civilians and impeding post-war recovery. Traditional demining methods are often slow, hazardous, and costly, necessitating the development of robotic solutions for safer and more efficient landmine detection.
MineInsight is a publicly available multi-spectral dataset designed to support advancements in robotic demining and off-road navigation. It features a diverse collection of sensor data, including visible (RGB, monochrome), short-wave infrared (VIS-SWIR), long-wave infrared (LWIR), and LiDAR scans. The dataset includes dual-view sensor scans from both a UGV and its robotic arm, providing multiple viewpoints to mitigate occlusions and improve detection accuracy.
With over 38,000 RGB frames, 53,000 VIS-SWIR frames, and 108,000 LWIR frames recorded in both daylight and nighttime conditions, featuring 35 different targets distributed along 3 tracks, MineInsight serves as a benchmark for developing and evaluating detection algorithms.
This documentation follows the terminology and conventions outlined in the accompanying paper.
For a more detailed understanding of the methodology and experimental design, please refer to the paper.
| Platform and Robotic Arm | Platform Sensor Suite | Robotic Arm Sensor Suite |
|---|---|---|
| Clearpath Husky A200 UGV<br>Universal Robots UR5e Robotic Arm | Livox Mid-360 LiDAR<br>Sevensense Core Research Module<br>Microstrain 3DM-GV7-AR IMU | Teledyne FLIR Boson 640<br>Alvium 1800 U-130 VSWIR<br>Alvium 1800 U-240<br>Livox AVIA |
The coordinate systems (and their TF name) of all sensors in our platform are illustrated in the figure below.
Note: The positions of the axis systems in the figure are approximate.
This visualization provides insight into the relative orientations between sensors,
whether in the robotic arm sensor suite or the platform sensor suite.
For the full transformation chain, refer to the following ROS 2 topics in the dataset:
- `/tf_static` → Contains static transformations between sensors.
- `/tf` → Contains dynamic transformations recorded during operation.
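Once the static transforms have been read from `/tf_static`, chaining them to relate any two sensors is plain matrix algebra. The sketch below is illustrative only: the frame names, translations, and identity rotations are placeholders, not values from the dataset.

```python
import numpy as np

def quat_to_mat(qx, qy, qz, qw):
    """Convert a unit quaternion (x, y, z, w -- ROS field order) to a rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def to_homogeneous(translation, quaternion):
    """Build a 4x4 homogeneous transform from a translation and a quaternion."""
    T = np.eye(4)
    T[:3, :3] = quat_to_mat(*quaternion)
    T[:3, 3] = translation
    return T

# Hypothetical static transforms (replace with the values read from /tf_static):
# base_link -> lidar, then lidar -> camera.
T_base_lidar = to_homogeneous([0.2, 0.0, 0.5], [0.0, 0.0, 0.0, 1.0])
T_lidar_cam = to_homogeneous([0.05, 0.0, -0.02], [0.0, 0.0, 0.0, 1.0])

# Chaining the transforms expresses the camera pose in the base frame.
T_base_cam = T_base_lidar @ T_lidar_cam
```

In a live ROS 2 system, `tf2` performs exactly this composition for you; the sketch is only useful when working with the bags offline.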
The dataset was collected across 3 distinct tracks, each designed to represent a demining scenario with varying terrain and environmental conditions. These tracks contain a diverse set of targets, positioned to challenge algorithm development. The figure below shows a top-view point cloud of the target distribution along the tracks.
To support reproducibility and research into auto-labeling pipelines, we release the intermediate data used to generate our ground truth. This allows the community to study the "Reference Data Generation" process described in Section IV.D of our paper.
We release the raw ROS 2 bags for the 3 Reference Sequences. These sequences were recorded with AprilTags placed at every target location to facilitate precise pose estimation.
- Note: These bags are provided exactly as recorded (raw data) and have not been processed or topic-remapped like the main dataset sequences.
Download:
- TRACK 1 Reference Sequence ROS 2 Bag
- TRACK 2 Reference Sequence ROS 2 Bag
- TRACK 3 Reference Sequence ROS 2 Bag
We provide the ground truth position of each AprilTag stick relative to the `map` reference frame. These are released as JSON files.
For details on how these were derived, please refer to Section IV.D ("Reference data generation") in the paper.
We also provide the automatically generated ground-truth labels produced by our pipeline before human supervision or further refinement.
These labels were created by detecting AprilTags in the reference sequences, calculating their 3D pose via SLAM + ICP, and projecting them into the evaluation sequences (see Section IV.D of the paper for the full methodology).
Released as TXT files in a single folder for all tracks, these labels follow the naming convention detailed in Section 6 of this repository to ensure temporal synchronization with the images.
For each track, a detailed inventory PDF is available, providing the full list of targets along with their respective details.
You can find them in the **tracks inventory** folder of this repository: Track 1 Inventory | Track 2 Inventory | Track 3 Inventory
Each PDF catalogs each item with:
- ID: Unique identifier for each target;
- Name: Official name of the target;
- Image: A visual reference of the object for recognition;
- CAT-UXO link: Detailed explanation of the target (available only for landmines).
The dataset includes intrinsic and extrinsic calibration files for all cameras and LiDARs.
intrinsics_calibration/
- `lwir_camera_intrinsics.yaml` → LWIR camera
- `rgb_camera_intrinsics.yaml` → RGB camera
- `sevensense_cameras_intrinsics.yaml` → Sevensense grayscale cameras
- `swir_camera_intrinsics.yaml` → VIS-SWIR camera

extrinsics_calibration/
- `lwir_avia_extrinsics.yaml` → LWIR ↔ Livox AVIA
- `rgb_avia_extrinsics.yaml` → RGB ↔ Livox AVIA
- `sevensense_mid360_extrinsics.yaml` → Sevensense ↔ Livox Mid-360
- `swir_avia_extrinsics.yaml` → VIS-SWIR ↔ Livox AVIA
Note:
Intrinsic parameters are also included in the extrinsics calibration files, as they were evaluated using raw camera images.
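As a minimal illustration of how camera intrinsics are used, the sketch below builds a pinhole camera matrix and projects a 3D point into pixel coordinates. The fx/fy/cx/cy values are placeholders, not values from the released calibration files.

```python
import numpy as np

# Placeholder pinhole intrinsics (fx, fy, cx, cy); the real values live in the
# YAML files under intrinsics_calibration/.
fx, fy, cx, cy = 900.0, 900.0, 640.0, 360.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(K, point_cam):
    """Project a 3D point expressed in the camera frame onto the image plane (pixels)."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

# A point 2 m in front of the camera, 0.1 m to the right.
uv = project(K, np.array([0.1, 0.0, 2.0]))
```

Note that this assumes rectified (undistorted) images; on the raw images the distortion coefficients from the calibration files must be applied first.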
We release 2 sequences per track, resulting in a total of 6 sequences.
The data is available in three different formats:
- ROS 2 Bags
- ROS 2 Bags with Livox Custom Msg
- Raw Images
Each ROS 2 Bag includes:
Click here to view all the topics with a detailed explanation
| Topic | Message Type | Description |
|---|---|---|
| /allied_swir/image_raw/compressed | sensor_msgs/msg/CompressedImage | VIS-SWIR camera raw image |
| /allied_swir/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | VIS-SWIR camera rectified image |
| /allied_rgb/image_raw/compressed | sensor_msgs/msg/CompressedImage | RGB camera raw image |
| /allied_rgb/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | RGB camera rectified image |
| /alphasense/cam_0/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 0 raw image |
| /alphasense/cam_0/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 0 rectified image |
| /alphasense/cam_1/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 1 raw image |
| /alphasense/cam_1/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 1 rectified image |
| /alphasense/cam_2/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 2 raw image |
| /alphasense/cam_2/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 2 rectified image |
| /alphasense/cam_3/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 3 raw image |
| /alphasense/cam_3/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 3 rectified image |
| /alphasense/cam_4/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 4 raw image |
| /alphasense/cam_4/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 4 rectified image |
| /alphasense/imu | sensor_msgs/msg/Imu | IMU data from Sevensense Core |
| /avia/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox AVIA LiDAR |
| /avia/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox AVIA LiDAR |
| /flir/thermal/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image |
| /flir/thermal/rectified/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image |
| /flir/thermal/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image with colorized overlay |
| /flir/thermal/rectified/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image with colorized overlay |
| /microstrain/imu | sensor_msgs/msg/Imu | IMU data from Microstrain (internal) |
| /mid360/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox Mid-360 LiDAR |
| /mid360/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox Mid-360 LiDAR |
| /odometry/filtered | nav_msgs/msg/Odometry | Filtered odometry data (ROS 2 localization, fusion output) |
| /odometry/wheel | nav_msgs/msg/Odometry | Wheel odometry data from UGV wheel encoder |
| /tf | tf2_msgs/msg/TFMessage | Real-time transformations between coordinate frames |
| /tf_static | tf2_msgs/msg/TFMessage | Static transformations |
If you are downloading a ROS 2 Bag with Livox Custom Msg, you will find the following additional topics:
| Topic | Message Type | Description |
|---|---|---|
| /avia/livox/lidar | livox_interfaces/msg/CustomMsg | Raw point cloud data from Livox AVIA LiDAR in custom Livox format |
| /mid360/livox/lidar | livox_ros_driver2/msg/CustomMsg | Raw point cloud data from Livox Mid-360 LiDAR in custom Livox format |
Note:
These messages include timestamps for each point in the point cloud scan.
To correctly decode and use these messages, install the official Livox drivers:
- Livox AVIA (`livox_ros2_driver`)
- Livox Mid-360 (`livox_ros_driver2`)
For installation instructions, please take a look at the documentation in the respective repositories.
We provide the raw data in two formats. Standard bags use the standard ROS 2 message type (`PointCloud2`), while Livox Custom Msg bags retain the driver's native format for users who need the raw polar coordinate data.
| Track / Seq | Standard ROS 2 Bag | Livox Custom Msg Bag |
|---|---|---|
| Track 1 - Seq 1 | Download (19.1 GB) | Download (19.6 GB) |
| Track 1 - Seq 2 | Download (75.3 GB) | Download (77.9 GB) |
| Track 2 - Seq 1 | Download (15.1 GB) | Download (15.5 GB) |
| Track 2 - Seq 2 | Download (68.9 GB) | Download (71.0 GB) |
| Track 3 - Seq 1 | Download (5.5 GB) | Download (5.9 GB) |
| Track 3 - Seq 2 | Download (24.4 GB) | Download (26.0 GB) |
Detailed explanations of file formats, directory structures, and specific annotation details (including SAM2 masks and LWIR labels) can be found in the Data Format & Notes section immediately following this table.
| Track / Seq | RGB Data | VIS-SWIR Data | LWIR Data |
|---|---|---|---|
| Track 1 - Seq 1 | Images 3.8 GB<br>Labels (v2) 1.2 MB<br>Masks (auto) 4.8 MB | Images 465 MB<br>Labels (v2) 1 MB<br>Masks (auto) 1.2 MB | Images 669 MB<br>Reproj. Labels (v2) 2.5 MB<br>Auto Labels (v2) 2.2 MB |
| Track 1 - Seq 2 | Images 12.0 GB<br>Labels (v2) 6.5 MB<br>Masks (auto) 27.2 MB | Images 4.2 GB<br>Labels (v2) 5.1 MB<br>Masks (auto) 7.5 MB | Images 3.0 GB<br>Reproj. Labels (v2) 12.2 MB<br>Auto Labels (v2) 12.2 MB |
| Track 2 - Seq 1 | Images 2.8 GB<br>Labels (v2) 1.2 MB<br>Masks (auto) 4.7 MB | Images 872 MB<br>Labels (v2) 1 MB<br>Masks (auto) 1 MB | Images 520 MB<br>Reproj. Labels (v2) 2.3 MB<br>Auto Labels (v2) 2.3 MB |
| Track 2 - Seq 2 | Images 15.8 GB<br>Labels (v2) 8.7 MB<br>Masks (auto) 42 MB | Images 2.9 GB<br>Labels (v2) 4 MB<br>Masks (auto) 6 MB | Images 2.3 GB<br>Reproj. Labels (v2) 12.2 MB<br>Auto Labels (v2) 12.4 MB |
| Track 3 - Seq 1 | N/A | Images 630 MB<br>Labels (v2) 1 MB<br>Masks N/A | Images 566 MB<br>Reproj. Labels (v2) 2 MB<br>Auto Labels (v2) 2.3 MB |
| Track 3 - Seq 2 | N/A | Images 2.6 GB<br>Labels (v2) 3.5 MB<br>Masks N/A | Images 2.0 GB<br>Reproj. Labels (v2) 7.3 MB<br>Auto Labels (v2) 8 MB |

Key: Images = image archives | Labels = bounding-box annotations | Masks = SAM2 segmentation masks
Each archive (`.zip`) follows the naming convention: `track_(nt)_s(ns)_(camera)_(type).zip`.
- `(nt)` → Track number (`1`, `2`, `3`)
- `(ns)` → Sequence number (`1`, `2`)
- `(camera)` → Sensor (`rgb`, `swir`, `lwir`)
- `(type)` → Type of resource (`images`, `labels`, `masks`); for LWIR only: `labels_reproj`, `labels_auto`

Inside, files are named: `track_(nt)_s(ns)_(camera)_(sec)_(nanosec)` (`.jpg` / `.txt`)
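A small helper for splitting these filenames back into their components could look like the sketch below; the regex assumes exactly the convention described above.

```python
import re

# Parser for the dataset file naming convention:
# track_(nt)_s(ns)_(camera)_(sec)_(nanosec).(jpg|txt)
PATTERN = re.compile(
    r"track_(?P<track>\d)_s(?P<seq>\d)_(?P<camera>rgb|swir|lwir)"
    r"_(?P<sec>\d+)_(?P<nsec>\d+)\.(?P<ext>jpg|txt)"
)

def parse_name(filename):
    """Split a dataset filename into its fields and rebuild the ROS timestamp."""
    m = PATTERN.fullmatch(filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    d = m.groupdict()
    # Seconds + nanoseconds give the acquisition time used for synchronization.
    d["stamp"] = int(d["sec"]) + int(d["nsec"]) * 1e-9
    return d

info = parse_name("track_2_s2_swir_1730300015_687251662.jpg")
```

Matching images across sensors then reduces to comparing the recovered `stamp` values.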
- **Bounding Boxes (YOLOv8):**
  Target positions are provided in `.txt` files, one row per target:
  `<class_id> <x_center> <y_center> <width> <height>`
  Classes list: `tracks_inventory/targets_list.yaml`
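Since the YOLO coordinates are normalized to the image size, converting a label row into pixel coordinates is straightforward. A minimal sketch:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO label row '<class_id> <x_center> <y_center> <width> <height>'
    (normalized coordinates) into (class_id, x_min, y_min, x_max, y_max) in pixels."""
    class_id, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    return (
        int(class_id),
        (xc - w / 2) * img_w,  # x_min
        (yc - h / 2) * img_h,  # y_min
        (xc + w / 2) * img_w,  # x_max
        (yc + h / 2) * img_h,  # y_max
    )

# A hypothetical label row decoded against a 1280x720 image.
box = yolo_to_pixels("3 0.5 0.5 0.1 0.2", img_w=1280, img_h=720)
```

The class id maps to a target name via `tracks_inventory/targets_list.yaml`.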
- **SAM2 Masks (Segmentation):** We provide binary segmentation masks generated via SAM2 to assist with pixel-level analysis.
  - **Disclaimer:** These masks are raw, auto-generated outputs produced by SAM2. They were initialized using our ground-truth bounding boxes as prompts and propagated temporally across the target's lifecycle. They have not been human-verified or corrected. While many object IDs exhibit high stability (even in vegetated terrain), others may show temporal fluctuation, dimensional instability, or include background noise such as debris and leaves.
  - **Availability:** Due to the low-light conditions on Track 3, masks are currently released only for Tracks 1 & 2 (RGB and VIS-SWIR, S1 and S2).
  - **Format:** Binary `.png` images.
  - **How to Read:** The files are single-channel grayscale.
    - Value 255 (white): target object.
    - Value 0 (black): background.
  - **Directory Structure & Naming:** Masks are organized into subfolders by **Object ID**. Inside each ID folder, the mask filename corresponds to the original image timestamp, ending with the object ID.

        track_2_s2_swir_masks/
        ├── id_2/
        │   ├── track_2_s2_swir_1730300015_687251662_id2.png
        │   ├── track_2_s2_swir_1730300015_753865168_id2.png
        │   └── ...
        ├── id_4/
        │   ├── track_2_s2_swir_1730300015_687251662_id4.png
        │   └── ...
        └── ...

  - **Naming Key:** `track_(nt)_s(ns)_(camera)_(sec)_(nanosec)_id(N).png`
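For instance, a tight bounding box can be recovered from a mask by locating its white pixels. The sketch below uses a synthetic array in place of a loaded PNG; in practice, decode the file with any image library.

```python
import numpy as np

def mask_to_bbox(mask):
    """Derive a tight (x_min, y_min, x_max, y_max) box from a binary mask,
    where pixel value 255 marks the target and 0 the background."""
    ys, xs = np.where(mask == 255)
    if ys.size == 0:
        return None  # empty mask: target not visible in this frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# In practice the mask comes from a PNG, e.g. np.array(Image.open(path)).
# A small synthetic mask stands in for a real file here.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:8] = 255
bbox = mask_to_bbox(mask)
```

Comparing such mask-derived boxes with the released bounding-box labels is one simple way to spot the temporal instabilities mentioned in the disclaimer.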
Due to faint thermal signatures, direct manual annotation of the LWIR images was infeasible. As described in the reference paper, we therefore generated labels by reprojection, refined with human input. We provide two label variants:
1. Reprojected Labels (labels_reproj)
- Methodology: Manual RGB labels are reprojected into the LWIR frame using LiDAR depth. To reduce errors from sparse LiDAR or vegetation, we apply depth-gating logic to reject implausible depth jumps.
- Pros/Cons: High geometric consistency, but dependent on the source camera's FOV. Labels may be missing if reprojection fails (e.g., occlusions, depth rejection) or if the target exits the source FOV.
2. Automatic Labels (labels_auto)
- Methodology: Generated from the Automatic Labels (see Section 3.1), correcting their trajectories using the locations of the manual reprojections.
- Purpose: Provides denser temporal coverage to fill gaps where reprojection fails. These, however, provide larger bounding boxes designed to indicate the general area of the target (like a ROI), rather than a tight fit.
**Track 3 Constraint:** Since Track 3 lacks RGB data, reprojection relies solely on the VIS-SWIR sensor. Because the VIS-SWIR FOV is narrower than the thermal FOV, you will observe more "missing labels" in Track 3 LWIR reprojections whenever targets are outside the VIS-SWIR view.
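The reprojection idea (unproject a labelled pixel with LiDAR depth, transform it into the LWIR frame, project it again) can be sketched as below. The intrinsics and the identity extrinsic are illustrative placeholders, not the calibration released with this dataset, and the depth-gating logic is omitted.

```python
import numpy as np

def unproject(K, uv, depth):
    """Back-project a pixel (u, v) with a LiDAR depth into a 3D camera-frame point."""
    u, v = uv
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def reproject(K_src, K_dst, T_dst_src, uv, depth):
    """Move a labelled pixel from the source camera into the target (e.g. LWIR)
    camera: unproject with depth, apply the extrinsic transform, project again."""
    p_src = np.append(unproject(K_src, uv, depth), 1.0)  # homogeneous point
    p_dst = T_dst_src @ p_src
    uvw = K_dst @ p_dst[:3]
    return uvw[:2] / uvw[2]

# Illustrative intrinsics and an identity extrinsic; a real camera pair has
# distinct K matrices and a non-trivial transform from the calibration files.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
uv_lwir = reproject(K, K, np.eye(4), uv=(400.0, 300.0), depth=3.0)
```

With identical intrinsics and an identity extrinsic the pixel maps to itself; with the real calibration, the result lands in the LWIR image wherever depth and FOV permit.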
We provide the climatology data for the two key days surrounding the test campaign:
`Climatology 29 & 30 Oct 2024.xlsx`

- **29 October 2024** – the day before the campaign, when targets were placed on the soil at around 09:00 local time.
- **30 October 2024** – the day of the campaign, when sensor measurements were conducted.
The full Excel file contains minute-by-minute measurements collected across both days. These measurements are useful for processing the thermal camera data, as they allow correlation between atmospheric and surface conditions and thermal imaging performance.
The following parameters are available in the dataset (in the order of the Excel file):
| Parameter | Unit |
|---|---|
| Time | HH:MM:SS |
| Wind force (10 m) | kt |
| Wind gusts | kt |
| Wind direction | ° (deg) |
| Air temperature | °C |
| T −5 cm (soil) | °C |
| T −10 cm (soil) | °C |
| T −20 cm (soil) | °C |
| T −50 cm (soil) | °C |
| Road surface temperature | °C |
| Grass surface temperature | °C |
| Dew point temperature | °C |
| Relative Humidity (HR) | % |
| Pressure | hPa |
| Clouds (octas @ height @ type) | – |
| Total clouds | octas |
| Precipitation quantity (1 min) | mm |
| Precipitation quantity (1 hour) | mm |
| Precipitation quantity (1 day) | mm |
To facilitate analysis, the table below shows the exact climatology time windows corresponding to each recorded sequence.
All times refer to 30 October 2024 (campaign day).
| Track | Sequence | Bag file start time (local) | Duration | Climatology window |
|---|---|---|---|---|
| 1 | Seq 1 | 13:17:49 | 4 min 12 s | 13:17:49 – 13:22:01 |
| 1 | Seq 2 | 13:54:26 | 19 min 58 s | 13:54:26 – 14:14:24 |
| 2 | Seq 1 | 15:16:35 | 3 min 42.8 s | 15:16:35 – 15:20:17 |
| 2 | Seq 2 | 15:47:05 | 14 min 46 s | 15:47:05 – 16:01:51 |
| 3 | Seq 1 | 17:42:19 | 3 min 41.5 s | 17:42:19 – 17:46:00 |
| 3 | Seq 2 | 17:28:07 | 13 min 18 s | 17:28:07 – 17:41:25 |
By aligning the timestamps of each ROS 2 bag with this climatology log, users can extract the environmental conditions (temperature, humidity, wind, etc.) at the exact moment of each recording.
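For example, matching a sequence timestamp to the nearest minute-by-minute row can be done as follows; the row keys and field names here are illustrative, not the exact Excel column names.

```python
from datetime import datetime

def nearest_minute_row(stamp, rows):
    """Return the key of the climatology row closest in time to `stamp`.
    `rows` maps 'HH:MM:SS' strings (as in the Excel file) to measurements."""
    t = datetime.strptime(stamp, "%H:%M:%S")
    def seconds_away(key):
        return abs((datetime.strptime(key, "%H:%M:%S") - t).total_seconds())
    return min(rows, key=seconds_away)

# Hypothetical minute-by-minute entries; the real columns hold the full
# parameter list from the table above.
rows = {"13:17:00": {"air_temp_C": 14.2}, "13:18:00": {"air_temp_C": 14.3}}
key = nearest_minute_row("13:17:49", rows)
```

Since all sequences were recorded on 30 October 2024, comparing wall-clock times of day is sufficient; spanning midnight would require full dates.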
The figure below shows the air and soil temperatures (−5 cm, −10 cm, −20 cm, −50 cm) throughout the campaign day (30 October 2024).
Red shaded regions correspond to the time windows when each track sequence was recorded.
During Track 3 recordings (30 October 2024), the RGB camera experienced a progressive failure.
- At the start of the recordings (17:28:07 and 17:42:19; see the Climatology section), the frames were already very dark, making it extremely difficult to detect any targets or terrain details.
- By the end of the sequences, the RGB feed would have been completely black, given the near-nighttime conditions.
- This issue affects both Sequence 1 (3 min 41.5 s) and Sequence 2 (13 min 18 s).
We recovered the bag metadata and extracted a short video from the RGB camera illustrating the Track 3 illumination condition at the beginning of the recordings:
The authors thank Alessandra Miuccio and TimothΓ©e FrΓ©ville for their support in the hardware and software design.
They also thank Sanne Van Hees and Jorick Van Kwikenborne for their assistance in organizing the measurement campaign.
Finally, they thank the Belgian Meteo Wing for providing the climatology study during the days of the test campaign.
If you use MineInsight in your own work, please cite the accompanying paper:
@article{11297788,
author={Malizia, Mario and Hamesse, Charles and Hasselmann, Ken and De Cubber, Geert and Tsiogkas, Nikolaos and Demeester, Eric and Haelterman, Rob},
journal={IEEE Robotics and Automation Letters},
title={MineInsight: A Multi-Sensor Dataset for Humanitarian Demining Robotics in Off-Road Environments},
year={2026},
volume={11},
number={2},
pages={1650-1657},
doi={10.1109/LRA.2025.3643265}}
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
For full details, see:
CC BY-NC-SA 4.0 License
If you are interested in this dataset, you may also be interested in some of the related work:
- PFM-1 Landmine Detection in Vegetation Using Thermal Imaging with Limited Training Data. Malizia, Mario, Ken Hasselmann, Alessandra Miuccio, Rob Haelterman, Nikolaos Tsiogkas, and Eric Demeester. 2025. Proceedings of the 25th International Conference on Control, Automation and Systems (ICCAS), Incheon, South Korea, pp. 1864-1869. https://doi.org/10.23919/ICCAS66577.2025.11301116
- Multimodal Ensemble with Verification Mechanism for Landmine Detection. Melnykova, Nataliia, and Anna Vechirska. 2025. Science and Technology Today (ΠΠ°ΡΠΊΠ° Ρ ΡΠ΅Ρ Π½ΡΠΊΠ° ΡΡΠΎΠ³ΠΎΠ΄Π½Ρ), No. 9(50), pp. 939-948. https://doi.org/10.52058/2786-6025-2025-9(50)-939-948