GIMLI (Guided Integrated Maritime Logistics and Intelligence) is a software solution developed to provide navigation support for a semi-autonomous vessel. This guide outlines the necessary components, installation steps, configuration details, and instructions for running the solution in a simulated environment.
- Purpose:
GIMLI integrates LIDAR and camera sensor data to deliver real-time obstacle detection and classification for safe maritime navigation.
- Implement reliable obstacle detection via sensor fusion.
- Synchronise real-time image and LIDAR data (a minimal pairing sketch follows this list).
- Integrate and validate the system using the AILiveSim simulator.
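As a hedged illustration of the synchronisation objective (not the project's actual implementation), the sketch below pairs each camera frame with the LiDAR scan whose timestamp is closest. The names `camera_frames` and `lidar_scans` and the 50 ms tolerance are hypothetical placeholders.

```python
# Minimal sketch: pair camera frames with the nearest-in-time LiDAR scan.
# Assumes each item is a (timestamp_seconds, data) tuple; names are illustrative.
def pair_by_timestamp(camera_frames, lidar_scans, max_offset=0.05):
    """Return (frame, scan) pairs whose timestamps differ by at most max_offset seconds."""
    pairs = []
    for cam_ts, frame in camera_frames:
        # Find the LiDAR scan closest in time to this camera frame.
        lidar_ts, scan = min(lidar_scans, key=lambda item: abs(item[0] - cam_ts))
        if abs(lidar_ts - cam_ts) <= max_offset:
            pairs.append((frame, scan))
    return pairs
```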
- Description:
AILiveSim provides a realistic maritime simulation environment built on Unreal Engine 4.
- License Requirement:
A valid product key is required. Contact AILiveSim for licensing details.
- Documentation:
Follow the instructions in the AILiveSim documentation.
- Python Dependencies:
All required Python libraries are listed in the requirements.txt file.
- Key Modules:
- ALSLib: For sensor data acquisition and control.
- Open3D & OpenCV: For 3D visualisation and image processing.
- PyTorch & YOLOv5: For object detection within the camera feed.
- Scikit-learn: For DBSCAN clustering.
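For reference, this is a minimal sketch of how scikit-learn's DBSCAN can group LiDAR returns into obstacle candidates. The random input and the eps/min_samples values are placeholders, not the project's tuned settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# points: N x 3 array of LiDAR returns (x, y, z) in metres; random data for illustration.
points = np.random.rand(500, 3) * 20.0

# eps and min_samples are placeholder values, not the project's tuned settings.
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(points)

# Label -1 marks noise; every other label is a cluster (potential obstacle).
for cluster_id in set(labels) - {-1}:
    cluster = points[labels == cluster_id]
    print(f"Cluster {cluster_id}: {len(cluster)} points, centroid {cluster.mean(axis=0)}")
```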
- Simulation Configuration:
Underlying config files (e.g., sensor settings, extrinsic/intrinsic parameters) must be set up as required; a projection sketch using these parameters follows this list.
- User Modifications:
Adjust file paths and sensor configurations within these files to match your environment.
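As a hedged illustration of why the extrinsic/intrinsic parameters matter, the sketch below projects LiDAR points into the camera image using a generic pinhole model. The matrices K, R, and t are placeholders and must be replaced with the values from your own configuration files.

```python
import numpy as np

# Placeholder intrinsic matrix K and extrinsic rotation R / translation t
# (LiDAR frame -> camera frame). Replace with values from the config files.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, -0.2, 0.1])

def project_lidar_to_image(points_lidar):
    """Project N x 3 LiDAR points into pixel coordinates; drops points behind the camera."""
    points_cam = points_lidar @ R.T + t      # transform into the camera frame
    in_front = points_cam[:, 2] > 0.0        # keep only points with positive depth
    points_cam = points_cam[in_front]
    pixels = (K @ points_cam.T).T            # apply the pinhole intrinsics
    return pixels[:, :2] / pixels[:, 2:3]    # normalise by depth to get (u, v)
```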
- Project Repository:
Contains the complete implementation, including:
- Data acquisition threads (a minimal threading sketch follows this list).
- Sensor fusion and calibration modules.
- YOLO inference integration.
- Point cloud clustering.
- Object tracking.
- Visualisation components.
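The data acquisition threads listed above typically follow a producer/consumer pattern. The sketch below shows the general idea with Python's threading and queue modules; `read_sensor` is a hypothetical stand-in for the actual ALSLib acquisition calls, which are not reproduced here.

```python
import queue
import threading
import time

def read_sensor():
    """Hypothetical stand-in for an ALSLib sensor read; returns one timestamped sample."""
    time.sleep(0.05)  # simulate the sensor's update rate
    return time.time(), b"sensor-data"

def acquisition_worker(out_queue, stop_event):
    # Producer: keep pushing samples until asked to stop.
    while not stop_event.is_set():
        out_queue.put(read_sensor())

stop = threading.Event()
samples = queue.Queue()
worker = threading.Thread(target=acquisition_worker, args=(samples, stop), daemon=True)
worker.start()

# Consumer side: the fusion loop would pull samples here.
timestamp, data = samples.get(timeout=1.0)
print("received sample at", timestamp)
stop.set()
worker.join()
```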
- Code Location:
The code is provided within this repository along with this guide.
- Hardware:
A system capable of running the simulator (ensure your system meets AILiveSim's hardware specifications) and processing real-time sensor data.
- Operating System:
Any OS that supports Python and the AILiveSim simulator.
- Software Requirements:
- Python 3.11.9 or newer.
- Obtain License:
Contact AILiveSim to receive a valid product key.
- Download & Install:
Follow the AILiveSim documentation to install the simulator on your machine.
- Activate Simulator:
Place the license key in the appropriate folder and start the AILiveSim executable.
- Clone the Repository:
git clone https://github.com/belindbl/gimli.git
cd gimli
- Create a virtual environment:
python -m venv venv
- Activate the virtual environment:
venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
This ensures all necessary dependencies are installed.
- Edit Config Files:
Locate and adjust the simulation configuration files (e.g., sensor settings, extrinsic/intrinsic parameters) as necessary.
- Verify Settings:
Ensure that the sensor configuration matches the setup described in the project.
- Launch the AILiveSim executable.
- Ensure the simulator is running and that the desired scenario (e.g., 'ABoat') is loaded.
- Run the Main Script:
Execute the main script to start the sensor fusion and visualisation processes (it automatically loads a predetermined scenario):
python \GIMLI\Python\CustomScripts\GIMLI\RTDP9.py # RTDP9 is the current main script
- Monitoring Output:
- A window should display the camera feed overlaid with the projected LiDAR points and text annotations.
- A separate window will visualise the point cloud in real time (a minimal visualiser sketch follows this list).
- User Interaction:
Use the interface to monitor real-time data fusion and observe obstacle detection and collision avoidance overlays.
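The real-time point cloud window is typically driven by Open3D's non-blocking visualiser. The sketch below shows the general pattern with random data standing in for the live LiDAR stream; it illustrates the Open3D API, not the project's actual rendering loop.

```python
import numpy as np
import open3d as o3d

# Non-blocking visualiser pattern: create the window once, then update the
# same geometry object every frame. Random points stand in for live LiDAR data.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3) * 20.0)

vis = o3d.visualization.Visualizer()
vis.create_window(window_name="Point cloud")
vis.add_geometry(pcd)

for _ in range(200):  # stand-in for the real acquisition loop
    pcd.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3) * 20.0)
    vis.update_geometry(pcd)
    vis.poll_events()
    vis.update_renderer()

vis.destroy_window()
```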