

MNE-SLAM: Multi-Agent Neural SLAM for Mobile Robots

[CVPR 2025]

INS Dataset Page | Paper




Table of Contents
  1. Installation
  2. Online Demo
  3. Usage
  4. Downloads
  5. Benchmarking
  6. Acknowledgement
  7. Citation

Notes

  • We have updated the README.md and are preparing to open-source our code!
  • Code for the main components, including the optimizer, renderer, mapping modules, and joint scene representation
  • Installation setup
  • Multi-agent communication

Installation

First, make sure you have all dependencies in place. The simplest way to do so is to use Anaconda.

Please follow the instructions below to install the repo and dependencies.

git clone https://github.com/dtc111111/MNESLAM.git
cd MNESLAM

Install the environment

You can create an anaconda environment called mneslam. For Linux, you need to install libopenexr-dev before creating the environment. Install all the dependencies via conda (note that pytorch3d and tinycudann take ~10 min to build).

sudo apt-get install libopenexr-dev
conda env create -f environment.yaml
conda activate mneslam

Build extension (Lietorch/droid)

python setup.py install
pip install thirdparty/lietorch

Build extension (marching cubes from neuralRGBD)

cd NumpyMarchingCubes
python setup.py install
cd ..

For tinycudann, if you cannot access the network from your GPU machine, you can also build from source as below:

# Build tinycudann 
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn

# Try this version if you cannot use the latest version of tinycudann
#git reset --hard 91ee479d275d322a65726435040fc20b56b9c991
cd tiny-cuda-nn/bindings/torch
python setup.py install

If desired, the Open3D package can be installed in headless rendering mode. This is useful for running MNESLAM on a server without a display. We recommend installing from this commit, as we observed bugs in other releases of Open3D.

Install the Environment (via Docker)

We now recommend using Docker to set up the mneslam environment for consistency and ease of installation.

Install Docker (if not already installed)

Follow the official instructions: https://docs.docker.com/get-docker/

Make sure you can run containers with GPU support. If you're using NVIDIA GPUs, install the NVIDIA Container Toolkit.

Build the Docker Image

# Build the image
docker build -t mneslam:latest .

Run the Docker Container

docker run --gpus all -it --rm \
  --name mneslam_container \
  -v $(pwd):/workspace \
  mneslam:latest

This will mount the current repository inside the container and open an interactive shell.

Pretrained Models

This project requires several pretrained models for visual feature extraction, retrieval, and SLAM initialization. Please download the following checkpoints before running any demo:

Run

Replica

Download the data as below; it will be saved to the ./Datasets/Replica folder.

bash scripts/download_replica.sh

ScanNet

Please follow the data downloading procedure on ScanNet website, and extract color/depth frames from the .sens file using this code.

Directory structure of ScanNet:

DATAROOT is ./Datasets by default. If a sequence (sceneXXXX_XX) is stored in other places, please change the input_folder path in the config file or in the command line.

  DATAROOT
  └── scannet
      └── scans
          └── scene0000_00
              └── frames
                  ├── color
                  │   ├── 0.jpg
                  │   ├── 1.jpg
                  │   ├── ...
                  │   └── ...
                  ├── depth
                  │   ├── 0.png
                  │   ├── 1.png
                  │   ├── ...
                  │   └── ...
                  ├── intrinsic
                  └── pose
                      ├── 0.txt
                      ├── 1.txt
                      ├── ...
                      └── ...
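Before launching, it can be useful to sanity-check that a scene folder matches the layout above. The following is a minimal sketch (the `check_scannet_layout` helper is our own illustration, not part of the repo):

```python
from pathlib import Path

def check_scannet_layout(scene_dir: str) -> list[str]:
    """Return the missing entries in a ScanNet scene folder.

    Expects the layout shown above: frames/{color,depth,intrinsic,pose}.
    """
    root = Path(scene_dir)
    expected = [
        root / "frames" / "color",
        root / "frames" / "depth",
        root / "frames" / "intrinsic",
        root / "frames" / "pose",
    ]
    return [str(p) for p in expected if not p.is_dir()]

if __name__ == "__main__":
    missing = check_scannet_layout("./Datasets/scannet/scans/scene0000_00")
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("Layout looks good.")
```

If any entry is reported missing, re-run the .sens extraction or fix the input_folder path in the config.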

TUM RGB-D

Download the data as below; it will be saved to the ./Datasets/TUM folder.

bash scripts/download_tum.sh

INS Dataset

Download the data as below; it will be saved to the ./Datasets/INS folder. You can download the dataset from the INS Dataset Page.

Reproduction of CP-SLAM

This is the unofficial implementation of CP-SLAM: Collaborative Neural Point-based SLAM System. The original CP-SLAM code contained certain issues that hindered its proper functionality. We have addressed and resolved these issues to ensure correct operation. Additionally, we provided further details on the execution steps and added code for the evaluation section.

Run

You can run MNESLAM using the command below. Ideally, our system needs n GPUs, where n is the number of agents. If you want to run the system for debugging purposes, set multi_gpu: False and set the agent number to 1. This configuration runs a single agent and uses the same GPU for both the server and the agent. You can start the system by running:

python multi_agents.py --config configs/Replica/room0.yaml --num_gpus 2
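For the single-GPU debugging setup described above, the relevant config entries would look roughly like this (`multi_gpu` is named in the instructions above; the agent-count key name is illustrative, so check the shipped config files for the exact field):

```yaml
# Debug configuration: one agent, server and agent share a single GPU
multi_gpu: False   # disable the one-GPU-per-agent layout
num_agents: 1      # illustrative key name; see the repo configs for the exact one
```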

Evaluation

We employ a slightly different evaluation strategy to measure the quality of the reconstruction; you can find the code here.
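The linked code is the authoritative evaluation. As a rough, generic illustration only, SLAM trajectory accuracy is commonly reported as ATE RMSE over frame-associated camera positions; the sketch below is our own minimal version, not the repo's implementation (it assumes the two trajectories are already aligned in the same coordinate frame):

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Absolute Trajectory Error (RMSE) between two Nx3 trajectories.

    Assumes frame-by-frame association and a shared coordinate frame
    (no Umeyama/SE(3) alignment is performed here).
    """
    err = est - gt                                   # per-frame translation error, Nx3
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

if __name__ == "__main__":
    gt = np.zeros((4, 3))
    est = np.zeros((4, 3))
    est[:, 0] = 0.1                                  # constant 10 cm error along x
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")    # 0.100 m
```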

Reference

Bibtex

@inproceedings{deng2025mne,
  title={MNE-SLAM: Multi-Agent Neural SLAM for Mobile Robots},
  author={Deng, Tianchen and Shen, Guole and Xun, Chen and Yuan, Shenghai and Jin, Tongxin and Shen, Hongming and Wang, Yanbo and Wang, Jingchuan and Wang, Hesheng and Wang, Danwei and others},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  pages={1485--1494},
  address={Nashville, USA},
  year={2025}
}

Acknowledgement

We adapt code from several awesome repositories, including NICE-SLAM, NeuralRGBD, tiny-cuda-nn, iMAP, ESLAM, and CoSLAM. Thanks for making the code available.
