Table of Contents
- We have updated the README.md and are preparing to open-source our code!
- Code for main parts, including the optimizer, renderer, mapping modules, and joint scene representation
- Installation setup
- Multi-agent communication
First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.
Please follow the instructions below to install the repo and dependencies.
```bash
git clone https://github.com/dtc111111/MNESLAM.git
cd MNE-SLAM
```
You can create an anaconda environment called mneslam. On Linux, you need to install libopenexr-dev before creating the environment.
Install all the dependencies via conda (please note that pytorch3d and tinycudann require ~10 min to build):
```bash
sudo apt-get install libopenexr-dev
conda env create -f environment.yaml
conda activate mneslam
python setup.py install
pip install thirdparty/lietorch
cd NumpyMarchingCubes
python setup.py install
cd ..
```
For tinycudann, if you cannot access the network when using GPUs, you can also try building from source as below:
```bash
# Build tinycudann
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
# Try this commit if you cannot use the latest version of tinycudann
#git reset --hard 91ee479d275d322a65726435040fc20b56b9c991
cd tiny-cuda-nn/bindings/torch
python setup.py install
```
If desired, the Open3D package can be installed in headless rendering mode. This is useful for running MNESLAM on a server without a display. We recommend installing from this commit, as we observed bugs in other releases of Open3D.
We now recommend using Docker to set up the mneslam environment for consistency and ease of installation.
Follow the official instructions: https://docs.docker.com/get-docker/
Make sure you can run containers with GPU support. If you're using NVIDIA GPUs, install the NVIDIA Container Toolkit.
```bash
# Build the image
docker build -t mneslam:latest .

# Run the container with GPU support
docker run --gpus all -it --rm \
  --name mneslam_container \
  -v $(pwd):/workspace \
  mneslam:latest
```
This will mount the current repository inside the container and open an interactive shell.
This project requires several pretrained models for visual feature extraction, retrieval, and SLAM initialization. Please download the following checkpoints before running any demo:
- DROID-SLAM pretrained model:
- NetVLAD models:
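Before launching a demo, it can save time to check that all checkpoints are actually in place. Below is a minimal sketch of such a check; the `pretrained/` directory and the checkpoint filenames are assumptions for illustration only — adjust them to wherever you store the DROID-SLAM and NetVLAD weights.

```python
from pathlib import Path

def missing_checkpoints(root="pretrained"):
    """Return the expected checkpoint files that are not present on disk.

    The filenames below are hypothetical placeholders, not the
    repository's actual checkpoint names.
    """
    expected = [
        Path(root) / "droid.pth",                # DROID-SLAM weights (assumed name)
        Path(root) / "netvlad" / "netvlad.ckpt", # NetVLAD weights (assumed name)
    ]
    return [str(p) for p in expected if not p.exists()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All checkpoints found.")
```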
Download the Replica data as below; the data is saved into the ./Datasets/Replica folder.
```bash
bash scripts/download_replica.sh
```
For ScanNet, please follow the data downloading procedure on the ScanNet website, and extract color/depth frames from the .sens file using this code.
Directory structure of ScanNet:
DATAROOT is ./Datasets by default. If a sequence (sceneXXXX_XX) is stored elsewhere, please change the input_folder path in the config file or on the command line.
```
DATAROOT
└── scannet
    └── scans
        └── scene0000_00
            └── frames
                ├── color
                │   ├── 0.jpg
                │   ├── 1.jpg
                │   ├── ...
                │   └── ...
                ├── depth
                │   ├── 0.png
                │   ├── 1.png
                │   ├── ...
                │   └── ...
                ├── intrinsic
                └── pose
                    ├── 0.txt
                    ├── 1.txt
                    ├── ...
                    └── ...
```
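In the layout above, the per-frame color, depth, and pose files share an integer index, so a loader can pair them by filename stem. A minimal sketch of such pairing (an illustration, not the repository's actual data loader):

```python
from pathlib import Path

def list_frames(scene_dir):
    """Pair color/depth/pose files by frame index under one ScanNet scene.

    Expects the layout shown above: scene_dir/frames/{color,depth,pose}.
    Returns (color_path, depth_path, pose_path) tuples sorted numerically,
    so frame 10 sorts after frame 2 (plain string sort would not).
    """
    frames = Path(scene_dir) / "frames"
    indices = sorted(int(p.stem) for p in (frames / "color").glob("*.jpg"))
    return [
        (frames / "color" / f"{i}.jpg",
         frames / "depth" / f"{i}.png",
         frames / "pose" / f"{i}.txt")
        for i in indices
    ]
```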
Download the TUM data as below; the data is saved into the ./Datasets/TUM folder.
```bash
bash scripts/download_tum.sh
```
Download the INS data as below; the data is saved into the ./Datasets/INS folder. You can download the dataset from the INS Dataset Page.
This is an unofficial implementation of CP-SLAM: Collaborative Neural Point-based SLAM System. The original CP-SLAM code contained issues that hindered its proper functionality; we have addressed and resolved these issues to ensure correct operation. Additionally, we provide further details on the execution steps and have added code for the evaluation section.
You can run MNESLAM using the command below. Ideally, our system needs n GPUs, where n is the number of agents. If you want to run the system for debugging purposes, set multi_gpu: False and set the agent number to 1. This configuration runs a single agent and uses the same GPU for both the server and the agent. You can start the system by running:
```bash
python multi_agents.py --config configs/Replica/room0.yaml --num_gpus 2
```
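The one-GPU-per-agent scheme described above can be sketched as a small assignment helper. This is an illustrative toy, not the repository's actual launcher; in particular, the round-robin fallback when there are fewer GPUs than agents is an assumption.

```python
def assign_gpus(num_agents, num_gpus, multi_gpu=True):
    """Map each agent id to a GPU id.

    With multi_gpu=True, agents are spread across the available GPUs
    (round-robin if agents outnumber GPUs -- an assumed policy).
    With multi_gpu=False (the debugging setup above), every agent and
    the server share GPU 0.
    """
    if not multi_gpu or num_gpus <= 1:
        return {agent: 0 for agent in range(num_agents)}
    return {agent: agent % num_gpus for agent in range(num_agents)}

# Each worker process would then restrict itself to its device before
# importing torch, e.g.:
#   os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
```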
We employ a slightly different evaluation strategy to measure the quality of the reconstruction; you can find the code here.
Bibtex
```bibtex
@inproceedings{deng2025mne,
  title={MNE-SLAM: Multi-Agent Neural SLAM for Mobile Robots},
  author={Deng, Tianchen and Shen, Guole and Xun, Chen and Yuan, Shenghai and Jin, Tongxin and Shen, Hongming and Wang, Yanbo and Wang, Jingchuan and Wang, Hesheng and Wang, Danwei and others},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  pages={1485--1494},
  address={Nashville, USA},
  year={2025}
}
```
We adapt code from some awesome repositories, including NICE-SLAM, NeuralRGBD, tiny-cuda-nn, iMAP, ESLAM, and CoSLAM. Thanks for making the code available.