LET-NET2 turns the traditional Lucas-Kanade (LK) optical flow process into a neural network layer by deriving its gradients, enabling end-to-end training of sparse optical flow. It keeps the original lightweight network structure, and during training on simulated data it autonomously learns capabilities such as extracting edge orientation and actively enhancing weak-texture regions. As a result, it tracks more robustly under challenging conditions, including dynamic lighting, weak texture, low illumination, and underwater blur.
| Original LK Optical Flow | LETNet | LETNet2 |
|---|---|---|
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
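To illustrate the core idea, below is a minimal sketch of one differentiable LK iteration over learned feature maps, written in PyTorch. Everything here is an assumption for brevity (`grid_sample`-based patch sampling, finite-difference spatial gradients, a single pyramid level and iteration); the layer actually implemented in this repository may differ.

```python
import torch
import torch.nn.functional as F

def lk_step(feat0, feat1, pts, flow, win=7):
    """One differentiable Lucas-Kanade update for sparse points.
    feat0, feat1: (1, C, H, W) feature maps of two frames
    pts:          (N, 2) point coordinates (x, y) in frame 0
    flow:         (N, 2) current flow estimate to be refined
    """
    _, C, H, W = feat0.shape
    r = win // 2
    dy, dx = torch.meshgrid(torch.arange(-r, r + 1).float(),
                            torch.arange(-r, r + 1).float(), indexing="ij")
    offs = torch.stack([dx, dy], dim=-1).reshape(-1, 2)      # (K, 2) patch offsets

    def sample(feat, centers):
        # Bilinear-sample a win*win patch around each center -> (N, K, C).
        grid = centers[:, None, :] + offs[None, :, :]        # (N, K, 2) in pixels
        scale = torch.tensor([(W - 1) / 2.0, (H - 1) / 2.0])
        grid = grid / scale - 1.0                            # to [-1, 1] for grid_sample
        out = F.grid_sample(feat, grid[None], align_corners=True)
        return out[0].permute(1, 2, 0)                       # (N, K, C)

    p0 = sample(feat0, pts)                                  # template patches
    p1 = sample(feat1, pts + flow)                           # current target patches
    e = 0.5                                                  # finite-difference step
    gx = (sample(feat1, pts + flow + torch.tensor([e, 0.0])) -
          sample(feat1, pts + flow - torch.tensor([e, 0.0]))) / (2 * e)
    gy = (sample(feat1, pts + flow + torch.tensor([0.0, e])) -
          sample(feat1, pts + flow - torch.tensor([0.0, e]))) / (2 * e)

    J = torch.stack([gx.flatten(1), gy.flatten(1)], dim=-1)  # (N, K*C, 2) Jacobian
    res = (p1 - p0).flatten(1).unsqueeze(-1)                 # (N, K*C, 1) residual
    # Gauss-Newton normal equations, solved per point; every op above is
    # differentiable, so the loss gradient flows back into the feature network.
    A = J.transpose(1, 2) @ J + 1e-6 * torch.eye(2)
    b = -(J.transpose(1, 2) @ res)
    delta = torch.linalg.solve(A, b).squeeze(-1)             # (N, 2) flow update
    return flow + delta
```

Because the solver itself is differentiable, the upstream features learn whatever makes LK converge, which is presumably where behaviors like edge-orientation extraction and weak-texture enhancement come from.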
Download this project

```
git clone git@github.com:linyicheng1/LET-NET2.git
cd LET-NET2
```

Build the Docker image

```
docker build . -t letnet2
```

Building the image will take some time.
We train on TartanAir, a dataset collected in simulated environments. Following the official tutorial, we used the commands below to download the data, including both the monocular images and the depth maps.

```
pip install boto3 colorama minio
python download_training.py --output-dir OUTPUTDIR --rgb --only-left --depth --unzip
```

The data is hosted on two servers located in the United States. By default it downloads from the AirLab data server; if you run into network issues, add `--cloudflare` to use the alternative source.
After downloading the monocular images and depth maps, we obtain the following file structure.
```
./office
    - Easy
        - P001
            - pose_left.txt
            - image_left
                - 000001_left.png
                - 000xxx_left.png
            - depth_left
                - 000001_left_depth.npy
                - 000xxx_left_depth.npy
        - P00x
    - Hard
        - P001
        - P00x
./seasidetown
./westerndesert
./amusement
./gascola
./ocean
./carwelding
./hospital
./abandonedfactory
./oldtown
./office2
./soulcity
./japanesealley
./abandonedfactory_night
./neighborhood
./seasonsforest
./seasonsforest_winter
./endofworld
```
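For reference, a sample from this layout could be read as follows (a minimal sketch; the index convention and how the actual dataloader in train.py works are assumptions):

```python
import os
import cv2
import numpy as np

def load_sample(seq_dir, idx):
    """Read one image/depth/pose triplet from a trajectory folder,
    e.g. seq_dir='./office/Easy/P001', idx=1 (files start at 000001)."""
    name = f"{idx:06d}_left"
    rgb = cv2.imread(os.path.join(seq_dir, "image_left", name + ".png"))
    depth = np.load(os.path.join(seq_dir, "depth_left", name + "_depth.npy"))
    # pose_left.txt stores one pose per line: tx ty tz qx qy qz qw
    # (TartanAir's NED convention)
    poses = np.loadtxt(os.path.join(seq_dir, "pose_left.txt"))
    return rgb, depth, poses[idx - 1]
```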
First, run Docker and mount the code and training dataset into the container.

```
docker run -it --gpus all -v CODE_DIR:/home/code -v DATA_DIR:/home/data -p 2222:22 --name letnet2 letnet2:latest
```

Inside Docker, run the following commands to start training.
In a new terminal:

```
docker exec -it letnet2 bash
cd /home/code/LET-NET2/
```

Run the training command:

```
python3 train.py
```
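The exact losses live in train.py, but the reason simulated data works well here is that depth and pose give exact ground-truth correspondences by reprojection. Conceptually (a hypothetical helper for illustration, not code from this repo):

```python
import numpy as np

def gt_flow(pts, depth0, K, T_0to1):
    """Ground-truth sparse flow from frame 0 to frame 1 via reprojection.
    pts: (N, 2) pixel coords in frame 0; depth0: (H, W) depth of frame 0;
    K: (3, 3) intrinsics; T_0to1: (4, 4) relative camera pose."""
    z = depth0[pts[:, 1].astype(int), pts[:, 0].astype(int)]      # depth at each point
    homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # (N, 3) homogeneous
    cam0 = (np.linalg.inv(K) @ homo.T) * z                        # back-project to 3D
    cam1 = T_0to1[:3, :3] @ cam0 + T_0to1[:3, 3:4]                # move into frame 1
    uv1 = (K @ cam1)[:2] / (K @ cam1)[2]                          # project to pixels
    return uv1.T - pts                                            # (N, 2) flow labels
```

A tracking loss can then compare the LK layer's predicted flow against these labels.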
Install dependencies

```
pip install torch torchvision opencv-python numpy
```

Run Demo
```
cd interface/python
python3 demo.py -m /home/code/weight/letnet2.pth -i /home/code/interface/assets/nyu_snippet.mp4
```
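For readers unfamiliar with the pipeline, the demo follows the usual detect-then-track pattern. The classical OpenCV version of that loop looks like this (a baseline sketch on raw grayscale frames, not the repo's demo code, which runs the tracker on LET-NET2's learned maps):

```python
import cv2

cap = cv2.VideoCapture("nyu_snippet.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Detect corners once, then track them frame to frame.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal LK on raw intensities; LET-NET2 replaces these inputs with
    # learned maps that stay stable under lighting changes and weak texture.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = nxt[status.flatten() == 1]
    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 2, (0, 255, 0), -1)
    cv2.imshow("LK tracking", frame)
    if cv2.waitKey(1) == 27:            # Esc to quit
        break
    prev_gray, pts = gray, good.reshape(-1, 1, 2)
```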
The C++ (ncnn) demo requires:

- OpenCV (https://docs.opencv.org/3.4/d7/d9f/tutorial_linux_install.html)
- ncnn (https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux)
Notes: after installing ncnn, you need to change the path in CMakeLists.txt:

```
set(ncnn_DIR "<your_path>/install/lib/cmake/ncnn" CACHE PATH "Directory that contains ncnnConfig.cmake")
```
```
mkdir build && cd build
cmake .. && make -j4
./demo ../weights/letnet_480x640.ncnn.param ../weights/letnet_480x640.ncnn.bin ../../assets/nyu_snippet.mp4
```
Because the model export and GPU inference environments can be complex to set up, we use Docker to simplify the process.
```
cd LET-NET2/interface
docker build . -t let2_interface
docker run -it --gpus all -v ${CODE_DIR}/LET-NET2:/home/code -v ${DATA_DIR}/euroc/:/home/data/ let2_interface:latest
```

Inside the container:

```
cd /home/code/interface/python/
python3.10 export.py --model /home/code/weight/letnet2.pth --height 480 --width 640
```

You can adjust the image dimensions to export models of different sizes.
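As a rough picture of what such an export script does, here is a minimal ONNX-based sketch in PyTorch. The ONNX route, the model import, the way the checkpoint is loaded, and the output names are all assumptions, not necessarily what export.py implements:

```python
import torch
from model import LETNet2  # hypothetical import; use the repo's actual model class

# Rebuild the network and load the trained weights.
net = LETNet2()
net.load_state_dict(torch.load("/home/code/weight/letnet2.pth", map_location="cpu"))
net.eval()

# The dummy input fixes the exported resolution, mirroring --height/--width.
dummy = torch.randn(1, 3, 480, 640)
torch.onnx.export(net, dummy, "letnet2_480x640.onnx",
                  input_names=["image"],
                  output_names=["output"],   # assumed name(s)
                  opset_version=16)
# The .onnx file can then be converted to a TensorRT engine, e.g. with trtexec.
```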
```
cd /home/code/interface/cpp_tensorrt/
mkdir build && cd build/
cmake .. && make -j4
./demo_trt ../weights/letnet_480x640.engine ../../assets/nyu_snippet.mp4
```
To run LET-NET2 inside VINS-Fusion:

```
cd LET-NET2/VINS
docker build . -t let_vins
docker run -it --gpus all -v ${CODE_DIR}/LET-NET2/VINS:/home/code/src -v ${DATA_DIR}/euroc/:/home/data/ --net=host --env ROS_MASTER_URI=http://localhost:11311 --env ROS_IP=$(hostname -I | awk '{print $1}') let_vins:latest
```

My sample:
```
docker run -it --gpus all -v /home/server/linyicheng/LETNET2/LET-NET2/VINS:/home/code/src -v /media/server/4cda377d-28db-4424-921c-6a1e0545ceeb/4cda377d-28db-4424-921c-6a1e0545ceeb/4cda377d-28db-4424-921c-6a1e0545ceeb/Dataset/euroc/:/home/data/ --net=host --env ROS_MASTER_URI=http://localhost:11311 --env ROS_IP=$(hostname -I | awk '{print $1}') let_vins:latest
```

Inside the container, build and run VINS:

```
cd /home/code/ && catkin_make
source devel/setup.bash
rosrun vins vins_node src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
```

Play the rosbag:
```
source /opt/ros/melodic/setup.bash
cd /home/data/
rosbag play MH_05_difficult.bag
```




