LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition

Zijie Zhou, Jingyi Xu, Guangming Xiong, Junyi Ma*

This repository is the official implementation of our paper, accepted by IEEE RA-L 2024. [IEEE Xplore] [arXiv]

Sensor fusion is an effective way to overcome the weaknesses of individual sensors. However, most existing multimodal place recognition methods use only limited field-of-view camera images, which leads to an imbalance between the features from different modalities and limits the effectiveness of sensor fusion. We therefore propose LCPR, a novel multimodal place recognition network that takes multi-view RGB images and LiDAR range images as input and extracts discriminative, yaw-rotation-invariant global descriptors for fast query-database matching.
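At inference time, place recognition with global descriptors reduces to nearest-neighbor search: every database scan is encoded once, and a query descriptor is matched against the database by similarity. A minimal sketch of that matching step (the descriptor dimension and random data below are placeholders for illustration, not this repo's actual API):

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=5):
    """Match an L2-normalized query descriptor against a database of
    descriptors via cosine similarity (dot product on unit vectors)."""
    sims = db_descs @ query_desc          # (N,) cosine similarities
    top_k = np.argsort(-sims)[:k]         # indices of the best matches
    return top_k, sims[top_k]

# Hypothetical usage: 256-D descriptors, L2-normalized.
db_descs = np.random.randn(1000, 256).astype(np.float32)
db_descs /= np.linalg.norm(db_descs, axis=1, keepdims=True)
query = db_descs[42] + 0.01 * np.random.randn(256).astype(np.float32)
query /= np.linalg.norm(query)
indices, scores = retrieve_top_k(query, db_descs)
print(indices, scores)
```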
- Ubuntu 20.04 + Python 3.8
- PyTorch 1.12.1 + CUDA 11.8
```bash
git clone https://github.com/ZhouZijie77/LCPR.git
cd LCPR
conda create -n lcpr python=3.8
conda activate lcpr
pip install -r requirements.txt
```
- Please download the official nuScenes dataset.
- Generate the info files, split indices, and range images needed to run the code:
```bash
cd tools
python gen_info.py
python gen_index.py
python gen_range.py
cd ..
```
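For reference, the range images produced by `gen_range.py` come from a spherical projection of the LiDAR point cloud. A minimal sketch of that standard projection (the image size and vertical field of view below are assumed values for nuScenes' 32-beam LiDAR, not necessarily the settings this repo uses):

```python
import numpy as np

def pointcloud_to_range_image(points, h=32, w=1056,
                              fov_up_deg=10.0, fov_down_deg=-30.0):
    """Spherically project an (N, 3) point cloud into an (h, w) range image.
    fov_up/fov_down are assumed values for a 32-beam sensor."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)            # range per point
    yaw = np.arctan2(y, x)                               # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    u = 0.5 * (1.0 - yaw / np.pi) * w                    # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * h             # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    img = np.full((h, w), -1.0, dtype=np.float32)        # -1 marks empty pixels
    order = np.argsort(-r)                               # write nearer points last
    img[v[order], u[order]] = r[order]
    return img
```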
- The final data structure should look like this:
```
nuScenes
├─ samples
│  ├─ CAM_BACK
│  ├─ CAM_BACK_LEFT
│  ├─ CAM_BACK_RIGHT
│  ├─ CAM_FRONT
│  ├─ CAM_FRONT_LEFT
│  ├─ CAM_FRONT_RIGHT
│  ├─ LIDAR_TOP
│  ├─ RANGE_DATA
├─ sweeps
│  ├─ ...
├─ maps
│  ├─ ...
├─ v1.0-test
│  ├─ attribute.json
│  ├─ calibrated_sensor.json
│  ├─ ...
├─ v1.0-trainval
│  ├─ attribute.json
│  ├─ calibrated_sensor.json
│  ├─ ...
├─ nuscenes_infos-bs.pkl
├─ nuscenes_infos-shv.pkl
├─ nuscenes_infos-son.pkl
├─ nuscenes_infos-sq.pkl
├─ bs_db.npy
├─ bs_test_query.npy
├─ bs_train_query.npy
├─ bs_val_query.npy
├─ shv_db.npy
├─ shv_query.npy
├─ son_db.npy
├─ son_query.npy
├─ sq_db.npy
├─ sq_test_query.npy
└─ sq_train_query.npy
```
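To sanity-check the generated files, the info `.pkl` files and the query/database `.npy` splits can be inspected directly (a minimal sketch; the exact internal structure of these files is defined by the scripts in `tools/`):

```python
import pickle
import numpy as np

# Load one of the generated info files (structure defined by tools/gen_info.py).
with open('nuscenes_infos-bs.pkl', 'rb') as f:
    infos = pickle.load(f)
print(type(infos), len(infos))

# Query/database split indices produced by tools/gen_index.py.
db = np.load('bs_db.npy', allow_pickle=True)
train_query = np.load('bs_train_query.npy', allow_pickle=True)
print('database size:', len(db), 'train queries:', len(train_query))
```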
First, set the file paths in `config/config.yaml`. Then run the following script to train the model:

```bash
python train.py
```
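For orientation, networks of this kind are commonly trained with a metric-learning objective such as a triplet loss over query/positive/negative descriptors. Whether `train.py` uses exactly this loss and margin is an assumption, so treat the following as a sketch of the idea rather than this repo's implementation:

```python
import torch
import torch.nn.functional as F

def triplet_loss(q, pos, neg, margin=0.3):
    """Pull each query toward its positive and push it from its negative.
    q, pos, neg: (B, D) L2-normalized global descriptors.
    margin=0.3 is an assumed value, not this repo's setting."""
    d_pos = F.pairwise_distance(q, pos)   # (B,) query-positive distances
    d_neg = F.pairwise_distance(q, neg)   # (B,) query-negative distances
    return F.relu(d_pos - d_neg + margin).mean()
```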
Set the path of the model checkpoint you want to load in `test.py`. Then run the script:

```bash
python test.py
```
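Evaluation for place recognition is typically reported as Recall@N: the fraction of queries whose top-N retrieved database entries contain at least one true positive. A minimal sketch of that metric (the ground-truth format below is a placeholder, not necessarily what `test.py` uses):

```python
import numpy as np

def recall_at_n(query_descs, db_descs, gt_positives, n=1):
    """gt_positives is a list of sets: gt_positives[i] holds the database
    indices that count as a correct match for query i (e.g., all database
    frames within some distance threshold of the query pose)."""
    sims = query_descs @ db_descs.T                  # (Q, N_db) similarities
    top_n = np.argsort(-sims, axis=1)[:, :n]         # top-n db indices per query
    hits = [bool(set(row) & gt_positives[i]) for i, row in enumerate(top_n)]
    return float(np.mean(hits))
```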
You can download our pre-trained models from this link.
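Once downloaded, a checkpoint can usually be restored in the standard PyTorch way (a sketch; the file name and checkpoint key layout here are assumptions, so check `test.py` for how this repo actually loads weights):

```python
import torch

# Hypothetical path: replace with the downloaded checkpoint file.
checkpoint = torch.load('lcpr_pretrained.pth', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)  # handle either layout
# model.load_state_dict(state_dict)  # with model built from this repo's code
print(sorted(state_dict.keys())[:5])  # peek at the parameter names
```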
If you use our code in your academic work, please cite our paper:
```bibtex
@ARTICLE{zhou2024lcpr,
  author={Zhou, Zijie and Xu, Jingyi and Xiong, Guangming and Ma, Junyi},
  journal={IEEE Robotics and Automation Letters},
  title={LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition},
  year={2024},
  volume={9},
  number={2},
  pages={1342-1349},
  doi={10.1109/LRA.2023.3346753}
}
```