- Download the S3DIS dataset and symlink it as follows (you can alternatively modify the relevant paths specified in the config folder):
mkdir -p dataset
ln -s /path_to_s3dis_dataset dataset/s3dis
Requirements:
- Hardware: 1 GPU with at least 6000 MB memory for the CUDA version, or 2 GPUs with at least 10000 MB each for the non-CUDA version.
- Software: PyTorch>=1.5.0, Python 3.7, CUDA>=10.2, tensorboardX, tqdm, h5py, pyYaml
Train:
- Specify the GPU(s) to use in the config file and then start training:
sh tool/train.sh s3dis pointnet2_paconv         # non-CUDA version
sh tool/train.sh s3dis pointnet2_paconv_cuda    # CUDA version
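If you prefer to restrict device visibility from the shell in addition to (or instead of) editing the config, the standard CUDA_VISIBLE_DEVICES mechanism can be used. This is a generic sketch with placeholder device indices, not a repository-specific option; GPU indices in the config then refer to positions within the visible set:
export CUDA_VISIBLE_DEVICES=0,1                 # placeholder indices; pick your own GPUs
sh tool/train.sh s3dis pointnet2_paconv         # non-CUDA version (2 GPUs per the requirements above)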
Test:
- Download the pretrained models and put them under the folder specified in the config, or modify the specified paths. Our CUDA-implemented PAConv achieves 66.01 mIoU (w/o voting) and the vanilla PAConv without CUDA achieves 66.33 mIoU (w/o voting) on the S3DIS Area-5 validation set.
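As a rough illustration only (the experiment directory and checkpoint filename below are assumptions, not paths documented by the repository), placing a downloaded checkpoint could look like this; adjust everything to match the model path in your config:
mkdir -p exp/s3dis/pointnet2_paconv/model                                          # hypothetical experiment folder
cp /path_to_downloaded/pointnet2_paconv_best.pth exp/s3dis/pointnet2_paconv/model/ # hypothetical filename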
- For full testing (to get the listed performance):
CUDA_VISIBLE_DEVICES=0 sh tool/test.sh s3dis pointnet2_paconv         # non-CUDA version
CUDA_VISIBLE_DEVICES=0 sh tool/test.sh s3dis pointnet2_paconv_cuda    # CUDA version
- For 6-fold validation (calculating the metrics with results from different folds merged); a sketch of the whole loop follows this list:
a. Change the test_area index in the config file to 1;
b. Finish full training and testing; the test result files of Area-1 will be saved in the corresponding paths after the test;
c. Repeat a and b, changing the test_area index to 2, 3, 4, 5, 6 respectively;
d. Collect the test result files of all areas into one directory and point the 6-fold evaluation script to that directory;
e. Run the code for 6-fold validation to get the final 6-fold results:
python tool/test_s3dis_6fold.py
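A minimal sketch of the whole 6-fold loop is shown below. The config filename and the way test_area is edited (sed on an assumed YAML key layout) are assumptions, so adapt them to your local setup or edit the config by hand:
for AREA in 1 2 3 4 5 6; do
    # Assumed config location and YAML formatting; edit the file manually if this does not match.
    sed -i "s/test_area: .*/test_area: ${AREA}/" config/s3dis/s3dis_pointnet2_paconv.yaml
    sh tool/train.sh s3dis pointnet2_paconv                         # full training with Area-${AREA} held out
    CUDA_VISIBLE_DEVICES=0 sh tool/test.sh s3dis pointnet2_paconv   # writes the Area-${AREA} result files
done
# Collect the per-area result files into one directory, point tool/test_s3dis_6fold.py
# at that directory, then compute the merged 6-fold metrics:
python tool/test_s3dis_6fold.py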
- The code for visualizing the segmentation results is also included in test_s3dis_6fold.py (called at lines #55-#57); you can run it after generating the pickle file by finishing the test.
If you find our work helpful in your research, please consider citing:
@inproceedings{xu2021paconv,
title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
booktitle={CVPR},
year={2021}
}
You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu ([email protected]) or Runyu Ding ([email protected]).
The code is partially borrowed from PointWeb.