- Hardware: a GPU with at least 6000 MB of memory. (Two GPUs, or a higher-end GPU, are better suited to running the parallel CUDA kernels.)
- Software: Linux (tested on Ubuntu 18.04); PyTorch>=1.5.0, Python>=3, CUDA>=10.1, tensorboardX, h5py, pyYaml, scikit-learn
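A typical way to install the Python dependencies with pip (a sketch only; install a CUDA 10.1-compatible PyTorch build per the official PyTorch instructions, and note that pyYaml is published on PyPI as pyyaml):
pip install "torch>=1.5.0" tensorboardX h5py pyyaml scikit-learn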
Download and unzip ModelNet40 (415 MB). Then symlink the path to it as follows (alternatively, you can modify the dataset path in the code):
mkdir -p data
ln -s /path/to/modelnet40/modelnet40_ply_hdf5_2048 data
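To sanity-check the symlink, you can list the HDF5 files and read one; the file and key names below are assumed from the standard modelnet40_ply_hdf5_2048 release:
ls data/modelnet40_ply_hdf5_2048/*.h5
python -c "import h5py; f = h5py.File('data/modelnet40_ply_hdf5_2048/ply_data_train0.h5', 'r'); print(f['data'].shape, f['label'].shape)"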
Build the CUDA kernel:
When you run the program for the first time, please wait a few moments while the cuda_lib is compiled automatically. Once the CUDA kernel is built, the program will skip this step on future runs.
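If the automatic build fails or gets stuck, a clean rebuild can usually be forced by clearing PyTorch's JIT extension cache. This assumes the kernel is compiled through torch.utils.cpp_extension with the default cache location (override via TORCH_EXTENSIONS_DIR):
rm -rf ~/.cache/torch_extensions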
Train:
Multi-thread training (nn.DataParallel):
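For example (the exact training config is not listed here; the name below is assumed to mirror the test config's naming):
python main.py --config config/dgcnn_paconv_train.yaml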
We also provide fast multi-process training (nn.parallel.DistributedDataParallel, recommended) with the official nn.SyncBatchNorm. Please remember to specify the GPU IDs:
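For example, pinning two GPUs (the training config name is again an assumption):
CUDA_VISIBLE_DEVICES=0,1 python main_ddp.py --config config/dgcnn_paconv_train.yaml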
Test:
Download our pretrained model and put it under the obj_cls folder.
Run the voting evaluation script to test our pretrained model; after voting you should get an accuracy of 93.9% if everything goes right:
python eval_voting.py --config config/dgcnn_paconv_test.yaml
You can also directly test our pretrained model without voting to get an accuracy of 93.6%:
python main.py --config config/dgcnn_paconv_test.yaml
For a full test after training the model:
Set `eval` to `True` in your config file.
Make sure to use main.py (main_ddp.py may lead to wrong results due to the sample-repeating problem of the all_reduce function in multi-process training):
python main.py --config config/your_config_file.yaml
Visualization: tensorboardX is incorporated for better visualization:
tensorboard --logdir=checkpoints/exp_name
If you find the code or trained models useful, please consider citing:
@inproceedings{xu2021paconv,
title={PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds},
author={Xu, Mutian and Ding, Runyu and Zhao, Hengshuang and Qi, Xiaojuan},
booktitle={CVPR},
year={2021}
}
You are welcome to send pull requests or share some ideas with us. Contact information: Mutian Xu ([email protected]) or Runyu Ding ([email protected]).
This code is partially borrowed from DGCNN.