This repo contains the code and configuration files for point cloud semantic segmentation.
- A Unified Query-based Paradigm for Point Cloud Understanding (paper)
Note:
All models below are trained with eight 1080 Ti GPUs, follow the EQ-Paradigm, and use a Q-Net to enable a free combination of backbones and heads. (*) denotes the improvement over the model with its original backbone network and without Q-Net.
Method | Backbone | mIoU | mAcc | allAcc | download
---|---|---|---|---|---
EQNet | SparseConvNet | 75.1 (+2.2) | 82.7 (+1.9) | 91.1 (+0.7) | model
Performance of other backbones supported in this codebase will be released soon.
- Data preparation:

  Download ScanNet v2 here and preprocess the data:

  ```shell
  cd /path/to/DeepVision3D/DVSegmentation/data/scannetv2
  python prepare_data.py --scannet_path /path/to/ScanNet --split [train/val/test]
  ```
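  Since `prepare_data.py` takes one `--split` at a time, covering all three splits means three runs. A minimal loop, written here in dry-run form (it only prints each command; `/path/to/ScanNet` is a placeholder):

  ```shell
  # Dry-run sketch: print the preprocessing command for each ScanNet split.
  # Drop the `echo` to actually run; /path/to/ScanNet is a placeholder path.
  for split in train val test; do
    echo python prepare_data.py --scannet_path /path/to/ScanNet --split "$split"
  done
  ```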
- Training:

  You can train on ScanNet v2 with the following commands:

  ```shell
  cd /path/to/DeepVision3D/DVSegmentation
  CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash train_segmentation.sh 8 --config config/eqnet_scannet.yaml
  ```
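  With fewer GPUs available, the first argument to `train_segmentation.sh` appears to match the number of visible GPUs (an assumption inferred from the 8-GPU command above; verify against the script). A 4-GPU variant, in dry-run form:

  ```shell
  # Dry-run sketch: print a 4-GPU training command, assuming the first
  # argument to train_segmentation.sh is the GPU count (inferred from the
  # 8-GPU command above; verify against the script). Drop `echo` to launch.
  echo CUDA_VISIBLE_DEVICES=0,1,2,3 bash train_segmentation.sh 4 --config config/eqnet_scannet.yaml
  ```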
- Testing:

  ```shell
  python test_segmentation.py --config config/eqnet_scannet.yaml --set NECK.QUERY_POSITION_CFG.SELECTION_FUNCTION _get_point_query_position
  ```

  To test our provided model:

  ```shell
  CHECKPOINT=/path/to/eqnet_scannet_v2-000000600.pth
  python test_segmentation.py --config config/eqnet_scannet.yaml --pretrain ${CHECKPOINT} --set NECK.QUERY_POSITION_CFG.SELECTION_FUNCTION _get_point_query_position
  ```
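  When several checkpoints accumulate during training, sorting by modification time with `ls -t` selects the most recent one. A sketch in dry-run form, assuming checkpoints live under `/path/to/checkpoints` (a placeholder, not a path defined by this repo):

  ```shell
  # Sketch: pick the newest .pth checkpoint by modification time (ls -t)
  # and print the test command; drop the `echo` to actually run it.
  # /path/to/checkpoints is a placeholder directory.
  CHECKPOINT=$(ls -t /path/to/checkpoints/*.pth 2>/dev/null | head -n 1)
  echo python test_segmentation.py --config config/eqnet_scannet.yaml --pretrain "${CHECKPOINT}" --set NECK.QUERY_POSITION_CFG.SELECTION_FUNCTION _get_point_query_position
  ```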