Commit 0425v1
blackfeather-wang committed Apr 24, 2020
1 parent b1d1022, commit 9e9a87c
Showing 2 changed files with 9 additions and 53 deletions.
Semantic segmentation on Cityscapes/README.md (8 additions, 52 deletions)
````diff
@@ -1,58 +1,14 @@
-# Pytorch-segmentation-toolbox Pytorch-1.1 [DOC](https://weiyc.github.io/assets/pdf/toolbox.pdf)
-PyTorch code for semantic segmentation. This is minimal code to run PSPNet and DeepLabv3 on the Cityscapes dataset.
-The code will shortly be reviewed and reorganized for convenience.
+# Semantic Segmentation on Cityscapes
 
-### Highlights of Our Implementation
-- Synchronous BN
-- Short Training Time
-- Better Reproduced Performance
+Our code is mainly based on
+[pytorch-segmentation-toolbox](https://github.com/speedinghzl/pytorch-segmentation-toolbox).
+Please refer to their docs.
 
-### Requirements && Install
-Python 3.7
-
-4 x 12GB GPUs (e.g. TITAN Xp)
+## Run
 
 ```bash
-# Install **Pytorch-1.1**
-$ conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
+# Train Deeplab-V3 on Cityscapes
 
-# Install **Apex**
-$ git clone https://github.com/NVIDIA/apex
-$ cd apex
-$ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
-
-# Install **Inplace-ABN**
-$ git clone https://github.com/mapillary/inplace_abn.git
-$ cd inplace_abn
-$ python setup.py install
-```
-
-### Dataset and pretrained model
-
-Please download the Cityscapes dataset and unzip it into `YOUR_CS_PATH`.
-
-Please download the MIT ImageNet-pretrained [resnet101-imagenet.pth](http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet101-imagenet.pth) and put it into the `dataset` folder.
-
-### Training and Evaluation
-```bash
-./run_local.sh YOUR_CS_PATH [pspnet|deeplabv3] 40000 769,769 0
-```
-
-### Benefits
-Some recent projects have already benefited from our implementation. For example, [CCNet: Criss-Cross Attention for Semantic Segmentation](https://github.com/speedinghzl/CCNet) and [Object Context Network (OCNet)](https://github.com/PkuRainBow/OCNet) currently achieve state-of-the-art results on Cityscapes and ADE20K. In addition, our code also made great contributions to [Context Embedding with Edge Perceiving (CE2P)](https://github.com/liutinglt/CE2P), which won 1st place in all human parsing tracks of the 2nd LIP Challenge.
-
-### Citing
-
-If you find this code useful in your research, please consider citing:
-
-    @misc{huang2018torchseg,
-      author = {Huang, Zilong and Wei, Yunchao and Wang, Xinggang and Liu, Wenyu},
-      title = {A PyTorch Semantic Segmentation Toolbox},
-      howpublished = {\url{https://github.com/speedinghzl/pytorch-segmentation-toolbox}},
-      year = {2018}
-    }
-
-### Thanks to the Third Party Libs
-[inplace_abn](https://github.com/mapillary/inplace_abn) -
-[Pytorch-Deeplab](https://github.com/speedinghzl/Pytorch-Deeplab) -
-[PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding)
+./run_local.sh YOUR_CS_PATH [deeplabv3|deeplabv3_isda] 40000 769,769 0
 ```
````
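For reference, `run_local.sh` consumes five positional arguments. The sketch below echoes the mapping; the variable names are taken from the script (`INPUT_SIZE=$4`, `OHEM=$5` appear in the hunk further down, the first three are inferred from the training command), and the values are the illustrative ones from the README:

```shell
#!/bin/sh
# Sketch of how run_local.sh consumes its positional arguments.
# YOUR_CS_PATH is a placeholder for the real dataset root.
set -- YOUR_CS_PATH deeplabv3_isda 40000 769,769 0
CS_PATH=$1      # Cityscapes root
MODEL=$2        # deeplabv3 or deeplabv3_isda
STEPS=$3        # number of training steps
INPUT_SIZE=$4   # crop size, H,W
OHEM=$5         # OHEM flag
echo "data=$CS_PATH model=$MODEL steps=$STEPS size=$INPUT_SIZE ohem=$OHEM"
```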
Semantic segmentation on Cityscapes/run_local.sh (1 addition, 1 deletion)
```diff
@@ -14,6 +14,6 @@ INPUT_SIZE=$4
 OHEM=$5
 GPU_IDS=0,1,2,3
 
-CUDA_VISIBLE_DEVICES=${GPU_IDS} python -m torch.distributed.launch --nproc_per_node=4 train_isda.py --data-dir ${CS_PATH} --model ${MODEL} --random-mirror --random-scale --restore-from ./dataset/resnet101-imagenet.pth --input-size ${INPUT_SIZE} --gpu ${GPU_IDS} --learning-rate ${LR} --weight-decay ${WD} --batch-size ${BS} --num-steps ${STEPS} --ohem ${OHEM}
+CUDA_VISIBLE_DEVICES=${GPU_IDS} python -m torch.distributed.launch --nproc_per_node=4 train_isda.py --data-dir ${CS_PATH} --model ${MODEL} --random-mirror --random-scale --restore-from ./dataset/resnet101-imagenet.pth --input-size ${INPUT_SIZE} --gpu ${GPU_IDS} --learning-rate ${LR} --weight-decay ${WD} --batch-size ${BS} --num-steps ${STEPS} --ohem ${OHEM} --lambda_0 7.5
 
 CUDA_VISIBLE_DEVICES=${GPU_IDS} python -m torch.distributed.launch --nproc_per_node=4 evaluate.py --data-dir ${CS_PATH} --model ${MODEL} --input-size ${INPUT_SIZE} --batch-size 4 --restore-from isda_results/CS_scenes_${STEPS}.pth --gpu 4
```
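The functional change here is the new `--lambda_0 7.5` flag passed to `train_isda.py`, which sets the strength of the ISDA (implicit semantic data augmentation) term. In the ISDA paper the coefficient is annealed linearly with training progress, lambda = lambda_0 * t / T; assuming `train_isda.py` follows that schedule (an assumption, not verified against the script), the effective value would evolve like this:

```shell
# Assumption: linear annealing, lambda = lambda_0 * step / num_steps,
# as in the ISDA paper; not verified against train_isda.py itself.
LAMBDA_0=7.5
NUM_STEPS=40000
for STEP in 0 20000 40000; do
  awk -v l0="$LAMBDA_0" -v s="$STEP" -v n="$NUM_STEPS" \
      'BEGIN { printf "step %5d: lambda = %.3f\n", s, l0 * s / n }'
done
```

Under this schedule the augmentation is off at the start and reaches its full strength of 7.5 only at the final step.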

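Both commands in `run_local.sh` use `torch.distributed.launch`, which spawns one worker process per GPU, so `--nproc_per_node=4` has to match the four devices listed in `CUDA_VISIBLE_DEVICES`. A small sketch of that invariant, using the GPU list from the script:

```shell
# torch.distributed.launch starts --nproc_per_node processes; each one
# binds to one of the GPUs made visible via CUDA_VISIBLE_DEVICES, so the
# two numbers must agree.
GPU_IDS=0,1,2,3
NPROC=$(printf '%s' "$GPU_IDS" | awk -F',' '{ print NF }')
echo "--nproc_per_node=$NPROC"
```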