
Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation (ECCV 2024)

Hyunwoo Yu1*, Yubin Cho1,2*, Beoungwoo Kang1*, Seunghun Moon1*, Kyeongbo Kong3*, Suk-ju Kang1†

* Equal contribution, † Correspondence

1 Sogang University, 2 LG Electronics, 3 Pusan National University

This repository contains the official PyTorch implementation of the training & evaluation code and the pretrained models for the ISR method and EDAFormer.


Installation

For installation and data preparation, please follow the guidelines in MMSegmentation v0.13.0.

Other requirements: pip install timm==0.3.2

An example environment that works: CUDA 11.1 and PyTorch 1.8.0

pip install torchvision==0.8.2
pip install timm==0.3.2
pip install mmcv-full==1.2.7
pip install opencv-python==4.5.1.48
cd EDAFormer && pip install -e . --user
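
After installing, a quick sanity check (an illustrative snippet, not part of the repo) confirms the versions and GPU visibility:

# save as check_env.py and run: python check_env.py
import torch
import torchvision
import timm
import mmcv

print("torch:", torch.__version__)              # expected 1.8.0 in the example above
print("torchvision:", torchvision.__version__)  # expected 0.8.2
print("timm:", timm.__version__)                # expected 0.3.2
print("mmcv:", mmcv.__version__)                # expected 1.2.7
print("CUDA available:", torch.cuda.is_available())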

Evaluation

Download the EDAFormer weights and place them at /path/to/checkpoint_file.

The config files are in local_configs/. To apply our ISR method, pass the --backbone_reduction_ratios and --decoder_reduction_ratios arguments; a sketch of how these ratio strings are interpreted follows the ISR examples below.

Example: Evaluate EDAFormer-T on ADE20K:

# Single-gpu testing
CUDA_VISIBLE_DEVICES=0 python ./tools/test.py local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py /path/to/checkpoint_file

# Multi-gpu testing
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/dist_test.sh local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM>

# Multi-gpu, multi-scale testing
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/dist_test.sh local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM> --aug-test

Example: Evaluate EDAFormer-T with ISR on ADE20K:

# Single-gpu testing
CUDA_VISIBLE_DEVICES=0 python ./tools/test.py local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py /path/to/checkpoint_file --backbone_reduction_ratios "2211" --decoder_reduction_ratios "222"

# Multi-gpu testing
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/dist_test.sh local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM> --backbone_reduction_ratios "2211" --decoder_reduction_ratios "222"

# Multi-gpu, multi-scale testing
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/dist_test.sh local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM> --aug-test --backbone_reduction_ratios "2211" --decoder_reduction_ratios "222"
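
For intuition, each digit in the ratio strings sets the spatial reduction for one stage ("2211" covers the four backbone stages, "222" the three decoder stages). The snippet below is a hypothetical sketch of inference spatial reduction, assuming average pooling of the attention's key-value tokens for illustration; the repo's actual implementation lives in the model code.

import torch
import torch.nn.functional as F

def reduce_kv_tokens(x, r, H, W):
    # x: (B, N, C) key-value tokens on an H x W grid; r: reduction ratio for this stage.
    if r == 1:
        return x  # ratio 1 means no reduction
    B, N, C = x.shape
    x = x.transpose(1, 2).reshape(B, C, H, W)
    x = F.avg_pool2d(x, kernel_size=r, stride=r)  # shrink the token grid by r per side
    return x.flatten(2).transpose(1, 2)           # (B, (H//r)*(W//r), C)

backbone_ratios = [int(c) for c in "2211"]  # one ratio per backbone stage
decoder_ratios = [int(c) for c in "222"]    # one ratio per decoder stage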

Training

Example: Train EDAFormer-T on ADE20K:

# Single-gpu training
CUDA_VISIBLE_DEVICES=0 python ./tools/train.py local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py 

# Multi-gpu training
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/dist_train.sh local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py <GPU_NUM>
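
Before launching a run, you can load and inspect a config with MMCV. The field names below are standard MMSegmentation conventions and serve only as an example; this repo's configs may organize things differently.

from mmcv import Config

cfg = Config.fromfile('local_configs/edaformer/tiny/edaformer.tiny.512x512.ade.160k.py')
print(cfg.model.backbone.type)   # backbone class registered for this config
print(cfg.optimizer)             # optimizer settings for the 160k schedule
print(cfg.data.samples_per_gpu)  # per-GPU batch size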

Citation

@article{yu2024embedding,
  title={Embedding-Free Transformer with Inference Spatial Reduction for Efficient Semantic Segmentation},
  author={Yu, Hyunwoo and Cho, Yubin and Kang, Beoungwoo and Moon, Seunghun and Kong, Kyeongbo and Kang, Suk-Ju},
  journal={arXiv preprint arXiv:2407.17261},
  year={2024}
}
