This repository is the official PyTorch implementation of the AAAI 2021 paper "Bag of Tricks for Long-Tailed Visual Recognition with Deep Convolutional Neural Networks", which provides practical and effective tricks for long-tailed image classification.
Trick gallery: trick_gallery.md
Trick combinations: trick_combination.md
- The tricks will be updated continually. If you have, or need, any newly proposed long-tail-related trick, please open an issue or a pull request. If you open a pull request with a new trick, make sure to attach its results in the corresponding md files.
- For any problem, such as bugs, feel free to open an issue.
Update log:

- 2020-12-26
  - Reorganize all the code, following Megvii-Nanjing/BBN.
- 2020-12-30
  - Add code for torch.nn.parallel.DistributedDataParallel, and support apex in both torch.nn.DataParallel and torch.nn.parallel.DistributedDataParallel.
- 2021-01-02
  - Add LDAMLoss (NeurIPS 2019) and a regularization method: label smoothing cross-entropy (CVPR 2016).
- 2021-01-05
  - Add SEQL (softmax equalization loss, CVPR 2020).
- 2021-01-10
  - Add CDT (class-dependent temperature, arXiv 2020) and BSCE (balanced-softmax cross-entropy, NeurIPS 2020), and support a smooth version of cost-sensitive cross-entropy (smooth CS_CE), which adds a hyper-parameter $\gamma$ to vanilla CS_CE. In smooth CS_CE, the loss weight of class $i$ is defined as $(\frac{N_{min}}{N_i})^\gamma$, where $\gamma \in [0, 1]$ and $N_i$ is the number of images in class $i$. Setting $\gamma = 0.5$ gives a square-root version of CS_CE (see the sketch after this list).
- 2021-01-11
  - Add a mixup-related method: Remix (ECCV 2020 workshop).
- 2021-01-30
  - [20%] Add the results of trick combinations.
  - Add the results of the best bag of tricks on all long-tailed datasets.
  - Add more backbones for each long-tailed benchmark to explore the influence of network capacity.
  - Add a trick family, post-processing, and corresponding experiments, such as $\tau$-normalization (ICLR 2020).
- 2021-02-19
  - Test and add the results of two-stage training in trick_gallery.md.
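To make the smooth CS_CE weighting above concrete, here is a minimal sketch (an illustration, not the repo's exact implementation; the class counts and $\gamma$ below are made-up examples):

```python
import torch

def smooth_cs_ce_weights(num_samples_per_class, gamma=0.5):
    """Per-class loss weights (N_min / N_i) ** gamma for smooth CS_CE."""
    counts = torch.as_tensor(num_samples_per_class, dtype=torch.float)
    return (counts.min() / counts) ** gamma

# Example: a 4-class long-tailed dataset; gamma=0.5 gives the square-root CS_CE.
weights = smooth_cs_ce_weights([5000, 1000, 200, 50], gamma=0.5)
criterion = torch.nn.CrossEntropyLoss(weight=weights)  # move weights to GPU with .cuda() if needed
```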
We divide the long-tail-related tricks into four families: re-weighting, re-sampling, mixup training, and two-stage training. For more details of these four trick families, see the original paper.
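As a concrete example of one family, re-sampling balances the classes seen during training. Below is a minimal sketch of class-balanced re-sampling with PyTorch's `WeightedRandomSampler` (illustrative only, not the repo's exact sampler):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy long-tailed dataset: three classes with 700, 250, and 50 samples.
labels = torch.cat([torch.full((n,), c, dtype=torch.long)
                    for c, n in enumerate([700, 250, 50])])
dataset = TensorDataset(torch.randn(len(labels), 8), labels)

# Draw each sample with probability inversely proportional to its class size,
# so all classes appear (roughly) equally often in a training epoch.
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)
```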
Trick gallery:

Tricks, corresponding results, experimental settings, and running commands are listed in trick_gallery.md.

Trick combinations:

Combinations of different tricks, corresponding results, experimental settings, and running commands are listed in trick_combination.md.

The tricks and trick combinations whose results are provided in this repo have been reorganized and tested. We are trying our best to deal with the rest, which will be updated continually.
Requirements:

```
torch >= 1.4.0
torchvision >= 0.5.0
tensorboardX >= 2.1
tensorflow >= 1.14.0  # to convert long-tailed cifar datasets from tfrecords to jpgs
Python 3
apex
```
- We provide the detailed requirements in requirements.txt. To create the same running environment as ours, run:

```bash
pip install -r requirements.txt
```

- apex must be installed as follows:
```bash
pip install -U pip
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
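A quick sanity check that apex built correctly (a plain import test, nothing repo-specific):

```bash
python -c "from apex import amp; print('apex ok')"
```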
We provide three datasets in this repo: long-tailed CIFAR (CIFAR-LT), long-tailed ImageNet (ImageNet-LT), and iNaturalist 2018 (iNat18).
The detailed information of these datasets is shown below:
| Datasets | CIFAR-10-LT-100 | CIFAR-10-LT-50 | CIFAR-100-LT-100 | CIFAR-100-LT-50 | ImageNet-LT | iNat18 |
|---|---|---|---|---|---|---|
| Training images | 12,406 | 13,996 | 10,847 | 12,608 | 115,846 | 437,513 |
| Classes | 10 | 10 | 100 | 100 | 1,000 | 8,142 |
| Max images | 5,000 | 5,000 | 500 | 500 | 1,280 | 1,000 |
| Min images | 50 | 100 | 5 | 10 | 5 | 2 |
| Imbalance factor | 100 | 50 | 100 | 50 | 256 | 500 |
- CIFAR-10-LT-100 means the long-tailed CIFAR-10 dataset with an imbalance factor of 100.
- The imbalance factor is defined as the number of training images in the largest class divided by that in the smallest one, i.e., $N_{max}/N_{min}$ (e.g., $5{,}000 / 50 = 100$ for CIFAR-10-LT-100).
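Long-tailed CIFAR is built by exponentially decaying the per-class counts from $N_{max}$ down to $N_{min}$ (following Cui et al., CVPR 2019). A minimal sketch of the resulting class sizes (the helper name is ours, for illustration):

```python
def long_tailed_counts(n_max, imbalance_factor, num_classes):
    """Per-class counts decaying exponentially from n_max to n_max / imbalance_factor."""
    return [int(n_max * (1.0 / imbalance_factor) ** (i / (num_classes - 1)))
            for i in range(num_classes)]

# CIFAR-10-LT-100: counts run from 5,000 down to 50 and sum to
# roughly the 12,406 training images listed in the table above.
print(long_tailed_counts(5000, 100, 10))
```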
The annotation of a dataset is a dict consisting of two fields: `annotations` and `num_classes`.
The field `annotations` is a list of dicts, each with the keys `image_id`, `fpath`, `im_height`, `im_width`, and `category_id`.
Here is an example:
```
{
    'annotations': [
        {
            'image_id': 1,
            'fpath': '/data/iNat18/images/train_val2018/Plantae/7477/3b60c9486db1d2ee875f11a669fbde4a.jpg',
            'im_height': 600,
            'im_width': 800,
            'category_id': 7477
        },
        ...
    ],
    'num_classes': 8142
}
```
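A small sketch of how such an annotation file can be consumed (a hypothetical helper, not code from this repo): it loads the json and derives the per-class image counts that re-weighting and re-sampling tricks rely on.

```python
import json
from collections import Counter

def load_class_counts(annotation_file):
    """Return per-class image counts from an annotation json."""
    with open(annotation_file) as f:
        anno = json.load(f)
    counts = Counter(item["category_id"] for item in anno["annotations"])
    return [counts.get(c, 0) for c in range(anno["num_classes"])]

# e.g., num_samples_per_class = load_class_counts("iNat18_train.json")
```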
There are two versions of CIFAR-LT.

- Cui et al., CVPR 2019 first proposed CIFAR-LT. They provide a download link for CIFAR-LT, as well as the code (in TensorFlow) to generate the data. You can follow the steps below to get this version of CIFAR-LT:
  - Download Cui's CIFAR-LT from GoogleDrive or Baidu Netdisk (password: 5rsq). Suppose you download the data and unzip it at path `/downloaded/data/`.
  - Run tools/convert_from_tfrecords.py; the converted CIFAR-LT and corresponding jsons will be generated at `/downloaded/converted/`:
```bash
# Convert from the original format of CIFAR-LT
python tools/convert_from_tfrecords.py --input_path /downloaded/data/ --out_path /downloaded/converted/
```
- Cao et al., ICLR 2020 followed the method of Cui et al., CVPR 2019 to generate CIFAR-LT randomly. They modify the CIFAR datasets provided by PyTorch, as this file shows.
You can use the following steps to convert from the original images of ImageNet-LT.
- Download the original ILSVRC-2012. Suppose you have downloaded and reorganized it at path `/downloaded/ImageNet/`, which should contain two sub-directories: `/downloaded/ImageNet/train` and `/downloaded/ImageNet/val`.
- Download the train/test splitting files (`ImageNet_LT_train.txt` and `ImageNet_LT_test.txt`) from GoogleDrive or Baidu Netdisk (password: cj0g). Suppose you have downloaded them at path `/downloaded/ImageNet-LT/`.
- Run tools/convert_from_ImageNet.py, and you will get two jsons: `ImageNet_LT_train.json` and `ImageNet_LT_val.json`:
```bash
# Convert from the original format of ImageNet-LT
python tools/convert_from_ImageNet.py --input_path /downloaded/ImageNet-LT/ --image_path /downloaded/ImageNet/ --output_path ./
```
You can use the following steps to convert from the original format of iNaturalist 2018.

- Download the images and annotations from iNaturalist 2018 first. Suppose you have downloaded them at path `/downloaded/iNat18/`.
- Run tools/convert_from_iNat.py, and use the generated `iNat18_train.json` and `iNat18_val.json` to train:
```bash
# Convert from the original format of iNaturalist 2018
# See tools/convert_from_iNat.py for more details of the args
python tools/convert_from_iNat.py --input_json_file /downloaded/iNat18/train2018.json --image_path /downloaded/iNat18/images --output_json_file ./iNat18_train.json
python tools/convert_from_iNat.py --input_json_file /downloaded/iNat18/val2018.json --image_path /downloaded/iNat18/images --output_json_file ./iNat18_val.json
```
In this repo:

- The results on CIFAR-LT (ResNet-32) and ImageNet-LT (ResNet-10), which need only one GPU to train, are obtained by DataParallel training with apex.
- The results on iNat18 (ResNet-50), which need more than one GPU to train, are obtained by DistributedDataParallel training with apex.
- When more than one GPU is used, DistributedDataParallel training is more efficient than DataParallel training, especially when CPU resources are limited.
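For orientation, this is the general apex mixed-precision pattern with DataParallel (a minimal sketch with a toy model, not the repo's training loop):

```python
import torch
import torch.nn as nn
from apex import amp  # NVIDIA apex, installed as described above

# Toy model and optimizer; amp.initialize patches both for mixed precision.
model = nn.Linear(32, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
model = nn.DataParallel(model)  # single-process, multi-GPU

inputs = torch.randn(8, 32).cuda()
targets = torch.randint(0, 10, (8,)).cuda()
loss = nn.functional.cross_entropy(model(inputs), targets)
with amp.scale_loss(loss, optimizer) as scaled_loss:  # loss scaling for fp16 safety
    scaled_loss.backward()
optimizer.step()
```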
Data parallel training with apex:

1. To train:

```bash
# To train long-tailed CIFAR-10 with an imbalance factor of 50.
# `GPUs` are the GPUs you want to use, such as `0,4`.
bash data_parallel_train.sh configs/test/data_parallel.yaml GPUs
```
Distributed data parallel training with apex:

1. Change the NCCL_SOCKET_IFNAME in run_with_distributed_parallel.sh to [your own socket name] (if you are unsure of the name, see the tip after these steps):

```bash
export NCCL_SOCKET_IFNAME=[your own socket name]
```
2. To train:

```bash
# To train long-tailed CIFAR-10 with an imbalance factor of 50.
# `GPUs` are the GPUs you want to use, such as `0,1,4`.
# `NUM_GPUs` is the number of GPUs you want to use. If you set `GPUs` to `0,1,4`, then `NUM_GPUs` should be `3`.
bash distributed_data_parallel_train.sh configs/test/distributed_data_parallel.yaml NUM_GPUs GPUs
```
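If you are unsure of your socket (network interface) name, you can list the interfaces first; `eth0` below is only an example value, not a repo default:

```bash
# List the network interfaces and pick the one used for communication, e.g. eth0
ip addr show
export NCCL_SOCKET_IFNAME=eth0
```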
- We use Top-1 error rates as our evaluation metric.
- Comparing the two versions of CIFAR-LT, we can see that the CIFAR-LT provided by Cao et al. yields much lower Top-1 error rates on CIFAR-10-LT than the baseline results reported in their paper. So, in our experiments, we use the CIFAR-LT of Cui et al. for fairness.
- For ImageNet-LT, we found that the color_jitter augmentation, which is adopted by other methods, was not included in our original experiments. So, in this repo, we add the color_jitter augmentation on ImageNet-LT (a sketch of a typical setting follows the table below). The old baseline without color_jitter is 64.89, which is 1.15 points higher (worse, since these are error rates) than the new baseline.
- You can click the `Baseline` in the table below to see the experimental settings and the corresponding running commands.
In the table, (Cui) and (Cao) denote the CIFAR-LT versions of Cui et al., 2019 and Cao et al., 2020, respectively.

| Datasets | CIFAR-10-LT-100 (Cui) | CIFAR-10-LT-50 (Cui) | CIFAR-100-LT-100 (Cui) | CIFAR-100-LT-50 (Cui) | CIFAR-10-LT-100 (Cao) | CIFAR-10-LT-50 (Cao) | CIFAR-100-LT-100 (Cao) | CIFAR-100-LT-50 (Cao) | ImageNet-LT | iNat18 |
|---|---|---|---|---|---|---|---|---|---|---|
| Backbones | ResNet-32 | ResNet-32 | ResNet-32 | ResNet-32 | ResNet-32 | ResNet-32 | ResNet-32 | ResNet-32 | ResNet-10 | ResNet-50 |
| Baseline | 30.12 | 24.81 | 61.76 | 57.65 | 28.05 | 23.55 | 62.27 | 56.22 | 63.74 | 40.55 |
| Reference (Cui; Cao; Liu) | 29.64 | 25.19 | 61.68 | 56.15 | 29.64 | 25.19 | 61.68 | 56.15 | 64.40 | 42.86 |
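As mentioned above, here is a minimal sketch of a color_jitter augmentation in an ImageNet-style torchvision pipeline; the jitter strengths are common illustrative values, not necessarily the exact ones used in this repo:

```python
from torchvision import transforms

# Typical ImageNet training transforms with color jitter added (illustrative values).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),  # the color_jitter step
    transforms.ToTensor(),
])
```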
```
@inproceedings{zhang2020tricks,
  author    = {Yongshun Zhang and Xiu{-}Shen Wei and Boyan Zhou and Jianxin Wu},
  title     = {Bag of Tricks for Long-Tailed Visual Recognition with Deep Convolutional Neural Networks},
  booktitle = {AAAI},
  year      = {2021},
}
```
- The pages will be added once they are available.
If you have any questions about our work, please do not hesitate to contact us via the emails provided in the paper.