
Style-Controllable-Generalized-Person-Re-identification

This is the official implementation of our ACM MM 2023 paper "Style-Controllable Generalized Person Re-identification".

Abstract

Domain generalizable person re-identification is a challenging and realistic task: a model must be trained on multiple source domains and then generalize well to unseen target domains. Existing approaches typically mix images from different domains in a mini-batch for training, but the vast style differences among domains make samples within a mini-batch easier to discriminate. As a result, the model may converge easily by mining domain-related information while neglecting identity-discriminative information, especially for metric learning. To raise the difficulty of metric learning under multi-source training, we design a Style-aware Hard-negative Sampling (SHS) strategy. SHS effectively improves metric learning but reduces the style diversity within each batch. To restore style diversity, we devise Dynamic Style Mixing (DSM), which memorizes single-domain styles and synthesizes novel ones, largely raising the diversity of the source domains. Extensive experiments demonstrate the effectiveness of our method: in both single-source and multi-source settings, our approach significantly outperforms the state-of-the-art (SOTA).
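To give an intuition for what "style" means here, the sketch below shows the general idea behind AdaIN-style feature-statistic mixing, where a feature map's per-channel mean and standard deviation act as its style and are blended with memorized statistics from another domain. This is an illustrative simplification only; the function and parameter names are ours, not the repo's API, and the actual DSM module in this codebase differs.

```python
import numpy as np

def mix_styles(feat, style_mean, style_std, alpha=0.5):
    """Re-normalize a feature map toward a blended style.

    feat: (C, H, W) feature map.
    style_mean, style_std: (C,) memorized statistics from another domain.
    alpha: how much of the instance's own style to keep.
    All names here are illustrative, not the repo's actual interface.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)           # own style: (C,1,1)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6
    # Blend the instance's own statistics with the memorized ones
    mixed_mu = alpha * mu + (1 - alpha) * style_mean.reshape(-1, 1, 1)
    mixed_sigma = alpha * sigma + (1 - alpha) * style_std.reshape(-1, 1, 1)
    # Normalize away the original style, then apply the mixed style
    return (feat - mu) / sigma * mixed_sigma + mixed_mu

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8, 8))
out = mix_styles(f, style_mean=np.ones(4), style_std=np.full(4, 2.0))
```

After mixing, each channel of `out` carries the blended mean and standard deviation, so the network sees the same content rendered in a novel style.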

Framework

T-SNE Visualization

Where to Apply DSM?

Instructions

Here are the instructions to run our code. Our code is based on TransReID; thanks for their excellent work.

1. Clone this repo

git clone https://github.com/liyuke65535/Style-Controllable-Generalized-Person-Re-identification.git

2. Prepare your environment

conda create -n screid python==3.10
conda activate screid
bash enviroments.sh

3. Prepare pretrained model (ViT-B) and datasets

You can download it from huggingface, rwightman, or elsewhere. For example, the pretrained model is available at ViT-B.

As for datasets, follow the instructions in MetaBIN.

4. Modify the config file

# modify the model path and dataset paths of the config file
vim ./config/SHS_DSM_vit_b.yml
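
The fields you need to edit look roughly like the sketch below. The exact key names are defined by the repo's TransReID-style config system, so treat these as placeholders and check the file itself for the real ones.

```yaml
# Hypothetical sketch of the fields to edit in the config file.
MODEL:
  PRETRAIN_PATH: "/path/to/vit_base.pth"   # downloaded ViT-B weights
DATASETS:
  ROOT_DIR: "/path/to/datasets"            # datasets prepared as in MetaBIN
OUTPUT_DIR: "./logs/shs_dsm"               # where checkpoints/logs are written
```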

5. Train a model

bash run.sh

6. Evaluation only

# modify the trained path in config
vim ./config/SHS_DSM_vit.yml

# evaluation
python test.py --config ./config/SHS_DSM_vit.yml

Citation

@inproceedings{Li2023StyleControllableGP,
  title={Style-Controllable Generalized Person Re-identification},
  author={Yuke Li and Jingkuan Song and Hao Ni and Heng Tao Shen},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  year={2023},
  url={https://api.semanticscholar.org/CorpusID:264492134}
}
