- Python 3.7
- PyTorch 1.9.0
Original datasets
- All 4 datasets are the same as in previous works (e.g., DeepEMD, renet) and can be downloaded from their links: miniImageNet, tieredImageNet, CIFAR-FS, CUB-FS.
- Download and extract them into a folder of your choice, say `/data/FSLDatasets/LPE_dataset`, then remember to set `args.data_dir` to this folder when running the code later.
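After extraction, the data folder should look roughly like the sketch below. Only the `miniimagenet` subfolder name is confirmed by the semantic-embedding example further down; the other subfolder names are assumptions, so check them against the actual archives:

```
/data/FSLDatasets/LPE_dataset/
├── miniimagenet/
├── tieredimagenet/    # assumed name
├── cifar-fs/          # assumed name
└── cub/               # assumed name
```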
Semantic embeddings
- Additional semantic embeddings of these 4 datasets leveraged by our method can be downloaded here.
- Download and put them in the corresponding dataset folder (e.g., put `miniimagenet/wnid2CLIPemb_zscore.npy` at `/data/FSLDatasets/LPE_dataset/miniimagenet/wnid2CLIPemb_zscore.npy`), then remember to set `args.semantic_path` to the location of this file and `args.sem_dim` accordingly when running the code later.
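Judging by its filename, `wnid2CLIPemb_zscore.npy` presumably stores a dictionary mapping WordNet IDs to z-scored CLIP embeddings; the exact keys and embedding dimension here are assumptions, not guaranteed by the repo. A quick way to inspect the file and read off the value to pass as `args.sem_dim`:

```python
import numpy as np

# Hypothetical stand-in for the real file: a small dict in the presumed
# format (WordNet ID -> z-scored CLIP embedding vector). In practice,
# point the path at your downloaded wnid2CLIPemb_zscore.npy instead.
emb = {"n01532829": np.random.randn(512).astype(np.float32)}
np.save("wnid2CLIPemb_zscore.npy", emb)

# A .npy file holding a Python dict must be loaded with allow_pickle=True,
# and .item() unwraps the 0-d object array back into the dict.
loaded = np.load("wnid2CLIPemb_zscore.npy", allow_pickle=True).item()

# The length of any embedding vector is the value to use for args.sem_dim.
sem_dim = next(iter(loaded.values())).shape[0]
print(sem_dim)
```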
Our training and testing scripts are all at `scripts/train.sh`, and the corresponding output logs can be found in that folder too.
The 1-shot and 5-shot classification results can be found in the corresponding output logs.
If you find our paper or codes useful, please consider citing our paper:
@InProceedings{Yang_2023_WACV,
author = {Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
title = {Semantic Guided Latent Parts Embedding for Few-Shot Learning},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2023},
pages = {5447-5457}
}
Our code is based on renet and DeepEMD, and we really appreciate their work.
If you have any questions, feel free to contact me. My email is