
RandAugment usage #8

Open
rccchoudhury opened this issue May 5, 2024 · 1 comment

@rccchoudhury

I noticed that RandAugment is only used in the Kinetics dataloader when fast_rrc is off. Does this mean that RandAugment was not used for pre-training or fine-tuning? I also noticed that even if RandAugment is moved to the GPU along with the other transforms, data loading is quite a bit slower. Have you seen this issue before?
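For readers unfamiliar with the dataloader, the gating described above can be sketched like this. The transform names and the `build_transforms` helper are stand-ins for illustration, not the repo's actual API:

```python
# Sketch of the gating described in the question: RandAugment only joins
# the transform pipeline when fast_rrc (fast random-resized-crop) is off.
# The names below are illustrative placeholders, not the repo's real API.

def build_transforms(fast_rrc: bool) -> list[str]:
    pipeline = ["random_resized_crop", "horizontal_flip", "normalize"]
    if not fast_rrc:
        # Heavier augmentation lives only in the slow (non-fast_rrc) path.
        pipeline.insert(1, "rand_augment")
    return pipeline

# fast-loading mode: no RandAugment
print(build_transforms(True))
# regular mode: RandAugment included
print(build_transforms(False))
```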

@zhaoyue-zephyrus
Owner

zhaoyue-zephyrus commented May 5, 2024

Hi @rccchoudhury ,

Great point. Yes, RandAugment is not supported in the fast-loading mode, and yes, it is not used for pre-training. For more detail, please refer to Table 9 of the supplementary material of VideoMAE. My hunch is that 80-90% masking already serves as a sufficiently strong augmentation, so RandAugment isn't needed in the pre-training phase.

For fine-tuning, on the other hand, RandAugment is used. It does take more time than regular augmentations, whether it runs on the CPU or the GPU. However, fine-tuning usually takes far fewer epochs (50 to 100) than pre-training (800 to 1600), so keeping RandAugment in the fine-tuning stage barely affects the overall speedup.
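The amortization argument above can be made concrete with a back-of-the-envelope calculation. The per-epoch times and the 30% RandAugment overhead below are hypothetical numbers chosen for illustration; only the epoch counts come from the comment:

```python
# Back-of-the-envelope check: RandAugment overhead during fine-tuning,
# amortized over the full pre-train + fine-tune schedule.
# Per-epoch cost and overhead factor are hypothetical placeholders.

PRETRAIN_EPOCHS = 800       # lower end of the 800-1600 range
FINETUNE_EPOCHS = 100       # upper end of the 50-100 range

base_epoch_time = 1.0       # arbitrary time unit per epoch, no RandAugment
randaug_overhead = 0.30     # assume RandAugment adds 30% per epoch

total_without = (PRETRAIN_EPOCHS + FINETUNE_EPOCHS) * base_epoch_time
total_with = (PRETRAIN_EPOCHS * base_epoch_time
              + FINETUNE_EPOCHS * base_epoch_time * (1 + randaug_overhead))

slowdown = total_with / total_without - 1
print(f"overall slowdown: {slowdown:.1%}")  # ~3% for these assumed numbers
```

Even a fairly large per-epoch overhead in fine-tuning dissolves into a few percent of total training time, which is the point being made.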

Hope this clarification helps.

Best,
Yue
