
Why channel drop? #20

Closed
apple2373 opened this issue Apr 25, 2019 · 1 comment
@apple2373

I was checking another repository (before checking this one) and found a strange channel-drop trick.
huggingface/pytorch-pretrained-BigGAN#9

I can see you also use it here:

# Drop channels in x if necessary
if self.in_channels != self.out_channels:
    x = x[:, :self.out_channels]

Could you explain why you do this? It seems strange to train with more channels than necessary and then drop them at inference time. Does this trick somehow help training?

@ajbrock
Owner

ajbrock commented Apr 25, 2019

The channel drops in G's blocks are part of the BigGAN-deep architecture, as described in the paper. The channel drop at the output layer you see in Thom's TFHub port (from 128->3) is an implementation detail for taking advantage of TPU accelerators.
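For reference, a minimal, hypothetical sketch (plain PyTorch, not this repository's actual GBlock) of how such a channel-dropping skip connection fits into a residual block that narrows from in_channels to out_channels:

import torch
import torch.nn as nn

class GBlockSketch(nn.Module):
    # Hypothetical, simplified residual block illustrating the channel-drop
    # skip connection discussed in this issue; not the real GBlock.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.activation = nn.ReLU()

    def forward(self, x):
        # Residual path: convolutions map in_channels -> out_channels.
        h = self.conv2(self.activation(self.conv1(self.activation(x))))
        # Skip path: drop channels in x if necessary so its width matches
        # the residual path, with no extra learned 1x1 projection.
        if self.in_channels != self.out_channels:
            x = x[:, :self.out_channels]
        return x + h

block = GBlockSketch(16, 8)
out = block(torch.randn(2, 16, 32, 32))
print(out.shape)  # torch.Size([2, 8, 32, 32])

The slice simply keeps the first out_channels feature maps on the skip path, so the block can change width without adding parameters to the shortcut.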

ajbrock closed this as completed Apr 25, 2019