Dense layer without activation? #8

In ChromaGAN/SOURCE/ChromaGAN.py, Line 209 in 30e975b, there is no activation parameter designated in the Dense layer. In Keras 2.3.1, the default activation is linear (i.e. no activation).
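For reference, a minimal sketch of that default behavior (assuming standalone Keras 2.3.1 as cited above; the layer width is illustrative, not taken from ChromaGAN):

```python
from keras.layers import Dense

# Dense's `activation` argument defaults to None, which Keras
# resolves to the linear (identity) activation -- i.e. raw logits.
fc_linear = Dense(4096)                     # no non-linearity applied
fc_relu = Dense(4096, activation="relu")    # explicit non-linearity

print(fc_linear.activation.__name__)  # "linear"
print(fc_relu.activation.__name__)    # "relu"
```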
Just want to send the class semantics to the colorization head (a.k.a. the decoder), so logits alone are enough, and probably more stable than a normalized (softmax) result.
But it makes no sense to have multiple fully-connected hidden layers without activation: the composition is equivalent to (and no more expressive than) a single fully-connected layer. See the sketch below.
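A quick numerical check of that equivalence (NumPy only; the shapes are arbitrary): two stacked dense layers with no activation between them can always be rewritten as one dense layer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)  # input vector

# Two "dense" layers with no activation between them.
W1, b1 = rng.standard_normal((128, 64)), rng.standard_normal(128)
W2, b2 = rng.standard_normal((32, 128)), rng.standard_normal(32)
two_layers = W2 @ (W1 @ x + b1) + b2

# A single equivalent dense layer: W = W2 W1, b = W2 b1 + b2.
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

assert np.allclose(two_layers, one_layer)
```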
@owen8877 Yes, you are right. Taking another look at the code, even the classification head's dense layers don't have an activation, whereas the original VGG has ReLU on both 4096-unit dense layers.
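For comparison, a sketch of the vanilla VGG16 classifier top in Keras (the function name and default class count are illustrative): both 4096-unit FC layers use ReLU, and only the final classifier is softmax.

```python
from keras.layers import Dense, Flatten

def vgg16_top(features, num_classes=1000):
    # Vanilla VGG16 head: ReLU on both 4096-unit FC layers,
    # softmax only on the final classification layer.
    x = Flatten()(features)
    x = Dense(4096, activation="relu")(x)
    x = Dense(4096, activation="relu")(x)
    return Dense(num_classes, activation="softmax")(x)
```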
@motionlife That's true. I hope they fix the mistake; it would probably yield better performance.
@owen8877 Or just use one dense layer to make the model smaller.
@motionlife Well, we might as well stick to the vanilla VGG16 design, since there is a classification loss against the pre-trained VGG16 model (see Line 149 in 30e975b). I suspect there might be a performance regression if we cut the FC layers thin.
@motionlife Yes, understood. I mean that if there is no performance boost when adding the activation back then, as you said, why not just use one dense layer.