Confused about the ENT and AdENT #8
SSDA_MME/main.py, lines 195 to 204 (commit 81c3a9c)
SSDA_MME/utils/loss.py, lines 28 to 41 (commit 81c3a9c)
Thank you for your code.
From your code, it seems that:
the ENT method tries to minimize entropy with respect to the classifier but maximize it with respect to the feature extractor;
the AdENT method tries to maximize entropy with respect to the classifier but minimize it with respect to the feature extractor, which is the method proposed in your paper.
However, in your paper the ENT method seems to be described as minimizing entropy with respect to both the classifier and the feature extractor, as in Yves Grandvalet and Yoshua Bengio, "Semi-supervised learning by entropy minimization," NIPS 2005.
So I'm very confused about this, and I'm looking forward to hearing from you.
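For concreteness, here is a minimal PyTorch sketch (with assumed, simplified names; not the repo's exact code) of the mechanism in question: the sign of the entropy term decides what the classifier optimizes, while a gradient-reversal step between the feature extractor and the classifier flips that sign for the feature extractor. Which combination corresponds to ENT and which to AdENT depends on the exact signs and eta values used in main.py and utils/loss.py.

```python
# Minimal sketch (assumed names, not the repo's exact code) of how a sign flip
# plus gradient reversal makes the classifier and feature extractor push the
# entropy in the same or in opposite directions.
import torch
import torch.nn.functional as F
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -eta backward."""

    @staticmethod
    def forward(ctx, x, eta):
        ctx.eta = eta
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.eta * grad_output, None


def mean_entropy(logits):
    """Average Shannon entropy H(p) of the softmax predictions."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-5)).sum(dim=1).mean()


# Toy stand-ins for the feature extractor G and the classifier F1.
G, F1 = torch.nn.Linear(16, 8), torch.nn.Linear(8, 4)
x = torch.randn(32, 16)
lamda = 0.1

# Reversal active (eta > 0): minimizing loss = -lamda * H makes the classifier
# maximize entropy, while the reversed gradient makes the feature extractor
# minimize it, i.e. the adversarial behaviour described above.
feat = GradReverse.apply(G(x), 1.0)
loss_adversarial = -lamda * mean_entropy(F1(feat))
loss_adversarial.backward()

# Sign of eta flipped: the reversal is cancelled, so minimizing
# loss = lamda * H drives both networks toward lower entropy
# (plain entropy minimization).
feat = GradReverse.apply(G(x), -1.0)
loss_min_entropy = lamda * mean_entropy(F1(feat))
loss_min_entropy.backward()
```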