
Improve accuracy for emotion recognition module #34

Closed
andro-galexy opened this issue Oct 3, 2017 · 4 comments


@andro-galexy

The work done in this repo is outstanding, and the results are pretty good as well. Its accuracy is higher than what you mention in the README; I'd guess it's around 78% or more. I'm really interested in how I can contribute to improving the overall accuracy of the project. Would changing the dataset help the most, or tuning the CNN parameters? Or could we use Caffe directly instead of TensorFlow as the backend? Let me know, it would be fun to contribute to this repo.

Thanks & Regards,
Swap

@oarriaga
Owner

oarriaga commented Oct 4, 2017

Hello @swapgit I am happy to hear that you like the project. In order to improve the emotion classification module we could try several things:

  • Pre-train on, or train along with, another emotion dataset (I have tried pre-training with the KDEF dataset, but it didn't show any perceivable increase in accuracy).
  • The labels are not uniformly distributed; consequently, we could try to re-train on the existing dataset using a weighted loss.
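The weighted-loss idea above can be approximated in Keras by passing per-class weights to `model.fit`. A minimal sketch of computing inverse-frequency class weights with NumPy (the label array below is a made-up example, not from the FER dataset):

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Return a class_weight dict where rarer classes get larger weights."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for empty classes
    # Inverse frequency, normalized so the mean weight is roughly 1.
    weights = counts.sum() / (num_classes * counts)
    return {i: w for i, w in enumerate(weights)}

# Example: an imbalanced 3-class label vector.
labels = np.array([0, 0, 0, 0, 1, 1, 2])
class_weight = inverse_frequency_weights(labels, num_classes=3)
# The dict could then be passed to Keras as:
# model.fit(x, y, class_weight=class_weight, ...)
```

The rarest class (2) ends up with the largest weight, so its errors contribute more to the loss during training.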

@csbotos

csbotos commented Oct 11, 2017

Thank you! this repo is a miracle :)

[EDITED] Using the initial settings, the preloaded weights yield lower performance; however, the model can then be trained further to reach the reported 66%.

How about using balanced training batches (so each batch contains the same number of samples from each class)? I know it would represent a different class distribution, where the deviation within the small classes would be narrow; for me it worked better than class weights.
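The balanced-batch sampling described above could be sketched as a simple NumPy generator that draws the same number of indices per class each step (function and variable names here are illustrative, not from the repo):

```python
import numpy as np

def balanced_batches(labels, per_class, rng=None):
    """Yield index batches with `per_class` samples from every class.

    Classes with fewer than `per_class` samples are drawn with
    replacement, so small classes are effectively oversampled.
    """
    rng = rng or np.random.default_rng(0)
    by_class = {c: np.flatnonzero(labels == c) for c in np.unique(labels)}
    while True:
        batch = np.concatenate([
            rng.choice(idx, size=per_class, replace=len(idx) < per_class)
            for idx in by_class.values()
        ])
        rng.shuffle(batch)  # mix classes within the batch
        yield batch

# Example: 3 classes with very different sizes.
labels = np.array([0] * 50 + [1] * 10 + [2] * 5)
gen = balanced_batches(labels, per_class=4)
batch = next(gen)  # 12 indices, exactly 4 from each class
```

Such a generator could be plugged into `model.fit_generator` by indexing the data and label arrays with each yielded batch.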

@oarriaga
Owner

Hello @csbotos I am happy to hear you like the project :). Yes, we can also try to have balanced batches. The drop in accuracy could be related to not using the correct optimizer weights. I encountered an issue in keras in which the optimizer weights were not compatible between keras versions; therefore, I either deleted them entirely from the hdf5 files or I set the compile flag to False when loading the models.
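The two workarounds mentioned above can be sketched as follows. This is a generic illustration, assuming Keras-style HDF5 model files where optimizer state lives under an `optimizer_weights` group; the file name and compile arguments are hypothetical:

```python
import h5py

# Workaround 1: strip the optimizer state from the saved model file,
# so version-incompatible optimizer weights are never loaded at all.
def strip_optimizer_weights(path):
    with h5py.File(path, "r+") as f:
        if "optimizer_weights" in f:
            del f["optimizer_weights"]

# Workaround 2: skip restoring the training configuration entirely
# and re-compile with a fresh optimizer after loading:
#
#   from keras.models import load_model
#   model = load_model("emotion_model.hdf5", compile=False)
#   model.compile(optimizer="adam",
#                 loss="categorical_crossentropy",
#                 metrics=["accuracy"])
```

Note that either workaround discards the saved optimizer state, so resumed training starts with a fresh optimizer rather than continuing exactly where it left off.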

@csbotos

csbotos commented Oct 11, 2017

Yeah, I just discovered that the learning rate could be too aggressive for the pretrained network; now the algorithm topped out at 66% again.

@oarriaga oarriaga closed this as completed Nov 2, 2017