A deep learning and computer vision project that recognizes human emotional states from facial expressions.
- Implemented using the PyTorch framework
- Used the EfficientNet-B0 CNN architecture as the model backbone (a minimal setup and training sketch follows this list)
- Used CUDA to cut the training time per epoch from around 10 minutes to under 1 minute
- Achieved approximately 65% accuracy on the test set
- Used OpenCV for reading and pre-processing images (see the pre-processing sketch below)
- Randomly augmented the training images with flips and rotations to increase test accuracy
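
A minimal sketch of how the EfficientNet-B0 backbone and the CUDA training loop could be wired together in PyTorch. The number of emotion classes (7), the use of ImageNet-pretrained weights, and the Adam optimizer with a 1e-3 learning rate are illustrative assumptions, not the project's exact settings.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # assumption: seven basic emotion categories

# Load EfficientNet-B0 (ImageNet-pretrained weights assumed) and resize the
# classifier head for the emotion classes.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

# Move the model to the GPU when CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer and learning rate

def train_one_epoch(loader):
    """Run one training epoch; `loader` is a DataLoader yielding (image, label) batches."""
    model.train()
    for images, labels in loader:
        # Moving each batch onto the GPU is what drives the per-epoch speed-up.
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```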
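
A minimal sketch of the OpenCV loading step and the random flip/rotation augmentation. The 224×224 input size and the ±15° rotation range are illustrative assumptions, not the project's exact pre-processing parameters.

```python
import random

import cv2
import numpy as np
import torch

def load_and_augment(path, size=224, train=True):
    """Read an image with OpenCV, resize it, optionally flip/rotate it, and return a CHW float tensor."""
    img = cv2.imread(path)                       # BGR uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # convert to RGB
    img = cv2.resize(img, (size, size))
    if train:
        if random.random() < 0.5:
            img = cv2.flip(img, 1)               # random horizontal flip
        angle = random.uniform(-15, 15)          # small random rotation (assumed range)
        M = cv2.getRotationMatrix2D((size / 2, size / 2), angle, 1.0)
        img = cv2.warpAffine(img, M, (size, size))
    img = img.astype(np.float32) / 255.0         # scale pixel values to [0, 1]
    return torch.from_numpy(img).permute(2, 0, 1)  # HWC -> CHW for PyTorch
```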