
This project fine-tunes pretrained deep learning architectures for our image-classification task to improve model accuracy.




Finetuning_image_classification

This project is an extension of the previous image-classification project, which achieved an accuracy of 80.0%. To improve that accuracy, we fine-tune pretrained deep learning models for our task.

Fine-tuning is the process of taking a network that has already been trained for a given task and making it perform a second, similar task. Assuming the original task is similar to the new one, using a network that has already been designed and trained lets us take advantage of the feature extraction that happens in the front layers of the network without developing that feature-extraction network from scratch.

Fine-tuning:

1. The output layer, originally trained to recognize (in the case of ImageNet models) 1,000 classes, is replaced with a layer that recognizes the number of classes you require.

2. The new output layer attached to the model is then trained, using SGD, to take the lower-level features from the front of the network and map them to the desired output classes.

3. Once this has been done, other late layers in the model can be set to trainable=True so that in further SGD epochs their weights can be fine-tuned for the new task too (a minimal sketch of these steps follows the list).
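
The three steps map onto Keras roughly as follows. This is a minimal sketch rather than the uploaded implementation; the class count, dense-layer size, learning rates, and number of unfrozen layers are illustrative assumptions.

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.optimizers import SGD

num_classes = 5  # assumption: set to the number of classes in your dataset

# Step 1: load the pretrained base without its 1,000-class ImageNet head
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the feature-extraction layers for now

# Attach a new output layer sized for the required number of classes
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
outputs = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base.input, outputs=outputs)

# Step 2: train only the new head with SGD
model.compile(optimizer=SGD(0.001, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit_generator(train_generator, epochs=10, validation_data=validation_generator)

# Step 3: unfreeze a few late layers and continue training with a lower learning rate
for layer in base.layers[-4:]:
    layer.trainable = True
model.compile(optimizer=SGD(0.0001, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit_generator(train_generator, epochs=10, validation_data=validation_generator)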

In this project, I've performed fine-tuning using several deep learning architectures: VGG16, ResNet50, InceptionV3, and Xception. I've used the same hyperparameters and dataset for all architectures in order to compare their accuracy.

I've uploaded only the VGG16 implementation because the other implementations follow the exact same blueprint with the same hyperparameters; only the base model differs.

To use the ResNet50 model,

from keras.applications.resnet50 import ResNet50

resnet = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

ResNet50 takes a 224x224 input, hence declare img_height and img_width as 224.

To use the InceptionV3 model,

from keras.applications.inception_v3 import InceptionV3

inception = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))

InceptionV3 takes a 299x299 input, hence declare img_height and img_width as 299.

To use the Xception model,

from keras.applications.xception import Xception

xception = Xception(include_top=False, weights='imagenet', input_shape=(299, 299, 3))

Xception takes a 299x299 input, hence declare img_height and img_width as 299.
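
Whichever base model is used, the data generators have to be built with the matching input size. A minimal sketch of how img_height and img_width feed into the generators; the directory path, batch size, and rescaling below are illustrative assumptions, not the project's actual values.

from keras.preprocessing.image import ImageDataGenerator

img_height, img_width = 299, 299  # 224 for VGG16/ResNet50, 299 for InceptionV3/Xception

train_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    'data/train',                          # assumption: path to your training images
    target_size=(img_height, img_width),   # resize images to the base model's input size
    batch_size=32,
    class_mode='categorical')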

After Fine-Tuning

After training, the accuracy plots of VGG16 are satisfying, but the InceptionV3 and Xception plots show a lot of fluctuation in validation accuracy. There is a large gap between the train and validation accuracy of ResNet50, which indicates overfitting and a reduction in accuracy.

The loss plots of ResNet50 look satisfying, with some variation between validation and train accuracy. In the other models, the validation accuracy sometimes exceeds the train accuracy, which may also point to overfitting and a reduction in accuracy.
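
The plots referenced above can be reproduced from the History object that Keras returns from training. A minimal sketch; note the accuracy key is 'acc' in older Keras releases and 'accuracy' in newer ones, so it is looked up at runtime.

import matplotlib.pyplot as plt

def plot_history(history):
    # history is the object returned by model.fit / model.fit_generator
    acc_key = 'acc' if 'acc' in history.history else 'accuracy'
    plt.figure(figsize=(10, 4))

    plt.subplot(1, 2, 1)
    plt.plot(history.history[acc_key], label='train accuracy')
    plt.plot(history.history['val_' + acc_key], label='validation accuracy')
    plt.xlabel('epoch')
    plt.legend()

    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'], label='train loss')
    plt.plot(history.history['val_loss'], label='validation loss')
    plt.xlabel('epoch')
    plt.legend()

    plt.show()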

To evaluate the model's performance, analyse the confusion matrix.
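
One way to compute the confusion matrix, sketched under the assumption that scikit-learn is installed and that the validation generator was created with shuffle=False (predict_generator is the older Keras API; newer versions use model.predict). The helper name below is hypothetical.

import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

def print_confusion_matrix(model, generator):
    # generator must use shuffle=False so predictions line up with generator.classes
    generator.reset()
    probabilities = model.predict_generator(generator, steps=len(generator))
    y_pred = np.argmax(probabilities, axis=1)
    y_true = generator.classes
    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=list(generator.class_indices)))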

VGG16 accuracy: 90.0%
ResNet50 accuracy: 80.0%
InceptionV3 accuracy: 40.0%
Xception accuracy: 40.0%

Of all the pretrained networks, VGG16 performed best and boosted the accuracy of the model to 90%. The other pretrained networks did not perform well because of overfitting or inefficient hyperparameters; their performance can be improved further by building more powerful image classification models.
