Transfer Learning using InceptionV3 Keras application for CIFAR-10 Photo Classification

Ahlemkaabi
3 min read · Mar 4, 2022

Building models from scratch for tasks that have already been solved won't save you time or resources. Transfer learning is a much better way!

Introduction

Transfer Learning is a machine learning technique in which we build a model on top of previous knowledge: we reuse already trained models, whose knowledge is stored in their weights and biases.

Source: A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning

In this blog, I will share how I implemented this technique to train a convolutional neural network model that classifies the CIFAR-10 dataset with 87% accuracy.

Dataset: CIFAR-10

The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images:
* five training batches (10000 images each)
* one test batch (10000 images)

[Image: sample images from the CIFAR-10 dataset]
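Before building the model, a quick sanity check of the data never hurts. Here is a minimal sketch of loading the dataset with Keras and verifying its shapes:

from tensorflow.keras.datasets import cifar10

# Load CIFAR-10; Keras downloads and caches it on first use.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3)
print(x_test.shape)   # (10000, 32, 32, 3)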

InceptionV3 Keras Application

Keras Applications are deep learning models that are made available alongside pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.

My final model was constructed using the InceptionV3 model.
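For illustration, here is a minimal sketch of pulling InceptionV3 out of Keras Applications; weights='imagenet' downloads the pre-trained ImageNet weights on first use:

from tensorflow.keras.applications import InceptionV3

# The full ImageNet classifier, with its original 1000-class head.
model = InceptionV3(weights='imagenet')
model.summary()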

Code implementation

Transfer learning
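(My actual notebook is linked at the end of this post. The sketch below is a reconstruction of the setup, so the head architecture, the 128x128 resize, and the batch size of 300, inferred from the 167 steps per epoch in the logs, are assumptions rather than the exact code.)

import tensorflow as tf
from tensorflow import keras as K

def preprocess_data(X, Y):
    # Scale pixels the way InceptionV3 expects and one-hot encode labels.
    X_p = K.applications.inception_v3.preprocess_input(X)
    Y_p = K.utils.to_categorical(Y, 10)
    return X_p, Y_p

(x_train, y_train), (x_test, y_test) = K.datasets.cifar10.load_data()
x_train, y_train = preprocess_data(x_train, y_train)
x_test, y_test = preprocess_data(x_test, y_test)

inputs = K.Input(shape=(32, 32, 3))
# Upscale the 32x32 CIFAR images: InceptionV3 needs at least 75x75.
resized = K.layers.Lambda(
    lambda img: tf.image.resize(img, (128, 128)))(inputs)

# Load the pre-trained base without its ImageNet classifier head.
base_inception = K.applications.InceptionV3(weights='imagenet',
                                            include_top=False,
                                            input_shape=(128, 128, 3))
base_inception.trainable = False  # freeze the pre-trained layers

x = base_inception(resized, training=False)
x = K.layers.GlobalAveragePooling2D()(x)
x = K.layers.Dense(256, activation='relu')(x)   # assumed head size
x = K.layers.Dropout(0.3)(x)                    # assumed dropout rate
outputs = K.layers.Dense(10, activation='softmax')(x)
model = K.Model(inputs, outputs)

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=300, epochs=4,
          validation_data=(x_test, y_test))
model.evaluate(x_test, y_test)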

# fitting results
Epoch 1/4
167/167 [==============================] - 470s 3s/step - loss: 0.8206 - accuracy: 0.7439 - val_loss: 0.4804 - val_accuracy: 0.8421
Epoch 2/4
167/167 [==============================] - 431s 3s/step - loss: 0.4708 - accuracy: 0.8439 - val_loss: 0.4306 - val_accuracy: 0.8550
Epoch 3/4
167/167 [==============================] - 431s 3s/step - loss: 0.4224 - accuracy: 0.8590 - val_loss: 0.4088 - val_accuracy: 0.8622
Epoch 4/4
167/167 [==============================] - 431s 3s/step - loss: 0.3918 - accuracy: 0.8695 - val_loss: 0.3903 - val_accuracy: 0.8687
# evaluating results
313/313 [==============================] - 90s 277ms/step - loss: 0.3903 - accuracy: 0.8687

The accuracy of the resulting model, 86%, is not bad, but it can be improved.

We will use the fine-tuning technique:
After training the model this far, we will unfreeze some layers of the base_inception model (our pre-trained model from Keras Applications). Then we will jointly train both those layers and the part we added on top of base_inception.

As a starting point, I chose to unfreeze the layers from the 164th layer onward (starting exactly at the mixed5 layer of the Inception model!), then recompile the model and see what we get, hopefully without hitting the over-fitting point!
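Continuing from the sketch above, the fine-tuning step could look like this (unfreezing from layer 164, i.e. mixed5, follows the text; the Adam learning rate of 1e-5 is an assumption):

# Unfreeze the base model, then re-freeze everything before mixed5.
base_inception.trainable = True
for layer in base_inception.layers[:164]:
    layer.trainable = False

# Recompile with a low learning rate (assumed value) so the unfrozen
# weights change slowly and the pre-trained features are not destroyed.
model.compile(optimizer=K.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=300, epochs=4,
          validation_data=(x_test, y_test))
model.evaluate(x_test, y_test)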

Results

Epoch 1/4
167/167 [==============================] - 437s 3s/step - loss: 0.3574 - accuracy: 0.8798 - val_loss: 0.3781 - val_accuracy: 0.8711
Epoch 2/4
167/167 [==============================] - 430s 3s/step - loss: 0.3544 - accuracy: 0.8811 - val_loss: 0.3768 - val_accuracy: 0.8732
Epoch 3/4
167/167 [==============================] - 430s 3s/step - loss: 0.3521 - accuracy: 0.8807 - val_loss: 0.3762 - val_accuracy: 0.8730
Epoch 4/4
167/167 [==============================] - 430s 3s/step - loss: 0.3491 - accuracy: 0.8821 - val_loss: 0.3757 - val_accuracy: 0.8724
# evaluating results
313/313 [==============================] - 84s 268ms/step - loss: 0.3757 - accuracy: 0.8724

As you can see, the results have improved, without over-fitting!

More experiments

One more experiment to try: 1) use callbacks, specifically Keras early stopping of training via the EarlyStopping callback, and 2) add more epochs.
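Continuing from the model above, here is a sketch of that experiment; monitoring val_accuracy with patience=0 is a guess that is consistent with training stopping at epoch 2, when val_accuracy did not improve:

# Stop training as soon as val_accuracy stops improving (assumed settings).
early_stop = K.callbacks.EarlyStopping(monitor='val_accuracy',
                                       patience=0,
                                       verbose=1)
model.fit(x_train, y_train, batch_size=300, epochs=10,
          validation_data=(x_test, y_test),
          callbacks=[early_stop])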

Last experiment result

Epoch 1/10
167/167 [==============================] - 200s 1s/step - loss: 0.3344 - accuracy: 0.8852 - val_loss: 0.3620 - val_accuracy: 0.8768
Epoch 2/10
167/167 [==============================] - 202s 1s/step - loss: 0.3217 - accuracy: 0.8908 - val_loss: 0.3598 - val_accuracy: 0.8768
Epoch 2: early stopping

Here is the link to my code:
