A simple classifier using a pre-trained model with Keras

In this article I am going to create a simple classifier in a few lines of Python. I am using the data from Dogs vs. Cats Redux Kaggle competition, but it can be used for any classification task.

To build this model I will use Keras. Keras is a high-level API for creating neural networks or using pre-trained ones, and it can run on top of TensorFlow or Theano. I use an AWS machine (P2 instance) to run my script, but you can run it on any computer (it will just take a little more time…).

 

0. Setup

To use the main functions of Keras easily, the images directory should have a specific structure: each data directory (train, valid, test) should contain one folder per class (i.e. per possible prediction).

 ├── sample 
 │   ├── test 
 │   ├── train 
 │   └── valid 
 ├── test 
 │   └── unknown 
 ├── train 
 │   ├── cats 
 │   └── dogs 
 └── valid 
     ├── cats 
     └── dogs

NB: The test data should also contain a subdirectory, called unknown, which contains all the test images.

The sample directory is not necessary but it’s useful to test the entire process before you launch it with all the data.
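One way to populate the sample tree is to copy a small random subset of the training images. A minimal sketch (`make_sample` is an illustrative helper of my own, not part of Keras):

```python
import os
import random
import shutil

def make_sample(src_dir, dst_dir, n):
    """Copy n randomly chosen images from src_dir into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    picked = random.sample(os.listdir(src_dir), n)
    for name in picked:
        shutil.copyfile(os.path.join(src_dir, name),
                        os.path.join(dst_dir, name))
    return picked
```

For example, `make_sample('train/cats', 'sample/train/cats', 100)` gives you a small dataset with the same structure as the real one.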

VGG16

For our classifier we are going to use a specific architecture: VGG16. This model was developed for the ImageNet competition by the VGG team at Oxford, and it contains only 16 weight layers.

 

VGG16 architecture (picture from here)

 

(224, 224) is the size of the images used for VGG16.
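Loading the architecture together with its pre-trained weights takes a single line (the first call downloads the ImageNet weights, roughly 500 MB):

```python
from keras.applications.vgg16 import VGG16

# Downloads the ImageNet weights on first use; pass weights=None
# instead to get a randomly initialised copy of the architecture.
# The network expects 224x224 RGB images as input.
vgg = VGG16(weights='imagenet')
```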

1. Generation of batches of data

Firstly, we create batches of data with flow_from_directory(). This article by F. Chollet, the author of Keras, explains the method. We need to split the test, train, and validation data into batches.
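A small wrapper around flow_from_directory() makes this reusable (`get_batches` is a helper name of my own; the 224x224 target size matches what VGG16 expects):

```python
from keras.preprocessing.image import ImageDataGenerator

def get_batches(path, batch_size=64, shuffle=True, class_mode='categorical'):
    """Return an iterator over batches of 224x224 images read from
    the class sub-folders of `path`."""
    gen = ImageDataGenerator()
    return gen.flow_from_directory(path,
                                   target_size=(224, 224),
                                   batch_size=batch_size,
                                   shuffle=shuffle,
                                   class_mode=class_mode)
```

Calling `get_batches('train')` and `get_batches('valid')` on the directory tree above prints the two lines below.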

Found 22500 images belonging to 2 classes.
Found 2500 images belonging to 2 classes.

2. Fine-tune the model

VGG16 is trained with the 1000 categories of ImageNet, but we need to customize the model for our categories (cats and dogs). To do that, we fine-tune it. The idea is to remove the last layer (which is the prediction layer), add a dense layer and train this new layer with our data. The other layers of the VGG16 model remain the same.

The Keras documentation gives an example of fine-tuning with another pre-trained model (InceptionV3).
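Applied to VGG16, the fine-tuning step looks roughly like this (`finetune` is an illustrative helper; the choice of optimizer here is my own assumption):

```python
from keras.layers import Dense
from keras.models import Model

def finetune(base_model, num_classes):
    """Freeze every pre-trained layer, drop the 1000-way ImageNet
    softmax and plug in a fresh prediction layer."""
    for layer in base_model.layers:
        layer.trainable = False
    # Output of the layer just before the original softmax.
    x = base_model.layers[-2].output
    predictions = Dense(num_classes, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

For cats and dogs we would call `finetune(vgg, 2)`.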

Now that we have frozen the pre-trained layers, we can train the last one (which will be the predictions layer).
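The training call can be sketched as follows, assuming `model`, `train_batches` and `valid_batches` from the previous steps (recent Keras versions accept the directory iterators directly in fit(); older versions used fit_generator() instead):

```python
def train_last_layer(model, train_batches, valid_batches, epochs=1):
    """Train only the un-frozen prediction layer, validating on
    the valid/ batches after each epoch."""
    return model.fit(train_batches,
                     epochs=epochs,
                     validation_data=valid_batches)
```

One epoch produces a log like the one below.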

Epoch 1/1
22500/22500 [==============================] - 491s - loss: 0.9527 - acc: 0.9346 - val_loss: 0.5594 - val_acc: 0.9624

3. Predictions

Finally, we can use our model to make predictions on unseen data.
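A sketch of the prediction step, assuming the fine-tuned model from above (`predict_unknown` is a hypothetical helper; `shuffle=False` keeps the predictions in file order and `class_mode=None` is needed because the test images have no labels — older Keras versions used predict_generator() instead of predict()):

```python
from keras.preprocessing.image import ImageDataGenerator

def predict_unknown(model, test_path, batch_size=64):
    """Return the file names and one row of class probabilities per
    image found under `test_path` (which holds the single `unknown`
    sub-folder)."""
    gen = ImageDataGenerator()
    batches = gen.flow_from_directory(test_path,
                                      target_size=(224, 224),
                                      batch_size=batch_size,
                                      shuffle=False,
                                      class_mode=None)
    return batches.filenames, model.predict(batches)
```

The returned probabilities can then be written out in the order of `filenames` for the Kaggle submission file.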

Conclusion

We learned how to build a simple model with Keras, and we obtained 96% accuracy with it. However, the final accuracy could be even better with a few tips from the next post 🙂

PS: I included the entire code here