Dogs vs Cats using VGG16


Author : Aman Hussain
Email : aman@amandavinci.me
Description : Classifying images of dogs and cats by finetuning the VGG16 model

Import Libraries

Scientific Computing Stack


In [2]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Custom Packages


In [2]:
import os, json

from helper import utils
from helper.utils import plots

from helper import vgg16
from helper.vgg16 import Vgg16


Using cuDNN version 5103 on context None
Mapped name None to device cuda0: Tesla K80 (0000:00:04.0)
Using Theano backend.

Declaring paths & global parameters

The path to the dataset is defined here. During local experimentation it can point to the sample folder, which contains far fewer images for quick, iterative training; for the final training on the cloud it points to the full dataset. Switch between the two by commenting out the appropriate line below.


In [3]:
# path = '../data/dogscats/sample/'
path = '../data/dogscats/'
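
For reference, a minimal sketch of how such a sample folder could be assembled by copying a small random subset of each class; the helper function and the image count below are illustrative assumptions, not part of the original setup.

import os, random, shutil

def make_sample(src_dir, dst_dir, n=200):
    # Copy a random subset of n images from src_dir into dst_dir
    if not os.path.exists(dst_dir):
        os.makedirs(dst_dir)
    files = os.listdir(src_dir)
    for fname in random.sample(files, min(n, len(files))):
        shutil.copy(os.path.join(src_dir, fname), os.path.join(dst_dir, fname))

# e.g. make_sample('../data/dogscats/train/cats', '../data/dogscats/sample/train/cats')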

The default batch size for training and validation


In [4]:
batchsize = 64

Data Exploration

Instantiating the Vgg16 class, which implements the required utility methods


In [5]:
vgg = Vgg16()

Getting the training and validation batches


In [6]:
batches = vgg.get_batches(path+'train', batch_size=batchsize)
val_batches = vgg.get_batches(path+'valid', batch_size=batchsize)


Found 20000 images belonging to 2 classes.
Found 5000 images belonging to 2 classes.

Visualizing a few of the images (only needed when exploring the samples)


In [7]:
imgs, labels = next(batches)
val_imgs, val_labels = next(val_batches)
# Labels are one-hot encoded with 'cats' as class 0, so [1, 0] marks a cat and [0, 1] a dog
labels = ['dog' if i[0]==0 else 'cat' for i in labels]
val_labels = ['dog' if i[0]==0 else 'cat' for i in val_labels]
plots(val_imgs[:5], figsize=(20,10), titles=val_labels)


Finetuning


In [8]:
vgg.finetune(batches)
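
For context, a rough sketch of the idea behind vgg.finetune(): drop VGG16's 1000-way ImageNet output layer, freeze the pretrained layers, and add a fresh softmax layer sized to the classes found in the batches. This is a hedged approximation of the course helper, not necessarily its exact code.

from keras.layers import Dense

model = vgg.model
model.pop()                          # remove the original 1000-class softmax
for layer in model.layers:
    layer.trainable = False          # keep the pretrained ImageNet weights fixed
# nb_class is the Keras 1 attribute name; newer versions call it num_classes
model.add(Dense(batches.nb_class, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])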

In [9]:
%%time
vgg.fit(batches, val_batches, nb_epoch=3)


Epoch 1/3
20000/20000 [==============================] - 464s - loss: 0.1246 - acc: 0.9666 - val_loss: 0.0561 - val_acc: 0.9828
Epoch 2/3
20000/20000 [==============================] - 463s - loss: 0.0957 - acc: 0.9775 - val_loss: 0.0614 - val_acc: 0.9826
Epoch 3/3
20000/20000 [==============================] - 463s - loss: 0.0958 - acc: 0.9768 - val_loss: 0.0538 - val_acc: 0.9852
CPU times: user 31min 11s, sys: 5min 7s, total: 36min 19s
Wall time: 23min 39s

Model Testing

Because the ImageDataGenerator.flow_from_directory() call used by vgg.get_batches() expects images to be arranged in class subdirectories, we have to place the test images in a subdirectory of the test directory named 'subdir_for_keras_ImageDataGenerator'.
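
A hedged sketch of how that layout can be created, assuming the raw test images were extracted directly into path + 'test/':

import glob, shutil

subdir = os.path.join(path, 'test', 'subdir_for_keras_ImageDataGenerator')
if not os.path.exists(subdir):
    os.makedirs(subdir)
for img in glob.glob(os.path.join(path, 'test', '*.jpg')):
    shutil.move(img, subdir)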


In [10]:
# Use the entire test set as a single batch
batch_size = len(os.listdir(path+'test'+'/subdir_for_keras_ImageDataGenerator'))

Keras' ImageDataGenerator does not return the filenames, but it loads the images in the same order as os.listdir() returns them. Here, we extract the filenames, which will serve as the indexes.


In [11]:
img_index = os.listdir(path+'test'+'/subdir_for_keras_ImageDataGenerator')
img_index = [os.path.splitext(file)[0] for file in img_index]

With class_mode set to None, the generator returns only batches of images, without labels
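
For reference, a sketch of roughly what this maps to in plain Keras; the 224x224 target size is VGG16's expected input, and the other arguments mirror the call below (an assumption about the helper's defaults, not its verified source).

from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator()
testbatch_raw = gen.flow_from_directory(path + 'test',
                                        target_size=(224, 224),
                                        class_mode=None,
                                        shuffle=False,
                                        batch_size=batch_size)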


In [12]:
testbatch = vgg.get_batches(path+'test', shuffle=False, batch_size=batch_size, class_mode=None)


Found 12500 images belonging to 1 classes.
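
As an optional sanity check of the ordering assumption made above, the generator's own file list can be compared against img_index; this assumes the iterator exposes a filenames attribute, as Keras' DirectoryIterator does.

# If the ordering assumption holds, this assertion passes
gen_index = [os.path.splitext(os.path.basename(f))[0] for f in testbatch.filenames]
assert gen_index == img_index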

In [13]:
test_imgs = next(testbatch)

Manually verifying the predictions


In [14]:
plots(test_imgs[:5])



In [15]:
probab, prediction, prediction_labels = vgg.predict(test_imgs[:5], details = True)
print(prediction_labels, probab, prediction)


['cats', 'cats', 'cats', 'cats', 'dogs'] [ 1.  1.  1.  1.  1.] [0 0 0 0 1]

In [16]:
img_index[:5]


Out[16]:
['3453', '8574', '2858', '11545', '10164']

We confirm the predictions manually.

Now, we make predictions for the entire test set using our trained model


In [17]:
%%time
probab, prediction, prediction_labels = vgg.predict(test_imgs, details = True)


CPU times: user 3min 4s, sys: 49 s, total: 3min 53s
Wall time: 3min 53s

Results

Preparing to save the predictions for submission to the Kaggle competition
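
Note that np.save does not create missing directories, so the submissions folder should exist first; a small guard like the following (an addition, not part of the original run) avoids a failure here.

submissions_dir = os.path.join(path, 'submissions')
if not os.path.exists(submissions_dir):
    os.makedirs(submissions_dir)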


In [18]:
np.save(path+'submissions/index', img_index)
np.save(path+'submissions/probab', probab)
np.save(path+'submissions/prediction', prediction)
np.save(path+'submissions/prediction_labels', prediction_labels)

Save the trained model


In [19]:
vgg.model.save("../models/vgg_dogsVScats.h5")
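
The fine-tuned model can later be restored without re-training; a minimal sketch, assuming a Keras version that provides keras.models.load_model:

from keras.models import load_model
model = load_model("../models/vgg_dogsVScats.h5")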

In [20]:
for index, predicted in enumerate(prediction):
    # When a cat (class 0) is predicted, probab holds P(cat); take the complementary
    # value so that probab always holds P(dog), as the submission format expects
    if predicted == 0:
        probab[index] = 1 - probab[index]
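
The same adjustment can be written in vectorized form; this is an equivalent alternative to the loop above (run one or the other, not both).

# Wherever a cat (class 0) was predicted, replace P(cat) with P(dog) = 1 - P(cat)
probab = np.where(prediction == 0, 1 - probab, probab)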

In [21]:
img_index.insert(0, 'id')

labels_pred = [str(label) for label in prediction]
labels_pred.insert(0, 'label')

labels_prob = [str(label) for label in probab]
labels_prob.insert(0, 'label')

In [22]:
submission_array_pred = np.vstack((img_index, labels_pred)).T.astype('str')
submission_array_prob = np.vstack((img_index, labels_prob)).T.astype('str')

Saving the arrays as CSV files


In [23]:
np.savetxt(path+'submissions/submission_pred.csv', submission_array_pred, delimiter=",", fmt='%1s')
np.savetxt(path+'submissions/submission_prob.csv', submission_array_prob, delimiter=",", fmt='%1s')

Improving the Score by using Probabilities

Loading the values predicted by the model


In [66]:
img_index = np.load(path+'submissions/index.npy')
probab = np.load(path+'submissions/probab.npy')
prediction = np.load(path+'submissions/prediction.npy')

Visualising the distribution of probabilities


In [67]:
plt.hist(probab)


Out[67]:
(array([    15.,     21.,     15.,     18.,     32.,     32.,     30.,
            59.,     79.,  12199.]),
 array([ 0.50393718,  0.55354347,  0.60314975,  0.65275603,  0.70236231,
         0.75196859,  0.80157487,  0.85118116,  0.90078744,  0.95039372,  1.        ]),
 <a list of 10 Patch objects>)

Since the Kaggle competition evaluates results using log loss, predictions of exactly 1 or 0 are penalised extremely heavily whenever they are wrong. So we manually clip the 1s and 0s to 0.95 and 0.05.
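
To see why, a small worked example of the log loss on an over-confident wrong prediction versus a clipped one; the 0.05/0.95 values are the notebook's choice, and np.clip is just one way to apply them.

# Log loss for one prediction: -log(p) if the true label is 1, -log(1 - p) if it is 0.
# A near-certain wrong answer is penalised far more heavily than a clipped one.
print(-np.log(1 - 0.9999))   # ~9.21: huge penalty for an over-confident mistake
print(-np.log(1 - 0.95))     # ~3.00: bounded penalty after clipping to 0.95
# The same idea applied directly to an array of probabilities:
# probab = np.clip(probab, 0.05, 0.95)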


In [69]:
np.unique(prediction)


Out[69]:
array([0, 1])

In [70]:
prediction = prediction.astype(float)
prediction[prediction == 1] = 0.95
prediction[prediction == 0] = 0.05

In [71]:
np.unique(prediction)


Out[71]:
array([ 0.05,  0.95])

Now, we will prepare the list for submission


In [72]:
img_index = img_index.tolist()
img_index.insert(0, 'id')

labels_pred = [str(label) for label in prediction]
labels_pred.insert(0, 'label')

submission_array_pred = np.vstack((img_index, labels_pred)).T.astype('str')

Saving the new submission file


In [73]:
np.savetxt(path+'submissions/submission_pred_mod.csv', submission_array_pred, delimiter=",", fmt='%1s')