10 May 2017 - Lecture 2 JNB Code Along - WH Nixalo

Notebook | Lecture @ 1:20:00

1 Linear models with CNN features


In [1]:
# This is to point Python to my utils folder
import sys; import os
# DIR = %pwd
sys.path.insert(1, os.path.join('../utils'))

# Rather than importing everything manually, we'll make things easy
#   and load them all in utils.py, and just import them from there.
import utils; reload(utils)
from utils import *
%matplotlib inline


Using Theano backend.

1.1 Intro

We need to find a way to convert the imagenet predictions into a probability of being a cat or a dog, since that is what the Kaggle competition requires us to submit. We could use the imagenet hierarchy to download a list of all the imagenet categories in each of the dog and cat groups, and could then solve our problem in various ways, such as:

  • Finding the largest probability that's either a cat or a dog, and using that label
  • Averaging the probability of all the cat categories and comparing it to the average of all the dog categories.

But these approaches have some downsides:

  • They require manual coding for something that we should be able to learn from the data
  • They ignore information available in the predictions; for instance, if the model predicts that there is a bone in the image, it's more likely to be a dog than a cat.

A very simple solution to both of these problems is to learn a linear model that is trained using the 1,000 predictions from the imagenet model for each image as input, and the dog/cat label as target.


In [2]:
%matplotlib inline
from __future__ import division, print_function
import os, json
from glob import glob
import numpy as np
import scipy
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import confusion_matrix
np.set_printoptions(precision=4,  linewidth=100)
from matplotlib import pyplot as plt
import utils; reload(utils)
from utils import plots, get_batches, plot_confusion_matrix, get_data

In [3]:
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom

import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential
from keras.layers import Input
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.preprocessing import image

1.2 Linear models in keras

Let's forget the motivating example for a second and see how we can create a simple Linear model in Keras:

Each of the Dense() layers is just a linear model, followed by a simple activation function.

In a linear model each row is calculated as sum(row * weights), where weights need to be learnt from the data & will be the same for every row. Let's create some data that we know is linearly related:


In [4]:
# we'll create a random matrix w/ 2 columns; & do a MatMul to get our 
# y value using a vector [2, 3] & adding a constant of 1.
x = random((30, 2))
y = np.dot(x, [2., 3.]) + 1.

In [5]:
x[:5]


Out[5]:
array([[ 0.4769,  0.0115],
       [ 0.2924,  0.2354],
       [ 0.5415,  0.4835],
       [ 0.6453,  0.0165],
       [ 0.3601,  0.9353]])

In [6]:
y[:5]


Out[6]:
array([ 1.9884,  2.2909,  3.5334,  2.3402,  4.5262])

We can use Keras to create a simple linear model (Dense() - with no activation - in Keras) and optimize it using SGD to minimize mean squared error.


In [7]:
# Keras calls the Linear Model "Dense"; aka. "Fully-Connected" in other 
# libraries.
# So when we go 'Dense' w/ an input of 2 columns, & output of 1 col,
# we're defining a linear model that can go from the 2 col array above, to 
# the 1 col output of y above.
# Sequential() is a way of building multiple-layer networks. It takes an 
# array containing all the layers in your NN. A LM is a single Dense layer.
# This automatically initializes the weights sensibly & calculates derivatives.
# We just tell it how to optimize the weights: SGD w/ LR=0.1, minimizing MSE.
lm = Sequential([Dense(1, input_shape=(2,))])
lm.compile(optimizer=SGD(lr=0.1), loss='mse')

In [10]:
# evaluate our loss function with random (initial) weights
lm.evaluate(x, y, verbose=0)


Out[10]:
20.422586441040039

In [12]:
# now run SGD for 5 epochs & watch the loss improve
# lm.fit(..) does the solving
lm.fit(x, y, nb_epoch=5, batch_size=1)


Epoch 1/5
30/30 [==============================] - 0s - loss: 1.6037      
Epoch 2/5
30/30 [==============================] - 0s - loss: 0.1901     
Epoch 3/5
30/30 [==============================] - 0s - loss: 0.1220     
Epoch 4/5
30/30 [==============================] - 0s - loss: 0.0789     
Epoch 5/5
30/30 [==============================] - 0s - loss: 0.0431     
Out[12]:
<keras.callbacks.History at 0x114d8f610>

In [13]:
# now evaluate and see the improvement:
lm.evaluate(x, y, verbose=0)


Out[13]:
0.028723947703838348

In [14]:
# take a look at the weights; they should be heading toward 2, 3, and 1:
lm.get_weights()


Out[14]:
[array([[ 1.3697],
        [ 2.6763]], dtype=float32), array([ 1.5691], dtype=float32)]

In [16]:
# so let's run another 5 epochs and see if this improves things:
lm.fit(x, y, nb_epoch=5, batch_size=1)
lm.evaluate(x, y, verbose=0)


Epoch 1/5
30/30 [==============================] - 0s - loss: 0.0296     
Epoch 2/5
30/30 [==============================] - 0s - loss: 0.0194     
Epoch 3/5
30/30 [==============================] - 0s - loss: 0.0114     
Epoch 4/5
30/30 [==============================] - 0s - loss: 0.0082         
Epoch 5/5
30/30 [==============================] - 0s - loss: 0.0055         
Out[16]:
0.003555244067683816

In [17]:
# and take a look at the new weights:
lm.get_weights()


Out[17]:
[array([[ 1.7646],
        [ 2.8917]], dtype=float32), array([ 1.1795], dtype=float32)]

Above is everything Keras is doing behind the scenes. So, if we pass multiple layers to Keras via Sequential(..), we can start to build & optimize Deep Neural Networks.
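For instance, here's a minimal sketch (my example, not from the lecture) of stacking two Dense layers on the same toy data:

# A minimal sketch (assumption, not from the lecture): a 2-layer network on
# the same toy x, y data -- a hidden Dense layer with a ReLU non-linearity,
# then a linear output layer. Same Keras 1 API as above.
nn = Sequential([
    Dense(10, activation='relu', input_shape=(2,)),
    Dense(1),
])
nn.compile(optimizer=SGD(lr=0.1), loss='mse')
nn.fit(x, y, nb_epoch=5, batch_size=1, verbose=0)
print(nn.evaluate(x, y, verbose=0))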

Before that, we can still use the single-layer LM to create a pretty decent entry to the dogs-vs-cats Kaggle competition.

1.3 Train Linear Model on Predictions

Forgetting finetuning -- how do we take the output of an ImageNet network and, as simply as possible, create a good entry to the cats-vs-dogs competition? Our current ImageNet network returns a thousand probabilities, but we need just cat vs dog. We don't want to manually write code to roll up the hierarchy into cats/dogs.

So what we can do is learn a Linear Model that takes the output of the ImageNet model, all its 1,000 predictions, as input, and uses the dog/cat label as the target -- and that LM would solve our problem.

1.3.1 Training the model

We start with some basic config steps. We copy a small amount of our data into a 'sample' directory, with the exact same structure as our 'train' directory -- this is always a good idea in all Machine Learning, since we should do all of our initial testing using a dataset small enough that we never have to wait for it.


In [29]:
# setup the directories
os.mkdir('data')
os.mkdir('data/dogscats')

path = "data/dogscats/"
model_path = path + 'models/'
# if the path to our models doesn't exist, make it
if not os.path.exists(model_path): os.mkdir(model_path)
# NOTE: os.mkdir(..) only creates a single folder.
#       It will also throw an error if the dir already exists.
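
The cell above only creates the directories; here's a rough sketch of how you might actually populate a sample set (the paths & counts are my assumptions, not from the lecture):

# Hypothetical sketch: copy a small random sample of training images into a
# parallel 'sample' directory tree with the same structure as 'train'.
import os, shutil
from glob import glob
from numpy.random import permutation

def copy_sample(src_dir, dst_dir, n):
    if not os.path.exists(dst_dir): os.makedirs(dst_dir)
    fnames = glob(os.path.join(src_dir, '*.jpg'))
    for f in permutation(fnames)[:n]:
        shutil.copy(f, dst_dir)

# e.g.: copy_sample('data/dogscats/train/cats', 'data/dogscats/sample/train/cats', 20)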

We'll process as many images at a time as we can. This is a case of trial & error to find the max batch size that doesn't cause a memory error.


In [30]:
batch_size = 100

We need to start with our VGG 16 model, since we're using its predictions & features.


In [31]:
from vgg16 import Vgg16
vgg = Vgg16()
model = vgg.model

Our overall approach here will be:

  1. Get the true labels for every image
  2. Get the 1,000 ImageNet category predictions for every image (so that's a thousand floats per image)
  3. Feed those 1,000 predictions as input to a simple linear model, with the dog/cat label as the target, and use that linear model to build our predictions.

As usual, we start by creating our training & validation batches


In [32]:
# Use batch size of 1 since we're just doing preprocessing on the CPU
val_batches = get_batches(path + 'valid', shuffle=False, batch_size=1)
batches = get_batches(path + 'train', shuffle=False, batch_size=1)


Found 50 images belonging to 2 classes.
Found 352 images belonging to 2 classes.

Getting the 1,000 categories for each image will take a long time & there's no reason to do it again & again. So after we do it the first time, let's save the resulting arrays.


In [33]:
import bcolz
def save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w'); c.flush()
def load_array(fname): return bcolz.open(fname)[:]

It's also time consuming to convert all the images into the 224x224 format VGG 16 expects. So get_data will also store a Numpy array of the results of that conversion.


In [34]:
# ?? shows you the source code
??get_data

In [36]:
val_data = get_data(path + 'valid')
trn_data = get_data(path + 'train')


Found 50 images belonging to 2 classes.
Found 352 images belonging to 2 classes.

In [38]:
# so what the above does is create a Numpy array with our full set of
# training images -- 352 imgs, each of which has 3 color channels & is 224x224
trn_data.shape


Out[38]:
(352, 3, 224, 224)

In [40]:
save_array(model_path + 'train_data.bc', trn_data)
save_array(model_path + 'valid_data.bc', val_data)

& Now we can load our training & validation data later without recalculating them


In [41]:
trn_data = load_array(model_path + 'train_data.bc')
val_data = load_array(model_path + 'valid_data.bc')

In [42]:
val_data.shape # our 50 validation imgs


Out[42]:
(50, 3, 224, 224)

Most Deep Learning is done w/ One-Hot Encoding: the true class = 1, all other classes = 0; & Keras expects labels in a very specific format. Example of One-Hot Encoding:

Class:   One-Hot Encoding:
  0         1 0 0
  1         0 1 0
  2         0 0 1
  1         0 1 0
  0         1 0 0

1Ht Encoding is used because it lets you handle a categorical variable with a MatMul: the number of weights equals the encoding length. In the above example W would be a vector of w1, w2, w3.

This lets you do Deep Learning very easily with categorical variables.
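
To make that concrete, a tiny sketch (my example, not from the lecture):

# Multiplying a one-hot row by a weight vector w = [w1, w2, w3] just selects
# the weight for that row's class.
import numpy as np

w = np.array([2., 5., 9.])               # w1, w2, w3
rows = np.array([[1., 0., 0.],           # class 0
                 [0., 1., 0.],           # class 1
                 [0., 0., 1.]])          # class 2
print(np.dot(rows, w))                   # -> [ 2.  5.  9.]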

Keras returns classes as a single column, so we convert to 1Ht.


In [43]:
def onehot(x): return np.array(OneHotEncoder().fit_transform(x.reshape(-1, 1)).todense())
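
(As an aside, Keras 1 ships an equivalent helper, keras.utils.np_utils.to_categorical, if you'd rather not go through sklearn -- assuming integer class labels starting at 0:)

# Alternative to onehot() above, using Keras' own helper:
from keras.utils.np_utils import to_categorical
val_labels_alt = to_categorical(val_batches.classes)  # same result as onehot()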

In [45]:
# So, next thing we want to do is grab our labels and One-Hot Encode them
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)

In [50]:
trn_classes.shape # Keras's single column: one class label per img


Out[50]:
(352,)

In [47]:
trn_labels.shape # One-Hot Encoded: 2 columns <--> 2 classes


Out[47]:
(352, 2)

In [52]:
trn_classes[:4] # taking a look at 1st 4 classes


Out[52]:
array([0, 0, 0, 0], dtype=int32)

In [51]:
trn_labels[:4] # seeing the 1st 4 labels are 1Ht encoded


Out[51]:
array([[ 1.,  0.],
       [ 1.,  0.],
       [ 1.,  0.],
       [ 1.,  0.]])

Now we can finally do Step No. 2: get the 1,000 ImageNet category preds for every image. Keras makes this easy for us. We can simply call model.predict(..) and pass in our data.


In [53]:
trn_features = model.predict(trn_data, batch_size=batch_size)
val_features = model.predict(val_data, batch_size=batch_size)

In [54]:
trn_features.shape # we can see it is indeed No. imgs x 1000 categories


Out[54]:
(352, 1000)

In [55]:
# let's take a look at one of the images (displaying all its categs)
trn_features[0]


Out[55]:
array([  4.2062e-07,   1.5420e-03,   6.3156e-06, ...,   3.2240e-07,   3.9332e-05,
         7.6541e-02], dtype=float32)

Not surprisingly, nearly all of these numbers are near zero.

Now that we have our 1,000 features for each image, we can define our linear model, just like we did earlier:


In [56]:
# 1000 inputs, since those are the saved features, and 2 outputs: dog & cat
lm = Sequential([Dense(2, activation='softmax', input_shape=(1000,))])
lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])

& Now we're ready to fit the model! RMSprop is a minor tweak on SGD that tends to be much faster.


In [57]:
batch_size=4

In [59]:
lm.fit(trn_features, trn_labels, batch_size=batch_size, nb_epoch=3, 
       validation_data = (val_features, val_labels))


Train on 352 samples, validate on 50 samples
Epoch 1/3
352/352 [==============================] - 0s - loss: 0.1856 - acc: 0.9375 - val_loss: 0.1782 - val_acc: 0.9000
Epoch 2/3
352/352 [==============================] - 0s - loss: 0.0682 - acc: 0.9688 - val_loss: 0.1416 - val_acc: 0.9200
Epoch 3/3
352/352 [==============================] - 0s - loss: 0.0517 - acc: 0.9773 - val_loss: 0.1161 - val_acc: 0.9200
Out[59]:
<keras.callbacks.History at 0x1158afcd0>

In [60]:
# let's have a look at our model
lm.summary()


____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
dense_6 (Dense)                  (None, 2)             2002        dense_input_3[0][0]              
====================================================================================================
Total params: 2,002
Trainable params: 2,002
Non-trainable params: 0
____________________________________________________________________________________________________

So it ran almost instantly, because running 3 epochs on a single layer with ~2,000 parameters is really quick, even for my little i5 MacBook :3

We got an accuracy of .92. Let's run another 3 epochs and see if this changes:


In [63]:
lm.fit(trn_features, trn_labels, batch_size=batch_size, nb_epoch=3,
       validation_data = (val_features, val_labels))


Train on 352 samples, validate on 50 samples
Epoch 1/3
352/352 [==============================] - 0s - loss: 0.0267 - acc: 0.9915 - val_loss: 0.2057 - val_acc: 0.9000
Epoch 2/3
352/352 [==============================] - 0s - loss: 0.0266 - acc: 0.9886 - val_loss: 0.2279 - val_acc: 0.9000
Epoch 3/3
352/352 [==============================] - 0s - loss: 0.0221 - acc: 0.9915 - val_loss: 0.2109 - val_acc: 0.9400
Out[63]:
<keras.callbacks.History at 0x11dd78c10>

(I actually ran 9 epochs, because on a tiny set of 352 images it took a bit more to improve: no change on the 1st, dropped to .90 on the 2nd, and finally up to .94 on the final)

Here we haven't done any finetuning. All we did was take the ImageNet model's predictions, and built a model that maps from those predictions to either 'Cat' or 'Dog'.

This is actually what most amateur Machine Learning researchers do: they take a pretrained model, grab the outputs, and stick them into a linear model -- and it actually often works pretty well!

To get this 94% accuracy, we haven't used any magical libraries at all. We just grabbed our batches, turned the images into a Numpy array, ran model.predict(..) on that array, grabbed our labels and One-Hot Encoded them, and finally fed the 1Ht-encoded labels and the 1,000 probabilities to a Linear Model with a thousand inputs and 2 outputs -- then trained it and ended up with a validation accuracy of 0.9400.

1.3.3 About Activation Functions

The last thing we're going to do is take this and turn it into a finetuning model. For that we need to understand activation functions. We've been looking at our Linear Model as a series of matrix multiplies. But a series of matrix multiplies is itself a matrix multiply --> a series of linear models is itself a linear model. So Deep Learning must be doing something more than just this: at each stage (layer) it puts the activations, the results of the previous layer, through a non-linearity of some sort -- tanh, sigmoid, max(0, x) (ReLU), etc.

Using these activation functions at each layer, we now have a genuine, modern (ca. 2017) Deep Learning Neural Network. This kind of NN is capable of approximating any given function of arbitrary complexity.

A series of matrix multiplies & activations (such as ReLU) is actually what's going on in a DLNN.

Remember how we defined our model:

lm = Sequential([Dense(2, activation='softmax', input_shape=(1000,))])

And the definition of a fully connected layer in the original VGG:

model.add(Dense(4096, activation='relu'))

What that activation parameter says is: "after you do the matrix multiply, apply an activation of (in this case) max(0, x)"
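
A quick numeric check of the "a series of matrix multiplies is itself a matrix multiply" claim (my sketch, not from the lecture):

# Two linear layers collapse into one matrix multiply; putting a ReLU
# between them breaks that equivalence.
import numpy as np

x  = np.random.randn(5, 3)
W1 = np.random.randn(3, 4)
W2 = np.random.randn(4, 2)

two_linear = np.dot(np.dot(x, W1), W2)       # layer 1 then layer 2
one_linear = np.dot(x, np.dot(W1, W2))       # single combined layer
print(np.allclose(two_linear, one_linear))   # True

relu = lambda z: np.maximum(0., z)
with_relu = np.dot(relu(np.dot(x, W1)), W2)  # non-linearity in between
print(np.allclose(with_relu, one_linear))    # False (in general)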

2 Modifying the Model

2.1 Retrain last layer's Linear Model

So what we need to do is take our final layer, which has a matrix multiply and an activation function, and remove it. To understand why, take a look at our DLNN's layers:


In [64]:
vgg.model.summary()


____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
lambda_1 (Lambda)                (None, 3, 224, 224)   0           lambda_input_1[0][0]             
____________________________________________________________________________________________________
zeropadding2d_1 (ZeroPadding2D)  (None, 3, 226, 226)   0           lambda_1[0][0]                   
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D)  (None, 64, 224, 224)  1792        zeropadding2d_1[0][0]            
____________________________________________________________________________________________________
zeropadding2d_2 (ZeroPadding2D)  (None, 64, 226, 226)  0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 64, 224, 224)  36928       zeropadding2d_2[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 64, 112, 112)  0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
zeropadding2d_3 (ZeroPadding2D)  (None, 64, 114, 114)  0           maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D)  (None, 128, 112, 112) 73856       zeropadding2d_3[0][0]            
____________________________________________________________________________________________________
zeropadding2d_4 (ZeroPadding2D)  (None, 128, 114, 114) 0           convolution2d_3[0][0]            
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D)  (None, 128, 112, 112) 147584      zeropadding2d_4[0][0]            
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)    (None, 128, 56, 56)   0           convolution2d_4[0][0]            
____________________________________________________________________________________________________
zeropadding2d_5 (ZeroPadding2D)  (None, 128, 58, 58)   0           maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D)  (None, 256, 56, 56)   295168      zeropadding2d_5[0][0]            
____________________________________________________________________________________________________
zeropadding2d_6 (ZeroPadding2D)  (None, 256, 58, 58)   0           convolution2d_5[0][0]            
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D)  (None, 256, 56, 56)   590080      zeropadding2d_6[0][0]            
____________________________________________________________________________________________________
zeropadding2d_7 (ZeroPadding2D)  (None, 256, 58, 58)   0           convolution2d_6[0][0]            
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D)  (None, 256, 56, 56)   590080      zeropadding2d_7[0][0]            
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D)    (None, 256, 28, 28)   0           convolution2d_7[0][0]            
____________________________________________________________________________________________________
zeropadding2d_8 (ZeroPadding2D)  (None, 256, 30, 30)   0           maxpooling2d_3[0][0]             
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D)  (None, 512, 28, 28)   1180160     zeropadding2d_8[0][0]            
____________________________________________________________________________________________________
zeropadding2d_9 (ZeroPadding2D)  (None, 512, 30, 30)   0           convolution2d_8[0][0]            
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D)  (None, 512, 28, 28)   2359808     zeropadding2d_9[0][0]            
____________________________________________________________________________________________________
zeropadding2d_10 (ZeroPadding2D) (None, 512, 30, 30)   0           convolution2d_9[0][0]            
____________________________________________________________________________________________________
convolution2d_10 (Convolution2D) (None, 512, 28, 28)   2359808     zeropadding2d_10[0][0]           
____________________________________________________________________________________________________
maxpooling2d_4 (MaxPooling2D)    (None, 512, 14, 14)   0           convolution2d_10[0][0]           
____________________________________________________________________________________________________
zeropadding2d_11 (ZeroPadding2D) (None, 512, 16, 16)   0           maxpooling2d_4[0][0]             
____________________________________________________________________________________________________
convolution2d_11 (Convolution2D) (None, 512, 14, 14)   2359808     zeropadding2d_11[0][0]           
____________________________________________________________________________________________________
zeropadding2d_12 (ZeroPadding2D) (None, 512, 16, 16)   0           convolution2d_11[0][0]           
____________________________________________________________________________________________________
convolution2d_12 (Convolution2D) (None, 512, 14, 14)   2359808     zeropadding2d_12[0][0]           
____________________________________________________________________________________________________
zeropadding2d_13 (ZeroPadding2D) (None, 512, 16, 16)   0           convolution2d_12[0][0]           
____________________________________________________________________________________________________
convolution2d_13 (Convolution2D) (None, 512, 14, 14)   2359808     zeropadding2d_13[0][0]           
____________________________________________________________________________________________________
maxpooling2d_5 (MaxPooling2D)    (None, 512, 7, 7)     0           convolution2d_13[0][0]           
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 25088)         0           maxpooling2d_5[0][0]             
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 4096)          102764544   flatten_1[0][0]                  
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 4096)          0           dense_3[0][0]                    
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 4096)          16781312    dropout_1[0][0]                  
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 4096)          0           dense_4[0][0]                    
____________________________________________________________________________________________________
dense_5 (Dense)                  (None, 1000)          4097000     dropout_2[0][0]                  
====================================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
____________________________________________________________________________________________________

The last layer is a Dense (FC/Linear) layer. It doesn't make sense to add another dense layer atop a dense layer that's already tuned to classify the 1,000 ImageNet categories. We'll remove it, and use the previous Dense layer, with its 4096 activations, to find Cats & Dogs.

We do this by calling model.pop() to pop off the last layer, and set all remaining layers to be fixed, so they aren't altered.


In [80]:
model.pop()
for layer in model.layers: layer.trainable=False

Now we add our final Cat vs Dog layer


In [83]:
model.add(Dense(2, activation='softmax'))

To see what happened when we called vgg.finetune() earlier, look at its source: basically it does a model.pop() and a model.add(Dense(..)) -- see the sketch after the next cell.


In [84]:
??vgg.finetune()
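
For reference, a rough paraphrase of what that method does (my paraphrase, not the verbatim source -- ??vgg.finetune() shows the real thing): pop the 1,000-way layer, freeze the rest, and add a new softmax layer sized to the batches' number of classes.

# Rough paraphrase (assumption, not the verbatim fast.ai source):
def finetune(self, batches):
    self.model.pop()
    for layer in self.model.layers:
        layer.trainable = False
    # batches.nb_class is the Keras 1 attribute for the number of classes
    self.model.add(Dense(batches.nb_class, activation='softmax'))
    self.compile()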

After we add our new final layer, we'll set up our batches to use preprocessed images (and we'll also shuffle the training batches to add more randomness when using multiple epochs):


In [86]:
gen = image.ImageDataGenerator()
batches = gen.flow(trn_data, trn_labels, batch_size=batch_size, shuffle=True)
val_batches = gen.flow(val_data, val_labels, batch_size=batch_size, shuffle=False)

Now we have a model designed to classify Cats vs Dogs directly, instead of classifying the 1,000 ImageNet categories and THEN mapping those to Cats vs Dogs. After this, everything is done the same as before: compile the model & choose an optimizer, then fit the model. (Btw, whenever we work with batches in Keras, we'll be using the generator methods -- e.g. model.fit_generator(..) instead of model.fit(..).)

So let's do that and see what we get after 2 epochs of training. We'll also define a function for fitting models, to save typing.


In [87]:
# NOTE: now use batches.n instead of batches.N
def fit_model(model, batches, val_batches, nb_epoch=1):
    model.fit_generator(batches, samples_per_epoch=batches.n, nb_epoch=nb_epoch,
                        validation_data=val_batches, nb_val_samples=val_batches.n)

It'll run a bit slowly, since it has to calculate all the previous layers' outputs in order to know what input to pass to the new final layer. We could save time by precalculating the output of the penultimate layer, like we did for the final layer earlier -- a note for later work, sketched below.
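
A sketch of that precalculation idea (my assumption about the approach, not code from the lecture), using the Keras 1 functional API:

# Precompute the penultimate layer's activations once, so the frozen stack
# doesn't have to be re-run on every epoch; a tiny model can then be trained
# on those features, exactly as we did with the 1,000 ImageNet predictions.
from keras.models import Model

fc_model = Model(input=model.input, output=model.layers[-2].output)
trn_fc_feat = fc_model.predict(trn_data, batch_size=batch_size)
val_fc_feat = fc_model.predict(val_data, batch_size=batch_size)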


In [88]:
# compile the new model
opt = RMSprop(lr=0.1)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

In [89]:
# then fit it
fit_model(model, batches, val_batches, nb_epoch=2)


Epoch 1/2
352/352 [==============================] - 170s - loss: 1.4733 - acc: 0.8949 - val_loss: 1.6794 - val_acc: 0.8600
Epoch 2/2
352/352 [==============================] - 164s - loss: 1.5168 - acc: 0.9006 - val_loss: 0.3224 - val_acc: 0.9800

Note how little actual code was needed to finetune the model. Because this is such an important and common operation, Keras is set up to make it as easy as possible. No external helper functions were needed.

It's a good idea to save the weights of all your models, so you can re-use them later. Be sure to note the git commit hash of your code when keeping a research journal of your results.


In [90]:
model.save_weights(model_path + 'finetune1.h5')

In [91]:
# We can now use this as a good starting point for future Dogs v Cats models
model.load_weights(model_path + 'finetune1.h5')

In [92]:
model.evaluate(val_data, val_labels)


50/50 [==============================] - 18s    
Out[92]:
[0.32237322405329905, 0.97999999046325681]

Week 2 Assignments:

Take it further -- now that you know what's going on with finetuning and linear layers -- think about everything you know: the evaluation function, the categorical cross-entropy loss function, finetuning -- and see if you can find ways to make your model better, and how high up the Kaggle rankings you can get.

If you want to push yourself -- see if you can do the same thing by writing all the code yourself. Don't use the class notebooks at all -- build it all from scratch.

If you want to go even further -- see if you can enter another Kaggle competition (Galaxy Zoo, Plankton, State Farm Distracted Driver, etc.)

-- end of lecture 2 --

10 May 2017 WNx

We can look at the earlier prediction-example visualizations by redefining probs and preds and re-using our earlier code.


In [ ]:
preds = model.predict_classes(val_data, batch_size=batch_size)
probs = model.predict_proba(val_data, batch_size=batch_size)[:,0]

2.2 Retraining more layers

2.2.1 An Introduction to back-propagation

2.2.2 Training multiple layers in Keras