Convolutional Neural Networks


In this notebook, we train a CNN to classify images from the CIFAR-10 dataset.

1. Load the CIFAR-10 Dataset


In [1]:
import keras
from keras.datasets import cifar10

# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()


Using TensorFlow backend.
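
The loader returns NumPy arrays; a quick optional check of their shapes confirms the standard CIFAR-10 split of 50,000 training and 10,000 test images, each a 32x32 RGB image paired with a single integer label:

In [ ]:
# optional sanity check of the raw array shapes
print(x_train.shape)  # (50000, 32, 32, 3)
print(y_train.shape)  # (50000, 1)
print(x_test.shape)   # (10000, 32, 32, 3)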

2. Visualize the First 36 Training Images


In [2]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,5))
for i in range(36):
    ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_train[i]))


3. Rescale the Images by Dividing Every Pixel by 255


In [3]:
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
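
As a quick optional check, the pixel values should now span [0, 1] rather than [0, 255]:

In [ ]:
# verify the rescaling
print(x_train.min(), x_train.max())  # 0.0 1.0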

4. One-Hot Encode the Labels and Break the Training Data into Training and Validation Sets


In [4]:
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]

# print shape of training set
print('x_train shape:', x_train.shape)

# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')


x_train shape: (45000, 32, 32, 3)
45000 train samples
10000 test samples
5000 validation samples
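
To make the one-hot encoding concrete: to_categorical maps each integer class index to a length-10 binary vector with a single 1. For example, label 6 (frog) becomes:

In [ ]:
# a worked example of the one-hot encoding used above
keras.utils.to_categorical([6], num_classes)
# array([[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.]])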

5. Define the Model Architecture


In [5]:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
                 input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))

model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 32, 32, 16)        208       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 16)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 32)        2080      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 32)          0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 8, 8, 64)          8256      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 500)               512500    
_________________________________________________________________
dropout_2 (Dropout)          (None, 500)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5010      
=================================================================
Total params: 528,054
Trainable params: 528,054
Non-trainable params: 0
_________________________________________________________________
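
The parameter counts in the summary can be reproduced by hand: a Conv2D layer has filters * (kernel_height * kernel_width * input_channels + 1) weights (the +1 is the bias), and a Dense layer has inputs * outputs + outputs. A quick check against the table above:

In [ ]:
# reproduce the summary's parameter counts by hand
print(16 * (2*2*3 + 1))     # conv2d_1: 208
print(32 * (2*2*16 + 1))    # conv2d_2: 2080
print(64 * (2*2*32 + 1))    # conv2d_3: 8256
print(4*4*64 * 500 + 500)   # dense_1: 512500
print(500 * 10 + 10)        # dense_2: 5010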

6. Compile the Model


In [6]:
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
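
One caveat before training: passing the string 'rmsprop' selects Keras's default learning rate (0.001), and the log in the next section shows the validation loss bottoming out around epoch 11 before the model drifts into divergence. If you rerun this notebook, a smaller learning rate is one knob worth trying; a minimal sketch for this Keras version, where the keyword is lr:

In [ ]:
from keras.optimizers import RMSprop

# same loss and metric, but a smaller, more stable step size
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=1e-4),
              metrics=['accuracy'])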

7. Train the Model


In [7]:
from keras.callbacks import ModelCheckpoint   

# train the model
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1, 
                               save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
                 validation_data=(x_valid, y_valid), callbacks=[checkpointer],
                 verbose=2, shuffle=True)


Train on 45000 samples, validate on 5000 samples
Epoch 1/100
Epoch 00000: val_loss improved from inf to 1.35820, saving model to model.weights.best.hdf5
46s - loss: 1.6192 - acc: 0.4140 - val_loss: 1.3582 - val_acc: 0.5166
Epoch 2/100
Epoch 00001: val_loss improved from 1.35820 to 1.22245, saving model to model.weights.best.hdf5
53s - loss: 1.2881 - acc: 0.5402 - val_loss: 1.2224 - val_acc: 0.5644
Epoch 3/100
Epoch 00002: val_loss improved from 1.22245 to 1.12096, saving model to model.weights.best.hdf5
49s - loss: 1.1630 - acc: 0.5879 - val_loss: 1.1210 - val_acc: 0.6046
Epoch 4/100
Epoch 00003: val_loss improved from 1.12096 to 1.10724, saving model to model.weights.best.hdf5
56s - loss: 1.0928 - acc: 0.6160 - val_loss: 1.1072 - val_acc: 0.6134
Epoch 5/100
Epoch 00004: val_loss improved from 1.10724 to 0.97377, saving model to model.weights.best.hdf5
52s - loss: 1.0413 - acc: 0.6382 - val_loss: 0.9738 - val_acc: 0.6596
Epoch 6/100
Epoch 00005: val_loss improved from 0.97377 to 0.95501, saving model to model.weights.best.hdf5
50s - loss: 1.0090 - acc: 0.6484 - val_loss: 0.9550 - val_acc: 0.6768
Epoch 7/100
Epoch 00006: val_loss improved from 0.95501 to 0.94448, saving model to model.weights.best.hdf5
49s - loss: 0.9967 - acc: 0.6561 - val_loss: 0.9445 - val_acc: 0.6828
Epoch 8/100
Epoch 00007: val_loss did not improve
61s - loss: 0.9934 - acc: 0.6604 - val_loss: 1.1300 - val_acc: 0.6376
Epoch 9/100
Epoch 00008: val_loss improved from 0.94448 to 0.91779, saving model to model.weights.best.hdf5
49s - loss: 0.9858 - acc: 0.6672 - val_loss: 0.9178 - val_acc: 0.6882
Epoch 10/100
Epoch 00009: val_loss did not improve
50s - loss: 0.9839 - acc: 0.6658 - val_loss: 0.9669 - val_acc: 0.6748
Epoch 11/100
Epoch 00010: val_loss improved from 0.91779 to 0.91570, saving model to model.weights.best.hdf5
49s - loss: 1.0002 - acc: 0.6624 - val_loss: 0.9157 - val_acc: 0.6936
Epoch 12/100
Epoch 00011: val_loss did not improve
54s - loss: 1.0001 - acc: 0.6659 - val_loss: 1.1442 - val_acc: 0.6646
Epoch 13/100
Epoch 00012: val_loss did not improve
56s - loss: 1.0161 - acc: 0.6633 - val_loss: 0.9702 - val_acc: 0.6788
Epoch 14/100
Epoch 00013: val_loss did not improve
46s - loss: 1.0316 - acc: 0.6568 - val_loss: 0.9937 - val_acc: 0.6766
Epoch 15/100
Epoch 00014: val_loss did not improve
54s - loss: 1.0412 - acc: 0.6525 - val_loss: 1.1574 - val_acc: 0.6190
Epoch 16/100
Epoch 00015: val_loss did not improve
55s - loss: 1.0726 - acc: 0.6462 - val_loss: 1.0492 - val_acc: 0.6790
Epoch 17/100
Epoch 00016: val_loss did not improve
48s - loss: 1.0891 - acc: 0.6387 - val_loss: 1.0739 - val_acc: 0.6528
Epoch 18/100
Epoch 00017: val_loss did not improve
46s - loss: 1.1152 - acc: 0.6337 - val_loss: 1.0672 - val_acc: 0.6610
Epoch 19/100
Epoch 00018: val_loss did not improve
47s - loss: 1.1392 - acc: 0.6258 - val_loss: 1.5400 - val_acc: 0.5742
Epoch 20/100
Epoch 00019: val_loss did not improve
47s - loss: 1.1565 - acc: 0.6207 - val_loss: 1.0309 - val_acc: 0.6636
Epoch 21/100
Epoch 00020: val_loss did not improve
44s - loss: 1.1711 - acc: 0.6159 - val_loss: 1.4559 - val_acc: 0.5736
Epoch 22/100
Epoch 00021: val_loss did not improve
44s - loss: 1.1802 - acc: 0.6132 - val_loss: 1.1716 - val_acc: 0.6288
Epoch 23/100
Epoch 00022: val_loss did not improve
44s - loss: 1.2012 - acc: 0.6033 - val_loss: 1.3916 - val_acc: 0.6222
Epoch 24/100
Epoch 00023: val_loss did not improve
47s - loss: 1.2319 - acc: 0.5964 - val_loss: 1.5698 - val_acc: 0.5688
Epoch 25/100
Epoch 00024: val_loss did not improve
50s - loss: 1.2479 - acc: 0.5914 - val_loss: 1.2740 - val_acc: 0.6038
Epoch 26/100
Epoch 00025: val_loss did not improve
58s - loss: 1.2616 - acc: 0.5870 - val_loss: 1.2803 - val_acc: 0.5496
Epoch 27/100
Epoch 00026: val_loss did not improve
57s - loss: 1.2908 - acc: 0.5792 - val_loss: 1.0756 - val_acc: 0.6432
Epoch 28/100
Epoch 00027: val_loss did not improve
55s - loss: 1.3248 - acc: 0.5667 - val_loss: 1.2289 - val_acc: 0.5800
Epoch 29/100
Epoch 00028: val_loss did not improve
57s - loss: 1.3258 - acc: 0.5633 - val_loss: 1.3088 - val_acc: 0.5756
Epoch 30/100
Epoch 00029: val_loss did not improve
46s - loss: 1.3381 - acc: 0.5586 - val_loss: 1.2569 - val_acc: 0.6044
Epoch 31/100
Epoch 00030: val_loss did not improve
55s - loss: 1.3507 - acc: 0.5545 - val_loss: 1.3436 - val_acc: 0.5562
Epoch 32/100
Epoch 00031: val_loss did not improve
61s - loss: 1.3643 - acc: 0.5513 - val_loss: 1.2951 - val_acc: 0.5646
Epoch 33/100
Epoch 00032: val_loss did not improve
69s - loss: 1.3873 - acc: 0.5426 - val_loss: 1.4049 - val_acc: 0.6066
Epoch 34/100
Epoch 00033: val_loss did not improve
53s - loss: 1.3842 - acc: 0.5415 - val_loss: 1.8164 - val_acc: 0.5640
Epoch 35/100
Epoch 00034: val_loss did not improve
48s - loss: 1.4187 - acc: 0.5303 - val_loss: 1.7554 - val_acc: 0.5616
Epoch 36/100
Epoch 00035: val_loss did not improve
57s - loss: 1.4278 - acc: 0.5268 - val_loss: 1.9956 - val_acc: 0.5072
Epoch 37/100
Epoch 00036: val_loss did not improve
58s - loss: 1.4365 - acc: 0.5216 - val_loss: 1.8344 - val_acc: 0.4748
Epoch 38/100
Epoch 00037: val_loss did not improve
64s - loss: 1.4529 - acc: 0.5205 - val_loss: 1.2752 - val_acc: 0.5690
Epoch 39/100
Epoch 00038: val_loss did not improve
62s - loss: 1.4726 - acc: 0.5111 - val_loss: 1.7092 - val_acc: 0.5600
Epoch 40/100
Epoch 00039: val_loss did not improve
70s - loss: 1.4673 - acc: 0.5107 - val_loss: 1.2288 - val_acc: 0.5698
Epoch 41/100
Epoch 00040: val_loss did not improve
68s - loss: 1.4872 - acc: 0.5083 - val_loss: 1.4082 - val_acc: 0.5162
Epoch 42/100
Epoch 00041: val_loss did not improve
69s - loss: 1.4983 - acc: 0.5003 - val_loss: 1.5808 - val_acc: 0.4818
Epoch 43/100
Epoch 00042: val_loss did not improve
79s - loss: 1.5211 - acc: 0.4957 - val_loss: 1.2271 - val_acc: 0.5882
Epoch 44/100
Epoch 00043: val_loss did not improve
95s - loss: 1.5474 - acc: 0.4867 - val_loss: 3.7681 - val_acc: 0.3394
Epoch 45/100
Epoch 00044: val_loss did not improve
80s - loss: 1.5432 - acc: 0.4854 - val_loss: 1.3349 - val_acc: 0.5830
Epoch 46/100
Epoch 00045: val_loss did not improve
63s - loss: 1.5615 - acc: 0.4785 - val_loss: 1.4494 - val_acc: 0.5332
Epoch 47/100
Epoch 00046: val_loss did not improve
47s - loss: 1.5731 - acc: 0.4752 - val_loss: 1.4689 - val_acc: 0.4648
Epoch 48/100
Epoch 00047: val_loss did not improve
49s - loss: 1.5832 - acc: 0.4694 - val_loss: 1.6045 - val_acc: 0.3992
Epoch 49/100
Epoch 00048: val_loss did not improve
50s - loss: 1.6000 - acc: 0.4670 - val_loss: 3.0627 - val_acc: 0.3648
Epoch 50/100
Epoch 00049: val_loss did not improve
73s - loss: 1.5988 - acc: 0.4655 - val_loss: 1.4299 - val_acc: 0.5020
Epoch 51/100
Epoch 00050: val_loss did not improve
52s - loss: 1.6025 - acc: 0.4623 - val_loss: 1.6269 - val_acc: 0.4766
Epoch 52/100
Epoch 00051: val_loss did not improve
53s - loss: 1.6104 - acc: 0.4601 - val_loss: 1.4260 - val_acc: 0.5390
Epoch 53/100
Epoch 00052: val_loss did not improve
51s - loss: 1.6203 - acc: 0.4569 - val_loss: 1.3396 - val_acc: 0.5366
Epoch 54/100
Epoch 00053: val_loss did not improve
50s - loss: 1.6354 - acc: 0.4500 - val_loss: 1.6159 - val_acc: 0.4512
Epoch 55/100
Epoch 00054: val_loss did not improve
53s - loss: 1.6552 - acc: 0.4433 - val_loss: 1.7258 - val_acc: 0.4468
Epoch 56/100
Epoch 00055: val_loss did not improve
47s - loss: 1.6696 - acc: 0.4363 - val_loss: 1.4365 - val_acc: 0.4938
Epoch 57/100
Epoch 00056: val_loss did not improve
46s - loss: 1.6605 - acc: 0.4368 - val_loss: 2.5907 - val_acc: 0.3732
Epoch 58/100
Epoch 00057: val_loss did not improve
50s - loss: 1.6720 - acc: 0.4336 - val_loss: 1.5503 - val_acc: 0.4274
Epoch 59/100
Epoch 00058: val_loss did not improve
68s - loss: 1.6897 - acc: 0.4281 - val_loss: 1.5233 - val_acc: 0.4362
Epoch 60/100
Epoch 00059: val_loss did not improve
73s - loss: 1.7099 - acc: 0.4201 - val_loss: 1.4141 - val_acc: 0.5124
Epoch 61/100
Epoch 00060: val_loss did not improve
71s - loss: 1.7182 - acc: 0.4182 - val_loss: 1.5190 - val_acc: 0.4486
Epoch 62/100
Epoch 00061: val_loss did not improve
72s - loss: 1.7177 - acc: 0.4179 - val_loss: 1.4966 - val_acc: 0.4860
Epoch 63/100
Epoch 00062: val_loss did not improve
59s - loss: 1.7079 - acc: 0.4228 - val_loss: 1.6089 - val_acc: 0.4384
Epoch 64/100
Epoch 00063: val_loss did not improve
50s - loss: 1.7101 - acc: 0.4147 - val_loss: 1.6014 - val_acc: 0.4430
Epoch 65/100
Epoch 00064: val_loss did not improve
49s - loss: 1.7180 - acc: 0.4144 - val_loss: 2.2502 - val_acc: 0.3712
Epoch 66/100
Epoch 00065: val_loss did not improve
50s - loss: 1.7190 - acc: 0.4140 - val_loss: 1.3967 - val_acc: 0.4964
Epoch 67/100
Epoch 00066: val_loss did not improve
50s - loss: 1.7262 - acc: 0.4082 - val_loss: 1.5334 - val_acc: 0.4650
Epoch 68/100
Epoch 00067: val_loss did not improve
50s - loss: 1.7432 - acc: 0.4032 - val_loss: 1.7911 - val_acc: 0.3588
Epoch 69/100
Epoch 00068: val_loss did not improve
50s - loss: 1.7309 - acc: 0.4054 - val_loss: 1.6592 - val_acc: 0.3892
Epoch 70/100
Epoch 00069: val_loss did not improve
50s - loss: 1.7581 - acc: 0.3977 - val_loss: 1.6551 - val_acc: 0.4056
Epoch 71/100
Epoch 00070: val_loss did not improve
50s - loss: 1.7619 - acc: 0.3930 - val_loss: 1.5855 - val_acc: 0.4670
Epoch 72/100
Epoch 00071: val_loss did not improve
55s - loss: 1.7690 - acc: 0.3918 - val_loss: 1.5534 - val_acc: 0.4350
Epoch 73/100
Epoch 00072: val_loss did not improve
77s - loss: 1.7910 - acc: 0.3890 - val_loss: 1.5390 - val_acc: 0.4692
Epoch 74/100
Epoch 00073: val_loss did not improve
68s - loss: 1.7941 - acc: 0.3853 - val_loss: 1.4875 - val_acc: 0.4764
Epoch 75/100
Epoch 00074: val_loss did not improve
71s - loss: 1.8069 - acc: 0.3816 - val_loss: 1.6594 - val_acc: 0.3990
Epoch 76/100
Epoch 00075: val_loss did not improve
63s - loss: 1.8160 - acc: 0.3776 - val_loss: 1.6119 - val_acc: 0.3804
Epoch 77/100
Epoch 00076: val_loss did not improve
52s - loss: 1.8073 - acc: 0.3793 - val_loss: 1.5836 - val_acc: 0.4578
Epoch 78/100
Epoch 00077: val_loss did not improve
72s - loss: 1.8185 - acc: 0.3731 - val_loss: 1.6415 - val_acc: 0.4004
Epoch 79/100
Epoch 00078: val_loss did not improve
78s - loss: 1.8229 - acc: 0.3724 - val_loss: 1.7005 - val_acc: 0.3834
Epoch 80/100
Epoch 00079: val_loss did not improve
61s - loss: 1.8316 - acc: 0.3664 - val_loss: 1.8900 - val_acc: 0.2996
Epoch 81/100
Epoch 00080: val_loss did not improve
50s - loss: 1.8274 - acc: 0.3656 - val_loss: 1.6902 - val_acc: 0.3794
Epoch 82/100
Epoch 00081: val_loss did not improve
50s - loss: 1.8448 - acc: 0.3609 - val_loss: 1.9591 - val_acc: 0.3094
Epoch 83/100
Epoch 00082: val_loss did not improve
48s - loss: 1.8468 - acc: 0.3566 - val_loss: 1.6827 - val_acc: 0.4108
Epoch 84/100
Epoch 00083: val_loss did not improve
48s - loss: 1.9039 - acc: 0.3516 - val_loss: 1.5814 - val_acc: 0.4456
Epoch 85/100
Epoch 00084: val_loss did not improve
68s - loss: 1.8499 - acc: 0.3550 - val_loss: 1.8199 - val_acc: 0.3736
Epoch 86/100
Epoch 00085: val_loss did not improve
77s - loss: 1.8404 - acc: 0.3556 - val_loss: 1.7326 - val_acc: 0.3518
Epoch 87/100
Epoch 00086: val_loss did not improve
59s - loss: 1.8509 - acc: 0.3513 - val_loss: 1.6321 - val_acc: 0.4042
Epoch 88/100
Epoch 00087: val_loss did not improve
51s - loss: 1.8580 - acc: 0.3502 - val_loss: 2.8168 - val_acc: 0.3208
Epoch 89/100
Epoch 00088: val_loss did not improve
60s - loss: 1.8760 - acc: 0.3392 - val_loss: 1.6616 - val_acc: 0.4156
Epoch 90/100
Epoch 00089: val_loss did not improve
61s - loss: 1.8682 - acc: 0.3462 - val_loss: 1.6725 - val_acc: 0.3900
Epoch 91/100
Epoch 00090: val_loss did not improve
57s - loss: 1.8900 - acc: 0.3312 - val_loss: 1.6851 - val_acc: 0.3424
Epoch 92/100
Epoch 00091: val_loss did not improve
54s - loss: 1.8889 - acc: 0.3363 - val_loss: 1.6296 - val_acc: 0.4230
Epoch 93/100
Epoch 00092: val_loss did not improve
56s - loss: 1.9040 - acc: 0.3343 - val_loss: 1.7510 - val_acc: 0.3306
Epoch 94/100
Epoch 00093: val_loss did not improve
50s - loss: 1.9041 - acc: 0.3266 - val_loss: 1.7218 - val_acc: 0.3582
Epoch 95/100
Epoch 00094: val_loss did not improve
48s - loss: 1.8978 - acc: 0.3224 - val_loss: 1.6739 - val_acc: 0.3970
Epoch 96/100
Epoch 00095: val_loss did not improve
48s - loss: 1.9173 - acc: 0.3180 - val_loss: 1.7337 - val_acc: 0.3526
Epoch 97/100
Epoch 00096: val_loss did not improve
48s - loss: 1.9016 - acc: 0.3204 - val_loss: 1.7351 - val_acc: 0.3452
Epoch 98/100
Epoch 00097: val_loss did not improve
48s - loss: 1.9117 - acc: 0.3170 - val_loss: 2.2827 - val_acc: 0.2592
Epoch 99/100
Epoch 00098: val_loss did not improve
48s - loss: 1.9319 - acc: 0.3049 - val_loss: 2.9560 - val_acc: 0.3060
Epoch 100/100
Epoch 00099: val_loss did not improve
48s - loss: 1.9390 - acc: 0.3070 - val_loss: 1.9106 - val_acc: 0.3102
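
The History object returned by fit is stored in hist but never used above. Plotting its loss curves makes the behavior in the log easy to see: the validation loss reaches its minimum near epoch 11, after which both curves drift upward. A minimal sketch:

In [ ]:
# plot training and validation loss from the History object returned by fit
plt.figure(figsize=(8, 4))
plt.plot(hist.history['loss'], label='training loss')
plt.plot(hist.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()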

8. Load the Model with the Best Validation Loss


In [8]:
# load the weights that yielded the best validation loss
# (ModelCheckpoint monitors val_loss by default)
model.load_weights('model.weights.best.hdf5')

9. Calculate Classification Accuracy on Test Set


In [9]:
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])


 Test accuracy: 0.68
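
Because the model was compiled with metrics=['accuracy'], evaluate returns a pair: score[0] is the test loss (categorical cross-entropy) and score[1] is the accuracy printed above.

In [ ]:
# the companion test loss, if you want it
print('Test loss:', score[0])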

10. Visualize Some Predictions

This may give you some insight into why the network is misclassifying certain objects.


In [10]:
# get predictions on the test set
y_hat = model.predict(x_test)

# define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html)
cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

In [11]:
# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(20, 8))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):
    ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_test[idx]))
    pred_idx = np.argmax(y_hat[idx])
    true_idx = np.argmax(y_test[idx])
    ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]),
                 color=("green" if pred_idx == true_idx else "red"))