cifar10-simple-cnn


CIFAR-10

Basic CNN

Activate virtual environment (note: a %%bash cell runs in its own subshell, so this does not change the running kernel's environment; the notebook itself should be launched from the activated environment)


In [1]:
%%bash
source ~/kerai/bin/activate

Imports


In [2]:
%matplotlib inline
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
from keras.models import Sequential
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
from keras.layers import Lambda, Conv2D, MaxPooling2D, Dropout, Dense, Flatten, Activation


Using TensorFlow backend.

Import helper functions


In [3]:
from helper import get_class_names, get_train_data, get_test_data, plot_images, plot_model

Change matplotlib graph style


In [4]:
matplotlib.style.use('ggplot')

Constants

Load class names


In [5]:
class_names = get_class_names()
print(class_names)


Decoding file: data/batches.meta
['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

Get number of classes


In [6]:
num_classes = len(class_names)
print(num_classes)


10

In [1]:
# Height and width of the images
IMAGE_SIZE = 32
# 3 channels: Red, Green and Blue
CHANNELS = 3

Fetch and decode data

Load the training dataset. Labels are integers, while classes are the corresponding one-hot encoded vectors.


In [7]:
images_train, labels_train, class_train = get_train_data()


Decoding file: data/data_batch_1
Decoding file: data/data_batch_2
Decoding file: data/data_batch_3
Decoding file: data/data_batch_4
Decoding file: data/data_batch_5

Integer labels


In [8]:
print(labels_train)


[6 9 9 ..., 9 1 1]

One-hot encoded labels


In [9]:
print(class_train)


[[ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  1.]
 [ 0.  0.  0. ...,  0.  0.  1.]
 ..., 
 [ 0.  0.  0. ...,  0.  0.  1.]
 [ 0.  1.  0. ...,  0.  0.  0.]
 [ 0.  1.  0. ...,  0.  0.  0.]]
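The one-hot rows above can be reproduced from the integer labels with plain NumPy (a minimal sketch; the notebook's helper presumably does the equivalent, and `keras.utils.to_categorical` offers the same conversion):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Turn an array of integer labels into one-hot row vectors."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

# First three training labels from the output above: 6, 9, 9
print(one_hot(np.array([6, 9, 9]), 10))
```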

Load the testing dataset.


In [10]:
images_test, labels_test, class_test = get_test_data()


Decoding file: data/test_batch

In [11]:
print("Training set size:\t",len(images_train))
print("Testing set size:\t",len(images_test))


Training set size:	 50000
Testing set size:	 10000

The full CIFAR-10 dataset is now loaded: 60,000 images with their corresponding labels, split into 50,000 for training and 10,000 for testing.
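For reference, each raw CIFAR-10 batch stores an image as a 3072-value row: 1024 red values, then 1024 green, then 1024 blue. A sketch of the reshape the helper presumably performs to turn such a row into a 32x32x3 height-width-channel array (demonstrated on a synthetic row, not the real data files):

```python
import numpy as np

IMAGE_SIZE, CHANNELS = 32, 3

def row_to_image(row):
    """Convert one 3072-value CIFAR row (RRR...GGG...BBB) to a 32x32x3 array."""
    return row.reshape(CHANNELS, IMAGE_SIZE, IMAGE_SIZE).transpose(1, 2, 0)

# Synthetic row: all red values 0, green values 1, blue values 2
row = np.repeat([0, 1, 2], IMAGE_SIZE * IMAGE_SIZE)
img = row_to_image(row)
print(img.shape)  # (32, 32, 3)
```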

Define the CNN model


In [12]:
def cnn_model():
    
    model = Sequential()
    
    model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(IMAGE_SIZE,IMAGE_SIZE,CHANNELS)))    
    model.add(Conv2D(32, (3, 3), activation='relu'))    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))    
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    model.add(Flatten())
    
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.5))
    
    model.add(Dense(num_classes, activation='softmax'))
    
    model.summary()
    
    return model

Build model


In [13]:
model = cnn_model()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 32, 32, 32)        896       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 30, 30, 32)        9248      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 15, 15, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 15, 15, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 15, 15, 64)        18496     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 13, 13, 64)        36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 6, 6, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 2304)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               1180160   
_________________________________________________________________
dropout_3 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                5130      
=================================================================
Total params: 1,250,858
Trainable params: 1,250,858
Non-trainable params: 0
_________________________________________________________________
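The parameter counts in the summary can be checked by hand: a Conv2D layer has kernel_height * kernel_width * in_channels * filters weights plus one bias per filter, and a Dense layer has in_units * out_units weights plus one bias per output unit. A quick sanity check against the table above:

```python
def conv_params(kh, kw, in_ch, filters):
    """Weights plus biases of a Conv2D layer."""
    return kh * kw * in_ch * filters + filters

def dense_params(in_units, out_units):
    """Weights plus biases of a Dense layer."""
    return in_units * out_units + out_units

layers = [
    conv_params(3, 3, 3, 32),       # conv2d_1: 896
    conv_params(3, 3, 32, 32),      # conv2d_2: 9248
    conv_params(3, 3, 32, 64),      # conv2d_3: 18496
    conv_params(3, 3, 64, 64),      # conv2d_4: 36928
    dense_params(6 * 6 * 64, 512),  # dense_1: 1180160
    dense_params(512, 10),          # dense_2: 5130
]
print(sum(layers))  # 1250858, matching "Total params" above
```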

Train model on the training data

Checkpoint the best model seen so far after each epoch


In [14]:
checkpoint = ModelCheckpoint('best_model_simple.h5',  # model filename
                             monitor='val_loss', # quantity to monitor
                             verbose=0, # verbosity - 0 or 1
                             save_best_only= True, # only overwrite the file when val_loss improves
                             mode='auto') # the direction of "improvement" (min or max)
                                          # is inferred from the monitored quantity

Configure the model for training


In [15]:
model.compile(loss='categorical_crossentropy', # standard loss for multi-class classification
              optimizer=Adam(lr=1.0e-4), # Adam optimizer with a 1.0e-4 learning rate
              metrics = ['accuracy']) # metrics to report during training and evaluation
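Categorical cross-entropy for one sample is -sum(y_true * log(y_pred)); with one-hot targets this reduces to the negative log of the probability the model assigned to the true class. A minimal NumPy illustration of the loss being minimized here:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy of a one-hot target against predicted probabilities."""
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0., 0., 1., 0.])     # true class is index 2
y_pred = np.array([0.1, 0.1, 0.7, 0.1]) # model puts 0.7 on the true class
print(categorical_crossentropy(y_true, y_pred))  # -log(0.7), about 0.357
```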

Fit the model on the data provided. Note that the test set doubles as the validation set here, so the checkpointed "best" model is selected on the same data it is later evaluated on.


In [16]:
model_details = model.fit(images_train, class_train,
                    batch_size = 128, # number of samples per gradient update
                    epochs = 100, # number of full passes over the training set
                    validation_data= (images_test, class_test),
                    callbacks=[checkpoint],
                    verbose=1)


Train on 50000 samples, validate on 10000 samples
Epoch 1/100
50000/50000 [==============================] - 85s - loss: 1.9323 - acc: 0.2891 - val_loss: 1.6324 - val_acc: 0.4150
Epoch 2/100
50000/50000 [==============================] - 82s - loss: 1.5806 - acc: 0.4253 - val_loss: 1.4275 - val_acc: 0.4862
Epoch 3/100
50000/50000 [==============================] - 82s - loss: 1.4481 - acc: 0.4753 - val_loss: 1.3389 - val_acc: 0.5209
Epoch 4/100
50000/50000 [==============================] - 82s - loss: 1.3712 - acc: 0.5065 - val_loss: 1.2795 - val_acc: 0.5449
Epoch 5/100
50000/50000 [==============================] - 82s - loss: 1.3134 - acc: 0.5290 - val_loss: 1.2220 - val_acc: 0.5619
Epoch 6/100
50000/50000 [==============================] - 82s - loss: 1.2571 - acc: 0.5529 - val_loss: 1.1620 - val_acc: 0.5902
Epoch 7/100
50000/50000 [==============================] - 82s - loss: 1.2140 - acc: 0.5695 - val_loss: 1.1246 - val_acc: 0.6024
Epoch 8/100
50000/50000 [==============================] - 82s - loss: 1.1743 - acc: 0.5857 - val_loss: 1.0778 - val_acc: 0.6194
Epoch 9/100
50000/50000 [==============================] - 82s - loss: 1.1310 - acc: 0.6002 - val_loss: 1.0664 - val_acc: 0.6234
Epoch 10/100
50000/50000 [==============================] - 82s - loss: 1.0968 - acc: 0.6130 - val_loss: 1.0168 - val_acc: 0.6405
Epoch 11/100
50000/50000 [==============================] - 82s - loss: 1.0661 - acc: 0.6237 - val_loss: 0.9985 - val_acc: 0.6476
Epoch 12/100
50000/50000 [==============================] - 82s - loss: 1.0363 - acc: 0.6368 - val_loss: 0.9774 - val_acc: 0.6554
Epoch 13/100
50000/50000 [==============================] - 82s - loss: 1.0129 - acc: 0.6433 - val_loss: 0.9408 - val_acc: 0.6678
Epoch 14/100
50000/50000 [==============================] - 82s - loss: 0.9923 - acc: 0.6514 - val_loss: 0.9282 - val_acc: 0.6742
Epoch 15/100
50000/50000 [==============================] - 82s - loss: 0.9617 - acc: 0.6614 - val_loss: 0.9177 - val_acc: 0.6807
Epoch 16/100
50000/50000 [==============================] - 82s - loss: 0.9440 - acc: 0.6697 - val_loss: 0.8977 - val_acc: 0.6824
Epoch 17/100
50000/50000 [==============================] - 82s - loss: 0.9267 - acc: 0.6757 - val_loss: 0.8711 - val_acc: 0.6975
Epoch 18/100
50000/50000 [==============================] - 82s - loss: 0.9105 - acc: 0.6821 - val_loss: 0.8708 - val_acc: 0.6984
Epoch 19/100
50000/50000 [==============================] - 82s - loss: 0.8938 - acc: 0.6856 - val_loss: 0.8535 - val_acc: 0.7013
Epoch 20/100
50000/50000 [==============================] - 82s - loss: 0.8752 - acc: 0.6936 - val_loss: 0.8604 - val_acc: 0.6944
Epoch 21/100
50000/50000 [==============================] - 82s - loss: 0.8570 - acc: 0.6984 - val_loss: 0.8244 - val_acc: 0.7108
Epoch 22/100
50000/50000 [==============================] - 82s - loss: 0.8405 - acc: 0.7039 - val_loss: 0.8131 - val_acc: 0.7164
Epoch 23/100
50000/50000 [==============================] - 82s - loss: 0.8234 - acc: 0.7112 - val_loss: 0.8170 - val_acc: 0.7150
Epoch 24/100
50000/50000 [==============================] - 82s - loss: 0.8166 - acc: 0.7131 - val_loss: 0.8033 - val_acc: 0.7200
Epoch 25/100
50000/50000 [==============================] - 82s - loss: 0.7985 - acc: 0.7217 - val_loss: 0.7880 - val_acc: 0.7247
Epoch 26/100
50000/50000 [==============================] - 82s - loss: 0.7919 - acc: 0.7237 - val_loss: 0.7807 - val_acc: 0.7325
Epoch 27/100
50000/50000 [==============================] - 82s - loss: 0.7707 - acc: 0.7292 - val_loss: 0.7735 - val_acc: 0.7313
Epoch 28/100
50000/50000 [==============================] - 82s - loss: 0.7597 - acc: 0.7345 - val_loss: 0.7614 - val_acc: 0.7347
Epoch 29/100
50000/50000 [==============================] - 82s - loss: 0.7487 - acc: 0.7388 - val_loss: 0.7646 - val_acc: 0.7357
Epoch 30/100
50000/50000 [==============================] - 82s - loss: 0.7325 - acc: 0.7425 - val_loss: 0.7512 - val_acc: 0.7389
Epoch 31/100
50000/50000 [==============================] - 82s - loss: 0.7199 - acc: 0.7490 - val_loss: 0.7419 - val_acc: 0.7413
Epoch 32/100
50000/50000 [==============================] - 82s - loss: 0.7112 - acc: 0.7517 - val_loss: 0.7400 - val_acc: 0.7426
Epoch 33/100
50000/50000 [==============================] - 82s - loss: 0.7015 - acc: 0.7547 - val_loss: 0.7535 - val_acc: 0.7373
Epoch 34/100
50000/50000 [==============================] - 82s - loss: 0.6894 - acc: 0.7600 - val_loss: 0.7216 - val_acc: 0.7514
Epoch 35/100
50000/50000 [==============================] - 82s - loss: 0.6761 - acc: 0.7663 - val_loss: 0.7136 - val_acc: 0.7545
Epoch 36/100
50000/50000 [==============================] - 82s - loss: 0.6646 - acc: 0.7670 - val_loss: 0.7153 - val_acc: 0.7538
Epoch 37/100
50000/50000 [==============================] - 82s - loss: 0.6558 - acc: 0.7699 - val_loss: 0.7172 - val_acc: 0.7502
Epoch 38/100
50000/50000 [==============================] - 82s - loss: 0.6402 - acc: 0.7754 - val_loss: 0.7063 - val_acc: 0.7542
Epoch 39/100
50000/50000 [==============================] - 82s - loss: 0.6336 - acc: 0.7777 - val_loss: 0.6989 - val_acc: 0.7590
Epoch 40/100
50000/50000 [==============================] - 82s - loss: 0.6200 - acc: 0.7835 - val_loss: 0.6940 - val_acc: 0.7607
Epoch 41/100
50000/50000 [==============================] - 82s - loss: 0.6193 - acc: 0.7837 - val_loss: 0.6860 - val_acc: 0.7629
Epoch 42/100
50000/50000 [==============================] - 82s - loss: 0.6066 - acc: 0.7873 - val_loss: 0.6922 - val_acc: 0.7595
Epoch 43/100
50000/50000 [==============================] - 82s - loss: 0.5910 - acc: 0.7922 - val_loss: 0.6851 - val_acc: 0.7647
Epoch 44/100
50000/50000 [==============================] - 81s - loss: 0.5877 - acc: 0.7934 - val_loss: 0.6790 - val_acc: 0.7659
Epoch 45/100
50000/50000 [==============================] - 81s - loss: 0.5797 - acc: 0.7956 - val_loss: 0.6717 - val_acc: 0.7684
Epoch 46/100
50000/50000 [==============================] - 81s - loss: 0.5658 - acc: 0.7990 - val_loss: 0.6765 - val_acc: 0.7640
Epoch 47/100
50000/50000 [==============================] - 81s - loss: 0.5560 - acc: 0.8055 - val_loss: 0.6721 - val_acc: 0.7666
Epoch 48/100
50000/50000 [==============================] - 81s - loss: 0.5460 - acc: 0.8073 - val_loss: 0.6706 - val_acc: 0.7689
Epoch 49/100
50000/50000 [==============================] - 81s - loss: 0.5423 - acc: 0.8094 - val_loss: 0.6658 - val_acc: 0.7700
Epoch 50/100
50000/50000 [==============================] - 81s - loss: 0.5355 - acc: 0.8106 - val_loss: 0.6714 - val_acc: 0.7677
Epoch 51/100
50000/50000 [==============================] - 82s - loss: 0.5278 - acc: 0.8150 - val_loss: 0.6685 - val_acc: 0.7715
Epoch 52/100
50000/50000 [==============================] - 82s - loss: 0.5236 - acc: 0.8179 - val_loss: 0.6604 - val_acc: 0.7749
Epoch 53/100
50000/50000 [==============================] - 85s - loss: 0.5097 - acc: 0.8194 - val_loss: 0.6742 - val_acc: 0.7736
Epoch 54/100
50000/50000 [==============================] - 86s - loss: 0.5035 - acc: 0.8212 - val_loss: 0.6636 - val_acc: 0.7734
Epoch 55/100
50000/50000 [==============================] - 86s - loss: 0.4914 - acc: 0.8265 - val_loss: 0.6710 - val_acc: 0.7714
Epoch 56/100
50000/50000 [==============================] - 86s - loss: 0.4845 - acc: 0.8304 - val_loss: 0.6603 - val_acc: 0.7759
Epoch 57/100
50000/50000 [==============================] - 86s - loss: 0.4820 - acc: 0.8303 - val_loss: 0.6614 - val_acc: 0.7772
Epoch 58/100
50000/50000 [==============================] - 86s - loss: 0.4690 - acc: 0.8342 - val_loss: 0.6635 - val_acc: 0.7731
Epoch 59/100
50000/50000 [==============================] - 86s - loss: 0.4650 - acc: 0.8364 - val_loss: 0.6592 - val_acc: 0.7750
Epoch 60/100
50000/50000 [==============================] - 86s - loss: 0.4609 - acc: 0.8359 - val_loss: 0.6578 - val_acc: 0.7778
Epoch 61/100
50000/50000 [==============================] - 86s - loss: 0.4561 - acc: 0.8391 - val_loss: 0.6575 - val_acc: 0.7784
Epoch 62/100
50000/50000 [==============================] - 86s - loss: 0.4495 - acc: 0.8408 - val_loss: 0.6562 - val_acc: 0.7792
Epoch 63/100
50000/50000 [==============================] - 86s - loss: 0.4432 - acc: 0.8424 - val_loss: 0.6628 - val_acc: 0.7766
Epoch 64/100
50000/50000 [==============================] - 86s - loss: 0.4345 - acc: 0.8468 - val_loss: 0.6619 - val_acc: 0.7783
Epoch 65/100
50000/50000 [==============================] - 86s - loss: 0.4301 - acc: 0.8484 - val_loss: 0.6495 - val_acc: 0.7843
Epoch 66/100
50000/50000 [==============================] - 86s - loss: 0.4261 - acc: 0.8482 - val_loss: 0.6607 - val_acc: 0.7803
Epoch 67/100
50000/50000 [==============================] - 86s - loss: 0.4121 - acc: 0.8537 - val_loss: 0.6617 - val_acc: 0.7790
Epoch 68/100
50000/50000 [==============================] - 86s - loss: 0.4095 - acc: 0.8549 - val_loss: 0.6581 - val_acc: 0.7808
Epoch 69/100
50000/50000 [==============================] - 86s - loss: 0.4030 - acc: 0.8557 - val_loss: 0.6630 - val_acc: 0.7803
Epoch 70/100
50000/50000 [==============================] - 86s - loss: 0.3960 - acc: 0.8588 - val_loss: 0.6588 - val_acc: 0.7820
Epoch 71/100
50000/50000 [==============================] - 86s - loss: 0.3947 - acc: 0.8598 - val_loss: 0.6649 - val_acc: 0.7802
Epoch 72/100
50000/50000 [==============================] - 86s - loss: 0.3867 - acc: 0.8626 - val_loss: 0.6512 - val_acc: 0.7812
Epoch 73/100
50000/50000 [==============================] - 86s - loss: 0.3788 - acc: 0.8641 - val_loss: 0.6573 - val_acc: 0.7819
Epoch 74/100
50000/50000 [==============================] - 86s - loss: 0.3775 - acc: 0.8656 - val_loss: 0.6734 - val_acc: 0.7794
Epoch 75/100
50000/50000 [==============================] - 86s - loss: 0.3667 - acc: 0.8689 - val_loss: 0.6562 - val_acc: 0.7859
Epoch 76/100
50000/50000 [==============================] - 86s - loss: 0.3655 - acc: 0.8683 - val_loss: 0.6563 - val_acc: 0.7861
Epoch 77/100
50000/50000 [==============================] - 86s - loss: 0.3624 - acc: 0.8700 - val_loss: 0.6614 - val_acc: 0.7835
Epoch 78/100
50000/50000 [==============================] - 86s - loss: 0.3593 - acc: 0.8731 - val_loss: 0.6699 - val_acc: 0.7840
Epoch 79/100
50000/50000 [==============================] - 86s - loss: 0.3536 - acc: 0.8735 - val_loss: 0.6738 - val_acc: 0.7818
Epoch 80/100
50000/50000 [==============================] - 86s - loss: 0.3431 - acc: 0.8775 - val_loss: 0.6674 - val_acc: 0.7825
Epoch 81/100
50000/50000 [==============================] - 86s - loss: 0.3401 - acc: 0.8776 - val_loss: 0.6690 - val_acc: 0.7859
Epoch 82/100
50000/50000 [==============================] - 86s - loss: 0.3391 - acc: 0.8777 - val_loss: 0.6688 - val_acc: 0.7857
Epoch 83/100
50000/50000 [==============================] - 83s - loss: 0.3365 - acc: 0.8799 - val_loss: 0.6681 - val_acc: 0.7836
Epoch 84/100
50000/50000 [==============================] - 82s - loss: 0.3343 - acc: 0.8786 - val_loss: 0.6713 - val_acc: 0.7844
Epoch 85/100
50000/50000 [==============================] - 82s - loss: 0.3225 - acc: 0.8843 - val_loss: 0.6635 - val_acc: 0.7835
Epoch 86/100
50000/50000 [==============================] - 83s - loss: 0.3224 - acc: 0.8834 - val_loss: 0.6776 - val_acc: 0.7847
Epoch 87/100
50000/50000 [==============================] - 82s - loss: 0.3188 - acc: 0.8853 - val_loss: 0.6781 - val_acc: 0.7813
Epoch 88/100
50000/50000 [==============================] - 82s - loss: 0.3134 - acc: 0.8877 - val_loss: 0.6785 - val_acc: 0.7833
Epoch 89/100
50000/50000 [==============================] - 83s - loss: 0.3088 - acc: 0.8895 - val_loss: 0.6741 - val_acc: 0.7841
Epoch 90/100
50000/50000 [==============================] - 83s - loss: 0.3111 - acc: 0.8900 - val_loss: 0.6672 - val_acc: 0.7866
Epoch 91/100
50000/50000 [==============================] - 83s - loss: 0.3033 - acc: 0.8918 - val_loss: 0.6745 - val_acc: 0.7883
Epoch 92/100
50000/50000 [==============================] - 82s - loss: 0.3059 - acc: 0.8889 - val_loss: 0.6680 - val_acc: 0.7856
Epoch 93/100
50000/50000 [==============================] - 83s - loss: 0.2965 - acc: 0.8936 - val_loss: 0.6684 - val_acc: 0.7873
Epoch 94/100
50000/50000 [==============================] - 82s - loss: 0.2895 - acc: 0.8973 - val_loss: 0.6903 - val_acc: 0.7867
Epoch 95/100
50000/50000 [==============================] - 82s - loss: 0.2843 - acc: 0.8979 - val_loss: 0.6806 - val_acc: 0.7866
Epoch 96/100
50000/50000 [==============================] - 82s - loss: 0.2803 - acc: 0.8990 - val_loss: 0.6834 - val_acc: 0.7862
Epoch 97/100
50000/50000 [==============================] - 82s - loss: 0.2812 - acc: 0.8987 - val_loss: 0.6827 - val_acc: 0.7852
Epoch 98/100
50000/50000 [==============================] - 82s - loss: 0.2785 - acc: 0.8991 - val_loss: 0.6883 - val_acc: 0.7876
Epoch 99/100
50000/50000 [==============================] - 82s - loss: 0.2752 - acc: 0.9018 - val_loss: 0.6921 - val_acc: 0.7842
Epoch 100/100
50000/50000 [==============================] - 82s - loss: 0.2719 - acc: 0.9026 - val_loss: 0.6882 - val_acc: 0.7883

Evaluate the model


In [17]:
scores = model.evaluate(images_test, class_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))


Accuracy: 78.83%

Model accuracy and loss plots


In [18]:
plot_model(model_details)


Predictions

Predict class for test set images


In [23]:
class_pred = model.predict(images_test, batch_size=32)
print(class_pred[0])


[  2.34778727e-05   3.61718005e-03   7.07750034e-04   6.86623812e-01
   2.35363492e-04   2.97410578e-01   9.98710049e-04   1.62866409e-03
   8.46819486e-03   2.86295195e-04]
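The vector above is the softmax output for the first test image: ten probabilities summing to 1, with the predicted class at the largest entry (index 3, 'cat', at about 0.69; 'dog' at about 0.30 is the runner-up). Checking this on the printed values:

```python
import numpy as np

probs = np.array([2.34778727e-05, 3.61718005e-03, 7.07750034e-04, 6.86623812e-01,
                  2.35363492e-04, 2.97410578e-01, 9.98710049e-04, 1.62866409e-03,
                  8.46819486e-03, 2.86295195e-04])
print(np.isclose(probs.sum(), 1.0, atol=1e-3))  # True: a valid distribution
print(np.argmax(probs))                         # 3, i.e. 'cat'
```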

Get the index of the largest probability in each prediction vector


In [24]:
labels_pred = np.argmax(class_pred,axis=1)
print(labels_pred)


[3 8 8 ..., 5 1 7]

Check which labels have been predicted correctly


In [25]:
correct = (labels_pred == labels_test)
print(correct)
print("Number of correct predictions: %d" % sum(correct))


[ True  True  True ...,  True  True  True]
Number of correct predictions: 7883

Calculate accuracy manually


In [26]:
num_images = len(correct)
print("Accuracy: %.2f%%" % ((sum(correct)*100)/num_images))


Accuracy: 78.83%
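Beyond a single accuracy number, a confusion matrix shows which classes get mistaken for which. A minimal NumPy sketch (shown on small hypothetical label arrays; in this notebook one would pass labels_test and labels_pred):

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, num_classes):
    """cm[i, j] counts samples of true class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        cm[t, p] += 1
    return cm

# Hypothetical labels: one 'cat' (3) misread as 'dog' (5)
true_labels = np.array([3, 3, 5, 1])
pred_labels = np.array([3, 5, 5, 1])
cm = confusion_matrix(true_labels, pred_labels, 10)
print(cm[3, 5])  # 1
```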

Show some mis-classifications

Get the incorrectly classified images


In [27]:
incorrect = ~correct  # invert the boolean mask

# Images of the test-set that have been incorrectly classified.
images_error = images_test[incorrect]

# Get predicted classes for those images
labels_error = labels_pred[incorrect]

# Get true classes for those images
labels_true = labels_test[incorrect]

Plot the first 9 mis-classified images


In [28]:
plot_images(images=images_error[0:9],
            labels_true=labels_true[0:9],
            class_names=class_names,
            labels_pred=labels_error[0:9])