Deep Convolutional Neural Network in Keras

In this notebook, we build a deep convolutional MNIST-classifying network inspired by LeNet-5 and by the canonical Keras MNIST CNN example script.

Set seed for reproducibility


In [1]:
import numpy as np
np.random.seed(42)
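
Note that np.random.seed() covers only NumPy's random number generator; the TensorFlow backend draws from its own. A minimal extra step, assuming a TensorFlow 1.x backend (as the import message below indicates), is to set the graph-level seed as well; even then, some GPU kernels are not fully deterministic.


In [ ]:
# np.random.seed() does not reach TensorFlow's RNG; assuming a TF 1.x
# backend, set its graph-level seed too (GPU kernels may still vary):
import tensorflow as tf
tf.set_random_seed(42)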

Load dependencies


In [2]:
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Flatten, Conv2D, MaxPooling2D # new!


Using TensorFlow backend.

Load data


In [3]:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
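
Before preprocessing, it is worth confirming the shapes load_data() returns; the hard-coded 60,000/10,000 sample counts below rely on them.


In [ ]:
# MNIST ships as 60,000 training and 10,000 test images, each 28x28 grayscale:
X_train.shape, y_train.shape  # ((60000, 28, 28), (60000,))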

Preprocess data


In [4]:
# reshape from (samples, 28, 28) to (samples, 28, 28, 1), adding the single
# grayscale channel that Conv2D expects, and cast to float for scaling
X_train = X_train.reshape(60000, 28, 28, 1).astype('float32')
X_test = X_test.reshape(10000, 28, 28, 1).astype('float32')
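
The trailing 1 in the shape assumes the 'channels_last' image data format, the default for the TensorFlow backend. A quick check via the Keras backend API:


In [ ]:
# confirm the backend expects (rows, cols, channels), not (channels, rows, cols):
from keras import backend as K
K.image_data_format()  # expected: 'channels_last'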

In [5]:
# scale pixel intensities from [0, 255] down to [0, 1]
X_train /= 255
X_test /= 255

In [6]:
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
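
Each integer label is now a length-10 one-hot row, the format the categorical cross-entropy loss configured below expects. For instance, the first MNIST training label is 5, so position 5 of the first row holds the lone 1:


In [ ]:
# one-hot encoding of the label 5:
y_train[0]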

Design neural network architecture


In [7]:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))

In [8]:
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 12, 12, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               1179776   
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
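
The parameter counts follow directly from the shapes: each valid 3x3 convolution shrinks the spatial dimensions by 2 (28 to 26, then 26 to 24), the 2x2 max-pooling halves them to 12, and flattening yields 12 * 12 * 64 = 9216 features. A Conv2D layer holds (kernel height * kernel width * input channels + 1 bias) * filters weights; a Dense layer holds (inputs + 1) * units. A quick check against the table above:


In [ ]:
# conv2d_1: (3*3*1  + 1) * 32  ->     320
# conv2d_2: (3*3*32 + 1) * 64  ->   18496
# dense_1:  (9216 + 1) * 128   -> 1179776
# dense_2:  (128  + 1) * 10    ->    1290
(3*3*1 + 1)*32 + (3*3*32 + 1)*64 + (9216 + 1)*128 + (128 + 1)*10  # 1199882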

Configure model


In [9]:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
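
The categorical_crossentropy loss matches the one-hot labels prepared earlier. As an aside, had the labels been left as integers, sparse_categorical_crossentropy would be the drop-in equivalent:


In [ ]:
# not used in this notebook; shown only as the integer-label alternative:
# model.compile(loss='sparse_categorical_crossentropy',
#               optimizer='adam', metrics=['accuracy'])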

Train!


In [10]:
model.fit(X_train, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(X_test, y_test))


Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 66s - loss: 0.2421 - acc: 0.9264 - val_loss: 0.0510 - val_acc: 0.9842
Epoch 2/20
60000/60000 [==============================] - 65s - loss: 0.0853 - acc: 0.9743 - val_loss: 0.0453 - val_acc: 0.9853
Epoch 3/20
60000/60000 [==============================] - 64s - loss: 0.0631 - acc: 0.9809 - val_loss: 0.0372 - val_acc: 0.9878
Epoch 4/20
60000/60000 [==============================] - 66s - loss: 0.0549 - acc: 0.9834 - val_loss: 0.0345 - val_acc: 0.9876
Epoch 5/20
60000/60000 [==============================] - 64s - loss: 0.0452 - acc: 0.9859 - val_loss: 0.0282 - val_acc: 0.9905
Epoch 6/20
60000/60000 [==============================] - 64s - loss: 0.0399 - acc: 0.9873 - val_loss: 0.0293 - val_acc: 0.9910
Epoch 7/20
60000/60000 [==============================] - 64s - loss: 0.0350 - acc: 0.9889 - val_loss: 0.0271 - val_acc: 0.9914
Epoch 8/20
60000/60000 [==============================] - 64s - loss: 0.0303 - acc: 0.9897 - val_loss: 0.0283 - val_acc: 0.9920
Epoch 9/20
60000/60000 [==============================] - 66s - loss: 0.0306 - acc: 0.9902 - val_loss: 0.0316 - val_acc: 0.9909
Epoch 10/20
60000/60000 [==============================] - 63s - loss: 0.0259 - acc: 0.9916 - val_loss: 0.0304 - val_acc: 0.9907
Epoch 11/20
60000/60000 [==============================] - 64s - loss: 0.0227 - acc: 0.9929 - val_loss: 0.0272 - val_acc: 0.9921
Epoch 12/20
60000/60000 [==============================] - 64s - loss: 0.0230 - acc: 0.9925 - val_loss: 0.0326 - val_acc: 0.9913
Epoch 13/20
60000/60000 [==============================] - 62s - loss: 0.0212 - acc: 0.9927 - val_loss: 0.0326 - val_acc: 0.9913
Epoch 14/20
60000/60000 [==============================] - 55s - loss: 0.0202 - acc: 0.9934 - val_loss: 0.0284 - val_acc: 0.9924
Epoch 15/20
60000/60000 [==============================] - 56s - loss: 0.0181 - acc: 0.9939 - val_loss: 0.0302 - val_acc: 0.9927
Epoch 16/20
60000/60000 [==============================] - 63s - loss: 0.0176 - acc: 0.9940 - val_loss: 0.0320 - val_acc: 0.9919
Epoch 17/20
60000/60000 [==============================] - 63s - loss: 0.0174 - acc: 0.9943 - val_loss: 0.0275 - val_acc: 0.9926
Epoch 18/20
60000/60000 [==============================] - 66s - loss: 0.0166 - acc: 0.9945 - val_loss: 0.0297 - val_acc: 0.9928
Epoch 19/20
60000/60000 [==============================] - 64s - loss: 0.0147 - acc: 0.9953 - val_loss: 0.0326 - val_acc: 0.9925
Epoch 20/20
60000/60000 [==============================] - 64s - loss: 0.0149 - acc: 0.9949 - val_loss: 0.0300 - val_acc: 0.9930
Out[10]:
<keras.callbacks.History at 0x7fe7300490f0>

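Evaluate model

The per-epoch validation metrics above already track generalization, but a final explicit evaluation on the held-out test set is a one-liner. A minimal sketch, returning the loss and accuracy metrics configured at compile time:


In [ ]:
# returns [test_loss, test_accuracy], per the compile() configuration:
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])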