VGGNet in Keras

In this notebook, we fit a model inspired by the "very deep" convolutional network VGGNet to classify flowers into the 17 categories of the Oxford Flowers data set. It is derived from two earlier notebooks in this series.

Set seed for reproducibility


In [1]:
import numpy as np
np.random.seed(42)

Load dependencies


In [2]:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.callbacks import TensorBoard # for part 3.5 on TensorBoard


Using TensorFlow backend.

Load and preprocess data


In [3]:
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True)
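As a hedged sketch (not part of the original notebook): with `one_hot=True`, `load_data` returns the labels as 17-dimensional one-hot rows rather than integer class indices, which is what the `categorical_crossentropy` loss below expects. The integer labels here are hypothetical examples; the encoding is equivalent to indexing an identity matrix:

```python
import numpy as np

labels = np.array([0, 3, 16])     # hypothetical integer class labels
one_hot = np.eye(17)[labels]      # one 17-dimensional one-hot row per sample

print(one_hot.shape)              # (3, 17)
print(one_hot[1].argmax())        # 3 -- recovers the original class index
```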

Design neural network architecture


In [4]:
model = Sequential()

model.add(Conv2D(64, 3, activation='relu', input_shape=(224, 224, 3)))
model.add(Conv2D(64, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())

model.add(Conv2D(128, 3, activation='relu'))
model.add(Conv2D(128, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())

model.add(Conv2D(256, 3, activation='relu'))
model.add(Conv2D(256, 3, activation='relu'))
model.add(Conv2D(256, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())

model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())

model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(Conv2D(512, 3, activation='relu'))
model.add(MaxPooling2D(2, 2))
model.add(BatchNormalization())

model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(17, activation='softmax'))
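Before looking at the summary, it can help to trace the spatial dimension through the stack by hand. This is a standalone sketch, not part of the model code: each 3×3 convolution with Keras's default `'valid'` padding and stride 1 shrinks the feature map by 2 pixels per dimension, and each 2×2 max-pool with stride 2 halves it (with floor division on odd sizes):

```python
def conv3x3(size):
    return size - 2    # 3x3 kernel, 'valid' padding, stride 1

def pool2x2(size):
    return size // 2   # 2x2 pool, stride 2, floor division on odd sizes

size = 224
for n_convs in (2, 2, 3, 3, 3):   # convolutions per block, as in the model above
    for _ in range(n_convs):
        size = conv3x3(size)
    size = pool2x2(size)
    print(size)                   # 110, 53, 23, 8, 1
```

These match the output shapes in the `model.summary()` table below: by the final block the 224×224 input has collapsed to a single 512-channel pixel, so `Flatten` yields only 512 units.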

In [5]:
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 222, 222, 64)      1792      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 220, 220, 64)      36928     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 110, 110, 64)      0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 110, 110, 64)      256       
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 108, 108, 128)     73856     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 106, 106, 128)     147584    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 53, 53, 128)       0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 53, 53, 128)       512       
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 51, 51, 256)       295168    
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 49, 49, 256)       590080    
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 47, 47, 256)       590080    
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 23, 23, 256)       0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 23, 23, 256)       1024      
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 21, 21, 512)       1180160   
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 19, 19, 512)       2359808   
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 17, 17, 512)       2359808   
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 8, 8, 512)         0         
_________________________________________________________________
batch_normalization_4 (Batch (None, 8, 8, 512)         2048      
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 6, 6, 512)         2359808   
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 4, 4, 512)         2359808   
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 2, 2, 512)         2359808   
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 512)         0         
_________________________________________________________________
batch_normalization_5 (Batch (None, 1, 1, 512)         2048      
_________________________________________________________________
flatten_1 (Flatten)          (None, 512)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 4096)              2101248   
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 4096)              16781312  
_________________________________________________________________
dropout_2 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 17)                69649     
=================================================================
Total params: 33,672,785
Trainable params: 33,669,841
Non-trainable params: 2,944
_________________________________________________________________

Configure model


In [6]:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
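A hedged sketch of the loss we just configured: categorical cross-entropy compares a one-hot target against the softmax output, averaging −log(p) of the true class over the batch. The targets and predictions below are hypothetical values for illustration:

```python
import numpy as np

y_true = np.array([[0., 1., 0.],
                   [1., 0., 0.]])        # hypothetical one-hot targets
y_pred = np.array([[0.2, 0.7, 0.1],
                   [0.9, 0.05, 0.05]])   # hypothetical softmax outputs

# Mean over the batch of -log(probability assigned to the true class)
loss = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
print(loss)   # about 0.231, i.e. the mean of -log(0.7) and -log(0.9)
```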

Configure TensorBoard (for part 5 of lesson 3)


In [ ]:
tensorbrd = TensorBoard('logs/vggnet')

Train!


In [ ]:
model.fit(X, Y, batch_size=64, epochs=16, verbose=1, validation_split=0.1, shuffle=True,
          callbacks=[tensorbrd])


Train on 1224 samples, validate on 136 samples
Epoch 1/16
1224/1224 [==============================] - 784s - loss: 4.1356 - acc: 0.0735 - val_loss: 12.5035 - val_acc: 0.0368
Epoch 2/16
1224/1224 [==============================] - 774s - loss: 3.4962 - acc: 0.1062 - val_loss: 13.7122 - val_acc: 0.0735
Epoch 3/16
1224/1224 [==============================] - 773s - loss: 2.9282 - acc: 0.1373 - val_loss: 13.0526 - val_acc: 0.0441
Epoch 4/16
1224/1224 [==============================] - 774s - loss: 3.1878 - acc: 0.1381 - val_loss: 3.1900 - val_acc: 0.0956
Epoch 5/16
1224/1224 [==============================] - 773s - loss: 3.4662 - acc: 0.0752 - val_loss: 2.8582 - val_acc: 0.0588
Epoch 6/16
1224/1224 [==============================] - 770s - loss: 3.2509 - acc: 0.1119 - val_loss: 2.8061 - val_acc: 0.0441
Epoch 7/16
1224/1224 [==============================] - 770s - loss: 2.8882 - acc: 0.1193 - val_loss: 2.6081 - val_acc: 0.1029
Epoch 8/16
1224/1224 [==============================] - 772s - loss: 3.1455 - acc: 0.1144 - val_loss: 3.9969 - val_acc: 0.0588
Epoch 9/16
1224/1224 [==============================] - 773s - loss: 3.2672 - acc: 0.1185 - val_loss: 5.0395 - val_acc: 0.0515
Epoch 10/16
1224/1224 [==============================] - 771s - loss: 2.8959 - acc: 0.1324 - val_loss: 2.8850 - val_acc: 0.0662
Epoch 11/16
1224/1224 [==============================] - 771s - loss: 2.6121 - acc: 0.1765 - val_loss: 2.7125 - val_acc: 0.0588
Epoch 12/16
1224/1224 [==============================] - 771s - loss: 2.5183 - acc: 0.1675 - val_loss: 2.5720 - val_acc: 0.1176
Epoch 13/16
1224/1224 [==============================] - 771s - loss: 2.6153 - acc: 0.1822 - val_loss: 6.8114 - val_acc: 0.1103
Epoch 14/16
1224/1224 [==============================] - 770s - loss: 2.6700 - acc: 0.1748 - val_loss: 4.6305 - val_acc: 0.1324
Epoch 15/16
1088/1224 [=========================>....] - ETA: 82s - loss: 2.7326 - acc: 0.1967 
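A note on the train/validation counts in the log above, sketched as a quick check: Keras's `validation_split` slices off the *last* fraction of the data before any shuffling (`shuffle=True` only reorders the training portion between epochs). With the 1,360 samples of Oxford Flowers 17:

```python
n_samples = 1360                  # images in Oxford Flowers 17
n_val = int(n_samples * 0.1)      # last 10% held out for validation
n_train = n_samples - n_val

print(n_train, n_val)             # 1224 136 -- matches the log above
```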
