AlexNet in Keras

In this notebook, we use an AlexNet-like deep convolutional neural network to classify flowers into the 17 categories of the Oxford Flowers data set. It is derived from an earlier notebook in this series.

Set seed for reproducibility


In [1]:
import numpy as np
np.random.seed(42)
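
Note that np.random.seed fixes NumPy's generator only; with a TensorFlow backend, Keras initializes its weights from TensorFlow's own RNG. A minimal sketch for fuller (though, on GPU, still not guaranteed) determinism, assuming a TensorFlow 1.x backend:


In [ ]:
import tensorflow as tf
tf.set_random_seed(42)  # graph-level seed (tf.random.set_seed in TF 2.x); GPU ops may still be nondeterministic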

Load dependencies


In [4]:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization  # moved to keras.layers in later Keras releases
from keras.callbacks import TensorBoard # for part 5 of lesson 3, on TensorBoard

Load and preprocess data


In [5]:
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True)
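
The loader returns 1,360 labeled images, which the 1,224/136 split in the training log further down confirms. An optional sanity check on shapes and pixel scaling before building the model:


In [ ]:
print(X.shape, Y.shape)           # expect (1360, 224, 224, 3) and (1360, 17)
print(X.dtype, X.min(), X.max())  # confirm the pixel value range before training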

Design neural network architecture


In [6]:
model = Sequential()

model.add(Conv2D(96, kernel_size=(11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(256, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(256, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(384, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(384, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())

model.add(Flatten())
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))

model.add(Dense(17, activation='softmax'))
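
Two quick notes before inspecting the summary. First, this is "AlexNet-like" rather than a faithful reproduction: the third block's filter progression (256, 384, 384 here versus 384, 384, 256 in Krizhevsky et al.) and the tanh dense-layer activations (the original used ReLU throughout) both diverge from the paper. Second, every Conv2D and MaxPooling2D above uses Keras's default 'valid' padding, so each spatial dimension shrinks as floor((n - k) / s) + 1 for input size n, kernel size k and stride s. A small sketch tracing the 224-pixel input down to the 1x1 feature map reported by model.summary() below:


In [ ]:
# Valid-padding output size, applied layer by layer
def out_size(n, k, s=1):
    return (n - k) // s + 1

n = 224
for name, k, s in [('conv1', 11, 4), ('pool1', 3, 2), ('conv2', 5, 1),
                   ('pool2', 3, 2), ('conv3', 3, 1), ('conv4', 3, 1),
                   ('conv5', 3, 1), ('pool3', 3, 2)]:
    n = out_size(n, k, s)
    print(name, n)  # 54, 26, 22, 10, 8, 6, 4, 1 (matches the summary below)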

In [7]:
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 54, 54, 96)        34944     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 26, 26, 96)        0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 26, 26, 96)        384       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 22, 22, 256)       614656    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 10, 10, 256)       0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 10, 10, 256)       1024      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 8, 8, 256)         590080    
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 6, 6, 384)         885120    
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 4, 4, 384)         1327488   
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 1, 1, 384)         0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 1, 1, 384)         1536      
_________________________________________________________________
flatten_1 (Flatten)          (None, 384)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 4096)              1576960   
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 4096)              16781312  
_________________________________________________________________
dropout_2 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 17)                69649     
=================================================================
Total params: 21,883,153
Trainable params: 21,881,681
Non-trainable params: 1,472
_________________________________________________________________

Configure model


In [8]:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
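
The string 'adam' selects Adam with Keras's default settings. An equivalent form with an explicit optimizer instance, handy if the learning rate needs tuning (lr=0.001 is the default):


In [ ]:
from keras.optimizers import Adam
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001), metrics=['accuracy'])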

Configure TensorBoard (for part 5 of lesson 3)


In [9]:
tensorbrd = TensorBoard('logs/alexnet')
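
Once training starts writing events, the dashboard can be viewed by running tensorboard --logdir logs/alexnet in a shell and opening http://localhost:6006 (the default port) in a browser.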

Train!


In [10]:
model.fit(X, Y, batch_size=64, epochs=32, verbose=1, validation_split=0.1, shuffle=True, 
          callbacks=[tensorbrd])


Train on 1224 samples, validate on 136 samples
Epoch 1/32
1224/1224 [==============================] - 43s - loss: 5.4860 - acc: 0.2092 - val_loss: 10.5151 - val_acc: 0.1029
Epoch 2/32
1224/1224 [==============================] - 41s - loss: 4.0177 - acc: 0.2525 - val_loss: 4.1294 - val_acc: 0.1838
Epoch 3/32
1224/1224 [==============================] - 41s - loss: 2.8861 - acc: 0.3113 - val_loss: 4.2866 - val_acc: 0.1838
Epoch 4/32
1224/1224 [==============================] - 41s - loss: 2.7104 - acc: 0.3194 - val_loss: 5.1511 - val_acc: 0.1471
Epoch 5/32
1224/1224 [==============================] - 42s - loss: 2.6687 - acc: 0.3154 - val_loss: 4.1556 - val_acc: 0.1838
Epoch 6/32
1224/1224 [==============================] - 41s - loss: 2.4847 - acc: 0.3358 - val_loss: 2.7126 - val_acc: 0.2647
Epoch 7/32
1224/1224 [==============================] - 41s - loss: 2.4848 - acc: 0.3358 - val_loss: 3.5959 - val_acc: 0.2574
Epoch 8/32
1224/1224 [==============================] - 41s - loss: 2.3969 - acc: 0.3922 - val_loss: 3.5869 - val_acc: 0.2868
Epoch 9/32
1224/1224 [==============================] - 41s - loss: 2.7176 - acc: 0.3472 - val_loss: 2.8241 - val_acc: 0.3309
Epoch 10/32
1224/1224 [==============================] - 41s - loss: 2.3563 - acc: 0.3693 - val_loss: 2.6689 - val_acc: 0.3676
Epoch 11/32
1224/1224 [==============================] - 41s - loss: 1.9205 - acc: 0.4281 - val_loss: 2.8030 - val_acc: 0.3824
Epoch 12/32
1224/1224 [==============================] - 39s - loss: 2.1723 - acc: 0.4101 - val_loss: 2.4830 - val_acc: 0.3603
Epoch 13/32
1224/1224 [==============================] - 39s - loss: 1.9932 - acc: 0.4542 - val_loss: 2.6662 - val_acc: 0.3824
Epoch 14/32
1224/1224 [==============================] - 38s - loss: 2.0622 - acc: 0.4404 - val_loss: 2.0595 - val_acc: 0.4485
Epoch 15/32
1224/1224 [==============================] - 38s - loss: 2.0491 - acc: 0.4706 - val_loss: 2.1866 - val_acc: 0.4632
Epoch 16/32
1224/1224 [==============================] - 39s - loss: 2.0746 - acc: 0.4534 - val_loss: 2.5153 - val_acc: 0.4559
Epoch 17/32
1224/1224 [==============================] - 40s - loss: 2.2386 - acc: 0.4297 - val_loss: 2.6147 - val_acc: 0.4265
Epoch 18/32
1224/1224 [==============================] - 42s - loss: 2.0306 - acc: 0.4812 - val_loss: 2.8573 - val_acc: 0.3824
Epoch 19/32
1224/1224 [==============================] - 41s - loss: 1.9511 - acc: 0.4771 - val_loss: 2.1484 - val_acc: 0.4632
Epoch 20/32
1224/1224 [==============================] - 41s - loss: 1.8286 - acc: 0.5123 - val_loss: 2.3557 - val_acc: 0.4853
Epoch 21/32
1224/1224 [==============================] - 41s - loss: 2.2471 - acc: 0.4493 - val_loss: 2.3572 - val_acc: 0.4118
Epoch 22/32
1224/1224 [==============================] - 41s - loss: 1.6955 - acc: 0.5278 - val_loss: 2.2848 - val_acc: 0.4779
Epoch 23/32
1224/1224 [==============================] - 41s - loss: 1.5541 - acc: 0.5498 - val_loss: 1.6983 - val_acc: 0.5956
Epoch 24/32
1224/1224 [==============================] - 40s - loss: 1.5198 - acc: 0.5523 - val_loss: 1.9519 - val_acc: 0.5588
Epoch 25/32
1224/1224 [==============================] - 41s - loss: 1.6150 - acc: 0.5588 - val_loss: 2.6820 - val_acc: 0.4853
Epoch 26/32
1224/1224 [==============================] - 39s - loss: 1.4255 - acc: 0.5940 - val_loss: 2.8716 - val_acc: 0.4118
Epoch 27/32
1224/1224 [==============================] - 39s - loss: 1.6967 - acc: 0.5408 - val_loss: 2.7123 - val_acc: 0.4485
Epoch 28/32
1224/1224 [==============================] - 44s - loss: 1.6957 - acc: 0.5474 - val_loss: 2.1038 - val_acc: 0.4779
Epoch 29/32
1224/1224 [==============================] - 37s - loss: 1.5395 - acc: 0.5703 - val_loss: 2.5981 - val_acc: 0.5074
Epoch 30/32
1224/1224 [==============================] - 37s - loss: 1.5090 - acc: 0.5940 - val_loss: 2.2708 - val_acc: 0.5368
Epoch 31/32
1224/1224 [==============================] - 37s - loss: 1.5410 - acc: 0.5678 - val_loss: 2.3326 - val_acc: 0.4853
Epoch 32/32
1224/1224 [==============================] - 37s - loss: 1.6714 - acc: 0.5531 - val_loss: 1.8566 - val_acc: 0.5000
Out[10]:
<keras.callbacks.History at 0x7fddafb19f28>
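
In the log above, validation accuracy peaks near 60% at epoch 23 and then drifts around 50% while training loss keeps falling; with only 1,224 training images, some overfitting is unsurprising. As a purely illustrative next step (the filename is hypothetical), the fitted model could be saved and reloaded for inference:


In [ ]:
model.save('alexnet_oxflower17.h5')  # hypothetical filename; stores architecture, weights and optimizer state

from keras.models import load_model
restored = load_model('alexnet_oxflower17.h5')
probs = restored.predict(X[:4])                    # softmax outputs, shape (4, 17)
print(probs.argmax(axis=1), Y[:4].argmax(axis=1))  # predicted vs. true class indices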
