Traffic Sign Classification with Keras

Keras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, you’re going to use Keras to build a convolutional neural network in a few dozen lines of code.

You’ll be connecting the concepts from the previous lessons to the methods that Keras provides.

Dataset

The network you'll build with Keras is similar to the MNIST convolutional neural network example in Keras's GitHub repository.

However, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.

You can download pickle files with sanitized traffic sign data here.

Overview

Here are the steps you'll take to build the network:

  1. First load the data.
  2. Build a feedforward neural network to classify traffic signs.
  3. Build a convolutional neural network to classify traffic signs.

Keep an eye on the network’s accuracy over time. Once the accuracy reaches the 98% range, you can be confident that you’ve built and trained an effective model.

Load the Data

Start by importing the data from the pickle file.


In [1]:
# TODO: Implement data loading here.
# Load pickled data
import pickle
import csv
import os

# TODO: fill this in based on where you saved the training and testing data
training_file = '../../traffic-signs/traffic-signs-data/train.p'
testing_file = '../../traffic-signs/traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

# Make dictionary of sign names from CSV file
with open('../../traffic-signs/signnames.csv', 'r') as csvfile:
    reader = csv.reader(csvfile)
    next(reader, None)  # skip the headers
    sign_names = dict((int(n), label) for n, label in reader)

cls_numbers, cls_names = zip(*sign_names.items())

n_classes = len(set(y_train))
flat_img_size = 32*32*3

# STOP: Do not change the tests below. Your implementation should pass these tests. 
assert(X_train.shape[0] == y_train.shape[0]), "The number of images is not equal to the number of labels."
assert(X_train.shape[1:] == (32,32,3)), "The dimensions of the images are not 32 x 32 x 3."
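
With the data loaded, it's worth a quick sanity check before moving on. A minimal sketch (it only prints values computed in the cell above):

print('Training set:', X_train.shape, y_train.shape)
print('Test set:', X_test.shape, y_test.shape)
print('Number of classes:', n_classes)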

In [2]:
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm_notebook
from zipfile import ZipFile
import time
from datetime import timedelta
import math
import tensorflow as tf

Normalize the data

Now that you've loaded the training data, normalize the input so that it has a mean of 0 and a range between -0.5 and 0.5.


In [3]:
# TODO: Implement data normalization here.
def normalize_color(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    a = -0.5
    b = +0.5

    Xmin = 0.0
    Xmax = 255.0

    # Scale pixel values from [0, 255] to [-0.5, 0.5]
    return a + (image_data.astype(np.float32) - Xmin) * (b - a) / (Xmax - Xmin)

X_train = normalize_color(X_train)
X_test = normalize_color(X_test)

# STOP: Do not change the tests below. Your implementation should pass these tests. 
assert(round(np.mean(X_train)) == 0), "The mean of the input data is: %f" % np.mean(X_train)
assert(np.min(X_train) == -0.5 and np.max(X_train) == 0.5), "The range of the input data is: %.1f to %.1f" % (np.min(X_train), np.max(X_train))

Build a Two-Layer Feedforward Network

The code you've written so far handles data loading and preprocessing and isn't specific to Keras. From here on, you'll write Keras-specific code.

Build a two-layer feedforward neural network, with 128 neurons in the fully-connected hidden layer.

To get started, review the Keras documentation about models and layers.

The Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. Use that as a guide, but keep in mind that there are a number of differences.


In [4]:
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import Adam
from keras.utils import np_utils


Using TensorFlow backend.

In [5]:
# TODO: Build a two-layer feedforward neural network with Keras here.
model = Sequential()
model.add(Dense(128, input_shape=(flat_img_size,), name='hidden1'))
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(model.get_layer(name="hidden1").input_shape == (None, 32*32*3)), "The input shape is: %s" % model.get_layer(name="hidden1").input_shape
assert(model.get_layer(name="output").output_shape == (None, 43)), "The output shape is: %s" % model.get_layer(name="output").output_shape
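
If you'd like to double-check the architecture before training, Keras can print a layer-by-layer summary (the same call is used near the end of this notebook):

model.summary()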

Train the Network

Compile and train the network for 2 epochs. Use the adam optimizer, with categorical_crossentropy loss.

Hint 1: In order to use categorical cross entropy, you will need to one-hot encode the labels.

Hint 2: In order to pass the input images to the fully-connected hidden layer, you will need to reshape the input.

Hint 3: Keras's .fit() method returns a History object; its .history attribute is what the tests below use. Save the return value to a variable named history.


In [6]:
# One-Hot encode the labels
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)

# Reshape input for MLP
X_train_mlp = X_train.reshape(-1, flat_img_size)
X_test_mlp = X_test.reshape(-1, flat_img_size)

In [7]:
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(X_train_mlp, Y_train, batch_size=128, nb_epoch=10,
                    validation_data=(X_test_mlp, Y_test), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['acc'][0] > 0.5), "The training accuracy was: %.3f" % history.history['acc'][0]


Train on 39209 samples, validate on 12630 samples
Epoch 1/10
39209/39209 [==============================] - 3s - loss: 1.6459 - acc: 0.5692 - val_loss: 1.2762 - val_acc: 0.6511
Epoch 2/10
39209/39209 [==============================] - 3s - loss: 0.7393 - acc: 0.8084 - val_loss: 1.0502 - val_acc: 0.6948
Epoch 3/10
39209/39209 [==============================] - 3s - loss: 0.5310 - acc: 0.8620 - val_loss: 0.9035 - val_acc: 0.7549
Epoch 4/10
39209/39209 [==============================] - 3s - loss: 0.4127 - acc: 0.8932 - val_loss: 0.8067 - val_acc: 0.7938
Epoch 5/10
39209/39209 [==============================] - 3s - loss: 0.3521 - acc: 0.9066 - val_loss: 0.8088 - val_acc: 0.7755
Epoch 6/10
39209/39209 [==============================] - 3s - loss: 0.3066 - acc: 0.9182 - val_loss: 0.7676 - val_acc: 0.8026
Epoch 7/10
39209/39209 [==============================] - 3s - loss: 0.2692 - acc: 0.9283 - val_loss: 0.8346 - val_acc: 0.7781
Epoch 8/10
39209/39209 [==============================] - 3s - loss: 0.2487 - acc: 0.9335 - val_loss: 0.8782 - val_acc: 0.7702
Epoch 9/10
39209/39209 [==============================] - 3s - loss: 0.2501 - acc: 0.9324 - val_loss: 0.7706 - val_acc: 0.8205
Epoch 10/10
39209/39209 [==============================] - 3s - loss: 0.2144 - acc: 0.9416 - val_loss: 0.7575 - val_acc: 0.8202
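
One way to keep an eye on accuracy over time, as suggested in the overview, is to plot the History object that .fit() returned. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch
# ('acc' and 'val_acc' are the history keys used by this version of Keras)
plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc='lower right')
plt.show()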

Validate the Network

Split the training data into a training and validation set.

Measure the validation accuracy of the network after two training epochs.

Hint: Use the train_test_split() method from scikit-learn.


In [8]:
# Get randomized datasets for training and validation
X_train, X_val, Y_train, Y_val = train_test_split(
    X_train,
    Y_train,
    test_size=0.25,
    random_state=0xdeadbeef)

X_val_mlp = X_val.reshape(-1, flat_img_size)
print('Training features and labels randomized and split.')

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(round(X_train.shape[0] / float(X_val.shape[0])) == 3), "The training set is %.3f times larger than the validation set." % (X_train.shape[0] / float(X_val.shape[0]))
assert(history.history['val_acc'][0] > 0.6), "The validation accuracy is: %.3f" % history.history['val_acc'][0]


Training features and labels randomized and split.

In [15]:
loss, acc = model.evaluate(X_val.reshape(-1, flat_img_size), Y_val, verbose=1)
print('\nValidation accuracy : {0:>6.2%}'.format(acc))


9803/9803 [==============================] - 0s     

Validation accuracy : 95.92%

Validation Accuracy: 95.92%

Congratulations

You've built a feedforward neural network in Keras!

Don't stop here! Next, you'll add a convolutional layer to the network.

Convolutions

Build a new network, similar to your existing network. Before the hidden layer, add a 3x3 convolutional layer with 32 filters and valid padding.

Then compile and train the network.

Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.

Hint 2: Now that the first layer of the network is a convolutional layer, you no longer need to reshape the input images before passing them to the network. You might need to reload your training data to recover the original shape.

Hint 3: Add a Flatten() layer between the convolutional layer and the fully-connected hidden layer.


In [23]:
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten

In [17]:
# TODO: Re-construct the network and add a convolutional layer before the first fully-connected layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128, name='hidden1'))  # input size is inferred from the flattened conv output
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# TODO: Compile and train the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=10,
                    validation_data=(X_val, Y_val), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][0] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][0]


Train on 29406 samples, validate on 9803 samples
Epoch 1/10
29406/29406 [==============================] - 3s - loss: 1.0621 - acc: 0.7237 - val_loss: 0.3543 - val_acc: 0.9046
Epoch 2/10
29406/29406 [==============================] - 3s - loss: 0.2740 - acc: 0.9291 - val_loss: 0.2577 - val_acc: 0.9238
Epoch 3/10
29406/29406 [==============================] - 3s - loss: 0.1592 - acc: 0.9593 - val_loss: 0.1826 - val_acc: 0.9511
Epoch 4/10
29406/29406 [==============================] - 3s - loss: 0.1131 - acc: 0.9709 - val_loss: 0.1612 - val_acc: 0.9533
Epoch 5/10
29406/29406 [==============================] - 3s - loss: 0.0818 - acc: 0.9793 - val_loss: 0.1417 - val_acc: 0.9649
Epoch 6/10
29406/29406 [==============================] - 3s - loss: 0.0700 - acc: 0.9813 - val_loss: 0.1186 - val_acc: 0.9712
Epoch 7/10
29406/29406 [==============================] - 3s - loss: 0.0442 - acc: 0.9893 - val_loss: 0.0948 - val_acc: 0.9776
Epoch 8/10
29406/29406 [==============================] - 3s - loss: 0.0559 - acc: 0.9848 - val_loss: 0.1674 - val_acc: 0.9593
Epoch 9/10
29406/29406 [==============================] - 3s - loss: 0.0541 - acc: 0.9850 - val_loss: 0.1244 - val_acc: 0.9707
Epoch 10/10
29406/29406 [==============================] - 3s - loss: 0.0333 - acc: 0.9913 - val_loss: 0.1204 - val_acc: 0.9698

Validation Accuracy: 96.98%

Pooling

Re-construct your network and add a 2x2 pooling layer immediately following your convolutional layer.

Then compile and train the network.


In [22]:
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128, name='hidden1'))  # input size is inferred from the flattened conv output
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# TODO: Compile and train the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=10,
                    validation_data=(X_val, Y_val), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
# Note: this test was changed to check the final epoch ([-1]) rather than the first ([0]).
assert(history.history['val_acc'][-1] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]


Train on 29406 samples, validate on 9803 samples
Epoch 1/10
29406/29406 [==============================] - 3s - loss: 1.4677 - acc: 0.6262 - val_loss: 0.5605 - val_acc: 0.8510
Epoch 2/10
29406/29406 [==============================] - 3s - loss: 0.3804 - acc: 0.9073 - val_loss: 0.2562 - val_acc: 0.9417
Epoch 3/10
29406/29406 [==============================] - 3s - loss: 0.2129 - acc: 0.9497 - val_loss: 0.1899 - val_acc: 0.9546
Epoch 4/10
29406/29406 [==============================] - 3s - loss: 0.1443 - acc: 0.9677 - val_loss: 0.1553 - val_acc: 0.9637
Epoch 5/10
29406/29406 [==============================] - 3s - loss: 0.1169 - acc: 0.9721 - val_loss: 0.1583 - val_acc: 0.9575
Epoch 6/10
29406/29406 [==============================] - 3s - loss: 0.0917 - acc: 0.9777 - val_loss: 0.1211 - val_acc: 0.9689
Epoch 7/10
29406/29406 [==============================] - 3s - loss: 0.0697 - acc: 0.9829 - val_loss: 0.1095 - val_acc: 0.9728
Epoch 8/10
29406/29406 [==============================] - 3s - loss: 0.0598 - acc: 0.9857 - val_loss: 0.1227 - val_acc: 0.9677
Epoch 9/10
29406/29406 [==============================] - 3s - loss: 0.0593 - acc: 0.9852 - val_loss: 0.0931 - val_acc: 0.9779
Epoch 10/10
29406/29406 [==============================] - 3s - loss: 0.0420 - acc: 0.9904 - val_loss: 0.1053 - val_acc: 0.9736

Validation Accuracy: 97.36%

Dropout

Re-construct your network and add dropout after the pooling layer. Set the dropout rate to 50%.


In [24]:
# TODO: Re-construct the network and add dropout after the pooling layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, name='hidden1'))  # input size is inferred from the flattened conv output
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# TODO: Compile and train the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=10,
                    validation_data=(X_val, Y_val), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][-1] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]


Train on 29406 samples, validate on 9803 samples
Epoch 1/10
29406/29406 [==============================] - 3s - loss: 1.7220 - acc: 0.5407 - val_loss: 0.7021 - val_acc: 0.7931
Epoch 2/10
29406/29406 [==============================] - 3s - loss: 0.5794 - acc: 0.8338 - val_loss: 0.3359 - val_acc: 0.9232
Epoch 3/10
29406/29406 [==============================] - 3s - loss: 0.3784 - acc: 0.8939 - val_loss: 0.2357 - val_acc: 0.9423
Epoch 4/10
29406/29406 [==============================] - 3s - loss: 0.2891 - acc: 0.9186 - val_loss: 0.1924 - val_acc: 0.9528
Epoch 5/10
29406/29406 [==============================] - 3s - loss: 0.2478 - acc: 0.9282 - val_loss: 0.1554 - val_acc: 0.9651
Epoch 6/10
29406/29406 [==============================] - 3s - loss: 0.2049 - acc: 0.9422 - val_loss: 0.1289 - val_acc: 0.9710
Epoch 7/10
29406/29406 [==============================] - 3s - loss: 0.1738 - acc: 0.9513 - val_loss: 0.1174 - val_acc: 0.9742
Epoch 8/10
29406/29406 [==============================] - 3s - loss: 0.1648 - acc: 0.9536 - val_loss: 0.1215 - val_acc: 0.9738
Epoch 9/10
29406/29406 [==============================] - 3s - loss: 0.1480 - acc: 0.9572 - val_loss: 0.1123 - val_acc: 0.9725
Epoch 10/10
29406/29406 [==============================] - 3s - loss: 0.1384 - acc: 0.9585 - val_loss: 0.0987 - val_acc: 0.9775

Validation Accuracy: 97.75%

Optimization

Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.

Have fun with the model and see how well you can do! Add more layers, try regularization or different padding, experiment with batch sizes, or train for more epochs.

What is the best validation accuracy you can achieve?
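
For instance, L2 weight regularization can be attached to individual layers. A sketch using the Keras 1 API that the rest of this notebook uses; the 0.01 penalty is an arbitrary starting point, not a tuned value:

from keras.regularizers import l2

# Replace the plain hidden layer with one that penalizes large weights
# (W_regularizer is the Keras 1 argument name)
model.add(Dense(256, W_regularizer=l2(0.01), name='hidden1'))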


In [28]:
pool_size = (2,2)
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
model.add(Convolution2D(128, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256, name='hidden1'))  # input size is inferred from the flattened conv output
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

In [33]:
history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=50,
                    validation_data=(X_val, Y_val), verbose=1)


Train on 29406 samples, validate on 9803 samples
Epoch 1/20
29406/29406 [==============================] - 4s - loss: 0.0727 - acc: 0.9769 - val_loss: 0.0177 - val_acc: 0.9958
Epoch 2/20
29406/29406 [==============================] - 4s - loss: 0.0785 - acc: 0.9750 - val_loss: 0.0176 - val_acc: 0.9954
Epoch 3/20
29406/29406 [==============================] - 4s - loss: 0.0735 - acc: 0.9765 - val_loss: 0.0246 - val_acc: 0.9942
Epoch 4/20
29406/29406 [==============================] - 4s - loss: 0.0721 - acc: 0.9783 - val_loss: 0.0166 - val_acc: 0.9960
Epoch 5/20
29406/29406 [==============================] - 4s - loss: 0.0697 - acc: 0.9788 - val_loss: 0.0170 - val_acc: 0.9955
Epoch 6/20
29406/29406 [==============================] - 4s - loss: 0.0631 - acc: 0.9798 - val_loss: 0.0171 - val_acc: 0.9947
Epoch 7/20
29406/29406 [==============================] - 4s - loss: 0.0773 - acc: 0.9761 - val_loss: 0.0143 - val_acc: 0.9964
Epoch 8/20
29406/29406 [==============================] - 4s - loss: 0.0641 - acc: 0.9799 - val_loss: 0.0124 - val_acc: 0.9966
Epoch 9/20
29406/29406 [==============================] - 4s - loss: 0.0708 - acc: 0.9776 - val_loss: 0.0124 - val_acc: 0.9967
Epoch 10/20
29406/29406 [==============================] - 4s - loss: 0.0674 - acc: 0.9788 - val_loss: 0.0150 - val_acc: 0.9966
Epoch 11/20
29406/29406 [==============================] - 4s - loss: 0.0611 - acc: 0.9802 - val_loss: 0.0139 - val_acc: 0.9963
Epoch 12/20
29406/29406 [==============================] - 4s - loss: 0.0584 - acc: 0.9813 - val_loss: 0.0156 - val_acc: 0.9957
Epoch 13/20
29406/29406 [==============================] - 4s - loss: 0.0604 - acc: 0.9816 - val_loss: 0.0133 - val_acc: 0.9964
Epoch 14/20
29406/29406 [==============================] - 4s - loss: 0.0694 - acc: 0.9791 - val_loss: 0.0152 - val_acc: 0.9963
Epoch 15/20
29406/29406 [==============================] - 4s - loss: 0.0602 - acc: 0.9810 - val_loss: 0.0184 - val_acc: 0.9947
Epoch 16/20
29406/29406 [==============================] - 4s - loss: 0.0609 - acc: 0.9805 - val_loss: 0.0114 - val_acc: 0.9972
Epoch 17/20
29406/29406 [==============================] - 4s - loss: 0.0617 - acc: 0.9799 - val_loss: 0.0111 - val_acc: 0.9978
Epoch 18/20
29406/29406 [==============================] - 4s - loss: 0.0595 - acc: 0.9809 - val_loss: 0.0127 - val_acc: 0.9962
Epoch 19/20
29406/29406 [==============================] - 4s - loss: 0.0581 - acc: 0.9819 - val_loss: 0.0118 - val_acc: 0.9970
Epoch 20/20
29406/29406 [==============================] - 4s - loss: 0.0588 - acc: 0.9814 - val_loss: 0.0127 - val_acc: 0.9965

Best Validation Accuracy: 99.65%

Testing

Once you've picked out your best model, it's time to test it.

Load up the test data and use the evaluate() method to see how well it does.

Hint 1: After you load your test data, don't forget to normalize the input and one-hot encode the output, so it matches the training data.

Hint 2: The evaluate() method returns a list of numbers: the loss followed by any metrics. Use the model's metrics_names attribute to get the corresponding labels.


In [35]:
# with open('./test.p', mode='rb') as f:
#     test = pickle.load(f)
    
# X_test = test['features']
# y_test = test['labels']
# X_test = X_test.astype('float32')
# X_test /= 255
# X_test -= 0.5
# Y_test = np_utils.to_categorical(y_test, 43)

model.evaluate(X_test, Y_test)


12576/12630 [============================>.] - ETA: 0s
Out[35]:
[0.10833839187514692, 0.97157561369387757]
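
To label the numbers that evaluate() returns, pair them with the model's metrics_names attribute, as Hint 2 suggests. A small sketch:

# metrics_names is ['loss', 'acc'] for this compile configuration
for name, value in zip(model.metrics_names, model.evaluate(X_test, Y_test, verbose=0)):
    print('{}: {:.4f}'.format(name, value))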

In [40]:
model.save('test-acc-9716-epoch50.h5')

from keras.models import load_model
model2 = load_model('test-acc-9716-epoch50.h5')
# model2.evaluate(X_test, Y_test)
model2.summary()


____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
convolution2d_11 (Convolution2D) (None, 32, 32, 16)    1216        convolution2d_input_10[0][0]     
____________________________________________________________________________________________________
activation_23 (Activation)       (None, 32, 32, 16)    0           convolution2d_11[0][0]           
____________________________________________________________________________________________________
maxpooling2d_8 (MaxPooling2D)    (None, 16, 16, 16)    0           activation_23[0][0]              
____________________________________________________________________________________________________
convolution2d_12 (Convolution2D) (None, 14, 14, 64)    9280        maxpooling2d_8[0][0]             
____________________________________________________________________________________________________
activation_24 (Activation)       (None, 14, 14, 64)    0           convolution2d_12[0][0]           
____________________________________________________________________________________________________
maxpooling2d_9 (MaxPooling2D)    (None, 7, 7, 64)      0           activation_24[0][0]              
____________________________________________________________________________________________________
dropout_6 (Dropout)              (None, 7, 7, 64)      0           maxpooling2d_9[0][0]             
____________________________________________________________________________________________________
convolution2d_13 (Convolution2D) (None, 5, 5, 128)     73856       dropout_6[0][0]                  
____________________________________________________________________________________________________
activation_25 (Activation)       (None, 5, 5, 128)     0           convolution2d_13[0][0]           
____________________________________________________________________________________________________
maxpooling2d_10 (MaxPooling2D)   (None, 2, 2, 128)     0           activation_25[0][0]              
____________________________________________________________________________________________________
dropout_7 (Dropout)              (None, 2, 2, 128)     0           maxpooling2d_10[0][0]            
____________________________________________________________________________________________________
flatten_6 (Flatten)              (None, 512)           0           dropout_7[0][0]                  
____________________________________________________________________________________________________
hidden1 (Dense)                  (None, 256)           131328      flatten_6[0][0]                  
____________________________________________________________________________________________________
activation_26 (Activation)       (None, 256)           0           hidden1[0][0]                    
____________________________________________________________________________________________________
output (Dense)                   (None, 43)            11051       activation_26[0][0]              
____________________________________________________________________________________________________
activation_27 (Activation)       (None, 43)            0           output[0][0]                     
====================================================================================================
Total params: 226731
____________________________________________________________________________________________________

Test Accuracy: 97.16%

Summary

Keras is a great tool for quickly building a neural network and evaluating its performance.