Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages; completing it is required to successfully complete this project. If additional code is needed that cannot live in the notebook, make sure the Python code is successfully imported here and included in your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue them, you can include the code in this IPython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.


Step 0: Load The Data


In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data
folder_path = "../../../dataset/traffic-signs-data/"
training_file = folder_path + "train.p"
validation_file = folder_path + "valid.p"
testing_file = folder_path + "test.p"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

In [2]:
# Read sign names from signnames.csv
import pandas as pd
signames = pd.read_csv("signnames.csv")
signames


Out[2]:
ClassId SignName
0 0 Speed limit (20km/h)
1 1 Speed limit (30km/h)
2 2 Speed limit (50km/h)
3 3 Speed limit (60km/h)
4 4 Speed limit (70km/h)
5 5 Speed limit (80km/h)
6 6 End of speed limit (80km/h)
7 7 Speed limit (100km/h)
8 8 Speed limit (120km/h)
9 9 No passing
10 10 No passing for vehicles over 3.5 metric tons
11 11 Right-of-way at the next intersection
12 12 Priority road
13 13 Yield
14 14 Stop
15 15 No vehicles
16 16 Vehicles over 3.5 metric tons prohibited
17 17 No entry
18 18 General caution
19 19 Dangerous curve to the left
20 20 Dangerous curve to the right
21 21 Double curve
22 22 Bumpy road
23 23 Slippery road
24 24 Road narrows on the right
25 25 Road work
26 26 Traffic signals
27 27 Pedestrians
28 28 Children crossing
29 29 Bicycles crossing
30 30 Beware of ice/snow
31 31 Wild animals crossing
32 32 End of all speed and passing limits
33 33 Turn right ahead
34 34 Turn left ahead
35 35 Ahead only
36 36 Go straight or right
37 37 Go straight or left
38 38 Keep right
39 39 Keep left
40 40 Roundabout mandatory
41 41 End of no passing
42 42 End of no passing by vehicles over 3.5 metric ...

In [3]:
signames.values[:,1]


Out[3]:
array(['Speed limit (20km/h)', 'Speed limit (30km/h)',
       'Speed limit (50km/h)', 'Speed limit (60km/h)',
       'Speed limit (70km/h)', 'Speed limit (80km/h)',
       'End of speed limit (80km/h)', 'Speed limit (100km/h)',
       'Speed limit (120km/h)', 'No passing',
       'No passing for vehicles over 3.5 metric tons',
       'Right-of-way at the next intersection', 'Priority road', 'Yield',
       'Stop', 'No vehicles', 'Vehicles over 3.5 metric tons prohibited',
       'No entry', 'General caution', 'Dangerous curve to the left',
       'Dangerous curve to the right', 'Double curve', 'Bumpy road',
       'Slippery road', 'Road narrows on the right', 'Road work',
       'Traffic signals', 'Pedestrians', 'Children crossing',
       'Bicycles crossing', 'Beware of ice/snow', 'Wild animals crossing',
       'End of all speed and passing limits', 'Turn right ahead',
       'Turn left ahead', 'Ahead only', 'Go straight or right',
       'Go straight or left', 'Keep right', 'Keep left',
       'Roundabout mandatory', 'End of no passing',
       'End of no passing by vehicles over 3.5 metric tons'], dtype=object)

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
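
As a quick sanity check of that structure, a minimal sketch that prints the keys and array shapes (the 'sizes' and 'coords' keys may be absent in some versions of the pickled dataset, hence the .get() guard):

# Inspect the pickled dictionary (assumes `train` was loaded above)
print(sorted(train.keys()))
print(train['features'].shape, train['labels'].shape)
print(type(train.get('sizes')), type(train.get('coords')))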

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the numpy/pandas shape attribute might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas


In [4]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples
n_validation = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = X_train.shape[1:]

# TODO: How many unique classes/labels are there in the dataset?
n_classes = len(np.unique(y_train))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)


Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended; suggestions include plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?


In [5]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
from random import randint

plt.figure(dpi=100)
for i in range(5*5):
    r_num = randint(0, n_train - 1)  # randint is inclusive at both ends
    plt.subplot(5,5,i+1), plt.imshow(X_train[r_num])



In [6]:
n = plt.hist(y_train, 43)



In [7]:
n[0]


Out[7]:
array([  180.,  1980.,  2010.,  1260.,  1770.,  1650.,   360.,  1290.,
        1260.,  1320.,  1800.,  1170.,  1890.,  1920.,   690.,   540.,
         360.,   990.,  1080.,   180.,   300.,   270.,   330.,   450.,
         240.,  1350.,   540.,   210.,   480.,   240.,   390.,   690.,
         210.,   599.,   360.,  1080.,   330.,   180.,  1860.,   270.,
         300.,   210.,   210.])
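
The histogram above confirms the training set is highly imbalanced (from 180 to about 2,010 examples per class). Following the note above, a minimal sketch for comparing the class distribution across all three splits, density-normalized so the differently sized sets are comparable (on older matplotlib, use normed=True instead of density=True):

plt.figure(dpi=100)
for name, ys in [("train", y_train), ("valid", y_valid), ("test", y_test)]:
    plt.hist(ys, bins=n_classes, density=True, histtype='step', label=name)
plt.xlabel("class id")
plt.ylabel("fraction of examples")
plt.legend()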

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over- or underfitting?)
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake (augmented) data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128) / 128 is a quick way to approximately normalize the data and can be used in this project.

Other pre-processing steps are optional. You can try different techniques to see if it improves performance.

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
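
The pipeline below goes further (grayscale plus CLAHE), but for reference, a minimal sketch of the quick (pixel - 128) / 128 normalization suggested above, assuming uint8 pixels; it is not used for the final model:

# Quick approximate normalization to roughly zero mean (assumes uint8 input)
X_quick = (X_train.astype(np.float32) - 128.0) / 128.0
print(X_quick.mean(), X_quick.std())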


In [8]:
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include 
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
### Adapted from: https://navoshta.com/traffic-signs-classification/
from scipy import ndimage
import random
import sys
from skimage.transform import rotate
from skimage.transform import warp
from skimage.transform import ProjectiveTransform
from skimage.transform import AffineTransform
from skimage.exposure import equalize_adapthist
from skimage.color import rgb2gray

In [9]:
# Print iterations progress
def print_progress(iteration, total):
    """
    Call in a loop to create a terminal progress bar.

    Parameters
    ----------
    iteration : int
        Current iteration.
    total : int
        Total iterations.
    """
    percents = "{0:.0f}".format(100 * (iteration / float(total)))
    filled_length = int(round(100 * iteration / float(total)))
    bar = '█' * filled_length + '-' * (100 - filled_length)

    sys.stdout.write('\r |%s| %s%%' % (bar, percents))

    if iteration == total:
        sys.stdout.write('\n')
    sys.stdout.flush()

In [10]:
def random_rotate(X, intensity):
    delta = np.radians(intensity)  # degrees to radians
    rand_delta = random.uniform(-delta, delta)  # rotation range
    # Note: AffineTransform rotates about the image origin (top-left corner),
    # not the centre, so larger angles also shift the sign within the frame.
    rotate_matrix = AffineTransform(rotation=rand_delta)
    X = warp(X, rotate_matrix)
    return X

In [11]:
def random_translate(X, intensity):
    delta = 15. * intensity  # maximum shift in pixels
    rand_delta = random.uniform(-delta, delta)
    translate_matrix = AffineTransform(translation=(rand_delta, rand_delta))
    X = warp(X, translate_matrix)
    return X

In [12]:
def random_shear(X, intensity):
    delta = np.radians(intensity) #deg to rad
    rand_delta = random.uniform(-delta, delta) #shear range
    shear_matrix = AffineTransform(shear=rand_delta)
    X = warp(X, shear_matrix)
    return X

In [13]:
def random_transform(X, intensity):
    def _get_delta(intensity):
        delta = intensity
        rand_delta = random.uniform(-delta, delta)
        return rand_delta
    transform_matrix = AffineTransform(scale=(0.8,0.8),
                                       translation=(-_get_delta(intensity),_get_delta(intensity)),
                                       shear=_get_delta(intensity))
    X = warp(X, transform_matrix)
    return X

In [14]:
# test
plt.figure(dpi=150)
for i in range(5):
    plt.subplot(5,5,i+1),plt.imshow(X_train[0])
    plt.subplot(5,5,i+6),plt.imshow(random_rotate(X_train[0], 15)) #deg
    plt.subplot(5,5,i+11),plt.imshow(random_translate(X_train[0], 0.3))
    plt.subplot(5,5,i+16),plt.imshow(random_shear(X_train[0], 15)) #deg
    plt.subplot(5,5,i+21),plt.imshow(random_transform(X_train[0], 0.3))



In [15]:
def transform_images(X, y, rotate_range, shear_range, trans_range):
    X = random_rotate(X, rotate_range)
    X = random_translate(X, trans_range)
    X = random_shear(X, shear_range)
    
    return X,y

In [16]:
#test
plt.figure(dpi=150)
for i in range(5):
    plt.subplot(5,5,i+1),plt.imshow(transform_images(X_train[0], y_train[0], 15, 15, 0.3)[0])
    plt.subplot(5,5,i+6),plt.imshow(transform_images(X_train[0], y_train[0], 15, 15, 0.3)[0])
    plt.subplot(5,5,i+11),plt.imshow(transform_images(X_train[0], y_train[0], 15, 15, 0.3)[0])
    plt.subplot(5,5,i+16),plt.imshow(transform_images(X_train[0], y_train[0], 15, 15, 0.3)[0])
    plt.subplot(5,5,i+21),plt.imshow(transform_images(X_train[0], y_train[0], 15, 15, 0.3)[0])
    plt.subplot(5,5,1),plt.imshow(X_train[0])



In [17]:
def get_transform_images(Xs, ys, n_each=10, rotate_range=15, shear_range=15, trans_range=0.3):
    # Generate n_each randomly transformed copies of every input image.
    # Note that the untransformed originals are not kept, so the output has
    # len(Xs) * n_each examples (34799 * 10 = 347990 here).
    X_arr = []
    y_arr = []

    for i, (x, y) in enumerate(zip(Xs, ys)):
        for _ in range(n_each):
            img_trf, label_trf = transform_images(x, y, rotate_range, shear_range, trans_range)
            X_arr.append(img_trf)
            y_arr.append(label_trf)

        print_progress(i+1, Xs.shape[0])

    X_arr = np.asarray(X_arr, dtype=np.float32)
    y_arr = np.asarray(y_arr, dtype=np.float32)

    return X_arr, y_arr

In [18]:
def preprocess_image(X):
    X = rgb2gray(X)  # convert to grayscale; returns floats in [0, 1]

    # rgb2gray already returns [0, 1] floats, so this extra scaling is what
    # triggers skimage's precision-loss warning below; equalize_adapthist
    # rescales internally, so the equalized result is largely unaffected.
    X = (X / 255.).astype(np.float32)

    X = equalize_adapthist(X)  # adaptive histogram equalization (CLAHE)

    X = X.reshape(X.shape + (1,))  # add a single grayscale channel

    return X

In [20]:
#test plot
plt.figure(dpi=100)

for i in range(5):
    ranidx = randint(0, n_train - 1)
    plt.subplot(2,5,i+1), plt.imshow(preprocess_image(X_train[ranidx])[:,:,0], cmap='gray')
    plt.subplot(2,5,i+6), plt.imshow(X_train[ranidx])


/home/vmadmin/.local/lib/python3.5/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float32 to uint16
  .format(dtypeobj_in, dtypeobj_out))

In [21]:
from joblib import Parallel, delayed
nprocs = 12

In [22]:
X_trf, y_trf = get_transform_images(X_train, y_train)


 |████████████████████████████████████████████████████████████████████████████████████████████████████| 100%

In [23]:
X_trf.shape, y_trf.shape


Out[23]:
((347990, 32, 32, 3), (347990,))

In [24]:
X_train = Parallel(n_jobs=nprocs)(delayed(preprocess_image)(imfile) for imfile in X_trf)


/home/vmadmin/.local/lib/python3.5/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float32 to uint16
  .format(dtypeobj_in, dtypeobj_out))

In [25]:
X_valid = Parallel(n_jobs=nprocs)(delayed(preprocess_image)(imfile) for imfile in X_valid)


/home/vmadmin/.local/lib/python3.5/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float32 to uint16
  .format(dtypeobj_in, dtypeobj_out))

In [26]:
X_test = Parallel(n_jobs=nprocs)(delayed(preprocess_image)(imfile) for imfile in X_test)


/home/vmadmin/.local/lib/python3.5/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float32 to uint16
  .format(dtypeobj_in, dtypeobj_out))

In [27]:
y_train = y_trf

In [28]:
from keras.utils import to_categorical
y_train = to_categorical(y_train, num_classes=n_classes)
y_valid = to_categorical(y_valid, num_classes=n_classes)
y_test = to_categorical(y_test, num_classes=n_classes)

X_train = np.asarray(X_train, dtype=np.float32)
X_valid = np.asarray(X_valid, dtype=np.float32)
X_test = np.asarray(X_test, dtype=np.float32)


Using TensorFlow backend.

In [29]:
X_train.shape, X_valid.shape, X_test.shape


Out[29]:
((347990, 32, 32, 1), (4410, 32, 32, 1), (12630, 32, 32, 1))

In [30]:
y_train.shape, y_valid.shape, y_test.shape


Out[30]:
((347990, 43), (4410, 43), (12630, 43))

In [31]:
print("Saving preprocessed Train images")
np.save(folder_path + "TrainX.npy", X_train)
np.save(folder_path + "Trainy.npy", y_train)
np.save(folder_path + "ValidX.npy", X_valid)
np.save(folder_path + "Validy.npy", y_valid)
np.save(folder_path + "TestX.npy", X_test)
np.save(folder_path + "Testy.npy", y_test)
print("Saving Done")


Saving preprocessed Train images
Saving Done
Data Augmentation using Keras (alternative, not executed)

from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=False)

Model Architecture


In [32]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, concatenate, Conv2DTranspose
from keras.layers import Dense, Dropout, Flatten
from keras.optimizers import SGD
from keras.layers.normalization import BatchNormalization

In [33]:
from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard
from keras.utils import to_categorical
from keras import backend as K

In [34]:
# Adapted from: https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py
# Multi-GPU support (data parallelism: each GPU gets a slice of the batch)
from keras.layers import merge
from keras.layers.core import Lambda
from keras.models import Model

import tensorflow as tf

def make_parallel(model, gpu_count):
    def get_slice(data, idx, parts):
        shape = tf.shape(data)
        size = tf.concat([ shape[:1] // parts, shape[1:] ],axis=0)
        stride = tf.concat([ shape[:1] // parts, shape[1:]*0 ],axis=0)
        start = stride * idx
        return tf.slice(data, start, size)

    outputs_all = []
    for i in range(len(model.outputs)):
        outputs_all.append([])

    #Place a copy of the model on each GPU, each getting a slice of the batch
    for i in range(gpu_count):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('tower_%d' % i) as scope:

                inputs = []
                #Slice each input into a piece for processing on this GPU
                for x in model.inputs:
                    input_shape = tuple(x.get_shape().as_list())[1:]
                    slice_n = Lambda(get_slice, output_shape=input_shape, arguments={'idx':i,'parts':gpu_count})(x)
                    inputs.append(slice_n)                

                outputs = model(inputs)
                
                if not isinstance(outputs, list):
                    outputs = [outputs]
                
                #Save all the outputs for merging back together later
                for l in range(len(outputs)):
                    outputs_all[l].append(outputs[l])

    # merge outputs on CPU
    with tf.device('/cpu:0'):
        merged = []
        for outputs in outputs_all:
            #merged.append(merge(outputs, mode='concat', concat_axis=0))
            merged.append(concatenate(outputs, axis=0))
            
        return Model(inputs=model.inputs, outputs=merged)

In [35]:
#Model configurations
img_width = 32
img_height = 32
n_classes = 43

f_size = 3
learning_rate = 1e-2
activation = 'elu'

In [36]:
def build_model():
    inputs = Input((img_width, img_height, 1))
    conv1 = Conv2D(32,(f_size, f_size), activation=activation, padding='same')(inputs)
    conv1 = Conv2D(32,(f_size, f_size), activation=activation, padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2,2))(conv1)
    
    conv2 = Conv2D(64,(f_size, f_size), activation=activation, padding='same')(pool1)
    conv2 = Conv2D(64,(f_size, f_size), activation=activation, padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2,2))(conv2)
    
    conv3 = Conv2D(128,(f_size, f_size), activation=activation, padding='same')(pool2)
    conv3 = Conv2D(128,(f_size, f_size), activation=activation, padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2,2))(conv3)
    
    fcn1 = Flatten()(pool3)
    fcn1 = Dense(256, activation=activation)(fcn1)
    fcn1 = Dropout(0.5)(fcn1)
    
    fcn2 = Dense(128, activation=activation)(fcn1)
    fcn2 = Dropout(0.5)(fcn2)
    
    softmax1 = Dense(n_classes, activation='softmax')(fcn2)
    
    model = Model(inputs=inputs, outputs=softmax1)
    model.summary()
    model = make_parallel(model,2)
    
    sgd = SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

    return model

In [37]:
def build_model2():
    inputs = Input((img_width, img_height, 1))
    conv1 = Conv2D(32,(f_size, f_size), activation=activation, padding='same')(inputs)
    conv1 = Conv2D(32,(f_size, f_size), activation=activation, padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2,2))(conv1)
    
    conv2 = Conv2D(64,(f_size, f_size), activation=activation, padding='same')(pool1)
    conv2 = Conv2D(64,(f_size, f_size), activation=activation, padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2,2))(conv2)
    
    conv3 = Conv2D(128,(f_size, f_size), activation=activation, padding='same')(pool2)
    conv3 = Conv2D(128,(f_size, f_size), activation=activation, padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2,2))(conv3)
    
    fcn1 = concatenate([Conv2DTranspose(128, (2,2), strides=(2,2), padding='same')(pool1), \
                        Conv2DTranspose(128, (4,4), strides=(4,4), padding='same')(pool2), \
                        Conv2DTranspose(128, (8,8), strides=(8,8), padding='same')(pool3)], axis=3)
    
    fcn1 = Flatten()(fcn1)
    fcn1 = Dense(2048, activation=activation)(fcn1)
    fcn1 = Dropout(0.5)(fcn1)
    
    fcn2 = Dense(1024, activation=activation)(fcn1)
    fcn2 = Dropout(0.5)(fcn2)
    
    fcn3 = Dense(256, activation=activation)(fcn2)
    fcn3 = Dropout(0.5)(fcn3)
    
    softmax1 = Dense(n_classes, activation='softmax')(fcn3)
    
    model = Model(inputs=inputs, outputs=softmax1)
    model.summary()
    model = make_parallel(model,2)
    
    sgd = SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

    return model

In [38]:
model = build_model()
#model = build_model2()
model_checkpoint = ModelCheckpoint('model_TSR.hdf5', monitor='loss', save_best_only=True)
model_earlystopping = EarlyStopping(monitor='loss')
model_tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0,
                          write_graph=True, write_images=False)


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 32, 32, 1)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 32, 32, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 32, 32, 32)        9248      
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 32, 32)        128       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 16, 16, 64)        18496     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 16, 16, 64)        36928     
_________________________________________________________________
batch_normalization_2 (Batch (None, 16, 16, 64)        256       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 64)          0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 8, 8, 128)         73856     
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 8, 8, 128)         147584    
_________________________________________________________________
batch_normalization_3 (Batch (None, 8, 8, 128)         512       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 2048)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 256)               524544    
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 128)               32896     
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 43)                5547      
=================================================================
Total params: 850,315
Trainable params: 849,867
Non-trainable params: 448
_________________________________________________________________

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
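
Once the model has been trained below, a minimal sketch for that diagnosis, comparing accuracy on (a sample of) the training set against the validation set:

# Minimal sketch: compare train vs. validation accuracy after training.
# Both low -> underfitting; a large train/valid gap -> overfitting.
# The full training set is large, so evaluate on a random sample.
idx = np.random.choice(len(X_train), 10000, replace=False)
_, train_acc = model.evaluate(X_train[idx], y_train[idx], batch_size=512, verbose=0)
_, valid_acc = model.evaluate(X_valid, y_valid, batch_size=512, verbose=0)
print("train acc: {:.4f}, valid acc: {:.4f}".format(train_acc, valid_acc))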


In [39]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

In [40]:
import numpy as np
X_train = np.load(folder_path + "TrainX.npy")
y_train = np.load(folder_path + "Trainy.npy")
X_valid = np.load(folder_path + "ValidX.npy")
y_valid = np.load(folder_path + "Validy.npy")
X_test = np.load(folder_path + "TestX.npy")
y_test = np.load(folder_path + "Testy.npy")

In [41]:
#Train Model
model.fit(X_train, y_train, batch_size=1024*2, epochs=100, verbose=1, shuffle=True,
         validation_data=(X_valid, y_valid),
         callbacks=[model_checkpoint, model_earlystopping])


Train on 347990 samples, validate on 4410 samples
Epoch 1/100
347990/347990 [==============================] - 98s - loss: 2.4406 - acc: 0.3479 - val_loss: 2.8612 - val_acc: 0.2832
Epoch 2/100
347990/347990 [==============================] - 80s - loss: 0.8232 - acc: 0.7454 - val_loss: 0.4147 - val_acc: 0.8680
Epoch 3/100
347990/347990 [==============================] - 81s - loss: 0.3833 - acc: 0.8790 - val_loss: 0.2081 - val_acc: 0.9376
Epoch 4/100
347990/347990 [==============================] - 80s - loss: 0.2256 - acc: 0.9296 - val_loss: 0.1454 - val_acc: 0.9567
Epoch 5/100
347990/347990 [==============================] - 82s - loss: 0.1530 - acc: 0.9532 - val_loss: 0.1731 - val_acc: 0.9553
Epoch 6/100
347990/347990 [==============================] - 81s - loss: 0.1134 - acc: 0.9652 - val_loss: 0.2035 - val_acc: 0.9599
Epoch 7/100
347990/347990 [==============================] - 82s - loss: 0.0883 - acc: 0.9729 - val_loss: 0.1525 - val_acc: 0.9621
Epoch 8/100
347990/347990 [==============================] - 82s - loss: 0.0723 - acc: 0.9778 - val_loss: 0.1633 - val_acc: 0.9662
Epoch 9/100
347990/347990 [==============================] - 81s - loss: 0.0603 - acc: 0.9814 - val_loss: 0.1245 - val_acc: 0.9692
Epoch 10/100
347990/347990 [==============================] - 82s - loss: 0.0511 - acc: 0.9844 - val_loss: 0.2127 - val_acc: 0.9567
Epoch 11/100
347990/347990 [==============================] - 81s - loss: 0.0439 - acc: 0.9867 - val_loss: 0.1497 - val_acc: 0.9730
Epoch 12/100
347990/347990 [==============================] - 82s - loss: 0.0383 - acc: 0.9886 - val_loss: 0.1241 - val_acc: 0.9741
Epoch 13/100
347990/347990 [==============================] - 82s - loss: 0.0331 - acc: 0.9900 - val_loss: 0.1285 - val_acc: 0.9751
Epoch 14/100
347990/347990 [==============================] - 84s - loss: 0.0310 - acc: 0.9904 - val_loss: 0.1717 - val_acc: 0.9762
Epoch 15/100
347990/347990 [==============================] - 82s - loss: 0.0271 - acc: 0.9915 - val_loss: 0.1420 - val_acc: 0.9776
Epoch 16/100
347990/347990 [==============================] - 81s - loss: 0.0243 - acc: 0.9925 - val_loss: 0.1434 - val_acc: 0.9764
Epoch 17/100
347990/347990 [==============================] - 83s - loss: 0.0223 - acc: 0.9932 - val_loss: 0.1360 - val_acc: 0.9776
Epoch 18/100
347990/347990 [==============================] - 82s - loss: 0.0199 - acc: 0.9938 - val_loss: 0.1121 - val_acc: 0.9780
Epoch 19/100
347990/347990 [==============================] - 83s - loss: 0.0187 - acc: 0.9943 - val_loss: 0.1224 - val_acc: 0.9782
Epoch 20/100
347990/347990 [==============================] - 82s - loss: 0.0166 - acc: 0.9949 - val_loss: 0.1617 - val_acc: 0.9785
Epoch 21/100
347990/347990 [==============================] - 83s - loss: 0.0155 - acc: 0.9954 - val_loss: 0.1243 - val_acc: 0.9782
Epoch 22/100
347990/347990 [==============================] - 82s - loss: 0.0146 - acc: 0.9957 - val_loss: 0.2159 - val_acc: 0.9692
Epoch 23/100
347990/347990 [==============================] - 82s - loss: 0.0136 - acc: 0.9959 - val_loss: 0.1474 - val_acc: 0.9794
Epoch 24/100
347990/347990 [==============================] - 82s - loss: 0.0127 - acc: 0.9962 - val_loss: 0.1556 - val_acc: 0.9794
Epoch 25/100
347990/347990 [==============================] - 82s - loss: 0.0121 - acc: 0.9964 - val_loss: 0.1762 - val_acc: 0.9748
Epoch 26/100
347990/347990 [==============================] - 82s - loss: 0.0112 - acc: 0.9967 - val_loss: 0.1612 - val_acc: 0.9755
Epoch 27/100
347990/347990 [==============================] - 82s - loss: 0.0106 - acc: 0.9969 - val_loss: 0.1492 - val_acc: 0.9789
Epoch 28/100
347990/347990 [==============================] - 82s - loss: 0.0097 - acc: 0.9972 - val_loss: 0.1392 - val_acc: 0.9789
Epoch 29/100
347990/347990 [==============================] - 81s - loss: 0.0091 - acc: 0.9974 - val_loss: 0.1816 - val_acc: 0.9730
Epoch 30/100
347990/347990 [==============================] - 82s - loss: 0.0092 - acc: 0.9973 - val_loss: 0.1359 - val_acc: 0.9794
Out[41]:
<keras.callbacks.History at 0x7fb35ac4bcc0>
#Train with the Keras data generator (alternative, not executed)
train_datagen.fit(X_train)
model.fit_generator(train_datagen.flow(X_train, y_train, batch_size=512),
                    validation_data=(X_valid, y_valid),
                    steps_per_epoch=len(X_train) / 512,
                    epochs=100,
                    callbacks=[model_checkpoint, model_earlystopping])

In [42]:
model.load_weights('model_TSR.hdf5')

In [43]:
loss, acc = model.evaluate(X_test, y_test, batch_size=512, verbose=1)

print("\nTest loss:{:0.2f}, Test acc:{:0.2f}%".format(loss, acc*100))


12630/12630 [==============================] - 1s     

Test loss:0.34, Test acc:94.39%

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images


In [44]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.

In [45]:
%matplotlib inline
import os
from scipy.misc import imresize
from skimage import io
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
Test_dir = "./examples/test_images_from_google/"
filenames = []
for root, dirs, files in os.walk(Test_dir):
    for file in files:
        filenames.append(os.path.join(root, file))

filenames


Out[45]:
['./examples/test_images_from_google/05_german-road-sign-bicycles-crossing-j2mra8.jpg',
 './examples/test_images_from_google/02_work-progress-road-sign-triangle-isolated-cloudy-background-germany-47409527.jpg',
 './examples/test_images_from_google/01_roadsign_yield-e1452287501390.png',
 './examples/test_images_from_google/04_sign-giving-order-no-entry-vehicular-traffic.jpg',
 './examples/test_images_from_google/00_german-road-sign-road-narrows-both-sides-narrow-bottleneck-construction-J2HT7X.jpg',
 './examples/test_images_from_google/03-Speed-limit-sign-in-Germany-Stock-Photo.jpg']

In [46]:
#preprocess the web images (same steps as training; resizing happens after
#CLAHE here, and imresize returns uint8, hence the /255 rescale)
test_imgs = []
for imgname in filenames:
    img = io.imread(imgname)
    img = rgb2gray(img) #convert image to gray
    img = equalize_adapthist(img) #adaptive histogram normalization (CLAHE)
    img = imresize(img, (32,32)) #resize to the model input size (returns uint8)
    img = (img / 255.).astype(np.float32) #rescale to [0, 1]
    img = img.reshape(img.shape + (1,)) #add channel dimension

    test_imgs.append(img)

test_imgs = np.array(test_imgs)
type(test_imgs[0]), test_imgs[0].shape


/home/vmadmin/.local/lib/python3.5/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint16
  .format(dtypeobj_in, dtypeobj_out))
Out[46]:
(numpy.ndarray, (32, 32, 1))

In [47]:
result = model.predict(test_imgs, verbose=1)


6/6 [==============================] - 0s

Predict the Sign Type for Each Image


In [48]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.

In [49]:
for i in range(len(result)):
    
    topn_index = np.argsort(result[i])[-5:][::-1]

    plt.figure()
    plt.subplot2grid((2,2), (0,0), colspan=1, rowspan=1), plt.imshow(io.imread(filenames[i])),plt.axis('off')
    plt.subplot2grid((2,2), (1,0), colspan=1, rowspan=1), plt.imshow(test_imgs[i][:,:,0],cmap='gray'),plt.axis('off')
    plt.subplot2grid((2,2), (0,1), colspan=1, rowspan=2), plt.barh(np.arange(5), result[i][topn_index])
    plt.yticks(np.arange(5), signames.values[topn_index, 1])
    plt.tick_params(axis='both', which='both', labelleft='off', labelright='on', labeltop='off', labelbottom='off')


Analyze Performance


In [50]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
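
A minimal sketch, assuming y_web holds hand-assigned class ids for the six downloaded images in the same order as filenames. The ids below are hypothetical placeholders; fill them in from signnames.csv after inspecting each image:

# Hypothetical ground-truth labels, one per entry in `filenames`;
# verify against signnames.csv before trusting the accuracy figure.
y_web = [29, 25, 13, 17, 24, 7]
pred_classes = np.argmax(result, axis=1)
web_acc = np.mean(pred_classes == np.asarray(y_web))
print("Accuracy on new images: {:.2f}%".format(web_acc * 100))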

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row, we get [0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a. You'll also notice that [3, 0, 5] are the corresponding indices.
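
Applied to the predictions above, a minimal sketch, assuming result is the (6, 43) softmax output of model.predict (tf was already imported for make_parallel):

with tf.Session() as sess:
    top5 = sess.run(tf.nn.top_k(tf.constant(result), k=5))
print(top5.values)   # top-5 probabilities per image
print(top5.indices)  # corresponding class ids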


In [51]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.

In [52]:
def analyze_perf(result, filename):
    topn_index = np.argsort(result)[-5:][::-1]

    print("="*50)
    plt.figure()
    plt.imshow(io.imread(filename)), plt.axis('off')
    plt.show()
    for rank, idx in enumerate(topn_index, start=1):
        print("top{}-{:35.35} : {:2.2f}%".format(rank, signames.values[idx][1], result[idx]*100))

In [53]:
for i in range(6):
    analyze_perf(result[i], filenames[i])


==================================================
top1-Bicycles crossing                   : 98.17%
top2-Wild animals crossing               : 1.56%
top3-Slippery road                       : 0.23%
top4-Speed limit (120km/h)               : 0.02%
top5-Bumpy road                          : 0.01%
==================================================
top1-Road work                           : 100.00%
top2-Beware of ice/snow                  : 0.00%
top3-Pedestrians                         : 0.00%
top4-Right-of-way at the next intersecti : 0.00%
top5-Children crossing                   : 0.00%
==================================================
top1-Yield                               : 100.00%
top2-Keep right                          : 0.00%
top3-No passing for vehicles over 3.5 me : 0.00%
top4-End of no passing by vehicles over  : 0.00%
top5-Speed limit (70km/h)                : 0.00%
==================================================
top1-No entry                            : 100.00%
top2-Vehicles over 3.5 metric tons prohi : 0.00%
top3-Stop                                : 0.00%
top4-Keep left                           : 0.00%
top5-Turn left ahead                     : 0.00%
==================================================
top1-Road narrows on the right           : 99.54%
top2-Pedestrians                         : 0.45%
top3-Children crossing                   : 0.01%
top4-Road work                           : 0.00%
top5-Bicycles crossing                   : 0.00%
==================================================
top1-Speed limit (100km/h)               : 99.99%
top2-Speed limit (120km/h)               : 0.00%
top3-Speed limit (80km/h)                : 0.00%
top4-Beware of ice/snow                  : 0.00%
top5-Keep right                          : 0.00%

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the IPython notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.


Step 4 (Optional): Visualize the Neural Network's State with Test Images

This section is not required but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any TensorFlow weight layer you want. The inputs to the function should be a stimulus image (one used during training or a new one you provide) and the TensorFlow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars, in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether by looking at differences in feature maps from images with or without a sign, or by comparing feature maps from a trained network and a completely untrained one on the same sign image.



In [54]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail; by default matplotlib sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects,
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error that tf_activation is not defined, it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
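
The helper above expects a raw-TensorFlow graph (a placeholder x and a session sess), which the Keras-built model here does not expose directly. A minimal Keras-flavoured sketch, assuming base_model is the single-GPU Model (kept before wrapping with make_parallel) and layer_name is one of the Conv2D layer names from model.summary(), e.g. 'conv2d_2':

def output_feature_map_keras(base_model, image_input, layer_name, plt_num=1):
    # Sub-model mapping the input to the requested layer's activations
    act_model = Model(inputs=base_model.inputs,
                      outputs=base_model.get_layer(layer_name).output)
    activation = act_model.predict(image_input[np.newaxis, ...])  # add batch dim
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15, 15))
    for fm in range(featuremaps):
        plt.subplot(6, 8, fm + 1)
        plt.title('FeatureMap ' + str(fm))
        plt.imshow(activation[0, :, :, fm], interpolation="nearest", cmap="gray")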

In [ ]:


In [ ]: