Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 0: Load The Data


In [31]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data
DATA_DIR = './traffic-signs-data/'

training_file = DATA_DIR + 'train.p'
validation_file= DATA_DIR + 'valid.p'
testing_file = DATA_DIR + 'test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

originals = X_train

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas


In [32]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples
n_validation = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape

# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).shape[0]

print("Size of training set =", n_train)
print("Size of testing set =", n_test)
print("Size of validation set =", n_validation)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)


Size of training set = 34799
Size of testing set = 12630
Size of validation set = 4410
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?


In [33]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
import pandas as pd

sign_names = pd.read_csv('signnames.csv')


### plot random image and corresponding class
index = random.randint(0, n_train)
image = X_train[index]
label = y_train[index]


names = list(sign_names.iloc[:,1])

names = [n.strip() for n in names]
print (label, names[label])

plt.imshow(image)
plt.show()

#print ("Normalized image")
#norm = normalize_gs(image)

#plt.gray()
#plt.imshow(norm.reshape((32,32)))
#plt.show()

unique, counts = np.unique(y_train, return_counts = True)

print ('Chart shows label counts')

#print (sign_names.iloc[:,1])
fig = plt.figure(figsize=(15,10))
#print (unique, len(names))
plt.bar(unique, counts)
plt.xticks(unique, names, rotation = 90)
plt.show()



# Visualizations will be shown in the notebook.
%matplotlib inline


8 Speed limit (120km/h)
Chart shows label counts

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.

Other pre-processing steps are optional. You can try different techniques to see if it improves performance.

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.


In [34]:
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include 
### converting to grayscale, etc. 
### Feel free to use as many code cells as needed.

import cv2

from sklearn.utils import shuffle


def normalize_gs(mat):
    '''perform RGB -> grayscale
       perform histogram normalization to improve contrast'''
    mat = cv2.cvtColor(mat, cv2.COLOR_RGB2GRAY)
    mat = cv2.equalizeHist(mat)
    # needed to keep tensorflow sane, it wants 32x32x1 images!
    mat = np.expand_dims(mat, axis=2)
    return mat


### naive value normalize --> doesnt work very well
def normalize(img):
    return  (img - 128) / 128


    

## normalize all images in the training / test set 
## will not convert to greyscale as I believe that color information is 
## a very important and distinct feature for traffic signs 

X_train = [normalize_gs(x) for x in X_train]
X_test = [normalize_gs(x) for x in X_test]
X_valid = [normalize_gs(x) for x in X_valid]

In [35]:
### plot random image and corresponding class
index = random.randint(0, n_train)
image = X_train[index]

label = y_train[index]

print (label, names[label])
ax = plt.subplot(121)
plt.imshow(originals[index])
ax = plt.subplot(122)

plt.gray()
plt.imshow(np.array(image).reshape((32,32)))
plt.show()


26 Traffic signals

Model Architecture

Will follow the general LeNet architecture from LeNet Lab solution.


In [36]:
### Tensorflow import and setup 
import tensorflow as tf
EPOCHS = 512
BATCH_SIZE = 128

MODEL_FILE = './traffic_sign'

DROPOUT_PROB = 0.80

In [37]:
from tensorflow.contrib.layers import flatten

def LeNet(x):    
    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1
    
    
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x64. Output = 17x17x64.
    # changed conv kernel size to 16x16, depth of 64
    conv1_W = tf.Variable(tf.truncated_normal(shape=(16, 16, 1, 32), mean = mu, stddev = sigma))
    #conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(32))
    
    
    # changed padding to same to cover a larger area 
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # SOLUTION: Activation.
    conv1 = tf.nn.relu(conv1)

    # SOLUTION: Pooling. Input = 28x28x6. Output = 8x8x64.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 32, 64), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(64))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # SOLUTION: Activation.
    conv2 = tf.nn.relu(conv2)

    # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Flatten. Input = 2x2x64. Output = 400.
    fc0   = flatten(conv2)
    
    # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 200.
    # modified: input shape 576
    fc1_W = tf.Variable(tf.truncated_normal(shape=(256, 128), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(128))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # SOLUTION: Activation.
    fc1    = tf.nn.relu(fc1)
    # add a dropout layer
    fc1 = tf.nn.dropout(fc1, dropout_prob)
    

    # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 100.
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(128, 100), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(100))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b
    
    # SOLUTION: Activation.
    fc2    = tf.nn.relu(fc2)
    # add a dropout layer 
    fc2 = tf.nn.dropout(fc2, dropout_prob)

    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
    ## MODIFIED: output dim now 43 (n_classes)
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(100, n_classes), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(n_classes))
    logits = tf.matmul(fc2, fc3_W) + fc3_b
    
    #print (logits.summary())
    
    return logits

Features and Labels


In [38]:
#x = tf.placeholder(tf.float32, (None, 32, 32, 3))
x = tf.placeholder(tf.float32, (None, 32, 32,1))
y = tf.placeholder(tf.int32, (None))
dropout_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, n_classes)

Training Pipeline


In [39]:
rate = 0.0001    # default is 0.001

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.


In [58]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

## Model Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, dropout_prob: 1.0})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples

### function to get logits 
def predict(image_data):
    sess = tf.get_default_session()
    predictions = sess.run(logits, feed_dict={x:image_data, dropout_prob: 1.0})
    
    ### NOTE: Added SoftMax function here as per the assignment requirements!
    ### top_k now represents the top 5 softmax probabilities
    softmax_predictions = sess.run(tf.nn.softmax(predictions))
    top_k = sess.run(tf.nn.top_k(softmax_predictions), k=5)) 
    ### also added softmax function to predictions
    return softmax_predictions.eval(), top_k


  File "<ipython-input-58-3f3aa42f4069>", line 30
    top_k = sess.run(tf.nn.top_k(softmax_predictions), k=5))
                                                           ^
SyntaxError: invalid syntax

In [44]:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    
    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, dropout_prob: DROPOUT_PROB})
            
        validation_accuracy = evaluate(X_valid, y_valid )
        
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()
        
        if validation_accuracy > 0.943:
            print ("High Accuracy Reached. Breaking off!")
            break
        
    saver.save(sess, MODEL_FILE)
    print("Model saved")


Training...

EPOCH 1 ...
Validation Accuracy = 0.088

EPOCH 2 ...
Validation Accuracy = 0.087

EPOCH 3 ...
Validation Accuracy = 0.100

EPOCH 4 ...
Validation Accuracy = 0.114

EPOCH 5 ...
Validation Accuracy = 0.124

EPOCH 6 ...
Validation Accuracy = 0.133

EPOCH 7 ...
Validation Accuracy = 0.136

EPOCH 8 ...
Validation Accuracy = 0.149

EPOCH 9 ...
Validation Accuracy = 0.155

EPOCH 10 ...
Validation Accuracy = 0.167

EPOCH 11 ...
Validation Accuracy = 0.173

EPOCH 12 ...
Validation Accuracy = 0.169

EPOCH 13 ...
Validation Accuracy = 0.181

EPOCH 14 ...
Validation Accuracy = 0.198

EPOCH 15 ...
Validation Accuracy = 0.224

EPOCH 16 ...
Validation Accuracy = 0.273

EPOCH 17 ...
Validation Accuracy = 0.291

EPOCH 18 ...
Validation Accuracy = 0.309

EPOCH 19 ...
Validation Accuracy = 0.323

EPOCH 20 ...
Validation Accuracy = 0.331

EPOCH 21 ...
Validation Accuracy = 0.334

EPOCH 22 ...
Validation Accuracy = 0.356

EPOCH 23 ...
Validation Accuracy = 0.358

EPOCH 24 ...
Validation Accuracy = 0.381

EPOCH 25 ...
Validation Accuracy = 0.394

EPOCH 26 ...
Validation Accuracy = 0.411

EPOCH 27 ...
Validation Accuracy = 0.417

EPOCH 28 ...
Validation Accuracy = 0.431

EPOCH 29 ...
Validation Accuracy = 0.429

EPOCH 30 ...
Validation Accuracy = 0.451

EPOCH 31 ...
Validation Accuracy = 0.458

EPOCH 32 ...
Validation Accuracy = 0.471

EPOCH 33 ...
Validation Accuracy = 0.484

EPOCH 34 ...
Validation Accuracy = 0.492

EPOCH 35 ...
Validation Accuracy = 0.514

EPOCH 36 ...
Validation Accuracy = 0.522

EPOCH 37 ...
Validation Accuracy = 0.538

EPOCH 38 ...
Validation Accuracy = 0.536

EPOCH 39 ...
Validation Accuracy = 0.558

EPOCH 40 ...
Validation Accuracy = 0.564

EPOCH 41 ...
Validation Accuracy = 0.578

EPOCH 42 ...
Validation Accuracy = 0.596

EPOCH 43 ...
Validation Accuracy = 0.611

EPOCH 44 ...
Validation Accuracy = 0.629

EPOCH 45 ...
Validation Accuracy = 0.628

EPOCH 46 ...
Validation Accuracy = 0.656

EPOCH 47 ...
Validation Accuracy = 0.654

EPOCH 48 ...
Validation Accuracy = 0.670

EPOCH 49 ...
Validation Accuracy = 0.685

EPOCH 50 ...
Validation Accuracy = 0.683

EPOCH 51 ...
Validation Accuracy = 0.700

EPOCH 52 ...
Validation Accuracy = 0.693

EPOCH 53 ...
Validation Accuracy = 0.717

EPOCH 54 ...
Validation Accuracy = 0.728

EPOCH 55 ...
Validation Accuracy = 0.739

EPOCH 56 ...
Validation Accuracy = 0.750

EPOCH 57 ...
Validation Accuracy = 0.754

EPOCH 58 ...
Validation Accuracy = 0.767

EPOCH 59 ...
Validation Accuracy = 0.771

EPOCH 60 ...
Validation Accuracy = 0.773

EPOCH 61 ...
Validation Accuracy = 0.782

EPOCH 62 ...
Validation Accuracy = 0.796

EPOCH 63 ...
Validation Accuracy = 0.797

EPOCH 64 ...
Validation Accuracy = 0.801

EPOCH 65 ...
Validation Accuracy = 0.798

EPOCH 66 ...
Validation Accuracy = 0.806

EPOCH 67 ...
Validation Accuracy = 0.812

EPOCH 68 ...
Validation Accuracy = 0.813

EPOCH 69 ...
Validation Accuracy = 0.816

EPOCH 70 ...
Validation Accuracy = 0.824

EPOCH 71 ...
Validation Accuracy = 0.833

EPOCH 72 ...
Validation Accuracy = 0.833

EPOCH 73 ...
Validation Accuracy = 0.834

EPOCH 74 ...
Validation Accuracy = 0.844

EPOCH 75 ...
Validation Accuracy = 0.841

EPOCH 76 ...
Validation Accuracy = 0.848

EPOCH 77 ...
Validation Accuracy = 0.844

EPOCH 78 ...
Validation Accuracy = 0.864

EPOCH 79 ...
Validation Accuracy = 0.858

EPOCH 80 ...
Validation Accuracy = 0.860

EPOCH 81 ...
Validation Accuracy = 0.857

EPOCH 82 ...
Validation Accuracy = 0.869

EPOCH 83 ...
Validation Accuracy = 0.864

EPOCH 84 ...
Validation Accuracy = 0.865

EPOCH 85 ...
Validation Accuracy = 0.869

EPOCH 86 ...
Validation Accuracy = 0.877

EPOCH 87 ...
Validation Accuracy = 0.869

EPOCH 88 ...
Validation Accuracy = 0.870

EPOCH 89 ...
Validation Accuracy = 0.882

EPOCH 90 ...
Validation Accuracy = 0.871

EPOCH 91 ...
Validation Accuracy = 0.877

EPOCH 92 ...
Validation Accuracy = 0.883

EPOCH 93 ...
Validation Accuracy = 0.879

EPOCH 94 ...
Validation Accuracy = 0.877

EPOCH 95 ...
Validation Accuracy = 0.877

EPOCH 96 ...
Validation Accuracy = 0.886

EPOCH 97 ...
Validation Accuracy = 0.884

EPOCH 98 ...
Validation Accuracy = 0.888

EPOCH 99 ...
Validation Accuracy = 0.890

EPOCH 100 ...
Validation Accuracy = 0.889

EPOCH 101 ...
Validation Accuracy = 0.887

EPOCH 102 ...
Validation Accuracy = 0.892

EPOCH 103 ...
Validation Accuracy = 0.892

EPOCH 104 ...
Validation Accuracy = 0.892

EPOCH 105 ...
Validation Accuracy = 0.894

EPOCH 106 ...
Validation Accuracy = 0.893

EPOCH 107 ...
Validation Accuracy = 0.898

EPOCH 108 ...
Validation Accuracy = 0.893

EPOCH 109 ...
Validation Accuracy = 0.898

EPOCH 110 ...
Validation Accuracy = 0.896

EPOCH 111 ...
Validation Accuracy = 0.898

EPOCH 112 ...
Validation Accuracy = 0.900

EPOCH 113 ...
Validation Accuracy = 0.897

EPOCH 114 ...
Validation Accuracy = 0.899

EPOCH 115 ...
Validation Accuracy = 0.902

EPOCH 116 ...
Validation Accuracy = 0.900

EPOCH 117 ...
Validation Accuracy = 0.902

EPOCH 118 ...
Validation Accuracy = 0.904

EPOCH 119 ...
Validation Accuracy = 0.900

EPOCH 120 ...
Validation Accuracy = 0.904

EPOCH 121 ...
Validation Accuracy = 0.908

EPOCH 122 ...
Validation Accuracy = 0.907

EPOCH 123 ...
Validation Accuracy = 0.911

EPOCH 124 ...
Validation Accuracy = 0.907

EPOCH 125 ...
Validation Accuracy = 0.912

EPOCH 126 ...
Validation Accuracy = 0.909

EPOCH 127 ...
Validation Accuracy = 0.910

EPOCH 128 ...
Validation Accuracy = 0.903

EPOCH 129 ...
Validation Accuracy = 0.909

EPOCH 130 ...
Validation Accuracy = 0.912

EPOCH 131 ...
Validation Accuracy = 0.912

EPOCH 132 ...
Validation Accuracy = 0.909

EPOCH 133 ...
Validation Accuracy = 0.913

EPOCH 134 ...
Validation Accuracy = 0.913

EPOCH 135 ...
Validation Accuracy = 0.910

EPOCH 136 ...
Validation Accuracy = 0.912

EPOCH 137 ...
Validation Accuracy = 0.916

EPOCH 138 ...
Validation Accuracy = 0.912

EPOCH 139 ...
Validation Accuracy = 0.915

EPOCH 140 ...
Validation Accuracy = 0.912

EPOCH 141 ...
Validation Accuracy = 0.918

EPOCH 142 ...
Validation Accuracy = 0.912

EPOCH 143 ...
Validation Accuracy = 0.912

EPOCH 144 ...
Validation Accuracy = 0.912

EPOCH 145 ...
Validation Accuracy = 0.915

EPOCH 146 ...
Validation Accuracy = 0.914

EPOCH 147 ...
Validation Accuracy = 0.914

EPOCH 148 ...
Validation Accuracy = 0.916

EPOCH 149 ...
Validation Accuracy = 0.916

EPOCH 150 ...
Validation Accuracy = 0.913

EPOCH 151 ...
Validation Accuracy = 0.909

EPOCH 152 ...
Validation Accuracy = 0.915

EPOCH 153 ...
Validation Accuracy = 0.913

EPOCH 154 ...
Validation Accuracy = 0.912

EPOCH 155 ...
Validation Accuracy = 0.918

EPOCH 156 ...
Validation Accuracy = 0.910

EPOCH 157 ...
Validation Accuracy = 0.916

EPOCH 158 ...
Validation Accuracy = 0.921

EPOCH 159 ...
Validation Accuracy = 0.919

EPOCH 160 ...
Validation Accuracy = 0.919

EPOCH 161 ...
Validation Accuracy = 0.917

EPOCH 162 ...
Validation Accuracy = 0.917

EPOCH 163 ...
Validation Accuracy = 0.920

EPOCH 164 ...
Validation Accuracy = 0.918

EPOCH 165 ...
Validation Accuracy = 0.921

EPOCH 166 ...
Validation Accuracy = 0.919

EPOCH 167 ...
Validation Accuracy = 0.923

EPOCH 168 ...
Validation Accuracy = 0.917

EPOCH 169 ...
Validation Accuracy = 0.922

EPOCH 170 ...
Validation Accuracy = 0.916

EPOCH 171 ...
Validation Accuracy = 0.925

EPOCH 172 ...
Validation Accuracy = 0.921

EPOCH 173 ...
Validation Accuracy = 0.923

EPOCH 174 ...
Validation Accuracy = 0.923

EPOCH 175 ...
Validation Accuracy = 0.923

EPOCH 176 ...
Validation Accuracy = 0.919

EPOCH 177 ...
Validation Accuracy = 0.921

EPOCH 178 ...
Validation Accuracy = 0.922

EPOCH 179 ...
Validation Accuracy = 0.922

EPOCH 180 ...
Validation Accuracy = 0.924

EPOCH 181 ...
Validation Accuracy = 0.924

EPOCH 182 ...
Validation Accuracy = 0.925

EPOCH 183 ...
Validation Accuracy = 0.928

EPOCH 184 ...
Validation Accuracy = 0.923

EPOCH 185 ...
Validation Accuracy = 0.924

EPOCH 186 ...
Validation Accuracy = 0.924

EPOCH 187 ...
Validation Accuracy = 0.929

EPOCH 188 ...
Validation Accuracy = 0.924

EPOCH 189 ...
Validation Accuracy = 0.927

EPOCH 190 ...
Validation Accuracy = 0.926

EPOCH 191 ...
Validation Accuracy = 0.921

EPOCH 192 ...
Validation Accuracy = 0.924

EPOCH 193 ...
Validation Accuracy = 0.925

EPOCH 194 ...
Validation Accuracy = 0.925

EPOCH 195 ...
Validation Accuracy = 0.922

EPOCH 196 ...
Validation Accuracy = 0.923

EPOCH 197 ...
Validation Accuracy = 0.927

EPOCH 198 ...
Validation Accuracy = 0.926

EPOCH 199 ...
Validation Accuracy = 0.927

EPOCH 200 ...
Validation Accuracy = 0.925

EPOCH 201 ...
Validation Accuracy = 0.928

EPOCH 202 ...
Validation Accuracy = 0.928

EPOCH 203 ...
Validation Accuracy = 0.925

EPOCH 204 ...
Validation Accuracy = 0.927

EPOCH 205 ...
Validation Accuracy = 0.924

EPOCH 206 ...
Validation Accuracy = 0.926

EPOCH 207 ...
Validation Accuracy = 0.924

EPOCH 208 ...
Validation Accuracy = 0.932

EPOCH 209 ...
Validation Accuracy = 0.926

EPOCH 210 ...
Validation Accuracy = 0.925

EPOCH 211 ...
Validation Accuracy = 0.925

EPOCH 212 ...
Validation Accuracy = 0.927

EPOCH 213 ...
Validation Accuracy = 0.931

EPOCH 214 ...
Validation Accuracy = 0.925

EPOCH 215 ...
Validation Accuracy = 0.924

EPOCH 216 ...
Validation Accuracy = 0.926

EPOCH 217 ...
Validation Accuracy = 0.929

EPOCH 218 ...
Validation Accuracy = 0.930

EPOCH 219 ...
Validation Accuracy = 0.928

EPOCH 220 ...
Validation Accuracy = 0.930

EPOCH 221 ...
Validation Accuracy = 0.928

EPOCH 222 ...
Validation Accuracy = 0.925

EPOCH 223 ...
Validation Accuracy = 0.927

EPOCH 224 ...
Validation Accuracy = 0.931

EPOCH 225 ...
Validation Accuracy = 0.929

EPOCH 226 ...
Validation Accuracy = 0.927

EPOCH 227 ...
Validation Accuracy = 0.928

EPOCH 228 ...
Validation Accuracy = 0.929

EPOCH 229 ...
Validation Accuracy = 0.932

EPOCH 230 ...
Validation Accuracy = 0.932

EPOCH 231 ...
Validation Accuracy = 0.929

EPOCH 232 ...
Validation Accuracy = 0.930

EPOCH 233 ...
Validation Accuracy = 0.929

EPOCH 234 ...
Validation Accuracy = 0.926

EPOCH 235 ...
Validation Accuracy = 0.933

EPOCH 236 ...
Validation Accuracy = 0.932

EPOCH 237 ...
Validation Accuracy = 0.931

EPOCH 238 ...
Validation Accuracy = 0.929

EPOCH 239 ...
Validation Accuracy = 0.932

EPOCH 240 ...
Validation Accuracy = 0.934

EPOCH 241 ...
Validation Accuracy = 0.930

EPOCH 242 ...
Validation Accuracy = 0.926

EPOCH 243 ...
Validation Accuracy = 0.927

EPOCH 244 ...
Validation Accuracy = 0.930

EPOCH 245 ...
Validation Accuracy = 0.933

EPOCH 246 ...
Validation Accuracy = 0.929

EPOCH 247 ...
Validation Accuracy = 0.929

EPOCH 248 ...
Validation Accuracy = 0.930

EPOCH 249 ...
Validation Accuracy = 0.932

EPOCH 250 ...
Validation Accuracy = 0.932

EPOCH 251 ...
Validation Accuracy = 0.932

EPOCH 252 ...
Validation Accuracy = 0.934

EPOCH 253 ...
Validation Accuracy = 0.932

EPOCH 254 ...
Validation Accuracy = 0.930

EPOCH 255 ...
Validation Accuracy = 0.932

EPOCH 256 ...
Validation Accuracy = 0.931

EPOCH 257 ...
Validation Accuracy = 0.936

EPOCH 258 ...
Validation Accuracy = 0.932

EPOCH 259 ...
Validation Accuracy = 0.932

EPOCH 260 ...
Validation Accuracy = 0.935

EPOCH 261 ...
Validation Accuracy = 0.932

EPOCH 262 ...
Validation Accuracy = 0.934

EPOCH 263 ...
Validation Accuracy = 0.932

EPOCH 264 ...
Validation Accuracy = 0.933

EPOCH 265 ...
Validation Accuracy = 0.936

EPOCH 266 ...
Validation Accuracy = 0.937

EPOCH 267 ...
Validation Accuracy = 0.937

EPOCH 268 ...
Validation Accuracy = 0.934

EPOCH 269 ...
Validation Accuracy = 0.932

EPOCH 270 ...
Validation Accuracy = 0.936

EPOCH 271 ...
Validation Accuracy = 0.934

EPOCH 272 ...
Validation Accuracy = 0.935

EPOCH 273 ...
Validation Accuracy = 0.939

EPOCH 274 ...
Validation Accuracy = 0.933

EPOCH 275 ...
Validation Accuracy = 0.936

EPOCH 276 ...
Validation Accuracy = 0.938

EPOCH 277 ...
Validation Accuracy = 0.939

EPOCH 278 ...
Validation Accuracy = 0.936

EPOCH 279 ...
Validation Accuracy = 0.935

EPOCH 280 ...
Validation Accuracy = 0.933

EPOCH 281 ...
Validation Accuracy = 0.930

EPOCH 282 ...
Validation Accuracy = 0.937

EPOCH 283 ...
Validation Accuracy = 0.935

EPOCH 284 ...
Validation Accuracy = 0.933

EPOCH 285 ...
Validation Accuracy = 0.922

EPOCH 286 ...
Validation Accuracy = 0.936

EPOCH 287 ...
Validation Accuracy = 0.937

EPOCH 288 ...
Validation Accuracy = 0.935

EPOCH 289 ...
Validation Accuracy = 0.939

EPOCH 290 ...
Validation Accuracy = 0.937

EPOCH 291 ...
Validation Accuracy = 0.937

EPOCH 292 ...
Validation Accuracy = 0.934

EPOCH 293 ...
Validation Accuracy = 0.939

EPOCH 294 ...
Validation Accuracy = 0.933

EPOCH 295 ...
Validation Accuracy = 0.934

EPOCH 296 ...
Validation Accuracy = 0.932

EPOCH 297 ...
Validation Accuracy = 0.934

EPOCH 298 ...
Validation Accuracy = 0.937

EPOCH 299 ...
Validation Accuracy = 0.938

EPOCH 300 ...
Validation Accuracy = 0.937

EPOCH 301 ...
Validation Accuracy = 0.927

EPOCH 302 ...
Validation Accuracy = 0.935

EPOCH 303 ...
Validation Accuracy = 0.933

EPOCH 304 ...
Validation Accuracy = 0.934

EPOCH 305 ...
Validation Accuracy = 0.933

EPOCH 306 ...
Validation Accuracy = 0.938

EPOCH 307 ...
Validation Accuracy = 0.937

EPOCH 308 ...
Validation Accuracy = 0.932

EPOCH 309 ...
Validation Accuracy = 0.939

EPOCH 310 ...
Validation Accuracy = 0.936

EPOCH 311 ...
Validation Accuracy = 0.934

EPOCH 312 ...
Validation Accuracy = 0.934

EPOCH 313 ...
Validation Accuracy = 0.934

EPOCH 314 ...
Validation Accuracy = 0.932

EPOCH 315 ...
Validation Accuracy = 0.935

EPOCH 316 ...
Validation Accuracy = 0.936

EPOCH 317 ...
Validation Accuracy = 0.932

EPOCH 318 ...
Validation Accuracy = 0.936

EPOCH 319 ...
Validation Accuracy = 0.935

EPOCH 320 ...
Validation Accuracy = 0.941

EPOCH 321 ...
Validation Accuracy = 0.937

EPOCH 322 ...
Validation Accuracy = 0.936

EPOCH 323 ...
Validation Accuracy = 0.936

EPOCH 324 ...
Validation Accuracy = 0.938

EPOCH 325 ...
Validation Accuracy = 0.939

EPOCH 326 ...
Validation Accuracy = 0.940

EPOCH 327 ...
Validation Accuracy = 0.941

EPOCH 328 ...
Validation Accuracy = 0.938

EPOCH 329 ...
Validation Accuracy = 0.937

EPOCH 330 ...
Validation Accuracy = 0.939

EPOCH 331 ...
Validation Accuracy = 0.934

EPOCH 332 ...
Validation Accuracy = 0.940

EPOCH 333 ...
Validation Accuracy = 0.935

EPOCH 334 ...
Validation Accuracy = 0.937

EPOCH 335 ...
Validation Accuracy = 0.941

EPOCH 336 ...
Validation Accuracy = 0.938

EPOCH 337 ...
Validation Accuracy = 0.936

EPOCH 338 ...
Validation Accuracy = 0.938

EPOCH 339 ...
Validation Accuracy = 0.936

EPOCH 340 ...
Validation Accuracy = 0.932

EPOCH 341 ...
Validation Accuracy = 0.935

EPOCH 342 ...
Validation Accuracy = 0.939

EPOCH 343 ...
Validation Accuracy = 0.933

EPOCH 344 ...
Validation Accuracy = 0.938

EPOCH 345 ...
Validation Accuracy = 0.933

EPOCH 346 ...
Validation Accuracy = 0.934

EPOCH 347 ...
Validation Accuracy = 0.933

EPOCH 348 ...
Validation Accuracy = 0.933

EPOCH 349 ...
Validation Accuracy = 0.935

EPOCH 350 ...
Validation Accuracy = 0.939

EPOCH 351 ...
Validation Accuracy = 0.941

EPOCH 352 ...
Validation Accuracy = 0.939

EPOCH 353 ...
Validation Accuracy = 0.939

EPOCH 354 ...
Validation Accuracy = 0.944

High Accuracy Reached. Breaking off!
Model saved

Restore Model and Calculate Performance with the Test Set, to check for over or underfitting


In [45]:
saver = tf.train.Saver()
MODEL_FILE = './traffic_sign'
# Create Graph

print ("Evaluating Model on Testing Set") 

with tf.Session() as sess:
    saver.restore(sess, MODEL_FILE)
    validation_accuracy = evaluate(X_test, y_test)
    #print("EPOCH {} ...".format(i+1))
    print("Validation Accuracy (test) = {:.3f}".format(validation_accuracy))
    validation_accuracy = evaluate(X_valid, y_valid)
    #print("EPOCH {} ...".format(i+1))
    print("Validation Accuracy (validation) = {:.3f}".format(validation_accuracy))
    validation_accuracy = evaluate(X_train, y_train)
    #print("EPOCH {} ...".format(i+1))
    print("Validation Accuracy (training) = {:.3f}".format(validation_accuracy))


Evaluating Model on Testing Set
INFO:tensorflow:Restoring parameters from ./traffic_sign
Validation Accuracy (test) = 0.914
Validation Accuracy (validation) = 0.944
Validation Accuracy (training) = 1.000

In [50]:
saver = tf.train.Saver()
MODEL_FILE = './traffic_sign'
# Create Graph

print ("Evaluating Model on Testing Set") 

with tf.Session() as sess:
    saver.restore(sess, MODEL_FILE)


Evaluating Model on Testing Set
INFO:tensorflow:Restoring parameters from ./traffic_sign

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images


In [51]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.

import os
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
IMAGES_DIR = "./web_images"
image_files = os.listdir(IMAGES_DIR)
#print (image_files)

images = [mpimg.imread(os.path.join(IMAGES_DIR,f)) for f in image_files]


    
## preprocess
images_preprocessed = [normalize_gs(img) for img in images]

for i in images_preprocessed:
    print (i.shape)

for idx,i in enumerate(images):
    print (image_files[idx])
    f = plt.subplot(121)
    plt.imshow(i)
    f = plt.subplot(122)
    plt.gray()
    plt.imshow(images_preprocessed[idx].reshape((32,32)))
    plt.show()


(32, 32, 1)
(32, 32, 1)
(32, 32, 1)
(32, 32, 1)
(32, 32, 1)
limit30.jpeg
Yield.jpeg
lights.jpeg
STOP.jpeg
roundabout.jpeg

Predict the Sign Type for Each Image


In [64]:
## evaluate preprocessed images on the model

with tf.Session() as sess:
    saver.restore(sess, MODEL_FILE)
    predictions, top_k = predict(images_preprocessed)
    #print (list(predictions))
    
    for i,p in enumerate(predictions.eval()):
        amax = np.argmax(p)
        print ("File name", image_files[i], "Logit argmax", amax, "equivalent to class", names[amax] )


INFO:tensorflow:Restoring parameters from ./traffic_sign
File name limit30.jpeg Logit argmax 1 equivalent to class Speed limit (30km/h)
File name Yield.jpeg Logit argmax 13 equivalent to class Yield
File name lights.jpeg Logit argmax 18 equivalent to class General caution
File name STOP.jpeg Logit argmax 14 equivalent to class Stop
File name roundabout.jpeg Logit argmax 40 equivalent to class Roundabout mandatory

Analyze Performance


In [54]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
N = 5.0
correct = 4

acc = correct / N

print("Accuracy is", acc)


Accuracy is 0.8

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.


In [65]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.
print(top_k)
print(top_k[1][0])
width = 1.5
bar_x = range(5)
plt.figure(2, figsize=(12,5))
ax = plt.subplot(151)
plt.bar(bar_x, top_k[0][0])
plt.xticks(bar_x, [names[k] for k in top_k[1][0]], rotation = 90)
ax.set_title('Limit30')

ax = plt.subplot(152)
plt.bar(bar_x, top_k[0][1])
plt.xticks(bar_x, [names[k] for k in top_k[1][1]], rotation = 90)
ax.set_title('Yield')

ax = plt.subplot(153)
plt.bar(bar_x, top_k[0][2])
plt.xticks(bar_x, [names[k] for k in top_k[1][2]], rotation = 90)
ax.set_title('Lights')

ax = plt.subplot(154)
plt.bar(bar_x, top_k[0][3])
plt.xticks(bar_x, [names[k] for k in top_k[1][3]], rotation = 90)
ax.set_title('STOP')

ax = plt.subplot(155)
plt.bar(bar_x, top_k[0][4])
plt.xticks(bar_x, [names[k] for k in top_k[1][4]], rotation = 90)
ax.set_title('Roundabout')

plt.show()


TopKV2(values=array([[  9.99999881e-01,   6.80429793e-08,   9.71807301e-10,
          2.73615223e-13,   4.19314661e-17],
       [  1.00000000e+00,   4.38417138e-15,   2.62378431e-17,
          1.33695626e-19,   6.31987799e-31],
       [  1.00000000e+00,   6.24002048e-20,   0.00000000e+00,
          0.00000000e+00,   0.00000000e+00],
       [  1.00000000e+00,   1.22684913e-14,   5.27914112e-18,
          1.74858899e-19,   9.99467858e-20],
       [  6.49874568e-01,   3.50123942e-01,   1.54575480e-06,
          4.69151905e-11,   1.33635052e-12]], dtype=float32), indices=array([[ 1,  4,  0,  2,  6],
       [13, 15, 38, 35, 33],
       [18, 26,  0,  1,  2],
       [14, 17, 12, 10, 38],
       [40, 12, 18, 37, 26]], dtype=int32))
[1 4 0 2 6]

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.


Step 4 (Optional): Visualize the Neural Network's State with Test Images

This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Your output should look something like this (above)


In [ ]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, ect if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 & activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")