Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, as required to successfully complete this project. If additional code that cannot be included in the notebook is required, be sure that the Python code is successfully imported and included in your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "Stand Out Suggestions", you can include the code in this IPython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.


Step 0: Load The Data


In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = r'C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\pickled_data\train.p'
validation_file = r'C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\pickled_data\valid.p'
testing_file = r'C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\pickled_data\test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

print('Data loaded')


Data loaded

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the numpy/pandas shape attribute might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas


In [2]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples
n_validation = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = np.shape(X_train[0])

# TODO: How many unique classes/labels are there in the dataset?
n_classes = len(np.unique(y_train))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validation examples =", n_validation)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)


Number of training examples = 34799
Number of testing examples = 12630
Number of validation examples = 4410
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?


In [3]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import random
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline

# show image of 20 random data points
figs, axs = plt.subplots(4, 5, figsize = (16, 8))
figs.subplots_adjust(hspace = .2, wspace = .001)
axs = axs.ravel()
for i in range(20):
    index = random.randint(0, len(X_train) - 1) # randint is inclusive on both ends, so subtract 1 to stay in range
    image = X_train[index]
    axs[i].axis('off')
    axs[i].imshow(image)
    axs[i].set_title(y_train[index])


Check counts of individual classes


In [4]:
import numpy as np

train_class, train_counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(train_class, train_counts)
plt.grid()
plt.title("Training Dataset : class vs count")
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.show()

test_class, test_counts = np.unique(y_test, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(test_class, test_counts)
plt.grid()
plt.title("Testing Dataset : class vs count")
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.show()

valid_class, valid_counts = np.unique(y_valid, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(valid_class, valid_counts)
plt.grid()
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.title("Validation Dataset : class vs count")
plt.show()
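
The note above asks whether the three splits share the same distribution. The three bar charts make per-split imbalance visible, but comparing splits is easier on one axis. A hedged sketch (reusing the class/count arrays computed above; the bar offsets are illustrative choices) plots the normalized class frequencies side by side:

width = 0.3
plt.figure(figsize=(15, 5))
plt.bar(train_class - width, train_counts / train_counts.sum(), width, label='train')
plt.bar(valid_class, valid_counts / valid_counts.sum(), width, label='valid')
plt.bar(test_class + width, test_counts / test_counts.sum(), width, label='test')
plt.grid()
plt.title("Class frequency by split (normalized)")
plt.xlabel("Class")
plt.ylabel("Fraction of images")
plt.legend()
plt.show()

If the bars for a given class track each other, the splits are similarly distributed; large per-class differences in height show the imbalance that the augmentation step below addresses.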



Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
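
Note that the normalize() function defined in the next cell computes dataset/128.0 - 1.0, which is algebraically the same mapping as (pixel - 128)/128. A one-line hedged check (not part of the original notebook) confirms the equivalence:

import numpy as np

# (p - 128)/128 and p/128 - 1 agree for every uint8 pixel value
p = np.arange(256, dtype=np.float32)
assert np.allclose((p - 128.0) / 128.0, p / 128.0 - 1.0)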

Other pre-processing steps are optional. You can try different techniques to see if it improves performance.

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.


In [5]:
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include converting
### to grayscale, etc. Feel free to use as many code cells as needed.
### Define functions to be used for pre-processing, cleaning and augmenting the data
import cv2

def normalize(dataset):
    # scale uint8 pixels to [-1, 1); algebraically the same as (pixel - 128)/128
    return dataset/128.0 - 1.0

def rgbtograyscale(dataset):
    # unweighted average of the three channels; keepdims retains the channel axis
    return np.sum(dataset/3, axis = 3, keepdims = True)

def histogram_eq(dataset):
    hist_eq_dataset = []
    channels = np.shape(dataset[0])[2]
    for i in range(len(dataset)):
        img = dataset[i].copy() # copy so the source images are not equalized in place
        for j in range(channels):
            img[:, :, j] = cv2.equalizeHist(img[:, :, j]) # equalizeHist expects uint8 input
        hist_eq_dataset.append(img)
    return hist_eq_dataset

def rand_rotation(dataset):
    rand_rotation_dataset = []
    rows,cols = dataset[0].shape[:2]
    for i in range(len(dataset)):
        img = dataset[i]
        random_angle = 30.0*np.random.rand() - 15 # rotations are restricted to the (-15, 15) degree range
        M = cv2.getRotationMatrix2D((cols/2,rows/2), random_angle, 1) 
        dst = cv2.warpAffine(img, M, (cols, rows))
        rand_rotation_dataset.append(dst)
    return rand_rotation_dataset

def rand_translation(dataset):
    rand_translation_dataset = []
    rows,cols = dataset[0].shape[:2]
    for i in range(len(dataset)):
        img = dataset[i]
        delx, dely = np.random.randint(-3, 3, 2)
        M = np.float32([[1, 0, delx],[0, 1, dely]])
        dst = cv2.warpAffine(img, M, (cols,rows))        
        rand_translation_dataset.append(dst)
    return rand_translation_dataset

def rand_zoom(dataset):
    rand_zoom_dataset = []
    rows,cols = dataset[0].shape[:2]
    for i in range(len(dataset)):
        img = dataset[i]
        px = np.random.randint(-2, 2) # transform limits
        pts1 = np.float32([[px,px],[rows-px,px],[px,cols-px],[rows-px,cols-px]]) # ending locations
        pts2 = np.float32([[0,0],[rows,0],[0,cols],[rows,cols]]) # starting locations
        M = cv2.getPerspectiveTransform(pts1,pts2)
        dst = cv2.warpPerspective(img,M,(rows,cols))      
        rand_zoom_dataset.append(dst)
    return rand_zoom_dataset     

def augment_dataset(dataset, dataset_class, deficit, deficit_class):
    # Note: even though duplicates may be selected from the original X_train dataset,
    # applying transformations like random rotation, histogram equalization, etc. to
    # the augment dataset produces new features for the network, which helps the
    # model generalize.
    X_train_deficit = dataset[dataset_class == deficit_class]
    X_train_augment = []
    for i in range(deficit):
        index = random.randint(0, len(X_train_deficit) - 1)
        X_train_augment.append(X_train_deficit[index])
    return X_train_augment, deficit_class*np.ones(len(X_train_augment))

Augmenting training data by applying transformations to the original data

As can be seen from the visualizations of the training, validation, and test data, the number of images per class varies greatly in each of the datasets. This means there might be features in the validation data that are not fully encompassed by the training data. To overcome this issue, data augmentation is applied. The first step in augmentation is to build a dataset of images drawn from classes whose counts fall below a certain threshold; specifically, classes with fewer than 800 images are selected for augmentation. Augmentation is also done in such a way that the final class-vs-count distribution is relatively flat.


In [6]:
deficit_index = train_class[train_counts < 800] # class ids run 0..42, so these unique values double as the deficit class labels
deficit_count = 800 - train_counts[deficit_index]
n_augment = np.sum(deficit_count)

print(deficit_count) # print the number of images to be added per class
print(deficit_index) # print the class of images to be added
print(n_augment) # print the total number of images to be added

X_train_augment = []
y_train_augment = []

for i in range(len(deficit_index)):
    augment_data, augment_class = augment_dataset(X_train, y_train, deficit_count[i], deficit_index[i])
    X_train_augment.extend(augment_data)
    y_train_augment.extend(augment_class)
    
print(len(X_train_augment)) # verify number of images in augment dataset matches with total number of images to be added


[620 440 110 260 440 620 500 530 470 350 560 260 590 320 560 410 110 590
 201 440 470 620 530 500 590 590]
[ 0  6 14 15 16 19 20 21 22 23 24 26 27 28 29 30 31 32 33 34 36 37 39 40 41
 42]
11681
11681

In [7]:
X_train_augment = np.array(X_train_augment) # convert to np array from list
y_train_augment = np.array(y_train_augment) # convert to np array from list

print(np.shape(X_train)) 
print(np.shape(X_train_augment))
print(np.shape(y_train))
print(np.shape(y_train_augment))

# Once we have a set of images selected from the original dataset for augmenting the training data, we apply
# various transformations like random rotation, translation, zoom and histogram equalization on the augment dataset.
# This augment dataset is then concatenated with the original dataset.

X_train = np.concatenate((X_train, rand_translation(rand_rotation(histogram_eq(rand_zoom(X_train_augment))))), axis = 0)
y_train = np.concatenate((y_train, y_train_augment), axis = 0)

# show image of 20 random data points
figs, axs = plt.subplots(4, 5, figsize = (16, 8))
figs.subplots_adjust(hspace = .4, wspace = .001)
axs = axs.ravel()
for i in range(20):
    index = random.randint(0, len(X_train) - 1)
    image = X_train[index]
    axs[i].axis('off')
    axs[i].imshow(image)
    axs[i].set_title(y_train[index])


(34799, 32, 32, 3)
(11681, 32, 32, 3)
(34799,)
(11681,)

Check final size of the training dataset after augmentation and plot distribution of images per class


In [8]:
print(np.shape(X_train))
print(np.shape(y_train))

# plot distribution of augmented dataset
train_class, train_counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(15, 5))
plt.bar(train_class, train_counts)
plt.grid()
plt.title("Augmented training Dataset : class vs count")
plt.xlabel("Class")
plt.ylabel("Number of images")
plt.show()


(46480, 32, 32, 3)
(46480,)

In [9]:
X_train = np.array(X_train) # ensure np array (a no-op if already an array)
X_valid = np.array(X_valid) # ensure np array (a no-op if already an array)
X_train = normalize(rgbtograyscale(X_train)) # normalize the dataset and convert to grayscale
X_valid = normalize(rgbtograyscale(X_valid)) # normalize the dataset and convert to grayscale

# verify that preprocessing was applied to the training and validation datasets
# (X_test is left untouched here; it is preprocessed just before final evaluation)
    
print(np.shape(X_train))
print(np.shape(X_test))
print(np.shape(X_valid))

# show image of 20 random data points
figs, axs = plt.subplots(4, 5, figsize = (16, 8))
figs.subplots_adjust(hspace = .4, wspace = .001)
axs = axs.ravel()
for i in range(20):
    index = random.randint(0, len(X_train) - 1)
    image = X_train[index].squeeze()
    axs[i].axis('off')
    axs[i].imshow(image, cmap = 'gray')
    axs[i].set_title(int(y_train[index]))


(46480, 32, 32, 1)
(12630, 32, 32, 3)
(4410, 32, 32, 1)
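
As a quick sanity check (a hedged addition, not part of the original pipeline), the normalized grayscale training data should lie in [-1, 1) and sit roughly around zero:

# hedged sanity check on the preprocessed training set
print("X_train mean = {:.3f}, min = {:.3f}, max = {:.3f}".format(
    X_train.mean(), X_train.min(), X_train.max()))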

Model Architecture


In [10]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten

EPOCHS = 60
BATCH_SIZE = 200

def LeNet(x):    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1
    
    # Layer 1: Convolution. Input = 32x32x1. Output = 28x28x6.
    conv1_w = tf.Variable(tf.truncated_normal(shape = (5, 5, 1, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1 = tf.nn.conv2d(x, conv1_w, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
    
    # Activation
    conv1 = tf.nn.relu(conv1)
                          
    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
                   
    # Layer 2: Convolution. Output = 10x10x16.
    conv2_w = tf.Variable(tf.truncated_normal([5, 5, 6, 16], mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2 = tf.nn.conv2d(conv1, conv2_w, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

    # Activation
    conv2 = tf.nn.relu(conv2)
    
    # Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten. Input = 5x5x16. Output = 400.
    conv2 = flatten(conv2)
    
    # Dropout layer 1 (keep probability hard-coded at 0.80; see the note after this cell)
    conv2 = tf.nn.dropout(conv2, 0.80)
    
    # Layer 3: Fully Connected. Input = 400. Output = 120.
    fconnected1_w = tf.Variable(tf.truncated_normal([400, 120], mean = mu, stddev = sigma))
    fconnected1_b = tf.Variable(tf.zeros(120))
    fconnected_1 = tf.add(tf.matmul(conv2, fconnected1_w), fconnected1_b)
    
    # Activation.
    fconnected_1 = tf.nn.relu(fconnected_1)
    
    # Dropout layer 2 (keep probability hard-coded at 0.75; see the note after this cell)
    fconnected_1 = tf.nn.dropout(fconnected_1, 0.75)
    
    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fconnected2_w = tf.Variable(tf.truncated_normal([120, 84], mean = mu, stddev = sigma))
    fconnected2_b = tf.Variable(tf.zeros(84))
    fconnected_2 = tf.add(tf.matmul(fconnected_1, fconnected2_w), fconnected2_b)
    
    # Activation.
    fconnected_2 = tf.nn.relu(fconnected_2)
    
    # Layer 5: Fully Connected. Input = 84. Output = 43.
    fconnected3_w = tf.Variable(tf.truncated_normal([84, 43], mean = mu, stddev = sigma))
    fconnected3_b = tf.Variable(tf.zeros(43))
    logits = tf.add(tf.matmul(fconnected_2, fconnected3_w), fconnected3_b)
    
    return logits
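
One caveat with the architecture above: the two dropout layers use hard-coded keep probabilities, so dropout remains active even when the same graph is used for evaluation. The standard TensorFlow 1.x pattern is a keep_prob placeholder fed with 1.0 at evaluation time. A minimal sketch of that pattern (not wired into this notebook's training loop) follows:

# Hedged sketch: placeholder-driven dropout so evaluation can disable it.
# keep_prob would replace the constants 0.80 and 0.75 inside LeNet, and every
# sess.run call would then supply it through the feed_dict.
keep_prob = tf.placeholder(tf.float32)
# inside LeNet:   layer = tf.nn.dropout(layer, keep_prob)
# training step:  sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.75})
# evaluation:     sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})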

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
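
One refinement this suggests (a hedged sketch, not used in the training loop below, assuming the saver and per-epoch validation_accuracy defined there) is to checkpoint only when validation accuracy improves, keeping the best model rather than the last one:

# Hedged sketch: keep the checkpoint with the best validation accuracy so far
best_valid_accuracy = 0.0
# inside the epoch loop, after computing validation_accuracy:
#     if validation_accuracy > best_valid_accuracy:
#         best_valid_accuracy = validation_accuracy
#         saver.save(sess, './lenet_best')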


In [11]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)

x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
rate = 0.001

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    total_loss = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        loss, accuracy = sess.run([loss_operation, accuracy_operation], feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
        total_loss += (loss * len(batch_x))
    return total_loss/float(num_examples), total_accuracy/float(num_examples)

train_loss_history = []
valid_loss_history = []

train_accuracy_history = []
valid_accuracy_history = []

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)
    
    print("Training...")
    print()
    
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
            
        validation_loss, validation_accuracy = evaluate(X_valid, y_valid)
        valid_loss_history.append(validation_loss)
        valid_accuracy_history.append(validation_accuracy)
        train_loss, train_accuracy = evaluate(X_train, y_train)
        train_loss_history.append(train_loss)
        train_accuracy_history.append(train_accuracy)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print("Training Accuracy = {:.3f}".format(train_accuracy))
        print("Validation loss = {:.3f}".format(validation_loss))
        print("Training loss = {:.3f}".format(train_loss))
        
        print()
        
    saver.save(sess, './lenet')
    print("Model saved")


Training...

EPOCH 1 ...
Validation Accuracy = 0.593
Training Accuracy = 0.585
Validation loss = 1.281
Training loss = 1.382

EPOCH 2 ...
Validation Accuracy = 0.735
Training Accuracy = 0.732
Validation loss = 0.847
Training loss = 0.893

EPOCH 3 ...
Validation Accuracy = 0.787
Training Accuracy = 0.794
Validation loss = 0.694
Training loss = 0.680

EPOCH 4 ...
Validation Accuracy = 0.803
Training Accuracy = 0.824
Validation loss = 0.642
Training loss = 0.569

EPOCH 5 ...
Validation Accuracy = 0.836
Training Accuracy = 0.854
Validation loss = 0.545
Training loss = 0.476

EPOCH 6 ...
Validation Accuracy = 0.830
Training Accuracy = 0.864
Validation loss = 0.538
Training loss = 0.435

EPOCH 7 ...
Validation Accuracy = 0.863
Training Accuracy = 0.883
Validation loss = 0.487
Training loss = 0.386

EPOCH 8 ...
Validation Accuracy = 0.866
Training Accuracy = 0.891
Validation loss = 0.446
Training loss = 0.349

EPOCH 9 ...
Validation Accuracy = 0.869
Training Accuracy = 0.901
Validation loss = 0.435
Training loss = 0.314

EPOCH 10 ...
Validation Accuracy = 0.878
Training Accuracy = 0.905
Validation loss = 0.417
Training loss = 0.302

EPOCH 11 ...
Validation Accuracy = 0.877
Training Accuracy = 0.911
Validation loss = 0.412
Training loss = 0.280

EPOCH 12 ...
Validation Accuracy = 0.885
Training Accuracy = 0.915
Validation loss = 0.387
Training loss = 0.268

EPOCH 13 ...
Validation Accuracy = 0.884
Training Accuracy = 0.917
Validation loss = 0.385
Training loss = 0.260

EPOCH 14 ...
Validation Accuracy = 0.890
Training Accuracy = 0.924
Validation loss = 0.374
Training loss = 0.237

EPOCH 15 ...
Validation Accuracy = 0.889
Training Accuracy = 0.922
Validation loss = 0.374
Training loss = 0.244

EPOCH 16 ...
Validation Accuracy = 0.904
Training Accuracy = 0.932
Validation loss = 0.348
Training loss = 0.214

EPOCH 17 ...
Validation Accuracy = 0.902
Training Accuracy = 0.931
Validation loss = 0.343
Training loss = 0.215

EPOCH 18 ...
Validation Accuracy = 0.904
Training Accuracy = 0.936
Validation loss = 0.356
Training loss = 0.201

EPOCH 19 ...
Validation Accuracy = 0.901
Training Accuracy = 0.937
Validation loss = 0.348
Training loss = 0.201

EPOCH 20 ...
Validation Accuracy = 0.904
Training Accuracy = 0.942
Validation loss = 0.367
Training loss = 0.182

EPOCH 21 ...
Validation Accuracy = 0.908
Training Accuracy = 0.940
Validation loss = 0.345
Training loss = 0.185

EPOCH 22 ...
Validation Accuracy = 0.911
Training Accuracy = 0.947
Validation loss = 0.320
Training loss = 0.166

EPOCH 23 ...
Validation Accuracy = 0.907
Training Accuracy = 0.944
Validation loss = 0.327
Training loss = 0.173

EPOCH 24 ...
Validation Accuracy = 0.913
Training Accuracy = 0.947
Validation loss = 0.316
Training loss = 0.166

EPOCH 25 ...
Validation Accuracy = 0.913
Training Accuracy = 0.946
Validation loss = 0.302
Training loss = 0.166

EPOCH 26 ...
Validation Accuracy = 0.908
Training Accuracy = 0.953
Validation loss = 0.308
Training loss = 0.149

EPOCH 27 ...
Validation Accuracy = 0.915
Training Accuracy = 0.952
Validation loss = 0.312
Training loss = 0.151

EPOCH 28 ...
Validation Accuracy = 0.923
Training Accuracy = 0.949
Validation loss = 0.277
Training loss = 0.158

EPOCH 29 ...
Validation Accuracy = 0.917
Training Accuracy = 0.952
Validation loss = 0.309
Training loss = 0.147

EPOCH 30 ...
Validation Accuracy = 0.924
Training Accuracy = 0.956
Validation loss = 0.276
Training loss = 0.140

EPOCH 31 ...
Validation Accuracy = 0.915
Training Accuracy = 0.953
Validation loss = 0.319
Training loss = 0.145

EPOCH 32 ...
Validation Accuracy = 0.923
Training Accuracy = 0.956
Validation loss = 0.280
Training loss = 0.137

EPOCH 33 ...
Validation Accuracy = 0.923
Training Accuracy = 0.957
Validation loss = 0.297
Training loss = 0.136

EPOCH 34 ...
Validation Accuracy = 0.912
Training Accuracy = 0.954
Validation loss = 0.331
Training loss = 0.145

EPOCH 35 ...
Validation Accuracy = 0.920
Training Accuracy = 0.958
Validation loss = 0.307
Training loss = 0.129

EPOCH 36 ...
Validation Accuracy = 0.920
Training Accuracy = 0.958
Validation loss = 0.307
Training loss = 0.130

EPOCH 37 ...
Validation Accuracy = 0.921
Training Accuracy = 0.958
Validation loss = 0.310
Training loss = 0.129

EPOCH 38 ...
Validation Accuracy = 0.922
Training Accuracy = 0.959
Validation loss = 0.294
Training loss = 0.129

EPOCH 39 ...
Validation Accuracy = 0.927
Training Accuracy = 0.961
Validation loss = 0.295
Training loss = 0.121

EPOCH 40 ...
Validation Accuracy = 0.919
Training Accuracy = 0.956
Validation loss = 0.314
Training loss = 0.138

EPOCH 41 ...
Validation Accuracy = 0.918
Training Accuracy = 0.961
Validation loss = 0.327
Training loss = 0.121

EPOCH 42 ...
Validation Accuracy = 0.923
Training Accuracy = 0.960
Validation loss = 0.318
Training loss = 0.126

EPOCH 43 ...
Validation Accuracy = 0.928
Training Accuracy = 0.963
Validation loss = 0.279
Training loss = 0.114

EPOCH 44 ...
Validation Accuracy = 0.922
Training Accuracy = 0.963
Validation loss = 0.299
Training loss = 0.113

EPOCH 45 ...
Validation Accuracy = 0.920
Training Accuracy = 0.962
Validation loss = 0.311
Training loss = 0.115

EPOCH 46 ...
Validation Accuracy = 0.932
Training Accuracy = 0.965
Validation loss = 0.273
Training loss = 0.110

EPOCH 47 ...
Validation Accuracy = 0.924
Training Accuracy = 0.963
Validation loss = 0.324
Training loss = 0.110

EPOCH 48 ...
Validation Accuracy = 0.927
Training Accuracy = 0.964
Validation loss = 0.299
Training loss = 0.110

EPOCH 49 ...
Validation Accuracy = 0.931
Training Accuracy = 0.965
Validation loss = 0.280
Training loss = 0.108

EPOCH 50 ...
Validation Accuracy = 0.929
Training Accuracy = 0.962
Validation loss = 0.278
Training loss = 0.116

EPOCH 51 ...
Validation Accuracy = 0.936
Training Accuracy = 0.967
Validation loss = 0.283
Training loss = 0.102

EPOCH 52 ...
Validation Accuracy = 0.932
Training Accuracy = 0.969
Validation loss = 0.281
Training loss = 0.098

EPOCH 53 ...
Validation Accuracy = 0.929
Training Accuracy = 0.966
Validation loss = 0.287
Training loss = 0.103

EPOCH 54 ...
Validation Accuracy = 0.930
Training Accuracy = 0.967
Validation loss = 0.267
Training loss = 0.102

EPOCH 55 ...
Validation Accuracy = 0.935
Training Accuracy = 0.968
Validation loss = 0.297
Training loss = 0.101

EPOCH 56 ...
Validation Accuracy = 0.926
Training Accuracy = 0.965
Validation loss = 0.304
Training loss = 0.110

EPOCH 57 ...
Validation Accuracy = 0.934
Training Accuracy = 0.966
Validation loss = 0.283
Training loss = 0.108

EPOCH 58 ...
Validation Accuracy = 0.927
Training Accuracy = 0.968
Validation loss = 0.302
Training loss = 0.099

EPOCH 59 ...
Validation Accuracy = 0.934
Training Accuracy = 0.969
Validation loss = 0.271
Training loss = 0.096

EPOCH 60 ...
Validation Accuracy = 0.931
Training Accuracy = 0.971
Validation loss = 0.291
Training loss = 0.090

Model saved

Plot epoch vs training loss/accuracy and epoch vs validation loss/accuracy


In [12]:
plt.figure()
loss_plot = plt.subplot(2,1,1)
loss_plot.set_title('Loss')
loss_plot.plot(train_loss_history, 'r', label='Training Loss')
loss_plot.plot(valid_loss_history, 'b', label='Validation Loss')
loss_plot.set_xlim([0, EPOCHS])
loss_plot.legend(loc = 1)

plt.figure()
accuracy_plot = plt.subplot(2,1,1)
accuracy_plot.set_title('Accuracy')
accuracy_plot.plot(train_accuracy_history, 'r', label='Training Accuracy')
accuracy_plot.plot(valid_accuracy_history, 'b', label='Validation Accuracy')
accuracy_plot.set_xlim([0, EPOCHS])
accuracy_plot.legend(loc = 4)


Out[12]:
<matplotlib.legend.Legend at 0x1c4bb751f28>

Check accuracy on the test dataset (run only once, after the final model was saved)


In [13]:
# Normalize and grayscale test dataset
X_test = normalize(rgbtograyscale(X_test))

In [14]:
#Run testing
with tf.Session() as sess:
    saver_new = tf.train.import_meta_graph('./lenet.meta')
    saver_new.restore(sess, "./lenet")
    test_loss, test_accuracy = evaluate(X_test, y_test)
    print("Test Set Accuracy = {:.3f}".format(test_accuracy))


Test Set Accuracy = 0.911

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
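
The cells below print only class ids; a small hedged helper (assuming signnames.csv sits in the working directory with ClassId and SignName columns, as in the standard project repository) maps ids to readable names:

import pandas as pd

# hedged sketch: id -> name lookup from signnames.csv
sign_names = pd.read_csv('signnames.csv').set_index('ClassId')['SignName']
print(sign_names[14]) # expected to print "Stop" under the standard GTSRB mapping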

Load and Output the Images


In [15]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
from scipy import misc # note: misc.imread is deprecated in newer SciPy releases; imageio.imread is the modern equivalent
import glob

X_test_new = []
y_test_new = np.array([0, 14, 25, 4, 38])
path = r"C:\Users\VINOD\Google Drive\SDCND\CarND-Traffic-Sign-Classifier-Project\test_images_from_web/*.png"
for image_path in glob.glob(path):
    X_test_new.append(misc.imread(image_path))    

X_test_new = np.asarray(X_test_new)
print('Importing done...') 

# Apply normalization, rgb to grayscale conversion and other preprocessing steps
X_test_new = normalize(rgbtograyscale(X_test_new))
print()
print("Dimensions of test dataset:")
print(X_test_new.shape)

# show new test images
figs, axs = plt.subplots(1, 5, figsize = (6, 8))
figs.subplots_adjust(hspace = .4, wspace = .01)
axs = axs.ravel()
for i in range(5):
    image = X_test_new[i].squeeze()
    axs[i].axis('off')
    axs[i].imshow(image, cmap = 'gray')
    axs[i].set_title(int(y_test_new[i]))


Importing done...

Dimensions of test dataset:
(5, 32, 32, 1)

Predict the Sign Type for Each Image and Analyze Performance


In [16]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.

#Run testing
with tf.Session() as sess:
    saver_new2 = tf.train.import_meta_graph('./lenet.meta')
    saver_new2.restore(sess, "./lenet")
    test_loss, test_accuracy = evaluate(X_test_new, y_test_new)
    softmax_prob = sess.run(tf.nn.softmax(logits), feed_dict={x: X_test_new})
    top5_prob = sess.run(tf.nn.top_k(softmax_prob, k = 5))
    
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

print("New Test Set Accuracy = {:.3f}".format(test_accuracy))


New Test Set Accuracy = 0.800

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking at just the first row, we get [ 0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a. You'll also notice that [3, 0, 5] are the corresponding indices.


In [17]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.
plt.figure(figsize = (8, 16))
for i in range(5):
    plt.subplot(5, 2, 2*i+1)
    plt.imshow(X_test_new[i].squeeze(), cmap = 'gray') 
    plt.title(y_test_new[i])
    plt.axis('off')
    plt.subplot(5, 2, 2*i+2)
    plt.barh(np.arange(1, 6, 1), top5_prob.values[i, :])
    plt.ylabel("Class")
    plt.xlabel("Probability")
    labs = top5_prob.indices[i]
    plt.yticks(np.arange(1, 6, 1), labs)
plt.show()


Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.


Step 4 (Optional): Visualize the Neural Network's State with Test Images

This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any TensorFlow weight layer you want. The inputs to the function should be a stimulus image, either one used during training or a new one you provide, and the TensorFlow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Your output should look something like this (above)


In [ ]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1: # 'and', not '&': bitwise & binds tighter than != and breaks this test
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
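
A hypothetical usage sketch (hedged: conv1 in the LeNet function above is local to that function, so the activation tensor would first need to be exposed, e.g. by returning it alongside the logits):

# Hedged usage sketch, assuming conv1_activation has been exposed from LeNet
# and the trained checkpoint from above is available.
# with tf.Session() as sess:
#     saver.restore(sess, './lenet')
#     outputFeatureMap(X_test_new[0:1], conv1_activation, plt_num=1)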