Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 0: Load The Data


In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = './traffic-signs-data/train.p'
testing_file = './traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the the original width and height the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below.


In [2]:
### Replace each question mark with the appropriate value.
import numpy as np
# TODO: Number of training examples
n_train = len(y_train)

# TODO: Number of testing examples.
n_test = len(y_test)

# TODO: What's the shape of an traffic sign image?
image_shape = X_test.shape[1:3]

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_test))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)


Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32)
Number of classes = 43

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.


In [3]:
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
plt.figure()
for i in range(0, n_classes):
    plt.subplot(5, 9, i+1)
    idx = [y_test == i]
    img = X_test[idx][0]
    plt.imshow(img)
    plt.title(str(i))
    plt.axis('off')
plt.show()



Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.


In [4]:
### Preprocess the data here.
from tqdm import tqdm
import cv2
X = np.concatenate((X_train, X_test), axis=0)
y = np.concatenate((y_train, y_test), axis=0)

# convert RGB to Gray, apply histogram equalization, data normalization
X_gray = np.array([])
# Progress bar
batches_pbar = tqdm(range(len(y)), desc='Percent ', unit='images')

for i in batches_pbar:
    img = X[i]
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)    # opencv's imread return BGR, but here is RGB
    gray = cv2.equalizeHist(gray)    # equalize the histogram
    gray = (gray-128)/128 # normalization
    gray = gray.reshape(1,32,32,1)
    if i==0:
        X_gray = gray
    else:
        X_gray = np.append(X_gray, gray, axis=0)
    
print(X_gray.shape)


Percent : 100%|██████████| 51839/51839 [59:29<00:00,  6.81images/s]
(51839, 32, 32, 1)

Question 1

Describe how you preprocessed the data. Why did you choose that technique?

Answer:

1) Convert RGB to Gray and in order to make model insensitive to color variations

2) Apply histogram equalization to mitigate the influence caused by illumination changes.

3) Data normalization to make the problem well conditioned, which is helpful to initialization and learning convergence.

Uncorrect augmented data results in worse performance.

The uncorrect augmentation generates images with black border which cut the performance. The correct data augmentation should like this https://github.com/navoshta/traffic-signs/blob/master/Traffic_Signs_Recognition.ipynb .


In [5]:
### Generate additional data (OPTIONAL!).
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.

# data split with shuffle
# from sklearn.model_selection import train_test_split
# import os
# X_tr_val,  X_ts,  y_tr_val,  y_ts = train_test_split(
#     X_gray,
#     y,
#     test_size=0.1,
#     random_state=832289)
# X_tr,  X_val,  y_tr,  y_val = train_test_split(
#     X_tr_val,
#     y_tr_val,
#     test_size=0.1,
#     random_state=832289)

# print('Training features and labels randomized and split.')

# ### Generate additional data (OPTIONAL!)
# def augment_brightness_camera_images(image):
#     image1 = cv2.cvtColor(image,cv2.COLOR_RGB2HSV)
#     random_bright = .25+np.random.uniform()
#     #print(random_bright)
#     image1[:,:,2] = image1[:,:,2]*random_bright
#     image1 = cv2.cvtColor(image1,cv2.COLOR_HSV2RGB)
#     return image1

# def transform_image(img,ang_range,shear_range,trans_range,brightness=0):
#     '''
#     This function transforms images to generate new images.
#     The function takes in following arguments,
#     1- Image
#     2- ang_range: Range of angles for rotation
#     3- shear_range: Range of values to apply affine transform to
#     4- trans_range: Range of values to apply translations over.

#     A Random uniform distribution is used to generate different parameters for transformation
#     '''
#     # Rotation
#     ang_rot = np.random.uniform(ang_range)-ang_range/2
#     rows,cols,ch = img.shape    
#     Rot_M = cv2.getRotationMatrix2D((cols/2,rows/2),ang_rot,1)
#     # Translation
#     tr_x = trans_range*np.random.uniform()-trans_range/2
#     tr_y = trans_range*np.random.uniform()-trans_range/2
#     Trans_M = np.float32([[1,0,tr_x],[0,1,tr_y]])
#     # Shear
#     pts1 = np.float32([[5,5],[20,5],[5,20]])
#     pt1 = 5+shear_range*np.random.uniform()-shear_range/2
#     pt2 = 20+shear_range*np.random.uniform()-shear_range/2
#     # Brightness
#     pts2 = np.float32([[pt1,5],[pt2,pt1],[5,pt2]])
#     shear_M = cv2.getAffineTransform(pts1,pts2)
#     # apply operations
#     img = cv2.warpAffine(img,Rot_M,(cols,rows))
#     img = cv2.warpAffine(img,Trans_M,(cols,rows))
#     img = cv2.warpAffine(img,shear_M,(cols,rows))
#     if brightness == 1:
#       img = augment_brightness_camera_images(img)
#     return img

# def augment_data(X, y):
#     n_classes = len(np.unique(y))
#     avg_sample_num = len(y)/n_classes
#     for i in range(0, n_classes):
#         idx = np.argwhere(y==i)
#         num = len(idx)
#         cnt = num
#         while cnt < avg_sample_num:
#             img = X[idx[cnt%num]].reshape(32, 32, 1)
#             img = transform_image(img,10,5,2,brightness=0).reshape(1,32, 32, 1)
#             X = np.append(X, img, axis=0)
#             y = np.append(y, np.array(i))
#             cnt += 1
#     return X, y

# X_tr,   y_tr = augment_data(X_tr, y_tr)
# X_val,  y_val = augment_data(X_val, y_val)
# X_ts,   y_ts = augment_data(X_ts, y_ts)


Training features and labels randomized and split.

Save Checkpoint


In [6]:
# Save the data for easy access
pickle_file = 'sign.pickle'
if not os.path.isfile(pickle_file):
    print('Saving data to pickle file...')
    try:
        with open('sign.pickle', 'wb') as pfile:
            pickle.dump(
                {
                    'train_dataset': X_tr,
                    'train_labels': y_tr,
                    'valid_dataset': X_val,
                    'valid_labels': y_val,
                    'test_dataset': X_ts,
                    'test_labels': y_ts,
                },
                pfile, pickle.HIGHEST_PROTOCOL)
    except Exception as e:
        print('Unable to save data to', pickle_file, ':', e)
        raise

print('Data cached in pickle file.')


Saving data to pickle file...
Data cached in pickle file.

Load Checkpoint


In [1]:
%matplotlib inline

# Load the modules
import pickle
import math

import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt

# Reload the data
pickle_file = 'sign.pickle'
with open(pickle_file, 'rb') as f:
  pickle_data = pickle.load(f)
  X_train = pickle_data['train_dataset']
  y_train = pickle_data['train_labels']
  X_validation = pickle_data['valid_dataset']
  y_validation = pickle_data['valid_labels']
  X_test = pickle_data['test_dataset']
  y_test = pickle_data['test_labels']
  del pickle_data  # Free up memory


print('Data and modules loaded.')


Data and modules loaded.

In [2]:
# TODO: Re-construct the network and add a convolutional layer before the flatten layer.
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras import backend as K
from sklearn.preprocessing import LabelBinarizer

encoder = LabelBinarizer()
encoder.fit(y_train)
y_one_hot = encoder.transform(y_train)
X_normalized = X_train
# input image dimensions
img_rows, img_cols = 32, 32
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
print(X_normalized.shape)


# Create the Sequential model
model = Sequential()

model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=(32, 32, 1)))
model.add(Activation('relu'))
model.add(Flatten(input_shape=(32, 32, 1)))
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(43))
model.add(Activation('softmax'))

model.compile('adam', 'categorical_crossentropy', ['accuracy'])
history = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20, validation_split=0.2)


Using TensorFlow backend.
(41989, 32, 32, 1)
Train on 33591 samples, validate on 8398 samples
Epoch 1/20
33591/33591 [==============================] - 61s - loss: 2.7731 - acc: 0.2956 - val_loss: 1.9775 - val_acc: 0.4668
Epoch 2/20
33591/33591 [==============================] - 63s - loss: 1.4967 - acc: 0.5804 - val_loss: 1.3439 - val_acc: 0.5967
Epoch 3/20
33591/33591 [==============================] - 66s - loss: 1.0129 - acc: 0.7052 - val_loss: 1.1337 - val_acc: 0.6552
Epoch 4/20
33591/33591 [==============================] - 67s - loss: 0.7484 - acc: 0.7813 - val_loss: 1.0106 - val_acc: 0.6912
Epoch 5/20
33591/33591 [==============================] - 63s - loss: 0.5727 - acc: 0.8343 - val_loss: 0.9556 - val_acc: 0.7098
Epoch 6/20
33591/33591 [==============================] - 66s - loss: 0.4476 - acc: 0.8717 - val_loss: 0.9406 - val_acc: 0.7214
Epoch 7/20
33591/33591 [==============================] - 68s - loss: 0.3408 - acc: 0.9045 - val_loss: 0.9035 - val_acc: 0.7346
Epoch 8/20
33591/33591 [==============================] - 65s - loss: 0.2619 - acc: 0.9310 - val_loss: 0.9129 - val_acc: 0.7409
Epoch 9/20
33591/33591 [==============================] - 66s - loss: 0.1996 - acc: 0.9507 - val_loss: 0.9533 - val_acc: 0.7340
Epoch 10/20
33591/33591 [==============================] - 64s - loss: 0.1547 - acc: 0.9644 - val_loss: 0.9352 - val_acc: 0.7533
Epoch 11/20
33591/33591 [==============================] - 63s - loss: 0.1116 - acc: 0.9787 - val_loss: 0.9697 - val_acc: 0.7497
Epoch 12/20
33591/33591 [==============================] - 65s - loss: 0.0797 - acc: 0.9883 - val_loss: 0.9931 - val_acc: 0.7528
Epoch 13/20
33591/33591 [==============================] - 68s - loss: 0.0586 - acc: 0.9935 - val_loss: 1.0100 - val_acc: 0.7565
Epoch 14/20
33591/33591 [==============================] - 67s - loss: 0.0406 - acc: 0.9977 - val_loss: 1.0406 - val_acc: 0.7604
Epoch 15/20
33591/33591 [==============================] - 65s - loss: 0.0274 - acc: 0.9989 - val_loss: 1.0666 - val_acc: 0.7617
Epoch 16/20
33591/33591 [==============================] - 68s - loss: 0.0193 - acc: 0.9998 - val_loss: 1.0724 - val_acc: 0.7663
Epoch 17/20
33591/33591 [==============================] - 69s - loss: 0.0126 - acc: 1.0000 - val_loss: 1.0891 - val_acc: 0.7671
Epoch 18/20
33591/33591 [==============================] - 70s - loss: 0.0093 - acc: 1.0000 - val_loss: 1.1056 - val_acc: 0.7688
Epoch 19/20
33591/33591 [==============================] - 68s - loss: 0.0072 - acc: 1.0000 - val_loss: 1.1378 - val_acc: 0.7703
Epoch 20/20
33591/33591 [==============================] - 65s - loss: 0.0058 - acc: 1.0000 - val_loss: 1.1482 - val_acc: 0.7673

Question 2

Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?

Answer: 1) I first shuffle the dataset, and divid the dataset in to 1/10 for testing stage data and 9/10 for training stage data. Then, during traing stage, I again divide the training stage data into 1/10 for validation and 9/10 for training.

2) I use Rotation, Translation, Shear operations to generate data, since the data are very unbalanced among different classes. In the new dataset, each class has at least 1205 images.


In [9]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten

def LeNet(x):    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1
    
    # TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma), name='conv1_W')    # the filters' weights
    conv1_b = tf.Variable(tf.zeros(6), name='conv1_b')    # the filters' biases
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # TODO: Activation.
    conv1 = tf.nn.relu(conv1)

    # TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, [1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    
    # TODO: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma), name='conv2_W')    # the filters' weights
    conv2_b = tf.Variable(tf.zeros(16), name='conv2_b')    # the filters' biases
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # TODO: Activation.
    conv2 = tf.nn.relu(conv2)

    # TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2= tf.nn.max_pool(conv2, [1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # TODO: Flatten. Input = 5x5x16. Output = 400.
    fc0 = flatten(conv2)
    
    # TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma), name='fc1_W')
    fc1_b = tf.Variable(tf.zeros(120), name='fc1_b')
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # TODO: Activation.
    fc1 = tf.nn.relu(fc1)

    # TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma), name='fc2_W')
    fc2_b = tf.Variable(tf.zeros(84), name='fc2_b')
    fc2   = tf.matmul(fc1, fc2_W) + fc2_b
    
    # TODO: Activation.
    fc2 = tf.nn.relu(fc2)

    # TODO: Layer 5: Fully Connected. Input = 84. Output = 43.
    fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma), name='fc3_W')
    fc3_b = tf.Variable(tf.zeros(43))
    logits   = tf.matmul(fc2, fc3_W) + fc3_b
    
    return logits

Question 3

What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.

Answer: I use the classical LeNet which is a convolutional network containing five layers: two conv layers (5x5x6 and 5x5x16), three fully-connect layers (400x120, 120x84 and 84x43). I use ReLU as activation function.


In [10]:
### Train your model here.
### ----------- Definition -----------
import tensorflow as tf
from sklearn.utils import shuffle

tf.reset_default_graph()

with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, (None, 32, 32, 1))
    y = tf.placeholder(tf.int32, (None))
    one_hot_y = tf.one_hot(y, 43)

    EPOCHS = 50
    BATCH_SIZE = 128
    rate = 0.001

    logits = LeNet(x)
    softmax_operation = tf.nn.softmax(logits)
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)
    loss_operation = tf.reduce_mean(cross_entropy)
    optimizer = tf.train.AdamOptimizer(learning_rate = rate)
    training_operation = optimizer.minimize(loss_operation)

    predict_operation = tf.argmax(logits, 1)
    groundtruth_operation = tf.argmax(one_hot_y, 1)
    correct_prediction = tf.equal(predict_operation, groundtruth_operation)
    accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    saver = tf.train.Saver()
    
    init = tf.initialize_all_variables()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples

# set allow_soft_placement =True to Making tf.Variable ignore tf.device(), otherwise there will be some gpu error
config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)

In [11]:
### ----------- Traininig -----------
with tf.Session(config=config) as sess:
    sess.run(init)
    num_examples = len(X_train)
    
    print("Training...")
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
            
        validation_accuracy = evaluate(X_validation, y_validation)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()
        
    saver.save(sess, './lenet2')    # `save` method will call `export_meta_graph` implicitly.
    print("Model saved")
######################################################


Training...
EPOCH 1 ...
Validation Accuracy = 0.411

EPOCH 2 ...
Validation Accuracy = 0.532

EPOCH 3 ...
Validation Accuracy = 0.640

EPOCH 4 ...
Validation Accuracy = 0.690

EPOCH 5 ...
Validation Accuracy = 0.737

EPOCH 6 ...
Validation Accuracy = 0.740

EPOCH 7 ...
Validation Accuracy = 0.772

EPOCH 8 ...
Validation Accuracy = 0.791

EPOCH 9 ...
Validation Accuracy = 0.814

EPOCH 10 ...
Validation Accuracy = 0.811

EPOCH 11 ...
Validation Accuracy = 0.823

EPOCH 12 ...
Validation Accuracy = 0.830

EPOCH 13 ...
Validation Accuracy = 0.832

EPOCH 14 ...
Validation Accuracy = 0.844

EPOCH 15 ...
Validation Accuracy = 0.842

EPOCH 16 ...
Validation Accuracy = 0.858

EPOCH 17 ...
Validation Accuracy = 0.836

EPOCH 18 ...
Validation Accuracy = 0.864

EPOCH 19 ...
Validation Accuracy = 0.856

EPOCH 20 ...
Validation Accuracy = 0.863

EPOCH 21 ...
Validation Accuracy = 0.866

EPOCH 22 ...
Validation Accuracy = 0.868

EPOCH 23 ...
Validation Accuracy = 0.862

EPOCH 24 ...
Validation Accuracy = 0.867

EPOCH 25 ...
Validation Accuracy = 0.865

EPOCH 26 ...
Validation Accuracy = 0.856

EPOCH 27 ...
Validation Accuracy = 0.874

EPOCH 28 ...
Validation Accuracy = 0.857

EPOCH 29 ...
Validation Accuracy = 0.878

EPOCH 30 ...
Validation Accuracy = 0.882

EPOCH 31 ...
Validation Accuracy = 0.868

EPOCH 32 ...
Validation Accuracy = 0.863

EPOCH 33 ...
Validation Accuracy = 0.877

EPOCH 34 ...
Validation Accuracy = 0.869

EPOCH 35 ...
Validation Accuracy = 0.877

EPOCH 36 ...
Validation Accuracy = 0.875

EPOCH 37 ...
Validation Accuracy = 0.877

EPOCH 38 ...
Validation Accuracy = 0.870

EPOCH 39 ...
Validation Accuracy = 0.881

EPOCH 40 ...
Validation Accuracy = 0.880

EPOCH 41 ...
Validation Accuracy = 0.877

EPOCH 42 ...
Validation Accuracy = 0.882

EPOCH 43 ...
Validation Accuracy = 0.878

EPOCH 44 ...
Validation Accuracy = 0.873

EPOCH 45 ...
Validation Accuracy = 0.883

EPOCH 46 ...
Validation Accuracy = 0.879

EPOCH 47 ...
Validation Accuracy = 0.882

EPOCH 48 ...
Validation Accuracy = 0.871

EPOCH 49 ...
Validation Accuracy = 0.871

EPOCH 50 ...
Validation Accuracy = 0.876

Model saved

In [12]:
### ----------- Testing -----------
print("Testing...")
import sklearn as sk
import numpy as np
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
    saver.restore(sess, './lenet2')
    y_true = sess.run(groundtruth_operation, feed_dict={x: X_test, y: y_test})
    y_pred = sess.run(predict_operation, feed_dict={x: X_test, y: y_test})
    acc = sess.run(accuracy_operation, feed_dict={x: X_test, y: y_test})
    print ("Precision: " + str(sk.metrics.precision_score(y_true, y_pred, average='weighted')))
    print ("Recall: " + str(sk.metrics.recall_score(y_true, y_pred, average='weighted')))
    print ("f1_score: " + str(sk.metrics.f1_score(y_true, y_pred, average='weighted')))
    np.set_printoptions(threshold=9999999)  #print all  
    print ("confusion_matrix")
    cm = sk.metrics.confusion_matrix(y_true, y_pred)
    print (str(cm))
#     n_classes = len(np.unique(y_test))
#     for i in range(n_classes):
#         print("--------------- Testing class " + str(i) + ' -------------')
#         idx = np.argwhere(y_test==i).reshape(-1)
#         y_true = sess.run(groundtruth_operation, feed_dict={x: X_test[idx], y: y_test[idx]})
#         y_pred = sess.run(predict_operation, feed_dict={x: X_test[idx], y: y_test[idx]})
#         acc = sess.run(accuracy_operation, feed_dict={x: X_test[idx], y: y_test[idx]})
#         print ("Precision: " + str(sk.metrics.precision_score(y_true, y_pred, average='micro')))
#         print ("Recall: " + str(sk.metrics.recall_score(y_true, y_pred, average='micro')))
#         print ("f1_score: " + str(sk.metrics.f1_score(y_true, y_pred, average='micro')))
#         print ("confusion_matrix")
#         print (str(sk.metrics.confusion_matrix(y_true, y_pred)))


Testing...
Precision: 0.892023667699
Recall: 0.885416666667
f1_score: 0.886272327542
confusion_matrix
[[ 15   6   0   0   1   1   0   0   1   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  2 226  25   1   7  19   0   3   3   0   1   1   0   0   1   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   1   0   0   0   0
    1   0   1   0   1   0   0]
 [  0   8 226   6   2  25   0   3   1   0   1   0   0   0   0   1   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   1   0   0]
 [  0   1   3 154   1  19   0   2   2   0   3   0   0   1   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   1   0   0   0   0   0   0   0
    0   0   0   0   0   0   1]
 [  1  11   4   0 260   6   0   0   3   0   0   0   0   1   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   1   0   0   0   0]
 [  0   3   4   8   1 188   0   4   3   0   3   0   0   1   0   0   0   0
    0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   1  62   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   1]
 [  0   2   2   4   5  26   0 159  11   2   3   0   0   0   0   0   1   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   1   0   0]
 [  0   1   7   1   2  16   0  17 131   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   1   1   0   0   0   1   0 195   4   0   0   0   0   1   1   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   1   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   3   0   0   0   0 275   0   1   0   0   0   1   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    1   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0 141   0   0   0   0   0   0
    1   3   2   1   0   7   0   4   1   0   2   0   3   4   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   1   1   0   0   0   0   1   0   2   0   0 260   2   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   1   0   0   0   0   0   0   0
    0   0   1   0   0   1   0]
 [  0   0   0   0   0   0   0   0   0   0   1   0   1 283   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   2   0   0   0   0]
 [  0   0   0   0   1   0   0   0   0   0   0   0   0   0  93   0   0   1
    0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0
    0   0   0   1   0   0   0]
 [  0   0   1   0   0   1   0   0   0   0   1   0   0   0   0  83   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   1   3   0   0   0   0   0  40   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   2   2   0   0   0   0   0   0 130
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0   1   0   0   0   0   0   0
  132   0   3   1   0   1   4   4   4   1   0   0   1   2   0   1   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0  22   1   0   0   2   0   0   0   0   0   1   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   1   0   0   0   0   1   0   0   0   0   0   0   0
    1   0  29   0   0   1   0   2   0   0   1   0   1   1   0   0   0   0
    0   0   0   1   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0   2   0   0   0   0   0   0
    0   0   0  32   0   1   0   0   0   0   1   0   1  12   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   1   0   0   0   0   0   0   0   1   0   0   0
    0   0   0   0  55   0   0   2   1   0   0   2   1   1   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   1   0   0  60   0   1   0   0   0   1   0   0   0   0   0   0
    0   0   1   0   0   0   0]
 [  0   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    1   0   0   0   0   0  31   5   2   1   1   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   1   0   0   1   0   0   0   0   1   1   0   0   0   0   0   0
    0   1   2   0   1   0   0 183   0   1   1   0   2   1   0   0   0   0
    0   0   1   0   0   0   0]
 [  0   0   0   0   0   1   0   0   0   0   0   1   0   0   0   0   0   0
    6   0   0   0   0   0   0   0  72   2   0   1   1   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   1   0   0   0   0   0   0   1   0   0   0   0   0   0
    0   1   0   0   0   2   1   0   0  24   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   1   0   0   0   0   0   0   0   1   0   0   0   0   0   0
    1   0   1   0   0   4   0   0   0   0  62   6   2   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   2   0   1   0   0   0  26   0   0   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   1   0   0   0   0   0   0   0   0   4   0   0   0   0   0   0
    2   0   1   1   0   1   0   3   1   0   1   0  49   1   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    1   1   0   1   0   3   0   1   0   0   0   0   0 103   0   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   3   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0  24   0   0   0
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   1   0   0   0   0   0   0   0   0   2   0   0   1
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0  82   0   0
    1   0   0   1   1   0   0]
 [  0   0   0   0   0   0   0   0   0   1   0   0   0   0   0   1   0   0
    0   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0  30   1
    0   0   2   0   0   0   0]
 [  0   0   1   1   0   1   0   0   0   0   1   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0 180
    0   0   0   0   0   0   0]
 [  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
   42   0   0   1   0   0   0]
 [  0   0   0   1   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   2
    0  21   0   1   2   0   0]
 [  0   1   1   0   0   0   0   0   0   1   1   0   0   1   0   0   0   0
    0   0   2   0   0   0   0   0   0   0   1   0   0   1   0   0   0   0
    0   0 294   0   0   0   0]
 [  0   1   0   0   0   0   0   0   0   0   0   1   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   1   0  27   0   0   0]
 [  0   2   0   0   1   2   0   1   0   0   0   1   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0  41   0   0]
 [  0   0   0   1   0   0   1   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
    0   0   0   0   0  26   0]
 [  0   0   0   0   0   0   1   0   0   0   1   0   0   0   0   0   0   0
    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   1   0   0
    0   0   0   0   0   0  22]]

Question 4

How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)

Answer:

I use Adam optimizer, and the hyperparameters are set as following, EPOCHS = 50 BATCH_SIZE = 128 rate = 0.001

Question 5

What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.

Answer:

5 steps:

1) Data preprocessing: RGB-->GRAY (color insensitive), Illumination equalization (illumination robust), Data augmentation (mitigate data inbalanced)

2) CNN model choose: LeNet containing 5 layers (simple and efficient)

3) Optimizer choose: Adam, simple and efficient with less hyperparameters

4) mini-batch training with many epoches, and validation

5) Testing


Step 3: Test a Model on New Images

Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.


In [13]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
rootDir = './sign-imgs'
img_num = len(os.listdir(rootDir))
X_web = np.array([])
y_web = np.array([])
for idx, file in enumerate(os.listdir(rootDir)): 
        path = os.path.join(rootDir, file) 
        img = cv2.imread(path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # opencv's imread return BGR, but here is RGB
        gray = cv2.resize(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), (32, 32))
        gray = cv2.equalizeHist(gray)    # equalize the histogram
        gray = (gray-128)/128 # normalization
        label = file.split('.')[0]
        if idx==0:
            X_web = gray.reshape(1, 32, 32, 1)
            y_web = np.array([int(label)])
        else:
            X_web = np.append(X_web, gray.reshape(1, 32, 32, 1), axis=0)
            y_web = np.append(y_web, [int(label)])
        plt.subplot(1, img_num, idx+1)
        plt.imshow(img)
        plt.title('label:' + label)
        plt.axis('off')
plt.show()


Question 6

Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.

Answer: The fifth image suffers a serious distorition which makes classification difficult.


In [14]:
### Run the predictions here.
### Feel free to use as many code cells as needed.
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
    saver.restore(sess, './lenet2')
    y_perd = sess.run(predict_operation, feed_dict={x: X_web, y: y_web})
    print(y_perd)


[14 10 15  2 13]

Question 7

Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.

NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.

Answer: The accuracy on captured pictures is 40%, which is lower than 77%, the accuracy on the testing dataset. Since here are only 5 captured traffic sign images, the result is not so convinced in statistics.


In [15]:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
saver = tf.train.Saver()
with tf.Session(config=config) as sess:
    saver.restore(sess, './lenet2')
    softmax_prob = sess.run(softmax_operation, feed_dict={x: X_web, y: y_web})
    topk = sess.run(tf.nn.top_k(softmax_prob, k=5))
    plt.figure(figsize=(20,10))
    n_classes = len(softmax_prob[0])
    for i in range(len(softmax_prob)):
        plt.subplot(len(softmax_prob), 1, i+1)
#         plt.hist(softmax_prob[i], bins=n_classes)
        plt.bar(np.arange(n_classes), softmax_prob[i])
        plt.title('GT: ' + str(y_web[i]))
        plt.grid(True)
        plt.tight_layout()
        print('topk-values:' + str(topk.values[i]) + '            topk-indices:' + str(topk.indices[i]))
    plt.show()


topk-values:[  9.99995232e-01   4.80695780e-06   1.15365761e-08   3.66459779e-11
   2.06460326e-12]            topk-indices:[14  4  1 38  6]
topk-values:[  9.42848921e-01   5.66614792e-02   3.83592240e-04   1.03511302e-04
   2.11890256e-06]            topk-indices:[10 17 14  7 33]
topk-values:[  9.01910841e-01   9.76738930e-02   4.06040839e-04   9.18292244e-06
   1.33716638e-09]            topk-indices:[15 12  9  2 17]
topk-values:[  9.70954835e-01   2.90444419e-02   4.80605138e-07   2.64848723e-07
   1.12020970e-09]            topk-indices:[ 2  5  1 10  3]
topk-values:[ 0.92948645  0.03361332  0.01224814  0.01144927  0.00707328]            topk-indices:[13 29 30 12 19]

Question 8

Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

Answer: The model is certain of 14, 2 classes. For 17, 22, 34 classes, it is uncertain. Besides, for 17 class, the model was incorrect in its initial prediction but is correct within top-k.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.


In [ ]: