Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages; completing these stages is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this IPython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.


Step 0: Load The Data


In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = 'traffic-signs-data/train.p'
validation_file= 'traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id (see the lookup sketch below).
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
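
As a quick reference, the id -> name mapping in signnames.csv can be loaded with pandas. The sketch below is illustrative only and assumes the standard column names ClassId and SignName (SignName is the column used later in this notebook).

In [ ]:
# Minimal lookup sketch (assumed columns: ClassId, SignName); not part of the graded summary.
import pandas as pd
sign_names = pd.read_csv("signnames.csv")
id_to_name = dict(zip(sign_names.ClassId, sign_names.SignName))
print(id_to_name[0])  # -> 'Speed limit (20km/h)', per the name list printed later in this notebook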

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas


In [2]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
import pandas as pd

df = pd.read_csv("signnames.csv")

# TODO: Number of training examples
n_train = len(y_train)

# TODO: Number of testing examples.
n_test = len(y_test)

# TODO: What's the shape of a traffic sign image?
image_shape = X_train.shape[1:]
#print(train['sizes'][0])
#print(train['coords'][0])
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
# also can use
#n_classes = df.shape[0]
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)


Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.


In [3]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline

In [4]:
#counts function (used to plot the count of each sign class)
def counts(_counts, titles):
    _nums = []
    _signs = []
    for nums in _counts:
        _signs.append(nums)
        _nums.append(_counts[nums])
    _nums_ = pd.Series(_nums)  # pd.Series.from_array is deprecated/removed in newer pandas
    plt.figure(figsize = (12,8))
    fig = _nums_.plot(kind = 'bar')
    fig.set_title(titles)
    fig.set_ylabel("count")
    fig.set_xlabel("sign class id")
    fig.set_xticklabels(_signs)
    plt.show()
    return _nums

In [5]:
#this function plots one example image per traffic sign class (43 classes on a 3 x 15 grid)
def image_visiulization(x, y):
    #record the index of the first example found for each class
    data = np.zeros((43,1))
    for i in range(len(y)):
        if(data[y[i]] == 0):
            data[y[i]] = i
        #stop once every class has an example index recorded
        if(np.min(data[:]>0)):
            break
    plt.figure(figsize = (12,8), dpi = 320)
    for i in range(43):
        ax = plt.subplot(3,15,i+1)
        ax.axis('off')
        image = x[int(data[i]),:,:,:]
        ax.set_title("No:{0}".format(i))
        ax.imshow(image)
    plt.show()

In [6]:
#visualization for training data
train_counts = dict(zip(*np.unique(y_train, return_counts=True)))
train_num_ = counts(train_counts, "training data counts")
image_visiulization(X_train,y_train)
#print count of each sign
print('Label Counts: {}'.format(train_counts))
print('Samples: {}'.format(len(y_train)))
print('First 20 Labels: {}'.format(y_train[:20]))
print('Image - Shape: {}'.format(X_train.shape[1:]))


Label Counts: {0: 180, 1: 1980, 2: 2010, 3: 1260, 4: 1770, 5: 1650, 6: 360, 7: 1290, 8: 1260, 9: 1320, 10: 1800, 11: 1170, 12: 1890, 13: 1920, 14: 690, 15: 540, 16: 360, 17: 990, 18: 1080, 19: 180, 20: 300, 21: 270, 22: 330, 23: 450, 24: 240, 25: 1350, 26: 540, 27: 210, 28: 480, 29: 240, 30: 390, 31: 690, 32: 210, 33: 599, 34: 360, 35: 1080, 36: 330, 37: 180, 38: 1860, 39: 270, 40: 300, 41: 210, 42: 210}
Samples: 34799
First 20 Labels: [41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41]
Image - Shape: (32, 32, 3)

In [7]:
#visualization for validation data
valid_counts = dict(zip(*np.unique(y_valid, return_counts=True)))
counts(valid_counts, "valid data counts")
image_visiulization(X_valid,y_valid)
#print count of each sign
print('Label Counts: {}'.format(valid_counts))
print('Samples: {}'.format(len(y_valid)))
print('First 20 Labels: {}'.format(y_valid[:20]))
print('Image - Shape: {}'.format(X_valid.shape[1:]))


Label Counts: {0: 30, 1: 240, 2: 240, 3: 150, 4: 210, 5: 210, 6: 60, 7: 150, 8: 150, 9: 150, 10: 210, 11: 150, 12: 210, 13: 240, 14: 90, 15: 90, 16: 60, 17: 120, 18: 120, 19: 30, 20: 60, 21: 60, 22: 60, 23: 60, 24: 30, 25: 150, 26: 60, 27: 30, 28: 60, 29: 30, 30: 60, 31: 90, 32: 30, 33: 90, 34: 60, 35: 120, 36: 60, 37: 30, 38: 210, 39: 30, 40: 60, 41: 30, 42: 30}
Samples: 4410
First 20 Labels: [41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41]
Image - Shape: (32, 32, 3)

In [8]:
#visualization for test data
test_counts = dict(zip(*np.unique(y_test, return_counts=True)))
counts(test_counts, "test data counts")
image_visiulization(X_test,y_test)
#print count of each sign
print('Label Counts: {}'.format(test_counts))
print('Samples: {}'.format(len(y_test)))
print('First 20 Labels: {}'.format(y_test[:20]))
print('Image - Shape: {}'.format(X_test.shape[1:]))


Label Counts: {0: 60, 1: 720, 2: 750, 3: 450, 4: 660, 5: 630, 6: 150, 7: 450, 8: 450, 9: 480, 10: 660, 11: 420, 12: 690, 13: 720, 14: 270, 15: 210, 16: 150, 17: 360, 18: 390, 19: 60, 20: 90, 21: 90, 22: 120, 23: 150, 24: 90, 25: 480, 26: 180, 27: 60, 28: 150, 29: 90, 30: 150, 31: 270, 32: 60, 33: 210, 34: 120, 35: 390, 36: 120, 37: 60, 38: 690, 39: 90, 40: 90, 41: 60, 42: 90}
Samples: 12630
First 20 Labels: [16  1 38 33 11 38 18 12 25 35 12  7 23  7  4  9 21 20 27 38]
Image - Shape: (32, 32, 3)

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Pre-process the Data Set (normalization, grayscale, etc.)

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.


In [9]:
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.

In [10]:
# convert the image to grayscale; we tested this method in our model, but removing this step may give more stable results
def grayscale(x):
    grayimg = []
    #number of samples, width, height, and channels
    n,w,h,c = x.shape[:]
    print(n)
    for i in range(n):
        #build one single-channel image per sample (renamed to avoid shadowing the function name)
        gray = np.zeros((w,h,1))
        r, g, b = x[i,:,:,0], x[i,:,:,1], x[i,:,:,2]
        gray[:,:,0] = 0.2989 * r + 0.5870 * g + 0.1140 * b
        grayimg.append(gray)
    return np.array(grayimg)
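
As a side note, the per-pixel loop above can be replaced by a single vectorized dot product with the same luma weights; a minimal sketch, not used in the rest of the notebook:

In [ ]:
# Hypothetical vectorized alternative to grayscale() above (same 0.2989/0.5870/0.1140 weights).
def grayscale_vectorized(x):
    return np.dot(x[..., :3], [0.2989, 0.5870, 0.1140])[..., np.newaxis]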

In [11]:
# scale pixel values from [0, 255] down to the [0, 1] range
def normalize(x):
    normalize = np.array(x/255)
    return normalize

In [12]:
# one-hot-encode, the traffic data has 43 unique classes
def one_hot_encode(y):
    labels = []
    for i in range(len(y)):
        enc_label = np.zeros(43)
        np.put(enc_label, y[i], 1)
        labels.append(enc_label)
    return np.array(labels)
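
For integer labels, the same encoding can also be written as a single indexing operation into an identity matrix; a minimal sketch, kept separate from the loop version used below:

In [ ]:
# Hypothetical one-liner equivalent of one_hot_encode() for integer labels; not used later.
def one_hot_encode_vectorized(y):
    return np.eye(43)[np.asarray(y, dtype=int)]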

In [13]:
# rotate an image by a given angle (used for data augmentation, roughly -15 to 15 degrees).
def rotate(src, angle, scale=1.):
    import cv2
    #print(src.shape)
    w = src.shape[1]
    h = src.shape[0]
    #print(w,h)
    rangle = np.deg2rad(angle)  # angle in radians
    # now calculate new image width and height
    nw = (abs(np.sin(rangle)*h) + abs(np.cos(rangle)*w))*scale
    nh = (abs(np.cos(rangle)*h) + abs(np.sin(rangle)*w))*scale
    # ask OpenCV for the rotation matrix
    rot_mat = cv2.getRotationMatrix2D((nw*0.5, nh*0.5), angle, scale)
    # calculate the move from the old center to the new center combined
    # with the rotation
    rot_move = np.dot(rot_mat, np.array([(nw-w)*0.5, (nh-h)*0.5,0]))
    # the move only affects the translation, so update the translation
    # part of the transform
    rot_mat[0,2] += rot_move[0]
    rot_mat[1,2] += rot_move[1]
    return cv2.warpAffine(src, rot_mat, (w,h), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)

In [14]:
#Randomize Data
#Exploring the data (the first 20 labels are all the same class) shows the samples are ordered by class and need shuffling
def RandomizeData(X,y):
    from sklearn.utils import shuffle
    X, y = shuffle(X, y)
    return X, y

In [15]:
#Visualize before normalization
plt.figure(figsize = (5,2), dpi = 160)
for i in range(10):
    ax = plt.subplot(2,5,i+1)
    ax.axis('off')
    image = X_train[i]
    ax.set_title("No:{0}".format(i))
    ax.imshow(image)
plt.show()



In [16]:
#Preprocess the dataset for the train, validation, and test data

#train data
features_train = X_train#grayscale(X_train)
features_train = normalize(features_train)
labels_train_ = y_train
print(features_train.shape, labels_train_.shape)

# valid data
features_valid = X_valid#grayscale(X_valid)
#features_valid = grayscale(features_valid)
features_valid = normalize(features_valid)
labels_valid = one_hot_encode(y_valid)
print(features_valid.shape, labels_valid.shape)
features_valid, labels_valid = RandomizeData(features_valid, labels_valid)

#test data
features_test = X_test#grayscale(X_test)
#features_test = grayscale(features_test)
features_test = normalize(features_test)
labels_test = one_hot_encode(y_test)
print(features_test.shape, labels_test.shape)
features_test, labels_test = RandomizeData(features_test, labels_test)


(34799, 32, 32, 3) (34799,)
(4410, 32, 32, 3) (4410, 43)
(12630, 32, 32, 3) (12630, 43)

In [17]:
#Visualize after normalization
plt.figure(figsize = (5,2), dpi = 160)
for i in range(10):
    ax = plt.subplot(2,5,i+1)
    ax.axis('off')
    image = features_train[i]
    ax.set_title("No:{0}".format(i))
    ax.imshow(image)
plt.show()



In [18]:
#visualization for training data
train_counts = dict(zip(*np.unique(labels_train_, return_counts=True)))
train_num_ = counts(train_counts, "training data counts")
#image_visiulization(X_train,y_train)
#print count of each sign
print('Label Counts: {}'.format(train_counts))
print('Samples: {}'.format(len(labels_train_)))
print('First 20 Labels: {}'.format(labels_train_[:20]))
print('Image - Shape: {}'.format(features_train.shape[1:]))


Label Counts: {0: 180, 1: 1980, 2: 2010, 3: 1260, 4: 1770, 5: 1650, 6: 360, 7: 1290, 8: 1260, 9: 1320, 10: 1800, 11: 1170, 12: 1890, 13: 1920, 14: 690, 15: 540, 16: 360, 17: 990, 18: 1080, 19: 180, 20: 300, 21: 270, 22: 330, 23: 450, 24: 240, 25: 1350, 26: 540, 27: 210, 28: 480, 29: 240, 30: 390, 31: 690, 32: 210, 33: 599, 34: 360, 35: 1080, 36: 330, 37: 180, 38: 1860, 39: 270, 40: 300, 41: 210, 42: 210}
Samples: 34799
First 20 Labels: [41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41]
Image - Shape: (32, 32, 3)

In [19]:
import random
def ColorJitterAug(img, brightness, contrast, saturation):
    """Apply random brightness, contrast and saturation jitter in random order"""
    coef = np.array([[[0.299, 0.587, 0.114]]])
    if brightness > 0:
        """Augumenter body"""
        alpha = 1.0 + random.uniform(-brightness, brightness) * 0.7
        img *= alpha
        img = np.clip(img, 0.,255.)

    if contrast > 0:
        """Augumenter body"""
        alpha = 1.0 + random.uniform(-contrast, contrast) * 0.7
        gray = img*coef
        gray = (3.0*(1.0-alpha)/gray.size)*np.sum(gray)
        img *= alpha
        img += gray
        img = np.clip(img, 0.,255.)

    if saturation > 0:
        """Augumenter body"""
        alpha = 1.0 + random.uniform(-saturation, saturation) * 0.7
        gray = img*coef
        gray = np.sum(gray, axis=2, keepdims=True)
        gray *= (1.0-alpha)
        img *= alpha
        img += gray
        img = np.clip(img, 0.,255.)
    return img

In [20]:
def get_class_number(class_img, y):
    """
    Return the number of examples of the target class
    """
    return np.bincount(y)[class_img]

def get_class_images_and_label_index(target_class, X, y):
    """
    Return the target class's images and the indices of those examples
    """
    n = get_class_number(target_class, y)
    class_images = []
    class_labels = []
    i = 0
    while n > 0:
        if y[i] == target_class:
            image = X[i].squeeze()
            class_images.append(image)
            class_labels.append(i)
            n -= 1
        i += 1
    return class_images, class_labels

In [21]:
class_features_0, class_labels_0 = get_class_images_and_label_index(0, features_train, labels_train_)
print(len(class_labels_0))


180

In [22]:
# Example images visualization
def visiulization(x, y):
    plt.figure(figsize = (12,8), dpi = 320)
    for i in range(45):
        ax = plt.subplot(3,15,i+1)
        ax.axis('off')
        image = x[i]
        ax.set_title("No:{0}".format(i))
        ax.imshow(image)
    plt.show()

In [23]:
visiulization(class_features_0, class_labels_0)



In [24]:
#Visualize images before applying data augmentation
plt.figure(figsize = (5,2), dpi = 160)
for i in range(10):
    ax = plt.subplot(2,5,i+1)
    ax.axis('off')
    image = features_train[i]
    ax.set_title("No:{0}".format(i))
    ax.imshow(image)
plt.show()



In [25]:
def gaussian_noise(x):
    from skimage import util
    img = util.random_noise(x, mode='gaussian', mean = 0.0, var = 0.001)
    return img

In [26]:
#Data augmentation examples
plt.figure(figsize = (5,2), dpi = 160)
for i in range(10):
    ax = plt.subplot(2,5,i+1)
    ax.axis('off')
    image = features_train[i]
    is_brightness = random.randint(0,1)
    is_contrast = random.randint(0,1)
    is_saturation = random.randint(0,1)
    generate_image = ColorJitterAug(image, is_brightness, is_contrast, is_saturation)
    is_rotation = random.randint(0,1)
    ang = random.randint(-10,10)
    if(is_rotation == 1):
        img = rotate(generate_image, ang)
    else:
        img = generate_image
    img = gaussian_noise(img)
    ax.set_title("No:{0}".format(i))
    ax.imshow(img)
plt.show()



In [27]:
# create numpy arrays to store the new balanced dataset
features_balanced_train = np.zeros((43 *  2500, 32, 32, 3))
labels_balanced_train = np.zeros((43 * 2500))

In [28]:
#new dataset
for class_num in range(43):
    """
    Data augmentation: build a balanced set with 2500 examples per class (43 * 2500 in total)
    """
    class_features_, class_labels_ = get_class_images_and_label_index(class_num, features_train, labels_train_)
    start_point = len(class_labels_)
    features_balanced_train[(class_num * 2500):(class_num * 2500 + start_point),:,:,:] = class_features_
    labels_balanced_train[(class_num * 2500):(class_num * 2500 + start_point)] = class_num
    start = class_num * 2500 + start_point
    end = (class_num + 1) * 2500
    #print(features_balanced_train[0].shape, labels_balanced_train[719])
    for i in range(start,end):
        #method = random.randint(0,1)
        #print(method)
        index = random.randint(0,start_point - 1)
        #print(index)
        img = class_features_[index]
        #is_brightness = random.randint(0,1)
        #is_contrast = random.randint(0,1)
        #is_saturation = random.randint(0,1)
        generate_image = ColorJitterAug(img, 0, 1, 1)
        #generate_image = img
        #is_rotation = random.randint(0,1)
        ang = random.randint(-5,5)
        #if(is_rotation == 1):
        img = rotate(generate_image, ang)
        #else:
        #    img = generate_image
        img = gaussian_noise(img)
        features_balanced_train[i] = img
        labels_balanced_train[i] = class_num

In [29]:
#visualization for the balanced training data
train_counts = dict(zip(*np.unique(labels_balanced_train, return_counts=True)))
train_num_ = counts(train_counts, "training data counts")
#image_visiulization(X_train,y_train)
#print count of each sign
print('Label Counts: {}'.format(train_counts))
print('Samples: {}'.format(len(labels_balanced_train)))
print('First 20 Labels: {}'.format(labels_balanced_train[:20]))
print('Image - Shape: {}'.format(features_balanced_train.shape[1:]))


Label Counts: {0.0: 2500, 1.0: 2500, 2.0: 2500, 3.0: 2500, 4.0: 2500, 5.0: 2500, 6.0: 2500, 7.0: 2500, 8.0: 2500, 9.0: 2500, 10.0: 2500, 11.0: 2500, 12.0: 2500, 13.0: 2500, 14.0: 2500, 15.0: 2500, 16.0: 2500, 17.0: 2500, 18.0: 2500, 19.0: 2500, 20.0: 2500, 21.0: 2500, 22.0: 2500, 23.0: 2500, 24.0: 2500, 25.0: 2500, 26.0: 2500, 27.0: 2500, 28.0: 2500, 29.0: 2500, 30.0: 2500, 31.0: 2500, 32.0: 2500, 33.0: 2500, 34.0: 2500, 35.0: 2500, 36.0: 2500, 37.0: 2500, 38.0: 2500, 39.0: 2500, 40.0: 2500, 41.0: 2500, 42.0: 2500}
Samples: 107500
First 20 Labels: [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.]
Image - Shape: (32, 32, 3)

In [30]:
example_labels = [i for i in range(43)]
print(example_labels)


[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]

In [31]:
#one-hot encode the labels of the balanced training set
example_ohe_label = one_hot_encode(example_labels)
labels_train = np.zeros((43 * 2500, 43))
for i in range(43):
    #assign the class's one-hot label to its full 2500-row block
    labels_train[i * 2500: (i + 1) * 2500] = example_ohe_label[i]
print(labels_train.shape)


(107500, 43)

In [32]:
#Randomize dataset
#features_balanced_train = grayscale(features_balanced_train)
features_balanced_train, labels_train = RandomizeData(features_balanced_train, labels_train)

Model Architecture


In [33]:
### Define your architecture here.
### Feel free to use as many code cells as needed.
#some of the code below is adapted from my Deep Learning Nanodegree Foundation Project 2 (image classification on CIFAR-10).

In [34]:
import tensorflow as tf
# some basic helper functions
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    #the format here we use is N,H,W,C: [batch, in_height, in_width, in_channels]
    H,W,C = image_shape
    return tf.placeholder(tf.float32, (None, H, W, C), name = "x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    return tf.placeholder(tf.float32, (None, n_classes), name = "y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name = "keep_prob")

In [35]:
# batch normalization
def batch_norm(inputs, is_training, is_conv_out=True, decay = 0.999):
    """
    batch normalization implemented here
    """
    scale = tf.Variable(tf.ones([inputs.get_shape()[-1]]))
    beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]]))
    pop_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable=False)
    pop_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable=False)
    if is_training:
        if is_conv_out:
            batch_mean, batch_var = tf.nn.moments(inputs,[0,1,2])
        else:
            batch_mean, batch_var = tf.nn.moments(inputs,[0])   

        train_mean = tf.assign(pop_mean,
                               pop_mean * decay + batch_mean * (1 - decay))
        train_var = tf.assign(pop_var,
                              pop_var * decay + batch_var * (1 - decay))
        with tf.control_dependencies([train_mean, train_var]):
            return tf.nn.batch_normalization(inputs,
                batch_mean, batch_var, beta, scale, 0.001)
    else:
        return tf.nn.batch_normalization(inputs,
            pop_mean, pop_var, beta, scale, 0.001)
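
For reference, TensorFlow 1.x also provides a built-in layer that performs the same moving-average bookkeeping. The sketch below is only a pointer and is not wired into this notebook; note that tf.layers.batch_normalization additionally requires running the UPDATE_OPS collection alongside the train op.

In [ ]:
# Hypothetical alternative to the hand-rolled batch_norm above; not used in this notebook.
# normalized = tf.layers.batch_normalization(inputs, training=is_training)
# update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # run these together with the optimizer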

In [36]:
# traditional convolution layer -> activation function -> pooling layer structure
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, padding = 'SAME', name = None, w_name = None, b_name = None):
    """
    Apply convolution then pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    :param padding: default is 'SAME', can also be set to 'VALID'
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    #get convolution kernel size
    conv_h,conv_w = conv_ksize
    #get the output depth 
    output_depth = conv_num_outputs
    #get [B,H,W,C] size: in_batch, in_height, in_width, & in_depth
    in_b, in_h, in_w, in_depth = x_tensor.get_shape().as_list()
    #get weights
    mu = 0.0
    sigma = 0.1
    F_W = tf.Variable(tf.truncated_normal([int(conv_h), int(conv_w), int(in_depth), int(output_depth)],mean = mu, stddev = sigma), name = w_name)
    #get bias
    F_B = tf.Variable(tf.zeros(int(output_depth)), name = b_name)
    #convolution operation(do linear operation here)
    s_h,s_w = conv_strides
    #padding = 'SAME'
    conv = tf.nn.conv2d(x_tensor, F_W, [1, int(s_h), int(s_w), 1], padding, name = name) + F_B
    conv = batch_norm(conv, is_training = 1)
    conv = tf.nn.relu(conv)
    pool_h, pool_w = pool_ksize
    pool_h_s,pool_w_s = pool_strides
    conv = tf.nn.max_pool(conv, [1, pool_h, pool_w, 1],[1, pool_h_s, pool_w_s, 1],padding)
    return conv

In [37]:
# traditional convolution layer -> activation function -> pooling layer structure
def conv2d_avgpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, padding = 'SAME', name = None, w_name = None, b_name = None):
    """
    Apply convolution then pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    :param padding: default is 'SAME', can also be set to 'VALID'
    : return: A tensor that represents convolution and average pooling of x_tensor
    """
    #get convolution kernel size
    conv_h,conv_w = conv_ksize
    #get the output depth 
    output_depth = conv_num_outputs
    #get [B,H,W,C] size: in_batch, in_height, in_width, & in_depth
    in_b, in_h, in_w, in_depth = x_tensor.get_shape().as_list()
    #get weights
    mu = 0.0
    sigma = 0.1
    F_W = tf.Variable(tf.truncated_normal([int(conv_h), int(conv_w), int(in_depth), int(output_depth)],mean = mu, stddev = sigma), name = w_name)
    #get bias
    F_B = tf.Variable(tf.zeros(int(output_depth)), name = b_name)
    #convolution operation(do linear operation here)
    s_h,s_w = conv_strides
    #padding = 'SAME'
    conv = tf.nn.conv2d(x_tensor, F_W, [1, int(s_h), int(s_w), 1], padding, name = name) + F_B
    conv = batch_norm(conv, is_training = 1)
    conv = tf.nn.relu(conv)
    pool_h, pool_w = pool_ksize
    pool_h_s,pool_w_s = pool_strides
    conv = tf.nn.avg_pool(conv, [1, pool_h, pool_w, 1],[1, pool_h_s, pool_w_s, 1],padding)
    return conv

In [38]:
def conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides, padding = 'SAME', bn = True ,name = None, w_name = None, b_name = None):
    """
    Apply a convolution (with optional batch normalization) followed by ReLU to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    : return: A tensor that represents the convolution of x_tensor
    """
    #get convolution kernel size
    conv_h,conv_w = conv_ksize
    #get the output depth 
    output_depth = conv_num_outputs
    #get [B,H,W,C] size: in_batch, in_height, in_width, & in_depth
    in_b, in_h, in_w, in_depth = x_tensor.get_shape().as_list()
    #get weights
    mu = 0.0
    sigma = 0.1
    F_W = tf.Variable(tf.truncated_normal([int(conv_h), int(conv_w), int(in_depth), int(output_depth)],mean = mu, stddev = sigma), name = w_name)
    #get bias
    F_B = tf.Variable(tf.zeros(int(output_depth)), name = b_name)
    #convolution operation(do linear operation here)
    s_h,s_w = conv_strides
    #padding = 'SAME'
    conv = tf.nn.conv2d(x_tensor, F_W, [1, int(s_h), int(s_w), 1], padding, name = name) + F_B
    if(bn):conv = batch_norm(conv, is_training = 1)
    conv = tf.nn.relu(conv)
    return conv

In [39]:
# residual block
def residual_block(x_tensor, num_outputs, down_sample = True, projection = False):
    """
    Apply Residual Block
    :param x_tensor: Tensorflow Tensor
    :param num_outputs: Number of outputs channels
    :param down_sample: True or False, need apply pooling methods or not
    :param projection: True or False, need projection or not
    """
    #get [B,H,W,C] size: in_batch, in_height, in_width, & in_depth
    in_b, in_h, in_w, in_depth = x_tensor.get_shape().as_list()
    if (down_sample):
        # downsample the input with a 2x2 max pool (stride 2, SAME padding)
        x_tensor = tf.nn.max_pool(x_tensor, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
    conv = conv2d(x_tensor, num_outputs, [2,2], [1,1])
    conv = conv2d(conv, num_outputs, [3,3], [1,1])
    
    if in_depth != num_outputs:
        if(projection):
            input_ = conv2d(x_tensor, num_outputs, [1,1], [2,2], bn = False)
        else:
            input_ = tf.pad(x_tensor, [[0,0], [0,0], [0,0], [0, num_outputs - in_depth]])
    else:
        input_ = x_tensor
    
    return(conv + input_)

In [40]:
# residual group
def residual_group(x_tensor, num_block, num_outputs, name = None):
    """
    residual net group
    :param x_tensor: Tensorflow tensor
    :param num_block: the number of block
    :param num_outputs: output channels
    :param name:the name of residual group
    """
    with tf.variable_scope("%s_head"%name):
        output = residual_block(x_tensor, num_outputs, down_sample = True)
    for i in range(num_block - 1):
        with tf.variable_scope("%s_%d" % (name, i+1)):
            output = residual_block(output, num_outputs, down_sample = False)
    return output

In [41]:
# flatten layer
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    in_b, in_h, in_w, in_depth = x_tensor.get_shape().as_list()
    fc = tf.reshape(x_tensor, [-1, int(in_h)*int(in_w)*int(in_depth)])
    return fc

In [42]:
# fully connected layer
def fully_conn(x_tensor, num_outputs, w_name = None, b_name = None):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    mu = 0.0
    sigma = 0.1
    input_dim = x_tensor.get_shape().as_list()[1]
    fc_W = tf.Variable(tf.truncated_normal(shape=(input_dim, num_outputs), mean = mu, stddev = sigma), name = w_name)
    fc_b = tf.Variable(tf.zeros(num_outputs), name = b_name)
    fc = tf.matmul(x_tensor, fc_W) + fc_b
    fc = tf.nn.relu(fc)
    return fc

In [43]:
# final output layer
def output(x_tensor, num_outputs, w_name = None, b_name = None):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    input_dim = x_tensor.get_shape().as_list()[1]
    mu = 0.0
    sigma = 0.1
    fc_W = tf.Variable(tf.truncated_normal(shape=(input_dim, num_outputs), mean = mu, stddev = sigma), name = w_name)
    fc_b = tf.Variable(tf.zeros(num_outputs), name = b_name)
    logits = tf.add(tf.matmul(x_tensor, fc_W) ,fc_b)
    return logits

In [44]:
def lenet(x, keep_prob):
    """
    This is Lenet architecture here
    """
    #"""
    #Simple LeNet structure
    #Hint,
    #conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, padding)
    #Convolution2d_maxpooling, Conv_1
    conv_1 = conv2d_maxpool(x,6,[5,5],[1,1],[2,2],[2,2],padding = 'VALID', name = "conv_1", w_name = 'w1', b_name = 'b1')
    #dropout layer
    conv_1 = tf.nn.dropout(conv_1, keep_prob=keep_prob )
    #Convolution2d_maxpooling, Conv_2
    conv_2 = conv2d_maxpool(conv_1,16,[5,5],[1,1],[2,2],[2,2],padding = 'VALID', name = "conv_2", w_name = 'w1', b_name = 'b1')
    #dropout layer
    conv_2 = tf.nn.dropout(conv_2, keep_prob = keep_prob )
    #Flatten
    flat = flatten(conv_2)
    #Fully Connected layer, fully_conn_1
    fully_conn_1 = fully_conn(flat, 320,w_name = 'fcw1', b_name = 'fcb1')
    #dropout layer
    fully_conn_1 = tf.nn.dropout(fully_conn_1, keep_prob = keep_prob )
    #Fully Connected layer, fully_conn_2
    fully_conn_2 = fully_conn(fully_conn_1, 320 ,w_name = 'fcw2', b_name = 'fcb2')
    #dropout layer
    fully_conn_2 = tf.nn.dropout(fully_conn_2, keep_prob = keep_prob )
    #output layer, logits
    logits = output(fully_conn_2, 43)
    return logits

In [45]:
def my_net(x, keep_prob):
    """My Nets Architecture(its works better than LeNet)"""
    conv_1 = conv2d_avgpool(x,32,[3,3],[1,1],[2,2],[2,2], name = "conv_1")#32
    #print(conv_1.shape)
    conv_1 = tf.nn.dropout(conv_1, keep_prob = keep_prob)
    conv_2 = conv2d_avgpool(conv_1, 64, [3,3],[1,1],[2,2],[2,2], name = "conv_2")#64
    #print(conv_2.shape)
    conv_2 = tf.nn.dropout(conv_2, keep_prob = keep_prob)
    conv_3 = conv2d_avgpool(conv_2, 128,[3,3],[1,1],[2,2],[2,2], name = "conv_3")#128
    #print(conv_3.shape)
    conv_3 = tf.nn.dropout(conv_3, keep_prob = keep_prob)
    #conv_3 = residual_group(conv_2, num_block=2, num_outputs=256,name = 'conv_3')
    fc0 = flatten(conv_3)
    #print(keep_prob)
    fc1 = fully_conn(fc0,512)
    fc1 = tf.nn.dropout(fc1, keep_prob = keep_prob)#512
    fc2 = fully_conn(fc1, 256)
    fc2 = tf.nn.dropout(fc2, keep_prob = keep_prob)#256
    outputs = output(fc2, 43)
    return outputs

In [46]:
#This is our network architecture
#The simple baseline network is LeNet; later we switch to the other architecture (my_net)
def net_architecture(x, keep_prob, architecture = 0):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    if(architecture == 0):
        return lenet(x, keep_prob)
    if(architecture == 1):
        return my_net(x, keep_prob)

In [47]:
# to create our neural network pipeline
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
    
#Input
x = neural_net_image_input((32,32,3))
y = neural_net_label_input(43)
keep_prob = neural_net_keep_prob_input()
    
#Model
logits = net_architecture(x, keep_prob, architecture = 1)
    
#Name logits Tensor, 
logits = tf.identity(logits, name='logits')

#print(tf.Graph().get_tensor_by_name('w1'))
#Loss and Optimizer set, here we use adam methods.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
    
#Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
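
To make the underfitting/overfitting check concrete, training-set accuracy can be evaluated with the same accuracy tensor defined above and compared against the validation accuracy printed during training. The helper below is a minimal sketch, assuming an active session whose variables have already been initialized or restored; it is not part of the training loop used later.

In [ ]:
# Minimal sketch: batched training-set accuracy using the accuracy/x/y/keep_prob tensors defined above.
# Assumes `sess` is an active tf.Session with trained (or restored) variables.
def training_accuracy(sess, features, labels, batch_size=128):
    total, count = 0.0, 0
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        batch_acc = sess.run(accuracy, feed_dict={x: features[start:end],
                                                  y: labels[start:end],
                                                  keep_prob: 1.0})
        total += batch_acc * len(features[start:end])
        count += len(features[start:end])
    return total / count

# Example usage (e.g. inside the training session below):
# print("train accuracy:", training_accuracy(sess, features_balanced_train, labels_train))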


In [48]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

In [49]:
# optimization function
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of image data
    : label_batch: Batch of label data
    """
    train_feed_dict = {
                x: feature_batch,
                y: label_batch,
                keep_prob: keep_probability
                }
    session.run(optimizer, feed_dict = train_feed_dict)

In [50]:
# print Status, features_valid & labels_valid are global variables used here
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    curr_loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob : 1.0})
    valid_acc = session.run(accuracy, feed_dict = {x: features_valid,y: labels_valid, keep_prob : 1.0})
    print("current session loss: {:.4f}, valid accuracy is: {:.4f}".format(curr_loss, valid_acc))

In [51]:
# Hyperparameters
epochs = 45
batch_size = 128
keep_probability = 0.7
save_model_path = './classification'

In [52]:
# Train the nets
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(features_balanced_train)
    
    print("Training...")
    print()
    for i in range(epochs):
        #X_train, y_train = shuffle(features_train, y_train)
        for offset in range(0, num_examples,batch_size):
            end = offset + batch_size
            batch_x, batch_y = features_balanced_train[offset:end], labels_train[offset:end]
            train_neural_network(sess, optimizer, keep_probability, batch_x, batch_y)
        print('Epoch {:>2} '.format(i + 1), end='')
        print_stats(sess, batch_x, batch_y, cost, accuracy)
        
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)


Training...

Epoch  1 current session loss: 1.0145, valid accuracy is: 0.5615
Epoch  2 current session loss: 0.4320, valid accuracy is: 0.7751
Epoch  3 current session loss: 0.2447, valid accuracy is: 0.8680
Epoch  4 current session loss: 0.1818, valid accuracy is: 0.9086
Epoch  5 current session loss: 0.1513, valid accuracy is: 0.9324
Epoch  6 current session loss: 0.1377, valid accuracy is: 0.9313
Epoch  7 current session loss: 0.1097, valid accuracy is: 0.9478
Epoch  8 current session loss: 0.1021, valid accuracy is: 0.9454
Epoch  9 current session loss: 0.0938, valid accuracy is: 0.9438
Epoch 10 current session loss: 0.0767, valid accuracy is: 0.9508
Epoch 11 current session loss: 0.0690, valid accuracy is: 0.9490
Epoch 12 current session loss: 0.0717, valid accuracy is: 0.9490
Epoch 13 current session loss: 0.0948, valid accuracy is: 0.9458
Epoch 14 current session loss: 0.0705, valid accuracy is: 0.9544
Epoch 15 current session loss: 0.0688, valid accuracy is: 0.9635
Epoch 16 current session loss: 0.0630, valid accuracy is: 0.9551
Epoch 17 current session loss: 0.0588, valid accuracy is: 0.9610
Epoch 18 current session loss: 0.0700, valid accuracy is: 0.9553
Epoch 19 current session loss: 0.0749, valid accuracy is: 0.9571
Epoch 20 current session loss: 0.0749, valid accuracy is: 0.9596
Epoch 21 current session loss: 0.0660, valid accuracy is: 0.9662
Epoch 22 current session loss: 0.0708, valid accuracy is: 0.9556
Epoch 23 current session loss: 0.0706, valid accuracy is: 0.9621
Epoch 24 current session loss: 0.0666, valid accuracy is: 0.9601
Epoch 25 current session loss: 0.0616, valid accuracy is: 0.9528
Epoch 26 current session loss: 0.0657, valid accuracy is: 0.9565
Epoch 27 current session loss: 0.0792, valid accuracy is: 0.9676
Epoch 28 current session loss: 0.0609, valid accuracy is: 0.9676
Epoch 29 current session loss: 0.0463, valid accuracy is: 0.9601
Epoch 30 current session loss: 0.0767, valid accuracy is: 0.9610
Epoch 31 current session loss: 0.0513, valid accuracy is: 0.9658
Epoch 32 current session loss: 0.0614, valid accuracy is: 0.9651
Epoch 33 current session loss: 0.0521, valid accuracy is: 0.9662
Epoch 34 current session loss: 0.0553, valid accuracy is: 0.9633
Epoch 35 current session loss: 0.0420, valid accuracy is: 0.9619
Epoch 36 current session loss: 0.0513, valid accuracy is: 0.9660
Epoch 37 current session loss: 0.0421, valid accuracy is: 0.9653
Epoch 38 current session loss: 0.0429, valid accuracy is: 0.9621
Epoch 39 current session loss: 0.0488, valid accuracy is: 0.9664
Epoch 40 current session loss: 0.0424, valid accuracy is: 0.9590
Epoch 41 current session loss: 0.0411, valid accuracy is: 0.9560
Epoch 42 current session loss: 0.0473, valid accuracy is: 0.9524
Epoch 43 current session loss: 0.0398, valid accuracy is: 0.9583
Epoch 44 current session loss: 0.0690, valid accuracy is: 0.9585
Epoch 45 current session loss: 0.0556, valid accuracy is: 0.9590

In [53]:
#use our trained model on the test data
#set parameters here
import random
from sklearn.preprocessing import LabelBinarizer

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
save_model_path = './classification'
n_samples = 5
top_n_predictions = 3

In [54]:
def batch_features_labels(features, labels, batch_size):
    """
    Split features and labels into batches
    """
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]

In [55]:
def display_image_predictions(features, labels, predictions, nrows = 5, n_predictions = 3):
    df = pd.read_csv("signnames.csv")
    names = []
    for i in range(df.shape[0]):
        names.append(df.iloc[i].SignName)
    #[names.append(df.iloc[i].SignName for i in range(df.shape[1]))]
    print(names)
    n_classes = 43
    label_names = names
    label_binarizer = LabelBinarizer()
    label_binarizer.fit(range(n_classes))
    label_ids = label_binarizer.inverse_transform(np.array(labels))

    fig, axies = plt.subplots(nrows=nrows, ncols=2, figsize=(15,15))
    fig.tight_layout()
    fig.suptitle('Softmax Predictions', fontsize=20, y=1.1)

    n_predictions = n_predictions
    margin = 0.05
    ind = np.arange(n_predictions)
    width = (1. - 2. * margin) / n_predictions
    #print(features)
    for image_i, (feature, label_id, pred_indicies, pred_values) in enumerate(zip(features, label_ids, predictions.indices, predictions.values)):
        pred_names = [label_names[pred_i] for pred_i in pred_indicies]
        correct_name = label_names[label_id]
        #print(image_i, label_names[label_id])
        #print(feature.shape)
        #feature.reshape([-1,32])
        import scipy
        #feature = scipy.misc.fromimage(feature, 0)
        #print(feature.shape)
        img = np.asarray(feature * 255)
        #img = np.expand_dims(img, axis=0)
        #print(img.shape)
        axies[image_i][0].imshow(img[:,:,0], cmap='gray')
        axies[image_i][0].set_title(correct_name, fontsize = 7)
        axies[image_i][0].set_axis_off()

        axies[image_i][1].barh(ind + margin, pred_values[::-1], width)
        axies[image_i][1].set_yticks(ind + margin)
        axies[image_i][1].set_yticklabels(pred_names[::-1], fontsize = 7)
        axies[image_i][1].set_xticks([0, 0.5, 1.0])

In [56]:
def test():
    """
    Evaluate the saved model on the test dataset
    """
    loaded_graph = tf.Graph()
    with tf.Session(graph=loaded_graph) as sess:
        # Load trained model here
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy on the test set (the batched evaluation below is commented out; the full set is fed at once)
        test_batch_acc_total = 0
        test_batch_count = 0
        #test model here
        #for train_feature_batch, train_label_batch in batch_features_labels(features_test, labels_test, batch_size):
        #    test_batch_acc_total += sess.run(
        #        loaded_acc,
        #        feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
        #    test_batch_count += 1
        acc = sess.run(loaded_acc, feed_dict = {loaded_x : features_test, loaded_y : labels_test, loaded_keep_prob : 1.0})
        print('Testing Accuracy: {}\n'.format(acc))
        
        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(features_test, labels_test)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        display_image_predictions(random_test_features, random_test_labels, random_test_predictions)

In [57]:
test()


Testing Accuracy: 0.948694109916687

['Speed limit (20km/h)', 'Speed limit (30km/h)', 'Speed limit (50km/h)', 'Speed limit (60km/h)', 'Speed limit (70km/h)', 'Speed limit (80km/h)', 'End of speed limit (80km/h)', 'Speed limit (100km/h)', 'Speed limit (120km/h)', 'No passing', 'No passing for vehicles over 3.5 metric tons', 'Right-of-way at the next intersection', 'Priority road', 'Yield', 'Stop', 'No vehicles', 'Vehicles over 3.5 metric tons prohibited', 'No entry', 'General caution', 'Dangerous curve to the left', 'Dangerous curve to the right', 'Double curve', 'Bumpy road', 'Slippery road', 'Road narrows on the right', 'Road work', 'Traffic signals', 'Pedestrians', 'Children crossing', 'Bicycles crossing', 'Beware of ice/snow', 'Wild animals crossing', 'End of all speed and passing limits', 'Turn right ahead', 'Turn left ahead', 'Ahead only', 'Go straight or right', 'Go straight or left', 'Keep right', 'Keep left', 'Roundabout mandatory', 'End of no passing', 'End of no passing by vehicles over 3.5 metric tons']

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images


In [58]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
#load the images and set the label y for each input image
image_features_files = ["traffic_signs_0_.jpg","traffic_signs_1.jpg","traffic_signs_2.jpg","traffic_signs_3.jpg","traffic_signs_4.jpg"]
df = pd.read_csv("signnames.csv")
names = []
for i in range(df.shape[0]):
    names.append(df.iloc[i].SignName)
image_labels = [names[37],names[18],names[17],names[3],names[25]]
img_mat = [37,18,17,3,25]

In [59]:
#display the images here
import cv2
from skimage import transform,data
from PIL import Image
import matplotlib.image as mpimg

address = "images_from_web/"
plt.figure(figsize = (3,2), dpi = 180)
# display the images and show the labels here
imgs = []
labs = []
for i in range(5):
    ax = plt.subplot(3,2,i+1)
    ax.axis('off')
    image = mpimg.imread(address+image_features_files[i]) 
    print(image.shape)
    #image = Image.open(address+image_features_files[i]).convert("L")
    image = transform.resize(image,(32,32))
    imgs.append(image)
    #print(image)
    ax.set_title("{0}".format(image_labels[i]),fontsize=6)
    labs.append(img_mat[i])
    ax.imshow(image) 
plt.show()


(32, 32, 3)
(32, 32, 3)
(32, 32, 3)
(32, 32, 3)
(32, 32, 3)

Predict the Sign Type for Each Image


In [60]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.

In [61]:
#pre-process the images with the same pre-processing pipeline here
#1.0 read images and create feature matrix x, and label matrix y
print(len(imgs))
#imgs = imgs.tolist()
imgs = np.array(imgs)
labs = np.array(labs)
print(imgs.shape, labs.shape)


5
(5, 32, 32, 3) (5,)

In [62]:
#2.0 apply the same pre-processing pipeline here
features_web = imgs#grayscale(imgs)
#note: skimage's transform.resize typically returns floats already scaled to [0, 1] for uint8 input,
#so normalize() here likely rescales the web images by a further 1/255 relative to the training data
features_web = normalize(features_web)
labels_web = one_hot_encode(labs)
print(features_web.shape, labels_web.shape)


(5, 32, 32, 3) (5, 43)

In [63]:
#3.0 load pre-trained model and predict the results for each image
def web_test(top_n_predictions = 3, n_predictions = 3):
    loaded_graph = tf.Graph()
    with tf.Session(graph=loaded_graph) as sess:
        # Load trained model here
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Print the results of input Samples
        #random_test_features, random_test_labels = tuple((list(zip(features_web, labels_web))))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: features_web, loaded_y: labels_web, loaded_keep_prob: 1.0})
        display_image_predictions(features_web, labels_web, random_test_predictions, nrows = 5, n_predictions = n_predictions)
        return random_test_predictions

In [64]:
web_prediction = web_test()


['Speed limit (20km/h)', 'Speed limit (30km/h)', 'Speed limit (50km/h)', 'Speed limit (60km/h)', 'Speed limit (70km/h)', 'Speed limit (80km/h)', 'End of speed limit (80km/h)', 'Speed limit (100km/h)', 'Speed limit (120km/h)', 'No passing', 'No passing for vehicles over 3.5 metric tons', 'Right-of-way at the next intersection', 'Priority road', 'Yield', 'Stop', 'No vehicles', 'Vehicles over 3.5 metric tons prohibited', 'No entry', 'General caution', 'Dangerous curve to the left', 'Dangerous curve to the right', 'Double curve', 'Bumpy road', 'Slippery road', 'Road narrows on the right', 'Road work', 'Traffic signals', 'Pedestrians', 'Children crossing', 'Bicycles crossing', 'Beware of ice/snow', 'Wild animals crossing', 'End of all speed and passing limits', 'Turn right ahead', 'Turn left ahead', 'Ahead only', 'Go straight or right', 'Go straight or left', 'Keep right', 'Keep left', 'Roundabout mandatory', 'End of no passing', 'End of no passing by vehicles over 3.5 metric tons']

Analyze Performance


In [65]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

In [66]:
#print(web_prediction[1])
predicts = []
for k in range(5):
    predicts.append(web_prediction[1][k][0])
print("the predict results is {0}, and real label is {1}".format(predicts,img_mat))
accuracy_number = 0
for i in range(5):
    if(img_mat[i] == predicts[i]):
        accuracy_number +=1
accuracy_rate = accuracy_number/5
print("accurate rate is {} %".format(accuracy_rate*100))


the predicted results are [37, 18, 14, 3, 25], and the real labels are [37, 18, 17, 3, 25]
accuracy rate is 80.0 %

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.


In [67]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.
web_prediction = web_test(top_n_predictions = 5, n_predictions = 5)


['Speed limit (20km/h)', 'Speed limit (30km/h)', 'Speed limit (50km/h)', 'Speed limit (60km/h)', 'Speed limit (70km/h)', 'Speed limit (80km/h)', 'End of speed limit (80km/h)', 'Speed limit (100km/h)', 'Speed limit (120km/h)', 'No passing', 'No passing for vehicles over 3.5 metric tons', 'Right-of-way at the next intersection', 'Priority road', 'Yield', 'Stop', 'No vehicles', 'Vehicles over 3.5 metric tons prohibited', 'No entry', 'General caution', 'Dangerous curve to the left', 'Dangerous curve to the right', 'Double curve', 'Bumpy road', 'Slippery road', 'Road narrows on the right', 'Road work', 'Traffic signals', 'Pedestrians', 'Children crossing', 'Bicycles crossing', 'Beware of ice/snow', 'Wild animals crossing', 'End of all speed and passing limits', 'Turn right ahead', 'Turn left ahead', 'Ahead only', 'Go straight or right', 'Go straight or left', 'Keep right', 'Keep left', 'Roundabout mandatory', 'End of no passing', 'End of no passing by vehicles over 3.5 metric tons']

Step 4: Visualize the Neural Network's State with Test Images

This section is not required to complete, but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any TensorFlow weight layer you want. The inputs to the function should be a stimulus image, either one used during training or a new one you provide, and the TensorFlow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.



In [68]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(sess,image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1, row = 6, column = 8):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error that tf_activation is not defined, it may be having trouble accessing the variable from inside a function
    #sess = tf.Session()
    activation = tf_activation.eval(session=sess,feed_dict={loaded_x : image_input, loaded_keep_prob: 1.0})
    #sess = tf.Session()
    #activation = sess.run(tf_activation, feed_dict = {x : image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(row,column, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max)#, cmap="gray"
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min)#, cmap="gray"
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest")#, cmap="gray"

In [69]:
inputs = tuple(zip(*random.sample(list(zip(features_web)), 1)))
#input_img = np.reshape(inputs, (-1,32,32,1))#features_web[0].reshape((-1,32,32,1))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load trained model here
    loader = tf.train.import_meta_graph(save_model_path + '.meta')
    loader.restore(sess, save_model_path)

    # Get Tensors from loaded model
    loaded_x = loaded_graph.get_tensor_by_name('x:0')
    loaded_y = loaded_graph.get_tensor_by_name('y:0')
    conv_1 = loaded_graph.get_tensor_by_name('conv_1:0')
    conv_2 = loaded_graph.get_tensor_by_name('conv_2:0')
    #conv_3 = loaded_graph.get_tensor_by_name('conv_3:0')
    loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
    loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
    loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
    with loaded_graph.as_default() as g: 
        #input_img = np.reshape(inputs[0], (-1,32,32,1))
        outputFeatureMap(sess,inputs[0], conv_1, activation_min=-1, activation_max=-1 ,plt_num=1, row = 8, column = 8)
        #outputFeatureMap(sess,inputs[0], conv_2,activation_min=-1, activation_max=-1 ,plt_num=1,  row = 8, column = 8)
        #outputFeatureMap(sess,inputs[0], conv_3, activation_min=-1, activation_max=-1 ,plt_num=1, row = 12, column = 12)


Question 9

Discuss how you used the visual output of your trained network's feature maps to show that it had learned to look for interesting characteristics in traffic sign images

Answer: I fed one image into the first convolution layer and plotted its 64 output feature maps. The 64 feature maps are all different; each shows a distinct activation structure, which indicates that the first 64 convolution filters have already learned different low-level structures from our training dataset.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the IPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.