Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 1: Dataset Exploration

Visualize the German Traffic Signs Dataset. This is open ended, some suggestions include: plotting traffic signs images, plotting the count of each sign, etc. Be creative!

The pickled data is a dictionary with 4 key/value pairs:

  • features -> the images pixel values, (width, height, channels)
  • labels -> the label of the traffic sign
  • sizes -> the original width and height of the image, (width, height)
  • coords -> coordinates of a bounding box around the sign in the image, (x1, y1, x2, y2). Based the original image (not the resized version).

In [1]:
# Load pickled data
# Importing all the necessary libraries to run this project
import pickle
import cv2
from sklearn import preprocessing
from sklearn.preprocessing import scale, StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
import skimage.transform as skimage_tf
import skimage.exposure as exposure
import skimage.io as sk_io
from tqdm import tqdm
import tensorflow as tf
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import pandas as pd
import math
import time
import os
import glob
%matplotlib inline                 

# TODO: fill this in based on where you saved the training and testing data
training_file = 'lab 2 data/train.p'
testing_file = 'lab 2 data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
print('Extracted training pickle data..')
X_test, y_test = test['features'], test['labels']
print('Extracted test pickle data..')

is_data_read = True


Extracted training pickle data..
Extracted test pickle data..

In [2]:
### To start off let's do a basic data summary.

assert is_data_read,'You failed to load the data'

# TODO: number of training examples
n_train = X_train.shape[0]

# TODO: number of testing examples
n_test = X_test.shape[0]

# TODO: what's the shape of an image?
image_shape = X_train.shape[1:]

# TODO: how many classes are in the dataset
#le = preprocessing.LabelEncoder()
#le.fit(y_train)
#n_classes = le.classes_.shape[0]
#n_classes = len(set(y_train))
n_classes = len(np.unique(y_train))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)


Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

In [3]:
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
# Read and display different sign name from signnames.cvs and its counts
name = pd.read_csv('signnames.csv')
# Get a count of each traffic sign class.
train_label_counts = pd.Series(y_train, name='SampleCount').value_counts()
# Merge traffic sign class counts with class names and sort my class counts.
train_label_counts_names = name.join(train_label_counts)
#plt.figure(); train_label_counts_names['SampleCount'].plot(kind='bar')
#plt.figure(); plt.bar(train_label_counts_names.ClassId,train_label_counts_names.LabelCount)
plt.figure(); plt.hist(y_train,bins=n_classes)
train_label_counts_names


Out[3]:
ClassId SignName SampleCount
0 0 Speed limit (20km/h) 210
1 1 Speed limit (30km/h) 2220
2 2 Speed limit (50km/h) 2250
3 3 Speed limit (60km/h) 1410
4 4 Speed limit (70km/h) 1980
5 5 Speed limit (80km/h) 1860
6 6 End of speed limit (80km/h) 420
7 7 Speed limit (100km/h) 1440
8 8 Speed limit (120km/h) 1410
9 9 No passing 1470
10 10 No passing for vechiles over 3.5 metric tons 2010
11 11 Right-of-way at the next intersection 1320
12 12 Priority road 2100
13 13 Yield 2160
14 14 Stop 780
15 15 No vechiles 630
16 16 Vechiles over 3.5 metric tons prohibited 420
17 17 No entry 1110
18 18 General caution 1200
19 19 Dangerous curve to the left 210
20 20 Dangerous curve to the right 360
21 21 Double curve 330
22 22 Bumpy road 390
23 23 Slippery road 510
24 24 Road narrows on the right 270
25 25 Road work 1500
26 26 Traffic signals 600
27 27 Pedestrians 240
28 28 Children crossing 540
29 29 Bicycles crossing 270
30 30 Beware of ice/snow 450
31 31 Wild animals crossing 780
32 32 End of all speed and passing limits 240
33 33 Turn right ahead 689
34 34 Turn left ahead 420
35 35 Ahead only 1200
36 36 Go straight or right 390
37 37 Go straight or left 210
38 38 Keep right 2070
39 39 Keep left 300
40 40 Roundabout mandatory 360
41 41 End of no passing 240
42 42 End of no passing by vechiles over 3.5 metric ... 240

In [4]:
# This function is used to display sample images from train and test datasets
def display_sample_images(images,labels):
    SignNames = pd.read_csv('signnames.csv')
    _, ax = plt.subplots(7,6,figsize=(20,20))
    for ClassId, SignName in zip(SignNames.ClassId, SignNames.SignName):
        # Random select an image from particular class
        idx = np.argwhere(labels == ClassId)[np.random.randint(0, 20)][0]
        ax[ClassId % 7, ClassId % 6].imshow(images[idx])
        ax[ClassId % 7, ClassId % 6].set_title(SignName)
    plt.tight_layout()

print('Display random sample from each classes')

display_sample_images(X_train, y_train)
display_sample_images(X_test, y_test)


Display random sample from each classes

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Your model can be derived from a deep feedforward net or a deep convolutional network.
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.


In [5]:
### Preprocess the data here.
### Feel free to use as many code cells as needed.

# This controls the output range in normalization
image_value_ranges = [0.1,0.9]

def convert_rgb2yuv(image_data):
    """
    Converts input RGB data to YUV format
    """
    yuv_image_data = []
    if len(image_data.shape) > 3: 
        for i in range(len(image_data)):
            yuv_image_data.append(cv2.cvtColor(image_data[i], cv2.COLOR_RGB2YUV))
    else:
    # this else part is used when an single image is inputted for pre-processing rather than batch of images
        yuv_image_data.append(cv2.cvtColor(image_data, cv2.COLOR_RGB2YUV))
    return np.array(yuv_image_data)

def convert_rgb2gray(image_data):
    gray_image_data = []
    for i in range(len(image_data)):
        gray_image_data.append(cv2.cvtColor(image_data[i], cv2.COLOR_RGB2GRAY))
    return np.array(gray_image_data)


def normalize_Y_lecun(image_data):
    """
    This function tries to replicate pre-processing in Lecun paper. Input to this function is Y channel
    return mean and std normalized image in image_value_ranges
    """
    im_data = np.array(image_data,np.float32)
    #im_reshape = np.reshape(image_data,(-1,1024))
    #standard_scaler = StandardScaler(with_mean=True)
    #im_rescale = standard_scaler.fit_transform(im_reshape)
    #im_rescale = scale(im_reshape, axis=0, with_mean=True, with_std=False)
    #im_data = np.reshape(im_rescale,(-1,32,32))
    minV = np.amin(im_data)
    maxV = np.amax(im_data)
    lowerLimit = image_value_ranges[0]
    upperLimit = image_value_ranges[1]
    im_data =  lowerLimit + ((im_data - minV)*(upperLimit - lowerLimit))/(maxV - minV)
    return im_data

def normalize_Y(image_data,sub_mean = False,use_channel=False):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    # ToDo: Implement Min-Max scaling for greyscale image data
    minV = np.amin(image_data)
    maxV = np.amax(image_data)
    lowerLimit = image_value_ranges[0]
    upperLimit = image_value_ranges[1]
    image_data = np.array(image_data,np.float32)
    image_data[:,:,:,0] =  lowerLimit + ((image_data[:,:,:,0] - minV)*(upperLimit - lowerLimit))/(maxV - minV)
    #image_data =  lowerLimit + ((image_data - minV)*(upperLimit - lowerLimit))/(maxV - minV)
    if sub_mean:
        image_data[:,:,:,0] = image_data[:,:,:,0] - np.mean(image_data[:,:,:,0], axis=0)
    if use_channel:
        return image_data
    else:
        return image_data[:,:,:,0]

def preprocess_images(image_data, use_only_y = True, use_mean = False):
    """
    This is the main function to perform pre-processing. It supports single and color channel processing
    set use_only_y = true --> to process only single y channel
    set use_only_y = false --> to process yuv channels
    set use_mean = false --> To subtract image mean 
    """
    # Convert rgb color format to yuv format 
    p_image_data = convert_rgb2yuv(image_data)    
    if use_only_y:
        p_image_data = normalize_Y_lecun(p_image_data[:,:,:,0])
        #p_image_data = normalize_Y(p_image_data,use_channel=False)
    else:
        p_image_data = normalize_Y(p_image_data,use_channel=True)

    return p_image_data

def pre_process_images_test(image_data, mean_value, std_value):
    """
    This function uses training image mean and std to normalize test images.
    Currently this is not implemented
    """
    #p_image_data = convert_rgb2yuv(image_data)
    #p_image_data = normalize_Y_test(p_image_data[:,:,:,0])
    #return p_image_data
    
print('Image preprocessing is defined and successful..')
is_preprocess_defined = True

# testing the functionality
#idx = np.random.randint(n_train)
#sample_input_image = cv2.cvtColor(X_train[idx], cv2.COLOR_RGB2GRAY)
#sample_processed_image = preprocess_images(X_train[idx])
#_, ax = plt.subplots(2,2)
#ax[0,0].imshow(sample_input_image, cmap='gray')
#ax[0,0].set_title('RGB Gray Image')
#ax[0,1].imshow(sample_input_image, cmap='gray')
#ax[0,1].set_title('Normalized Y Image')

#ax[1,0].set_title('RGB Gray Image Histogram')
#_= ax[1,0].hist(sample_input_image.ravel(),bins=256, color='black')
#ax[1,1].set_title('Normalized Y Image Histogram')
#_= ax[1,1].hist(sample_processed_image.ravel(),bins=256, color='black')
#plt.tight_layout()


Image preprocessing is defined and successful..

Question 1

Describe the techniques used to preprocess the data.

Answer:

Preprocessing techniques used are as follows:

1) Normalize RGB images directly in range of [0.1, 0.9] or [-0.5, 0.5] or [-1, 1]
2) Convert RGB to YUV and Normalize YUV components in range of [0.1, 0.9] or [-0.5, 0.5] or [-1, 1]
3) Convret RGB to YUV and Normalize Y components images in range of [0.1, 0.9] or [-0.5, 0.5] or [-1, 1]
4) Convert RGB to YUV and perform mean and std normalization as described in Lecun paper and then convert images in range of [0.1, 0.9] or [-0.5, 0.5] or [-1, 1]
5) Convert RGB to Gray and Normalize gray component in range of [0.1, 0.9] or [-0.5, 0.5] or [-1, 1]

Based on the experiments conducted, the method 3) seems to perform better than other methods described. Some how network is not able to utilize color components in better way. Deeper insights and experiments are requried to analyse the cause fo color image poor performance Method 4) described in the paper is not used as there is need to maintain mean and std values used in training and use it while normalizing the test images. Due to time constraint, I have not implemented this approach

Why preprocessing:
Preprocessing is like conditioning data to make it easier to find optimum solution. While performing gradient descent, if the data is normalized, then global minima can be reached in much better way

Since using RGB does not have any pure intensity information, it tends to perform poorer. Converting data to YUV overcomes this drawback. As Y component plays the role of the intensity.


In [6]:
### Generate data additional (if you want to!)
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.

# Definitions to jitter input data to provide additional data
assert is_preprocess_defined,'Preprocessing image functions not defined'

def shift_image_location(image, xoffset, yoffset):
    """
    Shift images across x and y axis
    Range of the shift is restricted between [-2,2] across each axis
    """
    rows,cols, depth = image.shape
    tparam = skimage_tf.AffineTransform(translation = (xoffset,yoffset))
    out = skimage_tf.warp(image,tparam)
    assert((out.shape[0] == 32) & (out.shape[1] == 32))
    # This conversion is required as OpenCV rgb2yuv does not accept float64
    return out.astype(np.float32)
    
# function to rotate images by given degrees
def rotate_image(image, degree):
    """
    Rotates images using given degree
    Rotation is restricted between [-10, 10] degree
    """
    rows, cols, depth = image.shape
    rad = (np.pi / 180) * degree
    tparam = skimage_tf.AffineTransform(rotation = rad)
    out = skimage_tf.warp(image,tparam)
    assert((out.shape[0] == 32) & (out.shape[1] == 32))
    return out.astype(np.float32)
    
# function to resize the image
def scale_image(image, ratio):
    """
    Scales the input image and maintain the input shape
    Scale is restricted between [0.9, 1.1] degree
    """
    rows, cols, depth = image.shape
    scale = skimage_tf.rescale(image,ratio)
    m_rows, m_cols, m_depth = scale.shape
    #print(ratio)
    #print(scale.shape)
    if ratio > 1.0:
        #print('GT')
        offset = m_rows - rows
        out = scale[offset:offset+rows, offset:offset+cols]
    else:
        #print('LT')
        out = np.zeros((rows,cols,depth))
        offset = rows - m_rows
        out[offset:offset+rows, offset:offset+cols] = scale
    
    assert((out.shape[0] == 32) & (out.shape[1] == 32))
    return out.astype(np.float32)

def affine_image(image, xoffset, yoffset, degree, ratio):
    """
    Performs affine transform i.e. shift, rotate and scale
    """
    out = shift_image_location(image, xoffset, yoffset)
    out = rotate_image(out, degree)
    out = scale_image(out,ratio)
    return out.astype(np.float32)

def change_intensity(image, choice):
    """
    Modify image intensity using either gamma, log and sigmoid based on input random choice 
    """
    rows, cols, depth = image.shape
    if choice == 1:
        rnd = 2 * np.random.random()
        out = exposure.adjust_gamma(image,gamma=rnd)
    elif choice == 2:
        out = exposure.adjust_log(image)
    else:
        out = exposure.adjust_sigmoid(image)

    assert((out.shape[0] == 32) & (out.shape[1] == 32))
    return out.astype(np.float32)

def combined_operations(image, xoffset, yoffset, degree, ratio, choice):
    """
    Function to combine all the jitter operations in one place
    """
    out = shift_image_location(image, xoffset, yoffset)
    out = rotate_image(out, degree)
    out = scale_image(out,ratio)
    out = change_intensity(out, choice)
    return out.astype(np.float32)

def jitter_image_data(input_images,input_labels,batch_size):
    """
    Jitter the input data. Performs scale, shift, rotate and intensity change
    Given batches of images performs jitter and return jittered data
    Input and output shapes remain the same
    """
    num_images = input_images.shape[0]
    jitter_images = []
    jitter_images_labels = [] 
    indx = np.random.choice(input_images.shape[0],batch_size,replace = False)
    images = input_images[indx]
    labels = input_labels[indx]
    for imageIdx in range(batch_size):
        # shift x range [-2,2]
        xoffset = int(4 * np.random.random() - 2)
        # shift y range [-2,2]
        yoffset = int(4 * np.random.random() - 2)
        # rotate range [-10,10]
        degree = int (20 * np.random.random() - 10)
        # scale range [0.9, 1.1]
        ratio = 0.2 * np.random.random() + 0.9
        # random intensity choices (gamma, log, sigmoid)
        choice = np.random.randint(4)
        # Jitter input image and appends the result
        jitter_images.append(combined_operations(images[imageIdx], xoffset, yoffset, degree, ratio, choice))
        # Append corresponding label
        jitter_images_labels.append(labels[imageIdx])
    
    return preprocess_images(np.array(jitter_images)), np.array(jitter_images_labels)


def jitter_image_data_old(input_images,input_labels,batch_size):
    """
    Jitter the input data. Performs scale, shift, rotate and intensity change.
    For every images in batch, corresponding five jitter images are created
    Random selection of batch_size from 5*batch_size jitter images
    """
    num_images = input_images.shape[0]
    jitter_images = []
    jitter_images_labels = [] 
    indx = np.random.choice(input_images.shape[0],batch_size,replace = False)
    images = input_images[indx]
    labels = input_labels[indx]
    for imageIdx in range(batch_size):
            xoffset = int(4 * np.random.random() - 2)
            yoffset = int(4 * np.random.random() - 2)
            degree = int (20 * np.random.random() - 10)
            ratio = 0.2 * np.random.random() + 0.9
            choice = np.random.randint(4)
            # Add original image to the jitter data
            jitter_images.append(images[imageIdx])
            jitter_images_labels.append(labels[imageIdx])
            # Shift image
            jitter_images.append(shift_image_location(images[imageIdx], xoffset, yoffset))
            jitter_images_labels.append(labels[imageIdx])
            # Rotate image
            jitter_images.append(rotate_image(images[imageIdx], degree))
            jitter_images_labels.append(labels[imageIdx])
            # Scale image
            jitter_images.append(scale_image(images[imageIdx], ratio))
            jitter_images_labels.append(labels[imageIdx])
            # Affine
            jitter_images.append(affine_image(images[imageIdx], xoffset, yoffset, degree, ratio))
            jitter_images_labels.append(labels[imageIdx])
            # Brightness
            jitter_images.append(change_intensity(images[imageIdx], choice))
            jitter_images_labels.append(labels[imageIdx])
    
    s_ind = np.random.choice(input_images.shape[0],batch_size)
    
    assert(len(s_ind) == batch_size)
    
    return preprocess_images(np.array(jitter_images)[s_ind]), np.array(jitter_images_labels)[s_ind]

# Testing
#samples = X_train[1:100] 
#slabels = y_train[1:100]
#out1, out2 = jitter_image_data(samples,slabels,32)

In [7]:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(y_train)
train_labels = encoder.transform(y_train)
test_labels = encoder.transform(y_test)

# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)

print('Labels One-Hot Encoded')
is_labels_encod = True
is_train_test_split = False


Labels One-Hot Encoded

In [8]:
# Perform data split for training, validation and testing
assert is_preprocess_defined, 'You skipped the step to preprocess the images'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'

if not is_train_test_split:
    # Get randomized datasets for training and validation
    train_features, valid_features, train_labels, valid_labels = train_test_split(
        X_train,
        train_labels,
        test_size=0.05,
        random_state=832289)

    #test_features = preprocess_images(X_test)

    print('Training features and labels randomized and split.')
    print('Number of training images {}'.format(train_features.shape[0]))
    print('Number of validation images {}'.format(valid_features.shape[0]))
    print('Number of testing images {}'.format(X_test.shape[0]))

is_train_test_split = True


Training features and labels randomized and split.
Number of training images 37248
Number of validation images 1961
Number of testing images 12630

Question 2

Describe how you set up the training, validation and testing data for your model. If you generated additional data, why?

Answer:

In machine or deep learning, in order to train a superivised model we will first collect data. Once data is available, then training is performed on the data. In order to get the essence of how the model is performing test data is needed. This test data cannot be a subset of training data. Because, then testing would not generalize well. So in order to evaluate a model, we need to split the data as training and test data.

Now comes another problem. Since there are always more hyperparameters to choose from, how to get the essence of how well your trained model performed without looking into test set. If the test set is used again for this purpose, then parameter tuning will inherit some property in relation to test set and again we will lose generalization. So, in order to avoid it, training set is again splitted into training and validation set. After the model is trained, it is tested in validation set and fined tune to obtain maximump performance in both training and validation. Since validation was actually a part of training set, even if model inherits properties from validation set, we dont bother. After all the fine tuning is done, then model is tested against test set to get the actually performance.

In this lab, training and test data is already given. So there is need to split only training data into training and validation set. This is acheived using train_test_split function from sklearn. Since data is small and in order to utilize all the available data for training, only 5% of the training data is used as validation. Nevertheless, it is sufficient to fine tune hyperparameters.

One of the major advancement that favored people towards deep learning is the availability of data. Even though this problem looks trivial, additional data is important. It is not always possible to capture data under different conditions like dawn, dusk, night, etc. It is also expensive to collect data. Even though a model is trained with available data and also we expect the model to perform good at all condition. For all these reasons additional data is used.


In [9]:
#Placeholder definition
features = tf.placeholder(tf.float32, [None, 32, 32])
image = tf.reshape(features, [-1,32,32,1])
keep_prob = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32, [None, 43])

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

def model_twolayers():
    endpoints = {}
    with tf.variable_scope('conv1') as scope:
        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
        h_pool1 = max_pool_2x2(h_conv1)
        endpoints['conv1'] = h_conv1
        endpoints['conv1_pool1'] = h_pool1
    
    with tf.variable_scope('conv2') as scope:
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)
        endpoints['conv2'] = h_conv2
        endpoints['conv2_pool2'] = h_pool2

    with tf.variable_scope('fc1') as scope:
        W_fc1 = weight_variable([8*8*64, 1024])
        b_fc1 = bias_variable([1024])
        h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
        endpoints['fc1'] = h_fc1_drop
    
    with tf.variable_scope('fc2') as scope:
        W_fc2 = weight_variable([1024, 43])
        b_fc2 = bias_variable([43])
        logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
        endpoints['logits'] = logits
    
    return logits, endpoints

def model_fourlayers():
    endpoints = {}
    with tf.variable_scope('conv1') as scope:
        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
        h_pool1 = max_pool_2x2(h_conv1)
        endpoints['conv1'] = h_conv1
        endpoints['conv1_pool1'] = h_pool1
    
    with tf.variable_scope('conv2') as scope:
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)
        endpoints['conv2'] = h_conv2
        endpoints['conv2_pool2'] = h_pool2
    
    with tf.variable_scope('conv3') as scope:
        W_conv3 = weight_variable([5, 5, 64, 128])
        b_conv3 = bias_variable([128])
        h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
        h_pool3 = max_pool_2x2(h_conv3)
        endpoints['conv3'] = h_conv3
        endpoints['conv3_pool3'] = h_pool3
    
    with tf.variable_scope('conv4') as scope:
        W_conv4 = weight_variable([3, 3, 128, 256])
        b_conv4 = bias_variable([256])
        h_conv4 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)
        h_pool4 = max_pool_2x2(h_conv4)
        endpoints['conv4'] = h_conv4
        endpoints['conv4_pool4'] = h_pool4

    with tf.variable_scope('fc1') as scope:
        W_fc1 = weight_variable([2*2*256, 512])
        b_fc1 = bias_variable([512])
        h_pool4_flat = tf.reshape(h_pool4, [-1, 2*2*256])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool4_flat, W_fc1) + b_fc1)
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
        endpoints['fc1'] = h_fc1_drop
    
    with tf.variable_scope('fc2') as scope:
        W_fc2 = weight_variable([512, 43])
        b_fc2 = bias_variable([43])
        logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
        endpoints['logits'] = logits
    
    return logits, endpoints

def loss(logits, labels):
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))
    return loss

def training(loss, learning_rate, name = 'Adam'):
    if name == 'GD':
        optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    else:
        optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    
    return optimizer

def evaluation(logits, labels):
    correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return accuracy

In [10]:
print('Model Architecture for two layers is as follows')
_, conv_dict1 = model_twolayers()
conv_dict1


Model Architecture for two layers is as follows
Out[10]:
{'conv1': <tf.Tensor 'conv1/Relu:0' shape=(?, 32, 32, 32) dtype=float32>,
 'conv1_pool1': <tf.Tensor 'conv1/MaxPool:0' shape=(?, 16, 16, 32) dtype=float32>,
 'conv2': <tf.Tensor 'conv2/Relu:0' shape=(?, 16, 16, 64) dtype=float32>,
 'conv2_pool2': <tf.Tensor 'conv2/MaxPool:0' shape=(?, 8, 8, 64) dtype=float32>,
 'fc1': <tf.Tensor 'fc1/dropout/mul:0' shape=(?, 1024) dtype=float32>,
 'logits': <tf.Tensor 'fc2/add:0' shape=(?, 43) dtype=float32>}

In [11]:
print('Model Architecture for four layers is as follows')
_, conv_dict2 = model_fourlayers()
conv_dict2


Model Architecture for four layers is as follows
Out[11]:
{'conv1': <tf.Tensor 'conv1_1/Relu:0' shape=(?, 32, 32, 32) dtype=float32>,
 'conv1_pool1': <tf.Tensor 'conv1_1/MaxPool:0' shape=(?, 16, 16, 32) dtype=float32>,
 'conv2': <tf.Tensor 'conv2_1/Relu:0' shape=(?, 16, 16, 64) dtype=float32>,
 'conv2_pool2': <tf.Tensor 'conv2_1/MaxPool:0' shape=(?, 8, 8, 64) dtype=float32>,
 'conv3': <tf.Tensor 'conv3/Relu:0' shape=(?, 8, 8, 128) dtype=float32>,
 'conv3_pool3': <tf.Tensor 'conv3/MaxPool:0' shape=(?, 4, 4, 128) dtype=float32>,
 'conv4': <tf.Tensor 'conv4/Relu:0' shape=(?, 4, 4, 256) dtype=float32>,
 'conv4_pool4': <tf.Tensor 'conv4/MaxPool:0' shape=(?, 2, 2, 256) dtype=float32>,
 'fc1': <tf.Tensor 'fc1_1/dropout/mul:0' shape=(?, 512) dtype=float32>,
 'logits': <tf.Tensor 'fc2_1/add:0' shape=(?, 43) dtype=float32>}

Note:

The following cell contains the same model as the above. But it has organised so that model can be saved and load as when required. Using the above method made it difficult to store and restore models due to different architectures tried and tested


In [ ]:
# The following cell contains the same model as the above. But it has organised so that model can be saved and load as when required. 
# Using the above method made it difficult to store and restore models due to different architectures tried and tested

# Graph name to enable restore
graph_model_simple = tf.Graph()

with graph_model_simple.as_default():
    
    features = tf.placeholder(tf.float32, [None, 32, 32])
    image = tf.reshape(features, [-1,32,32,1])
    keep_prob = tf.placeholder(tf.float32)
    labels = tf.placeholder(tf.float32, [None, 43])

    def weight_variable(shape):
      initial = tf.truncated_normal(shape, stddev=0.1)
      return tf.Variable(initial)

    def bias_variable(shape):
      initial = tf.constant(0.1, shape=shape)
      return tf.Variable(initial)

    def conv2d(x, W):
      return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    def max_pool_2x2(x):
      return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1], padding='SAME')

    def model_twolayers():
        endpoints = {}
        with tf.variable_scope('conv1') as scope:
            W_conv1 = weight_variable([5, 5, 1, 32])
            b_conv1 = bias_variable([32])
            h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
            h_pool1 = max_pool_2x2(h_conv1)
            endpoints['conv1'] = h_conv1
            endpoints['conv1_pool1'] = h_pool1

        with tf.variable_scope('conv2') as scope:
            W_conv2 = weight_variable([5, 5, 32, 64])
            b_conv2 = bias_variable([64])
            h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
            h_pool2 = max_pool_2x2(h_conv2)
            endpoints['conv2'] = h_conv2
            endpoints['conv2_pool2'] = h_pool2

        with tf.variable_scope('fc1') as scope:
            W_fc1 = weight_variable([8*8*64, 1024])
            b_fc1 = bias_variable([1024])
            h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*64])
            h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
            h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
            endpoints['fc1'] = h_fc1_drop

        with tf.variable_scope('fc2') as scope:
            W_fc2 = weight_variable([1024, 43])
            b_fc2 = bias_variable([43])
            logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
            endpoints['logits'] = logits

        return logits, endpoints

    def model_fourlayers():
        endpoints = {}
        with tf.variable_scope('conv1') as scope:
            W_conv1 = weight_variable([5, 5, 1, 32])
            b_conv1 = bias_variable([32])
            h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
            h_pool1 = max_pool_2x2(h_conv1)
            endpoints['conv1'] = h_conv1
            endpoints['conv1_pool1'] = h_pool1

        with tf.variable_scope('conv2') as scope:
            W_conv2 = weight_variable([5, 5, 32, 64])
            b_conv2 = bias_variable([64])
            h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
            h_pool2 = max_pool_2x2(h_conv2)
            endpoints['conv2'] = h_conv2
            endpoints['conv2_pool2'] = h_pool2

        with tf.variable_scope('conv3') as scope:
            W_conv3 = weight_variable([5, 5, 64, 128])
            b_conv3 = bias_variable([128])
            h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
            h_pool3 = max_pool_2x2(h_conv3)
            endpoints['conv3'] = h_conv3
            endpoints['conv3_pool3'] = h_pool3

        with tf.variable_scope('conv4') as scope:
            W_conv4 = weight_variable([3, 3, 128, 256])
            b_conv4 = bias_variable([256])
            h_conv4 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)
            h_pool4 = max_pool_2x2(h_conv4)
            endpoints['conv4'] = h_conv4
            endpoints['conv4_pool4'] = h_pool4

        with tf.variable_scope('fc1') as scope:
            W_fc1 = weight_variable([2*2*256, 512])
            b_fc1 = bias_variable([512])
            h_pool4_flat = tf.reshape(h_pool4, [-1, 2*2*256])
            h_fc1 = tf.nn.relu(tf.matmul(h_pool4_flat, W_fc1) + b_fc1)
            h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
            endpoints['fc1'] = h_fc1_drop

        with tf.variable_scope('fc2') as scope:
            W_fc2 = weight_variable([512, 43])
            b_fc2 = bias_variable([43])
            logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
            endpoints['logits'] = logits

        return logits, endpoints

    def loss(logits, labels):
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))
        return loss

    def training(loss, learning_rate, name = 'Adam'):
        if name == 'GD':
            optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
        else:
            optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

        return optimizer

    def evaluation(logits, labels):
        correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        return accuracy
    
    def prediction_prob(logits):
        return tf.nn.softmax(logits)
    
    def prediction_class(logits):
        return tf.nn.top_k(logits)
    
    # This remains fixed for this graph. If you need to use model_fourlayers has to enable here
    logits, _ = model_twolayers()
    losses = loss(logits,labels)
    train_op = training(losses, 0.001)
    acc = evaluation(logits,labels)
    pred = prediction_prob(logits)
    pred_class = prediction_class(logits)
    init = tf.initialize_all_variables()
    saver = tf.train.Saver()

In [12]:
#Placeholder definition
import tensorflow as tf
graph_model_multi2 = tf.Graph()

with graph_model_multi2.as_default():
    
    features = tf.placeholder(tf.float32, [None, 32, 32])
    image = tf.reshape(features, [-1,32,32,1])
    keep_prob = tf.placeholder(tf.float32)
    lr_value = tf.placeholder(tf.float32)
    labels = tf.placeholder(tf.float32, [None, 43])

    def weight_variable(shape):
      initial = tf.truncated_normal(shape, stddev=0.1)
      return tf.Variable(initial)

    def bias_variable(shape):
      initial = tf.constant(0.1, shape=shape)
      return tf.Variable(initial)

    def conv2d(x, W):
      return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    def max_pool_2x2(x):
      return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                            strides=[1, 2, 2, 1], padding='SAME')

    def model():
        endpoints = {}
        with tf.variable_scope('conv1') as scope:
            W_conv1 = weight_variable([5, 5, 1, 32])
            b_conv1 = bias_variable([32])
            h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
            h_pool1 = max_pool_2x2(h_conv1)
            endpoints['conv1'] = h_conv1
            endpoints['conv1_pool1'] = h_pool1

        with tf.variable_scope('conv2') as scope:
            W_conv2 = weight_variable([5, 5, 32, 64])
            b_conv2 = bias_variable([64])
            h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
            h_pool2 = max_pool_2x2(h_conv2)
            endpoints['conv2'] = h_conv2
            endpoints['conv2_pool2'] = h_pool2
        
        with tf.variable_scope('combined') as scope:
            h_pool1_flat = tf.reshape(h_pool1, [-1, 16*16*32])
            h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*64])
            combined_pool_flat = tf.concat(1, [h_pool1_flat, h_pool2_flat])
            combined_pool_flat_shape = combined_pool_flat.get_shape()[1].value
            endpoints['combined'] = combined_pool_flat

        with tf.variable_scope('fc1') as scope:
            W_fc1 = weight_variable([combined_pool_flat_shape, 1024])
            b_fc1 = bias_variable([1024])
            h_fc1 = tf.nn.relu(tf.matmul(combined_pool_flat, W_fc1) + b_fc1)
            h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
            endpoints['fc1'] = h_fc1_drop
        
        with tf.variable_scope('fc2') as scope:
            W_fc2 = weight_variable([1024, 100])
            b_fc2 = bias_variable([100])
            h_fc2 = tf.nn.relu(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
            h_fc2_drop = tf.nn.dropout(h_fc2, keep_prob)
            endpoints['fc2'] = h_fc2_drop
            
        with tf.variable_scope('fc3') as scope:
            W_fc3 = weight_variable([100, 43])
            b_fc3 = bias_variable([43])
            logits = tf.matmul(h_fc2_drop, W_fc3) + b_fc3
            endpoints['logits'] = logits

        return logits, endpoints

    def loss(logits, labels):
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, labels))
        return loss

    def training(loss, learning_rate, name = 'Adam'):
        if name == 'GD':
            optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
        else:
            optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)

        return optimizer

    def evaluation(logits, labels):
        correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        return accuracy
    
    def prediction_prob(logits):
        return tf.nn.softmax(logits)
    
    def prediction_class(logits):
        return tf.nn.top_k(logits)
    
    logits, graphdef = model()
    losses = loss(logits,labels)
    train_op = training(losses, lr_value)
    acc = evaluation(logits,labels)
    pred = prediction_prob(logits)
    pred_class = prediction_class(logits)
    init = tf.initialize_all_variables()
    saver = tf.train.Saver()

print('MultiScale model2 defined..')
graphdef


MultiScale model2 defined..
Out[12]:
{'combined': <tf.Tensor 'combined/concat:0' shape=(?, 12288) dtype=float32>,
 'conv1': <tf.Tensor 'conv1/Relu:0' shape=(?, 32, 32, 32) dtype=float32>,
 'conv1_pool1': <tf.Tensor 'conv1/MaxPool:0' shape=(?, 16, 16, 32) dtype=float32>,
 'conv2': <tf.Tensor 'conv2/Relu:0' shape=(?, 16, 16, 64) dtype=float32>,
 'conv2_pool2': <tf.Tensor 'conv2/MaxPool:0' shape=(?, 8, 8, 64) dtype=float32>,
 'fc1': <tf.Tensor 'fc1/dropout/mul:0' shape=(?, 1024) dtype=float32>,
 'fc2': <tf.Tensor 'fc2/dropout/mul:0' shape=(?, 100) dtype=float32>,
 'logits': <tf.Tensor 'fc3/add:0' shape=(?, 43) dtype=float32>}

Question 3

What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.

Answer:

I have experimented with numerous model. Please see Traffic_Signs_Recognition_ver2.ipynb for more experiments. Basically my model can classified into two types as mentioned in Lecun's paper. Simple model with convolutional layers and multiscale model with convolutional layers

Simple Model: (Scrool above to see the model output with shape information)

Two Layers: conv1->relu1->pool1->conv2->relu2->pool2->fc1->fc1_drop->fc2 Four Layers:conv1->relu1->pool1->conv2->relu2->pool2->conv3->relu3->pool3->conv4->relu4->pool4->fc1->fc1_drop->fc2

MultiScale Model: (Scrool above to see the model output with shape information)

Two Layers: conv1->relu1->pool1->conv2->relu2->pool2->fc1->fc1_drop->fc2 | ^ V ------------------------->| (output of pool1 is combined with pool2 in fc1)


In [ ]:
### Train your model here.
### Feel free to use as many code cells as needed.

In [ ]:
def run_training(steps,batch_size,learning_rate,k_prob,use_jitter=False, modelname ='Two'):
    stime = time.time()
    sess = tf.InteractiveSession()

    if modelname == 'Four':
        logits, _ = model_fourlayers()
    else:
        logits, _ = model_twolayers()
    
    losses = loss(logits,labels)
    train_op = training(losses, learning_rate)
    acc = evaluation(logits,labels)
    tf.initialize_all_variables().run()
    
    train_acc_history = []
    val_acc_history = []
    loss_history = []
    log_batch_step = 1000
    batches_history = []
       
    if not use_jitter:
        norm_train_features = preprocess_images(train_features)
    else:
        norm_train_features = train_features
    
    norm_valid_features = preprocess_images(valid_features)
    norn_test_features = preprocess_images(X_test[1:1000])
    
    valid_feed_dict = {features: norm_valid_features, labels : valid_labels, keep_prob: k_prob}
    test_feed_dict = {features: norn_test_features, labels : test_labels[1:1000], keep_prob: 1}
    
    print('Training model with {} layers started for step: {} batchsize: {} learning_rate: {} keep_prob: {}'.format(modelname, steps,batch_size,learning_rate,k_prob))
    
    for step in range(steps):
        # Get a batch of training features and labels
        #batch_start = batch_i*batch_size
        #batch_features = train_features[batch_start:batch_start + batch_size]
        #batch_labels = train_labels[batch_start:batch_start + batch_size]
        batch_start = np.random.choice(norm_train_features.shape[0],batch_size)
        batch_features = norm_train_features[batch_start]
        batch_labels = train_labels[batch_start]
        if use_jitter:
            batch_features, batch_labels = jitter_image_data(batch_features, batch_labels, batch_size)

        # Run optimizer
        loss_value = sess.run(train_op, feed_dict={features: batch_features, labels: batch_labels, keep_prob: k_prob})
                
        if step%log_batch_step == 0:
            train_accuracy = acc.eval(feed_dict={features:batch_features, labels: batch_labels, keep_prob: 1})
            valid_accuracy = acc.eval(feed_dict=valid_feed_dict)
            test_accuracy = acc.eval(feed_dict=test_feed_dict)
            print("Steps %d, training accuracy: %g  validation accuracy: %g test accuracy: %g"%(step, train_accuracy, valid_accuracy, test_accuracy))
            previous_batch = batches_history[-1] if batches_history else 0
            batches_history.append(log_batch_step + previous_batch)
            train_acc_history.append(train_accuracy)
            val_acc_history.append(valid_accuracy)
            loss_history.append(loss_value)
        
    # Check accuracy against Test data
    test_accuracy = sess.run(acc, feed_dict=test_feed_dict)
    
    print('Training completed with test accuracy : {}'.format(test_accuracy))
    
    return batches_history, train_acc_history, val_acc_history, test_accuracy

In [ ]:
# fine tuning hyperparameters
# Flag to train with and without additional data
jit = [False, True]
# Select one of the two models
models = ['Two','Four']
# Steps size is fixed for experimentation so that performance can be observed at common point
steps = [10000]
# Experimenting with different batch sizes
batches = [32, 64, 128, 256]
# Experimenting with different learning rate
lr = [1e-2, 1e-3, 1e-4]
# Experimenting with different dropout probabilities
kp = [0.5, 0.65, 0.8]
result = []
best_acc = 0.0
best_params = []
for j in jit:
    for m in models:
        for s in steps:
            for b in batches:
                for l in lr:
                    for k in kp:
                        batch_val, train_acc, val_acc, test_acc = run_training(s,b,l,k,j,m)
                        if test_acc > best_acc:
                            best_acc = test_acc
                            best_params = [s,b,l,k,m]
                            batch_best = batch_val
                            acc_best = [train_acc, val_acc]

print('Best Validation Accuracy : {}'.format(best_acc))
print('Best parameters: steps:{} batches:{} learning_rate:{} keep_prob:{}'.format(best_params[0],best_params[1],best_params[2],best_params[3]))

acc_plot = plt.subplot()
acc_plot.set_title('Accuracy')
acc_plot.plot(batch_best, acc_best[0], 'g', label='Training Accuracy')
acc_plot.plot(batch_best, acc_best[1], 'r', label='Validation Accuracy')
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
Model Two Layer training log file without additional data: Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0479347 Steps 1000, training accuracy: 0 validation accuracy: 0.063743 Steps 2000, training accuracy: 0.125 validation accuracy: 0.0484447 Steps 3000, training accuracy: 0 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0.0625 validation accuracy: 0.0622132 Steps 5000, training accuracy: 0.03125 validation accuracy: 0.0530342 Steps 6000, training accuracy: 0 validation accuracy: 0.0601734 Steps 7000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 9000, training accuracy: 0 validation accuracy: 0.0474248 Training completed with test accuracy : 0.06306306272745132 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0377358 Steps 1000, training accuracy: 0.03125 validation accuracy: 0.0601734 Steps 2000, training accuracy: 0 validation accuracy: 0.0555839 Steps 3000, training accuracy: 0.03125 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0 validation accuracy: 0.0622132 Steps 5000, training accuracy: 0.03125 validation accuracy: 0.054564 Steps 6000, training accuracy: 0.03125 validation accuracy: 0.0520143 Steps 7000, training accuracy: 0.125 validation accuracy: 0.054564 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 9000, training accuracy: 0.0625 validation accuracy: 0.0520143 Training completed with test accuracy : 0.052052050828933716 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.03125 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.125 validation accuracy: 0.0520143 Steps 1000, training accuracy: 0.90625 validation accuracy: 0.811831 Steps 2000, training accuracy: 0.875 validation accuracy: 0.805201 Steps 3000, training accuracy: 0.90625 validation accuracy: 0.862825 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.845487 Steps 5000, training accuracy: 0.875 validation accuracy: 0.856706 Steps 6000, training accuracy: 1 validation accuracy: 0.868944 Steps 7000, training accuracy: 0.9375 validation accuracy: 0.873534 Steps 8000, training accuracy: 0.96875 validation accuracy: 0.889342 Steps 9000, training accuracy: 1 validation accuracy: 0.899031 Training completed with test accuracy : 0.7767767906188965 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0428353 Steps 1000, training accuracy: 0.84375 validation accuracy: 0.681285 Steps 2000, training accuracy: 0.84375 validation accuracy: 0.759816 Steps 3000, training accuracy: 0.90625 validation accuracy: 0.794493 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.81591 Steps 5000, training accuracy: 0.90625 validation accuracy: 0.82101 Steps 6000, training accuracy: 0.90625 validation accuracy: 0.830698 Steps 7000, training accuracy: 0.96875 validation accuracy: 0.82152 Steps 8000, training accuracy: 0.9375 validation accuracy: 0.845487 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.811831 Training completed with test accuracy : 0.7407407760620117 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.25 validation accuracy: 0.0418154 Steps 1000, training accuracy: 0.96875 validation accuracy: 0.864355 Steps 2000, training accuracy: 1 validation accuracy: 0.938807 Steps 3000, training accuracy: 1 validation accuracy: 0.952575 Steps 4000, training accuracy: 0.96875 validation accuracy: 0.962774 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.959204 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.969913 Steps 7000, training accuracy: 1 validation accuracy: 0.976032 Steps 8000, training accuracy: 1 validation accuracy: 0.970933 Steps 9000, training accuracy: 1 validation accuracy: 0.978072 Training completed with test accuracy : 0.9269269108772278 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.1875 validation accuracy: 0.0464049 Steps 1000, training accuracy: 0.8125 validation accuracy: 0.776135 Steps 2000, training accuracy: 0.9375 validation accuracy: 0.866395 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.91229 Steps 4000, training accuracy: 1 validation accuracy: 0.938297 Steps 5000, training accuracy: 1 validation accuracy: 0.944416 Steps 6000, training accuracy: 1 validation accuracy: 0.958185 Steps 7000, training accuracy: 1 validation accuracy: 0.967364 Steps 8000, training accuracy: 1 validation accuracy: 0.953085 Steps 9000, training accuracy: 1 validation accuracy: 0.973993 Training completed with test accuracy : 0.8718718886375427 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.15625 validation accuracy: 0.054564 Steps 1000, training accuracy: 1 validation accuracy: 0.91127 Steps 2000, training accuracy: 1 validation accuracy: 0.955635 Steps 3000, training accuracy: 1 validation accuracy: 0.964304 Steps 4000, training accuracy: 1 validation accuracy: 0.979602 Steps 5000, training accuracy: 1 validation accuracy: 0.976543 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.979092 Steps 7000, training accuracy: 1 validation accuracy: 0.973993 Steps 8000, training accuracy: 1 validation accuracy: 0.978582 Steps 9000, training accuracy: 1 validation accuracy: 0.971443 Training completed with test accuracy : 0.9179179072380066 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.1875 validation accuracy: 0.0316165 Steps 1000, training accuracy: 0.90625 validation accuracy: 0.901071 Steps 2000, training accuracy: 0.96875 validation accuracy: 0.954105 Steps 3000, training accuracy: 1 validation accuracy: 0.966344 Steps 4000, training accuracy: 1 validation accuracy: 0.977562 Steps 5000, training accuracy: 1 validation accuracy: 0.976033 Steps 6000, training accuracy: 1 validation accuracy: 0.983172 Steps 7000, training accuracy: 1 validation accuracy: 0.981642 Steps 8000, training accuracy: 1 validation accuracy: 0.981132 Steps 9000, training accuracy: 1 validation accuracy: 0.985212 Training completed with test accuracy : 0.9189189076423645 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0300867 Steps 1000, training accuracy: 0.625 validation accuracy: 0.439572 Steps 2000, training accuracy: 0.75 validation accuracy: 0.63794 Steps 3000, training accuracy: 0.90625 validation accuracy: 0.752167 Steps 4000, training accuracy: 0.875 validation accuracy: 0.797042 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.848547 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.885773 Steps 7000, training accuracy: 1 validation accuracy: 0.90872 Steps 8000, training accuracy: 0.96875 validation accuracy: 0.925548 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.937277 Training completed with test accuracy : 0.8858858942985535 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0260071 Steps 1000, training accuracy: 0.53125 validation accuracy: 0.515043 Steps 2000, training accuracy: 0.78125 validation accuracy: 0.703723 Steps 3000, training accuracy: 0.875 validation accuracy: 0.799592 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.833758 Steps 5000, training accuracy: 0.90625 validation accuracy: 0.891382 Steps 6000, training accuracy: 1 validation accuracy: 0.915349 Steps 7000, training accuracy: 0.9375 validation accuracy: 0.926568 Steps 8000, training accuracy: 1 validation accuracy: 0.946966 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.945946 Training completed with test accuracy : 0.8888888955116272 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.03125 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0158083 Steps 1000, training accuracy: 0.65625 validation accuracy: 0.661397 Steps 2000, training accuracy: 0.875 validation accuracy: 0.791943 Steps 3000, training accuracy: 0.875 validation accuracy: 0.890872 Steps 4000, training accuracy: 0.96875 validation accuracy: 0.916879 Steps 5000, training accuracy: 0.9375 validation accuracy: 0.928098 Steps 6000, training accuracy: 1 validation accuracy: 0.951045 Steps 7000, training accuracy: 1 validation accuracy: 0.960224 Steps 8000, training accuracy: 1 validation accuracy: 0.951555 Steps 9000, training accuracy: 1 validation accuracy: 0.977052 Training completed with test accuracy : 0.8668668270111084 Training model with Two layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0341662 Steps 1000, training accuracy: 0.6875 validation accuracy: 0.640999 Steps 2000, training accuracy: 0.9375 validation accuracy: 0.785314 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.861805 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.90668 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.9128 Steps 6000, training accuracy: 1 validation accuracy: 0.941866 Steps 7000, training accuracy: 1 validation accuracy: 0.950025 Steps 8000, training accuracy: 1 validation accuracy: 0.958695 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.968383 Training completed with test accuracy : 0.890890896320343 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.140625 validation accuracy: 0.0428353 Steps 1000, training accuracy: 0.890625 validation accuracy: 0.754207 Steps 2000, training accuracy: 0.921875 validation accuracy: 0.833758 Steps 3000, training accuracy: 0.9375 validation accuracy: 0.832228 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.81846 Steps 5000, training accuracy: 0.9375 validation accuracy: 0.826619 Steps 6000, training accuracy: 0.859375 validation accuracy: 0.835798 Steps 7000, training accuracy: 0.96875 validation accuracy: 0.857216 Steps 8000, training accuracy: 0.890625 validation accuracy: 0.810301 Steps 9000, training accuracy: 0.859375 validation accuracy: 0.776135 Training completed with test accuracy : 0.7807807922363281 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0305966 Steps 1000, training accuracy: 0.8125 validation accuracy: 0.763896 Steps 2000, training accuracy: 0.921875 validation accuracy: 0.746558 Steps 3000, training accuracy: 0.875 validation accuracy: 0.81744 Steps 4000, training accuracy: 0.953125 validation accuracy: 0.829169 Steps 5000, training accuracy: 0.890625 validation accuracy: 0.842427 Steps 6000, training accuracy: 0.90625 validation accuracy: 0.841407 Steps 7000, training accuracy: 0.875 validation accuracy: 0.863845 Steps 8000, training accuracy: 0.953125 validation accuracy: 0.846507 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.800102 Training completed with test accuracy : 0.7907907962799072 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.010010010562837124 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0163182 Steps 1000, training accuracy: 0.921875 validation accuracy: 0.825089 Steps 2000, training accuracy: 0.953125 validation accuracy: 0.867414 Steps 3000, training accuracy: 0.984375 validation accuracy: 0.886792 Steps 4000, training accuracy: 0.984375 validation accuracy: 0.893932 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.902601 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.90464 Steps 7000, training accuracy: 0.875 validation accuracy: 0.869964 Steps 8000, training accuracy: 0.984375 validation accuracy: 0.91025 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.923508 Training completed with test accuracy : 0.7617617845535278 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.125 validation accuracy: 0.0443651 Steps 1000, training accuracy: 0.046875 validation accuracy: 0.0622132 Steps 2000, training accuracy: 0.046875 validation accuracy: 0.0622132 Steps 3000, training accuracy: 0.09375 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0.109375 validation accuracy: 0.0530342 Steps 5000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0.046875 validation accuracy: 0.0530342 Steps 7000, training accuracy: 0.078125 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 9000, training accuracy: 0.0625 validation accuracy: 0.0520143 Training completed with test accuracy : 0.044044043868780136 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.078125 validation accuracy: 0.045385 Steps 1000, training accuracy: 0.953125 validation accuracy: 0.891892 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.955125 Steps 3000, training accuracy: 1 validation accuracy: 0.959714 Steps 4000, training accuracy: 1 validation accuracy: 0.972973 Steps 5000, training accuracy: 1 validation accuracy: 0.980112 Steps 6000, training accuracy: 0.984375 validation accuracy: 0.981642 Steps 7000, training accuracy: 1 validation accuracy: 0.983682 Steps 8000, training accuracy: 1 validation accuracy: 0.981132 Steps 9000, training accuracy: 1 validation accuracy: 0.983172 Training completed with test accuracy : 0.9319319128990173 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.078125 validation accuracy: 0.0484447 Steps 1000, training accuracy: 0.96875 validation accuracy: 0.920449 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.977562 Steps 3000, training accuracy: 1 validation accuracy: 0.980112 Steps 4000, training accuracy: 1 validation accuracy: 0.980622 Steps 5000, training accuracy: 1 validation accuracy: 0.984702 Steps 6000, training accuracy: 1 validation accuracy: 0.984702 Steps 7000, training accuracy: 1 validation accuracy: 0.983682 Steps 8000, training accuracy: 1 validation accuracy: 0.983172 Steps 9000, training accuracy: 1 validation accuracy: 0.990311 Training completed with test accuracy : 0.9279279112815857 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0392657 Steps 1000, training accuracy: 1 validation accuracy: 0.947986 Steps 2000, training accuracy: 1 validation accuracy: 0.964304 Steps 3000, training accuracy: 0.984375 validation accuracy: 0.970423 Steps 4000, training accuracy: 1 validation accuracy: 0.972463 Steps 5000, training accuracy: 1 validation accuracy: 0.984192 Steps 6000, training accuracy: 1 validation accuracy: 0.985212 Steps 7000, training accuracy: 1 validation accuracy: 0.986231 Steps 8000, training accuracy: 1 validation accuracy: 0.983172 Steps 9000, training accuracy: 1 validation accuracy: 0.982662 Training completed with test accuracy : 0.9199199080467224 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.15625 validation accuracy: 0.0504844 Steps 1000, training accuracy: 1 validation accuracy: 0.935237 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.961244 Steps 3000, training accuracy: 1 validation accuracy: 0.979092 Steps 4000, training accuracy: 0.984375 validation accuracy: 0.977052 Steps 5000, training accuracy: 1 validation accuracy: 0.977562 Steps 6000, training accuracy: 1 validation accuracy: 0.978582 Steps 7000, training accuracy: 1 validation accuracy: 0.980622 Steps 8000, training accuracy: 1 validation accuracy: 0.986741 Steps 9000, training accuracy: 1 validation accuracy: 0.981642 Training completed with test accuracy : 0.9259259104728699 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0356961 Steps 1000, training accuracy: 0.625 validation accuracy: 0.54207 Steps 2000, training accuracy: 0.84375 validation accuracy: 0.741458 Steps 3000, training accuracy: 0.9375 validation accuracy: 0.833248 Steps 4000, training accuracy: 0.96875 validation accuracy: 0.869454 Steps 5000, training accuracy: 1 validation accuracy: 0.898011 Steps 6000, training accuracy: 0.953125 validation accuracy: 0.91229 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.948496 Steps 8000, training accuracy: 0.9375 validation accuracy: 0.951555 Steps 9000, training accuracy: 1 validation accuracy: 0.958185 Training completed with test accuracy : 0.9109109044075012 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0158083 Steps 1000, training accuracy: 0.765625 validation accuracy: 0.598164 Steps 2000, training accuracy: 0.921875 validation accuracy: 0.778684 Steps 3000, training accuracy: 0.90625 validation accuracy: 0.869454 Steps 4000, training accuracy: 0.953125 validation accuracy: 0.893932 Steps 5000, training accuracy: 1 validation accuracy: 0.922488 Steps 6000, training accuracy: 0.984375 validation accuracy: 0.943906 Steps 7000, training accuracy: 0.96875 validation accuracy: 0.959714 Steps 8000, training accuracy: 1 validation accuracy: 0.963284 Steps 9000, training accuracy: 1 validation accuracy: 0.970423 Training completed with test accuracy : 0.8988988995552063 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.03125 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0239674 Steps 1000, training accuracy: 0.71875 validation accuracy: 0.752677 Steps 2000, training accuracy: 0.90625 validation accuracy: 0.868434 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.91331 Steps 4000, training accuracy: 1 validation accuracy: 0.958185 Steps 5000, training accuracy: 1 validation accuracy: 0.968893 Steps 6000, training accuracy: 1 validation accuracy: 0.971953 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.978582 Steps 8000, training accuracy: 1 validation accuracy: 0.979092 Steps 9000, training accuracy: 1 validation accuracy: 0.987251 Training completed with test accuracy : 0.890890896320343 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0219276 Steps 1000, training accuracy: 0.78125 validation accuracy: 0.657318 Steps 2000, training accuracy: 0.890625 validation accuracy: 0.824069 Steps 3000, training accuracy: 0.953125 validation accuracy: 0.889852 Steps 4000, training accuracy: 0.984375 validation accuracy: 0.922998 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.952575 Steps 6000, training accuracy: 1 validation accuracy: 0.954105 Steps 7000, training accuracy: 1 validation accuracy: 0.964814 Steps 8000, training accuracy: 1 validation accuracy: 0.970933 Steps 9000, training accuracy: 1 validation accuracy: 0.973993 Training completed with test accuracy : 0.8898898959159851 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.078125 validation accuracy: 0.054564 Steps 1000, training accuracy: 0.890625 validation accuracy: 0.81642 Steps 2000, training accuracy: 0.875 validation accuracy: 0.8205 Steps 3000, training accuracy: 0.929688 validation accuracy: 0.866395 Steps 4000, training accuracy: 0.929688 validation accuracy: 0.870984 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.870984 Steps 6000, training accuracy: 1 validation accuracy: 0.868944 Steps 7000, training accuracy: 0.960938 validation accuracy: 0.862315 Steps 8000, training accuracy: 0.960938 validation accuracy: 0.886792 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.886282 Training completed with test accuracy : 0.8138138055801392 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0448751 Steps 1000, training accuracy: 0.03125 validation accuracy: 0.0622132 Steps 2000, training accuracy: 0.0390625 validation accuracy: 0.0525242 Steps 3000, training accuracy: 0.0546875 validation accuracy: 0.0530342 Steps 4000, training accuracy: 0.046875 validation accuracy: 0.0601734 Steps 5000, training accuracy: 0.03125 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0.0625 validation accuracy: 0.0622132 Steps 7000, training accuracy: 0.0546875 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.09375 validation accuracy: 0.0622132 Steps 9000, training accuracy: 0.0390625 validation accuracy: 0.0622132 Training completed with test accuracy : 0.06306306272745132 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.0234375 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.010010010562837124 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.101562 validation accuracy: 0.0617032 Steps 1000, training accuracy: 0.078125 validation accuracy: 0.0474248 Steps 2000, training accuracy: 0.03125 validation accuracy: 0.054564 Steps 3000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0.09375 validation accuracy: 0.0520143 Steps 5000, training accuracy: 0.046875 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 7000, training accuracy: 0.078125 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 9000, training accuracy: 0.03125 validation accuracy: 0.0622132 Training completed with test accuracy : 0.044044043868780136 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.117188 validation accuracy: 0.0530342 Steps 1000, training accuracy: 0.03125 validation accuracy: 0.0494646 Steps 2000, training accuracy: 0.0625 validation accuracy: 0.0540541 Steps 3000, training accuracy: 0.0234375 validation accuracy: 0.0601734 Steps 4000, training accuracy: 0.0625 validation accuracy: 0.0530342 Steps 5000, training accuracy: 0.0625 validation accuracy: 0.0601734 Steps 6000, training accuracy: 0.078125 validation accuracy: 0.0520143 Steps 7000, training accuracy: 0.101562 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0390625 validation accuracy: 0.0530342 Steps 9000, training accuracy: 0.046875 validation accuracy: 0.0530342 Training completed with test accuracy : 0.052052054554224014 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.027027 Steps 1000, training accuracy: 0.960938 validation accuracy: 0.933197 Steps 2000, training accuracy: 1 validation accuracy: 0.962774 Steps 3000, training accuracy: 1 validation accuracy: 0.973993 Steps 4000, training accuracy: 1 validation accuracy: 0.983682 Steps 5000, training accuracy: 1 validation accuracy: 0.984192 Steps 6000, training accuracy: 1 validation accuracy: 0.986231 Steps 7000, training accuracy: 1 validation accuracy: 0.986741 Steps 8000, training accuracy: 1 validation accuracy: 0.984702 Steps 9000, training accuracy: 1 validation accuracy: 0.987761 Training completed with test accuracy : 0.935935914516449 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0392657 Steps 1000, training accuracy: 1 validation accuracy: 0.961754 Steps 2000, training accuracy: 1 validation accuracy: 0.982662 Steps 3000, training accuracy: 1 validation accuracy: 0.989801 Steps 4000, training accuracy: 1 validation accuracy: 0.990821 Steps 5000, training accuracy: 1 validation accuracy: 0.989291 Steps 6000, training accuracy: 1 validation accuracy: 0.991331 Steps 7000, training accuracy: 1 validation accuracy: 0.989801 Steps 8000, training accuracy: 1 validation accuracy: 0.993371 Steps 9000, training accuracy: 1 validation accuracy: 0.989801 Training completed with test accuracy : 0.9519519209861755 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.0234375 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.046875 validation accuracy: 0.027537 Steps 1000, training accuracy: 0.992188 validation accuracy: 0.977562 Steps 2000, training accuracy: 1 validation accuracy: 0.977562 Steps 3000, training accuracy: 1 validation accuracy: 0.991841 Steps 4000, training accuracy: 1 validation accuracy: 0.986741 Steps 5000, training accuracy: 1 validation accuracy: 0.991841 Steps 6000, training accuracy: 1 validation accuracy: 0.993881 Steps 7000, training accuracy: 1 validation accuracy: 0.994391 Steps 8000, training accuracy: 1 validation accuracy: 0.987761 Steps 9000, training accuracy: 1 validation accuracy: 0.99694 Training completed with test accuracy : 0.9449449181556702 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.0546875 validation accuracy: 0.0540541 Steps 1000, training accuracy: 1 validation accuracy: 0.950025 Steps 2000, training accuracy: 1 validation accuracy: 0.973483 Steps 3000, training accuracy: 1 validation accuracy: 0.978072 Steps 4000, training accuracy: 1 validation accuracy: 0.983682 Steps 5000, training accuracy: 1 validation accuracy: 0.981642 Steps 6000, training accuracy: 1 validation accuracy: 0.984702 Steps 7000, training accuracy: 1 validation accuracy: 0.989801 Steps 8000, training accuracy: 1 validation accuracy: 0.992351 Steps 9000, training accuracy: 1 validation accuracy: 0.983172 Training completed with test accuracy : 0.912912905216217 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.0078125 validation accuracy: 0.0290668 Steps 1000, training accuracy: 0.851562 validation accuracy: 0.672106 Steps 2000, training accuracy: 0.929688 validation accuracy: 0.833758 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.887812 Steps 4000, training accuracy: 0.976562 validation accuracy: 0.926568 Steps 5000, training accuracy: 0.992188 validation accuracy: 0.951555 Steps 6000, training accuracy: 0.992188 validation accuracy: 0.956145 Steps 7000, training accuracy: 1 validation accuracy: 0.959714 Steps 8000, training accuracy: 1 validation accuracy: 0.965834 Steps 9000, training accuracy: 0.992188 validation accuracy: 0.969403 Training completed with test accuracy : 0.9089089035987854 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.0234375 validation accuracy: 0.0219276 Steps 1000, training accuracy: 0.726562 validation accuracy: 0.627231 Steps 2000, training accuracy: 0.890625 validation accuracy: 0.809281 Steps 3000, training accuracy: 0.984375 validation accuracy: 0.883223 Steps 4000, training accuracy: 0.984375 validation accuracy: 0.920959 Steps 5000, training accuracy: 0.992188 validation accuracy: 0.947986 Steps 6000, training accuracy: 1 validation accuracy: 0.954105 Steps 7000, training accuracy: 0.992188 validation accuracy: 0.967873 Steps 8000, training accuracy: 0.992188 validation accuracy: 0.968383 Steps 9000, training accuracy: 0.992188 validation accuracy: 0.971443 Training completed with test accuracy : 0.8928928971290588 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.0234375 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.015625 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.0625 validation accuracy: 0.0617032 Steps 1000, training accuracy: 0.859375 validation accuracy: 0.834778 Steps 2000, training accuracy: 0.929688 validation accuracy: 0.915859 Steps 3000, training accuracy: 0.976562 validation accuracy: 0.958185 Steps 4000, training accuracy: 0.992188 validation accuracy: 0.965834 Steps 5000, training accuracy: 0.976562 validation accuracy: 0.962774 Steps 6000, training accuracy: 0.992188 validation accuracy: 0.985212 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.967874 Steps 8000, training accuracy: 1 validation accuracy: 0.987761 Steps 9000, training accuracy: 1 validation accuracy: 0.982152 Training completed with test accuracy : 0.8888888955116272 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.015625 validation accuracy: 0.0249873 Steps 1000, training accuracy: 0.867188 validation accuracy: 0.766956 Steps 2000, training accuracy: 0.960938 validation accuracy: 0.895461 Steps 3000, training accuracy: 0.992188 validation accuracy: 0.931667 Steps 4000, training accuracy: 1 validation accuracy: 0.958185 Steps 5000, training accuracy: 1 validation accuracy: 0.959204 Steps 6000, training accuracy: 1 validation accuracy: 0.979602 Steps 7000, training accuracy: 1 validation accuracy: 0.976542 Steps 8000, training accuracy: 1 validation accuracy: 0.983172 Steps 9000, training accuracy: 1 validation accuracy: 0.986741 Training completed with test accuracy : 0.8948948979377747 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.078125 validation accuracy: 0.0591535 Steps 1000, training accuracy: 0.0546875 validation accuracy: 0.0540541 Steps 2000, training accuracy: 0.0507812 validation accuracy: 0.0520143 Steps 3000, training accuracy: 0.0351562 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0.0664062 validation accuracy: 0.0622132 Steps 5000, training accuracy: 0.0546875 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0.0429688 validation accuracy: 0.0622132 Steps 7000, training accuracy: 0.078125 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 9000, training accuracy: 0.046875 validation accuracy: 0.0520143 Training completed with test accuracy : 0.06306306272745132 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.0507812 validation accuracy: 0.0535441 Steps 1000, training accuracy: 0.957031 validation accuracy: 0.885263 Steps 2000, training accuracy: 0.96875 validation accuracy: 0.90923 Steps 3000, training accuracy: 0.980469 validation accuracy: 0.90923 Steps 4000, training accuracy: 0.992188 validation accuracy: 0.917389 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.91178 Steps 6000, training accuracy: 0.972656 validation accuracy: 0.888322 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.9128 Steps 8000, training accuracy: 0.992188 validation accuracy: 0.920449 Steps 9000, training accuracy: 0.992188 validation accuracy: 0.9128 Training completed with test accuracy : 0.8368368148803711 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.0117188 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.0195312 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.0859375 validation accuracy: 0.0622132 Steps 1000, training accuracy: 0.972656 validation accuracy: 0.932687 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.946456 Steps 3000, training accuracy: 0.992188 validation accuracy: 0.942376 Steps 4000, training accuracy: 1 validation accuracy: 0.948496 Steps 5000, training accuracy: 1 validation accuracy: 0.945436 Steps 6000, training accuracy: 0.996094 validation accuracy: 0.939827 Steps 7000, training accuracy: 0.976562 validation accuracy: 0.932687 Steps 8000, training accuracy: 0.992188 validation accuracy: 0.946966 Steps 9000, training accuracy: 0.984375 validation accuracy: 0.942886 Training completed with test accuracy : 0.770770788192749 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.078125 validation accuracy: 0.0520143 Steps 1000, training accuracy: 0.976562 validation accuracy: 0.923508 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.936767 Steps 3000, training accuracy: 0.988281 validation accuracy: 0.932687 Steps 4000, training accuracy: 0.988281 validation accuracy: 0.922488 Steps 5000, training accuracy: 1 validation accuracy: 0.932177 Steps 6000, training accuracy: 0.980469 validation accuracy: 0.938807 Steps 7000, training accuracy: 0.996094 validation accuracy: 0.944416 Steps 8000, training accuracy: 1 validation accuracy: 0.944926 Steps 9000, training accuracy: 0.992188 validation accuracy: 0.930648 Training completed with test accuracy : 0.8458458185195923 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0742188 validation accuracy: 0.0571137 Steps 1000, training accuracy: 0.980469 validation accuracy: 0.952575 Steps 2000, training accuracy: 0.996094 validation accuracy: 0.979602 Steps 3000, training accuracy: 1 validation accuracy: 0.980622 Steps 4000, training accuracy: 1 validation accuracy: 0.981132 Steps 5000, training accuracy: 1 validation accuracy: 0.983682 Steps 6000, training accuracy: 1 validation accuracy: 0.983172 Steps 7000, training accuracy: 1 validation accuracy: 0.988781 Steps 8000, training accuracy: 1 validation accuracy: 0.983682 Steps 9000, training accuracy: 1 validation accuracy: 0.985211 Training completed with test accuracy : 0.9429429173469543 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0742188 validation accuracy: 0.0356961 Steps 1000, training accuracy: 1 validation accuracy: 0.971953 Steps 2000, training accuracy: 1 validation accuracy: 0.987761 Steps 3000, training accuracy: 1 validation accuracy: 0.989291 Steps 4000, training accuracy: 1 validation accuracy: 0.991331 Steps 5000, training accuracy: 1 validation accuracy: 0.988781 Steps 6000, training accuracy: 1 validation accuracy: 0.989291 Steps 7000, training accuracy: 1 validation accuracy: 0.99592 Steps 8000, training accuracy: 1 validation accuracy: 0.993371 Steps 9000, training accuracy: 1 validation accuracy: 0.991841 Training completed with test accuracy : 0.9449449181556702 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.0117188 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.0117188 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.0117188 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0742188 validation accuracy: 0.0520143 Steps 1000, training accuracy: 0.988281 validation accuracy: 0.981642 Steps 2000, training accuracy: 1 validation accuracy: 0.988271 Steps 3000, training accuracy: 1 validation accuracy: 0.987251 Steps 4000, training accuracy: 1 validation accuracy: 0.986741 Steps 5000, training accuracy: 1 validation accuracy: 0.993371 Steps 6000, training accuracy: 1 validation accuracy: 0.993881 Steps 7000, training accuracy: 1 validation accuracy: 0.993371 Steps 8000, training accuracy: 1 validation accuracy: 0.992351 Steps 9000, training accuracy: 1 validation accuracy: 0.991331 Training completed with test accuracy : 0.9219219088554382 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0546875 validation accuracy: 0.054564 Steps 1000, training accuracy: 1 validation accuracy: 0.974503 Steps 2000, training accuracy: 1 validation accuracy: 0.981132 Steps 3000, training accuracy: 1 validation accuracy: 0.987761 Steps 4000, training accuracy: 1 validation accuracy: 0.986741 Steps 5000, training accuracy: 1 validation accuracy: 0.989801 Steps 6000, training accuracy: 1 validation accuracy: 0.985212 Steps 7000, training accuracy: 1 validation accuracy: 0.992861 Steps 8000, training accuracy: 1 validation accuracy: 0.984702 Steps 9000, training accuracy: 1 validation accuracy: 0.992351 Training completed with test accuracy : 0.9209209084510803 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0078125 validation accuracy: 0.0188679 Steps 1000, training accuracy: 0.824219 validation accuracy: 0.692504 Steps 2000, training accuracy: 0.945312 validation accuracy: 0.849567 Steps 3000, training accuracy: 0.988281 validation accuracy: 0.90515 Steps 4000, training accuracy: 0.992188 validation accuracy: 0.945436 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.955125 Steps 6000, training accuracy: 0.996094 validation accuracy: 0.963794 Steps 7000, training accuracy: 0.996094 validation accuracy: 0.972463 Steps 8000, training accuracy: 0.996094 validation accuracy: 0.976033 Steps 9000, training accuracy: 0.996094 validation accuracy: 0.979092 Training completed with test accuracy : 0.9049049019813538 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0546875 validation accuracy: 0.0300867 Steps 1000, training accuracy: 0.886719 validation accuracy: 0.763896 Steps 2000, training accuracy: 0.949219 validation accuracy: 0.886792 Steps 3000, training accuracy: 0.996094 validation accuracy: 0.925548 Steps 4000, training accuracy: 0.988281 validation accuracy: 0.947986 Steps 5000, training accuracy: 1 validation accuracy: 0.963284 Steps 6000, training accuracy: 0.996094 validation accuracy: 0.970933 Steps 7000, training accuracy: 0.996094 validation accuracy: 0.973993 Steps 8000, training accuracy: 1 validation accuracy: 0.981132 Steps 9000, training accuracy: 1 validation accuracy: 0.980622 Training completed with test accuracy : 0.9029029011726379 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.0117188 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0117188 validation accuracy: 0.00968893 Steps 1000, training accuracy: 0.886719 validation accuracy: 0.865885 Steps 2000, training accuracy: 0.972656 validation accuracy: 0.949006 Steps 3000, training accuracy: 0.976562 validation accuracy: 0.972973 Steps 4000, training accuracy: 0.992188 validation accuracy: 0.978072 Steps 5000, training accuracy: 0.996094 validation accuracy: 0.982152 Steps 6000, training accuracy: 1 validation accuracy: 0.984702 Steps 7000, training accuracy: 1 validation accuracy: 0.980622 Steps 8000, training accuracy: 1 validation accuracy: 0.987251 Steps 9000, training accuracy: 0.996094 validation accuracy: 0.981132 Training completed with test accuracy : 0.8618618845939636 Training model with Two layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0273438 validation accuracy: 0.0173381 Steps 1000, training accuracy: 0.898438 validation accuracy: 0.836818 Steps 2000, training accuracy: 0.957031 validation accuracy: 0.921979 Steps 3000, training accuracy: 0.992188 validation accuracy: 0.946456 Steps 4000, training accuracy: 1 validation accuracy: 0.971443 Steps 5000, training accuracy: 0.992188 validation accuracy: 0.976033 Steps 6000, training accuracy: 1 validation accuracy: 0.982152 Steps 7000, training accuracy: 1 validation accuracy: 0.982662 Steps 8000, training accuracy: 1 validation accuracy: 0.985212 Steps 9000, training accuracy: 1 validation accuracy: 0.986231 Training completed with test accuracy : 0.8998998999595642 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0469148 Steps 1000, training accuracy: 0.125 validation accuracy: 0.0576237 Steps 2000, training accuracy: 0.03125 validation accuracy: 0.0520143 Steps 3000, training accuracy: 0.03125 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0.125 validation accuracy: 0.0530342 Steps 5000, training accuracy: 0 validation accuracy: 0.0622132 Steps 6000, training accuracy: 0.03125 validation accuracy: 0.0622132 Steps 7000, training accuracy: 0 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0622132 Steps 9000, training accuracy: 0.125 validation accuracy: 0.054564 Training completed with test accuracy : 0.06406406313180923 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0494646 Steps 1000, training accuracy: 0.78125 validation accuracy: 0.556349 Steps 2000, training accuracy: 0.625 validation accuracy: 0.63692 Steps 3000, training accuracy: 0.8125 validation accuracy: 0.698623 Steps 4000, training accuracy: 0.90625 validation accuracy: 0.738399 Steps 5000, training accuracy: 0.8125 validation accuracy: 0.704233 Steps 6000, training accuracy: 0.8125 validation accuracy: 0.742988 Steps 7000, training accuracy: 0.71875 validation accuracy: 0.740948 Steps 8000, training accuracy: 0.84375 validation accuracy: 0.705762 Steps 9000, training accuracy: 0.625 validation accuracy: 0.755227 Training completed with test accuracy : 0.7127127647399902 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0158083 Steps 1000, training accuracy: 0.1875 validation accuracy: 0.247833 Steps 2000, training accuracy: 0.625 validation accuracy: 0.518103 Steps 3000, training accuracy: 0.53125 validation accuracy: 0.569607 Steps 4000, training accuracy: 0.90625 validation accuracy: 0.696583 Steps 5000, training accuracy: 0.59375 validation accuracy: 0.716981 Steps 6000, training accuracy: 0.78125 validation accuracy: 0.753697 Steps 7000, training accuracy: 0.84375 validation accuracy: 0.756247 Steps 8000, training accuracy: 0.875 validation accuracy: 0.788373 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.798572 Training completed with test accuracy : 0.7097097635269165 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0397756 Steps 1000, training accuracy: 0.625 validation accuracy: 0.588985 Steps 2000, training accuracy: 0.78125 validation accuracy: 0.683325 Steps 3000, training accuracy: 0.8125 validation accuracy: 0.683835 Steps 4000, training accuracy: 0.8125 validation accuracy: 0.750127 Steps 5000, training accuracy: 0.84375 validation accuracy: 0.738399 Steps 6000, training accuracy: 0.84375 validation accuracy: 0.762876 Steps 7000, training accuracy: 0.8125 validation accuracy: 0.753697 Steps 8000, training accuracy: 0.59375 validation accuracy: 0.737889 Steps 9000, training accuracy: 0.6875 validation accuracy: 0.703723 Training completed with test accuracy : 0.7327327728271484 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.125 validation accuracy: 0.0117287 Steps 1000, training accuracy: 0.96875 validation accuracy: 0.765936 Steps 2000, training accuracy: 1 validation accuracy: 0.891892 Steps 3000, training accuracy: 0.9375 validation accuracy: 0.932177 Steps 4000, training accuracy: 1 validation accuracy: 0.959714 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.91076 Steps 6000, training accuracy: 1 validation accuracy: 0.963794 Steps 7000, training accuracy: 1 validation accuracy: 0.981642 Steps 8000, training accuracy: 1 validation accuracy: 0.977052 Steps 9000, training accuracy: 1 validation accuracy: 0.956655 Training completed with test accuracy : 0.9349349141120911 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0346762 Steps 1000, training accuracy: 0.875 validation accuracy: 0.827639 Steps 2000, training accuracy: 0.96875 validation accuracy: 0.932177 Steps 3000, training accuracy: 1 validation accuracy: 0.957165 Steps 4000, training accuracy: 1 validation accuracy: 0.959714 Steps 5000, training accuracy: 1 validation accuracy: 0.965834 Steps 6000, training accuracy: 1 validation accuracy: 0.965324 Steps 7000, training accuracy: 1 validation accuracy: 0.973483 Steps 8000, training accuracy: 1 validation accuracy: 0.983172 Steps 9000, training accuracy: 1 validation accuracy: 0.983682 Training completed with test accuracy : 0.9259259104728699 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.045895 Steps 1000, training accuracy: 0.90625 validation accuracy: 0.894952 Steps 2000, training accuracy: 0.9375 validation accuracy: 0.946456 Steps 3000, training accuracy: 1 validation accuracy: 0.957165 Steps 4000, training accuracy: 1 validation accuracy: 0.964304 Steps 5000, training accuracy: 1 validation accuracy: 0.972973 Steps 6000, training accuracy: 1 validation accuracy: 0.972973 Steps 7000, training accuracy: 1 validation accuracy: 0.979092 Steps 8000, training accuracy: 1 validation accuracy: 0.968383 Steps 9000, training accuracy: 1 validation accuracy: 0.984192 Training completed with test accuracy : 0.9109109044075012 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0168281 Steps 1000, training accuracy: 0.875 validation accuracy: 0.853136 Steps 2000, training accuracy: 1 validation accuracy: 0.946966 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.938297 Steps 4000, training accuracy: 1 validation accuracy: 0.964814 Steps 5000, training accuracy: 1 validation accuracy: 0.978582 Steps 6000, training accuracy: 1 validation accuracy: 0.980622 Steps 7000, training accuracy: 1 validation accuracy: 0.978582 Steps 8000, training accuracy: 1 validation accuracy: 0.975013 Steps 9000, training accuracy: 1 validation accuracy: 0.970423 Training completed with test accuracy : 0.9509509205818176 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0229475 Steps 1000, training accuracy: 0.5625 validation accuracy: 0.324324 Steps 2000, training accuracy: 0.65625 validation accuracy: 0.498215 Steps 3000, training accuracy: 0.75 validation accuracy: 0.619582 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.696073 Steps 5000, training accuracy: 0.875 validation accuracy: 0.764406 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.811321 Steps 7000, training accuracy: 1 validation accuracy: 0.834778 Steps 8000, training accuracy: 1 validation accuracy: 0.879653 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.893422 Training completed with test accuracy : 0.8678678274154663 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0229475 Steps 1000, training accuracy: 0.5625 validation accuracy: 0.402346 Steps 2000, training accuracy: 0.78125 validation accuracy: 0.603264 Steps 3000, training accuracy: 0.9375 validation accuracy: 0.72514 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.791433 Steps 5000, training accuracy: 0.9375 validation accuracy: 0.843957 Steps 6000, training accuracy: 1 validation accuracy: 0.875574 Steps 7000, training accuracy: 0.9375 validation accuracy: 0.891892 Steps 8000, training accuracy: 1 validation accuracy: 0.896991 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.924018 Training completed with test accuracy : 0.8858858346939087 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.03125 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.00968893 Steps 1000, training accuracy: 0.5625 validation accuracy: 0.552779 Steps 2000, training accuracy: 0.84375 validation accuracy: 0.756757 Steps 3000, training accuracy: 0.875 validation accuracy: 0.853136 Steps 4000, training accuracy: 0.96875 validation accuracy: 0.873534 Steps 5000, training accuracy: 1 validation accuracy: 0.926058 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.932687 Steps 7000, training accuracy: 0.96875 validation accuracy: 0.939827 Steps 8000, training accuracy: 1 validation accuracy: 0.961244 Steps 9000, training accuracy: 0.9375 validation accuracy: 0.951555 Training completed with test accuracy : 0.8548548221588135 Training model with Four layers started for step: 10000 batchsize: 32 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0142784 Steps 1000, training accuracy: 0.625 validation accuracy: 0.511474 Steps 2000, training accuracy: 0.78125 validation accuracy: 0.705252 Steps 3000, training accuracy: 0.90625 validation accuracy: 0.803162 Steps 4000, training accuracy: 0.8125 validation accuracy: 0.852116 Steps 5000, training accuracy: 0.875 validation accuracy: 0.893932 Steps 6000, training accuracy: 1 validation accuracy: 0.918409 Steps 7000, training accuracy: 1 validation accuracy: 0.934727 Steps 8000, training accuracy: 1 validation accuracy: 0.939827 Steps 9000, training accuracy: 1 validation accuracy: 0.946966 Training completed with test accuracy : 0.8638638854026794
Model Two Layer training log with additional data: Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 keep_prob: 0.5 Steps 0, training accuracy: 0.234375 validation accuracy: 0.0499745 Steps 1000, training accuracy: 0.34375 validation accuracy: 0.559408 Steps 2000, training accuracy: 0.3125 validation accuracy: 0.757267 Steps 3000, training accuracy: 0.46875 validation accuracy: 0.858745 Steps 4000, training accuracy: 0.53125 validation accuracy: 0.893422 Steps 5000, training accuracy: 0.453125 validation accuracy: 0.917899 Steps 6000, training accuracy: 0.796875 validation accuracy: 0.918919 Steps 7000, training accuracy: 0.5625 validation accuracy: 0.934727 Steps 8000, training accuracy: 0.65625 validation accuracy: 0.942886 Steps 9000, training accuracy: 0.6875 validation accuracy: 0.945946 Training completed with test accuracy : 0.9049049019813538 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 keep_prob: 0.65 Steps 0, training accuracy: 0.1875 validation accuracy: 0.0438552 Steps 1000, training accuracy: 0.40625 validation accuracy: 0.72565 Steps 2000, training accuracy: 0.4375 validation accuracy: 0.861805 Steps 3000, training accuracy: 0.234375 validation accuracy: 0.921469 Steps 4000, training accuracy: 0.46875 validation accuracy: 0.937787 Steps 5000, training accuracy: 0.5 validation accuracy: 0.935747 Steps 6000, training accuracy: 0.59375 validation accuracy: 0.942886 Steps 7000, training accuracy: 0.640625 validation accuracy: 0.957165 Steps 8000, training accuracy: 0.75 validation accuracy: 0.941866 Steps 9000, training accuracy: 0.75 validation accuracy: 0.958184 Training completed with test accuracy : 0.923923909664154 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.001 keep_prob: 0.8 Steps 0, training accuracy: 0.3125 validation accuracy: 0.0387557 Steps 1000, training accuracy: 0.265625 validation accuracy: 0.674146 Steps 2000, training accuracy: 0.265625 validation accuracy: 0.830699 Steps 3000, training accuracy: 0.46875 validation accuracy: 0.884753 Steps 4000, training accuracy: 0.5 validation accuracy: 0.902601 Steps 5000, training accuracy: 0.40625 validation accuracy: 0.90821 Steps 6000, training accuracy: 0.625 validation accuracy: 0.937787 Steps 7000, training accuracy: 0.75 validation accuracy: 0.948496 Steps 8000, training accuracy: 0.53125 validation accuracy: 0.950025 Steps 9000, training accuracy: 0.65625 validation accuracy: 0.945946 Training completed with test accuracy : 0.8768768310546875 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 keep_prob: 0.5 Steps 0, training accuracy: 0 validation accuracy: 0.0209077 Steps 1000, training accuracy: 0.109375 validation accuracy: 0.235594 Steps 2000, training accuracy: 0.1875 validation accuracy: 0.410505 Steps 3000, training accuracy: 0.109375 validation accuracy: 0.471698 Steps 4000, training accuracy: 0.25 validation accuracy: 0.604793 Steps 5000, training accuracy: 0.328125 validation accuracy: 0.670576 Steps 6000, training accuracy: 0.25 validation accuracy: 0.72514 Steps 7000, training accuracy: 0.296875 validation accuracy: 0.757777 Steps 8000, training accuracy: 0.390625 validation accuracy: 0.798572 Steps 9000, training accuracy: 0.21875 validation accuracy: 0.842937 Training completed with test accuracy : 0.8218218088150024 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 keep_prob: 0.65 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0321265 Steps 1000, training accuracy: 0.0625 validation accuracy: 0.318205 Steps 2000, training accuracy: 0.1875 validation accuracy: 0.486996 Steps 3000, training accuracy: 0.078125 validation accuracy: 0.575727 Steps 4000, training accuracy: 0.28125 validation accuracy: 0.618562 Steps 5000, training accuracy: 0.296875 validation accuracy: 0.73126 Steps 6000, training accuracy: 0.1875 validation accuracy: 0.766446 Steps 7000, training accuracy: 0.390625 validation accuracy: 0.810811 Steps 8000, training accuracy: 0.328125 validation accuracy: 0.825089 Steps 9000, training accuracy: 0.515625 validation accuracy: 0.863845 Training completed with test accuracy : 0.8218218088150024 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 keep_prob: 0.8 Steps 0, training accuracy: 0 validation accuracy: 0.0198878 Steps 1000, training accuracy: 0.03125 validation accuracy: 0.36104 Steps 2000, training accuracy: 0.25 validation accuracy: 0.510454 Steps 3000, training accuracy: 0.28125 validation accuracy: 0.621622 Steps 4000, training accuracy: 0.3125 validation accuracy: 0.707802 Steps 5000, training accuracy: 0.296875 validation accuracy: 0.778174 Steps 6000, training accuracy: 0.1875 validation accuracy: 0.782254 Steps 7000, training accuracy: 0.28125 validation accuracy: 0.836308 Steps 8000, training accuracy: 0.25 validation accuracy: 0.880163 Steps 9000, training accuracy: 0.375 validation accuracy: 0.873024 Training completed with test accuracy : 0.8398398160934448 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 1e-05 keep_prob: 0.5 Steps 0, training accuracy: 0 validation accuracy: 0.036206 Steps 1000, training accuracy: 0.25 validation accuracy: 0.0678225 Steps 2000, training accuracy: 0.046875 validation accuracy: 0.131566 Steps 3000, training accuracy: 0.046875 validation accuracy: 0.175931 Steps 4000, training accuracy: 0.0625 validation accuracy: 0.220806 Steps 5000, training accuracy: 0.109375 validation accuracy: 0.280469 Steps 6000, training accuracy: 0.078125 validation accuracy: 0.296787 Steps 7000, training accuracy: 0.28125 validation accuracy: 0.326364 Steps 8000, training accuracy: 0.109375 validation accuracy: 0.356451 Steps 9000, training accuracy: 0.171875 validation accuracy: 0.375319 Training completed with test accuracy : 0.44044044613838196 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 1e-05 keep_prob: 0.65 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0346762 Steps 1000, training accuracy: 0.125 validation accuracy: 0.0657828 Steps 2000, training accuracy: 0.09375 validation accuracy: 0.119837 Steps 3000, training accuracy: 0.171875 validation accuracy: 0.18052 Steps 4000, training accuracy: 0.046875 validation accuracy: 0.234574 Steps 5000, training accuracy: 0.125 validation accuracy: 0.285569 Steps 6000, training accuracy: 0.1875 validation accuracy: 0.320755 Steps 7000, training accuracy: 0.109375 validation accuracy: 0.36308 Steps 8000, training accuracy: 0.28125 validation accuracy: 0.355431 Steps 9000, training accuracy: 0.109375 validation accuracy: 0.397246 Training completed with test accuracy : 0.4144144058227539 Training model with Two layers started for step: 10000 batchsize: 64 learning_rate: 1e-05 keep_prob: 0.8 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0122387 Steps 1000, training accuracy: 0.109375 validation accuracy: 0.072412 Steps 2000, training accuracy: 0.0625 validation accuracy: 0.138705 Steps 3000, training accuracy: 0.203125 validation accuracy: 0.219786 Steps 4000, training accuracy: 0.21875 validation accuracy: 0.265681 Steps 5000, training accuracy: 0.109375 validation accuracy: 0.338603 Steps 6000, training accuracy: 0.21875 validation accuracy: 0.36002 Steps 7000, training accuracy: 0.15625 validation accuracy: 0.418664 Steps 8000, training accuracy: 0.09375 validation accuracy: 0.420194 Steps 9000, training accuracy: 0.125 validation accuracy: 0.439572 Training completed with test accuracy : 0.4964964985847473 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 keep_prob: 0.5 Steps 0, training accuracy: 0.117188 validation accuracy: 0.0142784 Steps 1000, training accuracy: 0.226562 validation accuracy: 0.688934 Steps 2000, training accuracy: 0.28125 validation accuracy: 0.833758 Steps 3000, training accuracy: 0.507812 validation accuracy: 0.91076 Steps 4000, training accuracy: 0.46875 validation accuracy: 0.926568 Steps 5000, training accuracy: 0.570312 validation accuracy: 0.929628 Steps 6000, training accuracy: 0.640625 validation accuracy: 0.946966 Steps 7000, training accuracy: 0.546875 validation accuracy: 0.948496 Steps 8000, training accuracy: 0.625 validation accuracy: 0.955635 Steps 9000, training accuracy: 0.625 validation accuracy: 0.962774 Training completed with test accuracy : 0.913913905620575 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 keep_prob: 0.65 Steps 0, training accuracy: 0.109375 validation accuracy: 0.0397756 Steps 1000, training accuracy: 0.398438 validation accuracy: 0.770525 Steps 2000, training accuracy: 0.351562 validation accuracy: 0.870474 Steps 3000, training accuracy: 0.640625 validation accuracy: 0.91127 Steps 4000, training accuracy: 0.632812 validation accuracy: 0.925548 Steps 5000, training accuracy: 0.664062 validation accuracy: 0.941356 Steps 6000, training accuracy: 0.6875 validation accuracy: 0.951555 Steps 7000, training accuracy: 0.703125 validation accuracy: 0.960224 Steps 8000, training accuracy: 0.703125 validation accuracy: 0.956655 Steps 9000, training accuracy: 0.71875 validation accuracy: 0.949006 Training completed with test accuracy : 0.9049049019813538 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.001 keep_prob: 0.8 Steps 0, training accuracy: 0.195312 validation accuracy: 0.0423253 Steps 1000, training accuracy: 0.28125 validation accuracy: 0.759306 Steps 2000, training accuracy: 0.367188 validation accuracy: 0.856196 Steps 3000, training accuracy: 0.445312 validation accuracy: 0.913819 Steps 4000, training accuracy: 0.554688 validation accuracy: 0.922489 Steps 5000, training accuracy: 0.578125 validation accuracy: 0.931157 Steps 6000, training accuracy: 0.679688 validation accuracy: 0.942376 Steps 7000, training accuracy: 0.585938 validation accuracy: 0.938297 Steps 8000, training accuracy: 0.71875 validation accuracy: 0.957165 Steps 9000, training accuracy: 0.71875 validation accuracy: 0.959204 Training completed with test accuracy : 0.8828828930854797 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 keep_prob: 0.5 Steps 0, training accuracy: 0.0390625 validation accuracy: 0.0198878 Steps 1000, training accuracy: 0.203125 validation accuracy: 0.391637 Steps 2000, training accuracy: 0.257812 validation accuracy: 0.507904 Steps 3000, training accuracy: 0.304688 validation accuracy: 0.686894 Steps 4000, training accuracy: 0.28125 validation accuracy: 0.758796 Steps 5000, training accuracy: 0.265625 validation accuracy: 0.794493 Steps 6000, training accuracy: 0.296875 validation accuracy: 0.831208 Steps 7000, training accuracy: 0.320312 validation accuracy: 0.867924 Steps 8000, training accuracy: 0.429688 validation accuracy: 0.884753 Steps 9000, training accuracy: 0.507812 validation accuracy: 0.90668 Training completed with test accuracy : 0.8728728890419006 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 keep_prob: 0.65 Steps 0, training accuracy: 0.148438 validation accuracy: 0.0219276 Steps 1000, training accuracy: 0.265625 validation accuracy: 0.374809 Steps 2000, training accuracy: 0.351562 validation accuracy: 0.565018 Steps 3000, training accuracy: 0.367188 validation accuracy: 0.694544 Steps 4000, training accuracy: 0.375 validation accuracy: 0.758797 Steps 5000, training accuracy: 0.289062 validation accuracy: 0.797042 Steps 6000, training accuracy: 0.273438 validation accuracy: 0.851096 Steps 7000, training accuracy: 0.460938 validation accuracy: 0.862315 Steps 8000, training accuracy: 0.421875 validation accuracy: 0.885773 Steps 9000, training accuracy: 0.375 validation accuracy: 0.90464 Training completed with test accuracy : 0.8418418169021606 Training model with Two layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 keep_prob: 0.8 Steps 0, training accuracy: 0.0625 validation accuracy: 0.0326364 Steps 1000, training accuracy: 0.164062 validation accuracy: 0.460479 Steps 2000, training accuracy: 0.273438 validation accuracy: 0.631821 Steps 3000, training accuracy: 0.367188 validation accuracy: 0.761856 Steps 4000, training accuracy: 0.375 validation accuracy: 0.802652 Steps 5000, training accuracy: 0.367188 validation accuracy: 0.837838 Steps 6000, training accuracy: 0.390625 validation accuracy: 0.867925 Steps 7000, training accuracy: 0.34375 validation accuracy: 0.901581 Steps 8000, training accuracy: 0.429688 validation accuracy: 0.90923 Steps 9000, training accuracy: 0.4375 validation accuracy: 0.929628 Training completed with test accuracy : 0.8738738894462585
Model Four Layer training log without additional data Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0244773 Steps 1000, training accuracy: 0.734375 validation accuracy: 0.620092 Steps 2000, training accuracy: 0.78125 validation accuracy: 0.670576 Steps 3000, training accuracy: 0.765625 validation accuracy: 0.732279 Steps 4000, training accuracy: 0.828125 validation accuracy: 0.72259 Steps 5000, training accuracy: 0.859375 validation accuracy: 0.745538 Steps 6000, training accuracy: 0.875 validation accuracy: 0.756757 Steps 7000, training accuracy: 0.84375 validation accuracy: 0.799082 Steps 8000, training accuracy: 0.90625 validation accuracy: 0.771545 Steps 9000, training accuracy: 0.859375 validation accuracy: 0.778174 Training completed with test accuracy : 0.6976977586746216 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0382458 Steps 1000, training accuracy: 0.640625 validation accuracy: 0.593575 Steps 2000, training accuracy: 0.765625 validation accuracy: 0.683325 Steps 3000, training accuracy: 0.796875 validation accuracy: 0.769505 Steps 4000, training accuracy: 0.828125 validation accuracy: 0.763896 Steps 5000, training accuracy: 0.859375 validation accuracy: 0.781234 Steps 6000, training accuracy: 0.859375 validation accuracy: 0.806221 Steps 7000, training accuracy: 0.78125 validation accuracy: 0.797552 Steps 8000, training accuracy: 0.875 validation accuracy: 0.770525 Steps 9000, training accuracy: 1 validation accuracy: 0.779704 Training completed with test accuracy : 0.7527527809143066 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0520143 Steps 1000, training accuracy: 0.78125 validation accuracy: 0.768485 Steps 2000, training accuracy: 0.953125 validation accuracy: 0.807241 Steps 3000, training accuracy: 0.90625 validation accuracy: 0.838348 Steps 4000, training accuracy: 0.921875 validation accuracy: 0.873534 Steps 5000, training accuracy: 0.859375 validation accuracy: 0.864355 Steps 6000, training accuracy: 0.890625 validation accuracy: 0.870984 Steps 7000, training accuracy: 0.953125 validation accuracy: 0.864355 Steps 8000, training accuracy: 0.9375 validation accuracy: 0.847527 Steps 9000, training accuracy: 0.90625 validation accuracy: 0.837838 Training completed with test accuracy : 0.7747747898101807 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 Steps 0, training accuracy: 0.0625 validation accuracy: 0.0520143 Steps 1000, training accuracy: 0.8125 validation accuracy: 0.806731 Steps 2000, training accuracy: 0.90625 validation accuracy: 0.831719 Steps 3000, training accuracy: 0.953125 validation accuracy: 0.866905 Steps 4000, training accuracy: 0.859375 validation accuracy: 0.81591 Steps 5000, training accuracy: 0.953125 validation accuracy: 0.861295 Steps 6000, training accuracy: 0.875 validation accuracy: 0.848037 Steps 7000, training accuracy: 0.921875 validation accuracy: 0.805711 Steps 8000, training accuracy: 0.234375 validation accuracy: 0.152983 Steps 9000, training accuracy: 0.03125 validation accuracy: 0.0520143 Training completed with test accuracy : 0.054054051637649536 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.09375 validation accuracy: 0.027027 Steps 1000, training accuracy: 0.921875 validation accuracy: 0.838858 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.941866 Steps 3000, training accuracy: 1 validation accuracy: 0.954105 Steps 4000, training accuracy: 0.984375 validation accuracy: 0.979092 Steps 5000, training accuracy: 1 validation accuracy: 0.986741 Steps 6000, training accuracy: 1 validation accuracy: 0.987761 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.977052 Steps 8000, training accuracy: 1 validation accuracy: 0.984192 Steps 9000, training accuracy: 1 validation accuracy: 0.983682 Training completed with test accuracy : 0.9319319128990173 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.0625 validation accuracy: 0.036716 Steps 1000, training accuracy: 0.953125 validation accuracy: 0.888832 Steps 2000, training accuracy: 0.984375 validation accuracy: 0.950025 Steps 3000, training accuracy: 1 validation accuracy: 0.965324 Steps 4000, training accuracy: 1 validation accuracy: 0.973483 Steps 5000, training accuracy: 1 validation accuracy: 0.975013 Steps 6000, training accuracy: 1 validation accuracy: 0.971953 Steps 7000, training accuracy: 1 validation accuracy: 0.984192 Steps 8000, training accuracy: 1 validation accuracy: 0.980112 Steps 9000, training accuracy: 1 validation accuracy: 0.985722 Training completed with test accuracy : 0.9199199080467224 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.010010010562837124 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0122387 Steps 1000, training accuracy: 0.953125 validation accuracy: 0.927588 Steps 2000, training accuracy: 1 validation accuracy: 0.948496 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.972973 Steps 4000, training accuracy: 1 validation accuracy: 0.976033 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.967874 Steps 6000, training accuracy: 1 validation accuracy: 0.980622 Steps 7000, training accuracy: 1 validation accuracy: 0.982152 Steps 8000, training accuracy: 1 validation accuracy: 0.983172 Steps 9000, training accuracy: 1 validation accuracy: 0.975523 Training completed with test accuracy : 0.9209209084510803 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 Steps 0, training accuracy: 0.078125 validation accuracy: 0.0464049 Steps 1000, training accuracy: 0.9375 validation accuracy: 0.901071 Steps 2000, training accuracy: 1 validation accuracy: 0.956145 Steps 3000, training accuracy: 1 validation accuracy: 0.963284 Steps 4000, training accuracy: 1 validation accuracy: 0.970933 Steps 5000, training accuracy: 1 validation accuracy: 0.978072 Steps 6000, training accuracy: 0.984375 validation accuracy: 0.970423 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.985212 Steps 8000, training accuracy: 1 validation accuracy: 0.981132 Steps 9000, training accuracy: 1 validation accuracy: 0.989291 Training completed with test accuracy : 0.9259259104728699 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0321265 Steps 1000, training accuracy: 0.5625 validation accuracy: 0.446201 Steps 2000, training accuracy: 0.78125 validation accuracy: 0.641 Steps 3000, training accuracy: 0.859375 validation accuracy: 0.763896 Steps 4000, training accuracy: 0.921875 validation accuracy: 0.826619 Steps 5000, training accuracy: 0.9375 validation accuracy: 0.869454 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.896481 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.903111 Steps 8000, training accuracy: 1 validation accuracy: 0.938297 Steps 9000, training accuracy: 0.984375 validation accuracy: 0.932177 Training completed with test accuracy : 0.8718718886375427 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0.0625 validation accuracy: 0.0346762 Steps 1000, training accuracy: 0.734375 validation accuracy: 0.503315 Steps 2000, training accuracy: 0.8125 validation accuracy: 0.670066 Steps 3000, training accuracy: 1 validation accuracy: 0.804182 Steps 4000, training accuracy: 0.921875 validation accuracy: 0.847017 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.887812 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.899541 Steps 7000, training accuracy: 1 validation accuracy: 0.924528 Steps 8000, training accuracy: 0.984375 validation accuracy: 0.940336 Steps 9000, training accuracy: 1 validation accuracy: 0.960734 Training completed with test accuracy : 0.8888888955116272 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.015625 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0 validation accuracy: 0.0193779 Steps 1000, training accuracy: 0.703125 validation accuracy: 0.628251 Steps 2000, training accuracy: 0.84375 validation accuracy: 0.81744 Steps 3000, training accuracy: 0.921875 validation accuracy: 0.883733 Steps 4000, training accuracy: 0.96875 validation accuracy: 0.940846 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.940336 Steps 6000, training accuracy: 1 validation accuracy: 0.954105 Steps 7000, training accuracy: 1 validation accuracy: 0.971443 Steps 8000, training accuracy: 0.921875 validation accuracy: 0.951045 Steps 9000, training accuracy: 1 validation accuracy: 0.975523 Training completed with test accuracy : 0.8418418765068054 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0346762 Steps 1000, training accuracy: 0.65625 validation accuracy: 0.570627 Steps 2000, training accuracy: 0.859375 validation accuracy: 0.751657 Steps 3000, training accuracy: 0.859375 validation accuracy: 0.807241 Steps 4000, training accuracy: 0.921875 validation accuracy: 0.90668 Steps 5000, training accuracy: 1 validation accuracy: 0.916879 Steps 6000, training accuracy: 1 validation accuracy: 0.942376 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.947476 Steps 8000, training accuracy: 1 validation accuracy: 0.953595 Steps 9000, training accuracy: 1 validation accuracy: 0.965324 Training completed with test accuracy : 0.8668668866157532 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.0859375 validation accuracy: 0.0535441 Steps 1000, training accuracy: 0.882812 validation accuracy: 0.767466 Steps 2000, training accuracy: 0.90625 validation accuracy: 0.838348 Steps 3000, training accuracy: 0.921875 validation accuracy: 0.885773 Steps 4000, training accuracy: 0.804688 validation accuracy: 0.786843 Steps 5000, training accuracy: 0.945312 validation accuracy: 0.871494 Steps 6000, training accuracy: 0.976562 validation accuracy: 0.870984 Steps 7000, training accuracy: 0.828125 validation accuracy: 0.763896 Steps 8000, training accuracy: 0.867188 validation accuracy: 0.774605 Steps 9000, training accuracy: 0.90625 validation accuracy: 0.864865 Training completed with test accuracy : 0.7797797918319702 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.078125 validation accuracy: 0.0560938 Steps 1000, training accuracy: 0.960938 validation accuracy: 0.892912 Steps 2000, training accuracy: 0.898438 validation accuracy: 0.841407 Steps 3000, training accuracy: 0.9375 validation accuracy: 0.887812 Steps 4000, training accuracy: 0.9375 validation accuracy: 0.916879 Steps 5000, training accuracy: 0.9375 validation accuracy: 0.896481 Steps 6000, training accuracy: 0.929688 validation accuracy: 0.891382 Steps 7000, training accuracy: 0.90625 validation accuracy: 0.879143 Steps 8000, training accuracy: 0.882812 validation accuracy: 0.844977 Steps 9000, training accuracy: 0.929688 validation accuracy: 0.849567 Training completed with test accuracy : 0.8078078031539917 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.010010010562837124 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.117188 validation accuracy: 0.0530342 Steps 1000, training accuracy: 0.945312 validation accuracy: 0.868434 Steps 2000, training accuracy: 0.867188 validation accuracy: 0.888832 Steps 3000, training accuracy: 0.945312 validation accuracy: 0.891892 Steps 4000, training accuracy: 0.929688 validation accuracy: 0.90515 Steps 5000, training accuracy: 0.945312 validation accuracy: 0.90872 Steps 6000, training accuracy: 0.851562 validation accuracy: 0.882203 Steps 7000, training accuracy: 0.960938 validation accuracy: 0.91127 Steps 8000, training accuracy: 0.914062 validation accuracy: 0.886792 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.929628 Training completed with test accuracy : 0.8398398160934448 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 Steps 0, training accuracy: 0.0859375 validation accuracy: 0.045385 Steps 1000, training accuracy: 0.953125 validation accuracy: 0.900051 Steps 2000, training accuracy: 0.953125 validation accuracy: 0.899541 Steps 3000, training accuracy: 0.945312 validation accuracy: 0.873024 Steps 4000, training accuracy: 0.945312 validation accuracy: 0.917389 Steps 5000, training accuracy: 0.976562 validation accuracy: 0.926058 Steps 6000, training accuracy: 0.9375 validation accuracy: 0.91331 Steps 7000, training accuracy: 0.898438 validation accuracy: 0.876084 Steps 8000, training accuracy: 0.90625 validation accuracy: 0.889852 Steps 9000, training accuracy: 0.921875 validation accuracy: 0.859255 Training completed with test accuracy : 0.837837815284729 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.0546875 validation accuracy: 0.0372259 Steps 1000, training accuracy: 0.96875 validation accuracy: 0.90923 Steps 2000, training accuracy: 1 validation accuracy: 0.964814 Steps 3000, training accuracy: 1 validation accuracy: 0.965834 Steps 4000, training accuracy: 1 validation accuracy: 0.977052 Steps 5000, training accuracy: 1 validation accuracy: 0.980112 Steps 6000, training accuracy: 1 validation accuracy: 0.981132 Steps 7000, training accuracy: 1 validation accuracy: 0.983172 Steps 8000, training accuracy: 1 validation accuracy: 0.984192 Steps 9000, training accuracy: 1 validation accuracy: 0.986231 Training completed with test accuracy : 0.9519519209861755 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0499745 Steps 1000, training accuracy: 0.976562 validation accuracy: 0.920449 Steps 2000, training accuracy: 0.992188 validation accuracy: 0.967364 Steps 3000, training accuracy: 1 validation accuracy: 0.976543 Steps 4000, training accuracy: 1 validation accuracy: 0.982662 Steps 5000, training accuracy: 1 validation accuracy: 0.988271 Steps 6000, training accuracy: 1 validation accuracy: 0.989291 Steps 7000, training accuracy: 1 validation accuracy: 0.985211 Steps 8000, training accuracy: 1 validation accuracy: 0.987761 Steps 9000, training accuracy: 1 validation accuracy: 0.988271 Training completed with test accuracy : 0.9289289116859436 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.015625 validation accuracy: 0.0214176 Steps 1000, training accuracy: 0.960938 validation accuracy: 0.935747 Steps 2000, training accuracy: 0.992188 validation accuracy: 0.959714 Steps 3000, training accuracy: 1 validation accuracy: 0.976542 Steps 4000, training accuracy: 1 validation accuracy: 0.984192 Steps 5000, training accuracy: 1 validation accuracy: 0.981132 Steps 6000, training accuracy: 1 validation accuracy: 0.986231 Steps 7000, training accuracy: 0.992188 validation accuracy: 0.964814 Steps 8000, training accuracy: 1 validation accuracy: 0.983172 Steps 9000, training accuracy: 1 validation accuracy: 0.984192 Training completed with test accuracy : 0.9189189076423645 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0448751 Steps 1000, training accuracy: 0.984375 validation accuracy: 0.930138 Steps 2000, training accuracy: 0.976562 validation accuracy: 0.952065 Steps 3000, training accuracy: 1 validation accuracy: 0.979602 Steps 4000, training accuracy: 1 validation accuracy: 0.962774 Steps 5000, training accuracy: 1 validation accuracy: 0.984702 Steps 6000, training accuracy: 1 validation accuracy: 0.985212 Steps 7000, training accuracy: 1 validation accuracy: 0.983682 Steps 8000, training accuracy: 1 validation accuracy: 0.983682 Steps 9000, training accuracy: 1 validation accuracy: 0.990821 Training completed with test accuracy : 0.9219219088554382 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.0625 validation accuracy: 0.0341662 Steps 1000, training accuracy: 0.734375 validation accuracy: 0.517593 Steps 2000, training accuracy: 0.867188 validation accuracy: 0.720041 Steps 3000, training accuracy: 0.953125 validation accuracy: 0.81693 Steps 4000, training accuracy: 0.976562 validation accuracy: 0.864355 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.90515 Steps 6000, training accuracy: 0.960938 validation accuracy: 0.921468 Steps 7000, training accuracy: 1 validation accuracy: 0.945946 Steps 8000, training accuracy: 1 validation accuracy: 0.941866 Steps 9000, training accuracy: 1 validation accuracy: 0.955125 Training completed with test accuracy : 0.8988988399505615 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0224375 Steps 1000, training accuracy: 0.664062 validation accuracy: 0.579806 Steps 2000, training accuracy: 0.890625 validation accuracy: 0.776645 Steps 3000, training accuracy: 0.953125 validation accuracy: 0.830699 Steps 4000, training accuracy: 0.953125 validation accuracy: 0.887302 Steps 5000, training accuracy: 0.953125 validation accuracy: 0.91331 Steps 6000, training accuracy: 0.992188 validation accuracy: 0.937277 Steps 7000, training accuracy: 1 validation accuracy: 0.943906 Steps 8000, training accuracy: 1 validation accuracy: 0.956655 Steps 9000, training accuracy: 0.992188 validation accuracy: 0.960224 Training completed with test accuracy : 0.8818818926811218 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.03125 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.015625 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.0859375 validation accuracy: 0.0530342 Steps 1000, training accuracy: 0.882812 validation accuracy: 0.776135 Steps 2000, training accuracy: 0.945312 validation accuracy: 0.887812 Steps 3000, training accuracy: 0.976562 validation accuracy: 0.926568 Steps 4000, training accuracy: 0.992188 validation accuracy: 0.945436 Steps 5000, training accuracy: 1 validation accuracy: 0.967363 Steps 6000, training accuracy: 0.984375 validation accuracy: 0.970423 Steps 7000, training accuracy: 0.992188 validation accuracy: 0.972973 Steps 8000, training accuracy: 1 validation accuracy: 0.982662 Steps 9000, training accuracy: 1 validation accuracy: 0.972973 Training completed with test accuracy : 0.8518518209457397 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 Steps 0, training accuracy: 0.03125 validation accuracy: 0.0214176 Steps 1000, training accuracy: 0.742188 validation accuracy: 0.650688 Steps 2000, training accuracy: 0.929688 validation accuracy: 0.827639 Steps 3000, training accuracy: 0.976562 validation accuracy: 0.878123 Steps 4000, training accuracy: 0.953125 validation accuracy: 0.913819 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.934217 Steps 6000, training accuracy: 0.992188 validation accuracy: 0.951555 Steps 7000, training accuracy: 1 validation accuracy: 0.956655 Steps 8000, training accuracy: 1 validation accuracy: 0.966854 Steps 9000, training accuracy: 1 validation accuracy: 0.970933 Training completed with test accuracy : 0.8608608841896057 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.0585938 validation accuracy: 0.0392657 Steps 1000, training accuracy: 0.9375 validation accuracy: 0.899031 Steps 2000, training accuracy: 0.988281 validation accuracy: 0.933197 Steps 3000, training accuracy: 0.972656 validation accuracy: 0.935237 Steps 4000, training accuracy: 0.964844 validation accuracy: 0.928608 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.929628 Steps 6000, training accuracy: 0.96875 validation accuracy: 0.933707 Steps 7000, training accuracy: 0.964844 validation accuracy: 0.914839 Steps 8000, training accuracy: 0.945312 validation accuracy: 0.919939 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.921979 Training completed with test accuracy : 0.8808808326721191 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.0390625 validation accuracy: 0.036206 Steps 1000, training accuracy: 0.941406 validation accuracy: 0.866395 Steps 2000, training accuracy: 0.964844 validation accuracy: 0.90974 Steps 3000, training accuracy: 0.957031 validation accuracy: 0.902601 Steps 4000, training accuracy: 0.960938 validation accuracy: 0.870984 Steps 5000, training accuracy: 0.96875 validation accuracy: 0.901581 Steps 6000, training accuracy: 0.960938 validation accuracy: 0.916879 Steps 7000, training accuracy: 0.953125 validation accuracy: 0.886792 Steps 8000, training accuracy: 0.953125 validation accuracy: 0.916369 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.9128 Training completed with test accuracy : 0.8308308124542236 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.015625 validation accuracy: 0.0372259 Steps 1000, training accuracy: 0.976562 validation accuracy: 0.952065 Steps 2000, training accuracy: 0.988281 validation accuracy: 0.956655 Steps 3000, training accuracy: 0.972656 validation accuracy: 0.945946 Steps 4000, training accuracy: 0.992188 validation accuracy: 0.952575 Steps 5000, training accuracy: 0.964844 validation accuracy: 0.946966 Steps 6000, training accuracy: 0.984375 validation accuracy: 0.953595 Steps 7000, training accuracy: 0.992188 validation accuracy: 0.964304 Steps 8000, training accuracy: 0.96875 validation accuracy: 0.961244 Steps 9000, training accuracy: 0.964844 validation accuracy: 0.947986 Training completed with test accuracy : 0.8148148059844971 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.01 Steps 0, training accuracy: 0.0703125 validation accuracy: 0.0474248 Steps 1000, training accuracy: 0.960938 validation accuracy: 0.930138 Steps 2000, training accuracy: 0.960938 validation accuracy: 0.927588 Steps 3000, training accuracy: 0.964844 validation accuracy: 0.918409 Steps 4000, training accuracy: 0.980469 validation accuracy: 0.950025 Steps 5000, training accuracy: 0.976562 validation accuracy: 0.944416 Steps 6000, training accuracy: 0.980469 validation accuracy: 0.946456 Steps 7000, training accuracy: 0.984375 validation accuracy: 0.951045 Steps 8000, training accuracy: 0.933594 validation accuracy: 0.885262 Steps 9000, training accuracy: 0.96875 validation accuracy: 0.945436 Training completed with test accuracy : 0.8648648262023926 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0820312 validation accuracy: 0.0372259 Steps 1000, training accuracy: 0.984375 validation accuracy: 0.948496 Steps 2000, training accuracy: 1 validation accuracy: 0.975523 Steps 3000, training accuracy: 1 validation accuracy: 0.984192 Steps 4000, training accuracy: 1 validation accuracy: 0.980622 Steps 5000, training accuracy: 1 validation accuracy: 0.988781 Steps 6000, training accuracy: 1 validation accuracy: 0.981642 Steps 7000, training accuracy: 1 validation accuracy: 0.992861 Steps 8000, training accuracy: 1 validation accuracy: 0.987761 Steps 9000, training accuracy: 1 validation accuracy: 0.986741 Training completed with test accuracy : 0.9399399161338806 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0820312 validation accuracy: 0.0280469 Steps 1000, training accuracy: 1 validation accuracy: 0.953085 Steps 2000, training accuracy: 1 validation accuracy: 0.979092 Steps 3000, training accuracy: 1 validation accuracy: 0.977562 Steps 4000, training accuracy: 1 validation accuracy: 0.985721 Steps 5000, training accuracy: 1 validation accuracy: 0.981642 Steps 6000, training accuracy: 1 validation accuracy: 0.989291 Steps 7000, training accuracy: 1 validation accuracy: 0.990821 Steps 8000, training accuracy: 1 validation accuracy: 0.994901 Steps 9000, training accuracy: 1 validation accuracy: 0.992351 Training completed with test accuracy : 0.9429429173469543 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0507812 validation accuracy: 0.0540541 Steps 1000, training accuracy: 0.996094 validation accuracy: 0.973483 Steps 2000, training accuracy: 1 validation accuracy: 0.980112 Steps 3000, training accuracy: 1 validation accuracy: 0.984192 Steps 4000, training accuracy: 0.996094 validation accuracy: 0.985212 Steps 5000, training accuracy: 1 validation accuracy: 0.995411 Steps 6000, training accuracy: 1 validation accuracy: 0.983682 Steps 7000, training accuracy: 1 validation accuracy: 0.993371 Steps 8000, training accuracy: 1 validation accuracy: 0.992351 Steps 9000, training accuracy: 1 validation accuracy: 0.994391 Training completed with test accuracy : 0.9269269108772278 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.001 Steps 0, training accuracy: 0.0429688 validation accuracy: 0.0576237 Steps 1000, training accuracy: 0.996094 validation accuracy: 0.968383 Steps 2000, training accuracy: 1 validation accuracy: 0.976033 Steps 3000, training accuracy: 1 validation accuracy: 0.984192 Steps 4000, training accuracy: 1 validation accuracy: 0.979602 Steps 5000, training accuracy: 1 validation accuracy: 0.983172 Steps 6000, training accuracy: 1 validation accuracy: 0.992351 Steps 7000, training accuracy: 1 validation accuracy: 0.989291 Steps 8000, training accuracy: 1 validation accuracy: 0.988271 Steps 9000, training accuracy: 1 validation accuracy: 0.992861 Training completed with test accuracy : 0.9329329133033752 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.015625 validation accuracy: 0.018358 Steps 1000, training accuracy: 0.75 validation accuracy: 0.646099 Steps 2000, training accuracy: 0.9375 validation accuracy: 0.82203 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.886282 Steps 4000, training accuracy: 0.988281 validation accuracy: 0.918919 Steps 5000, training accuracy: 0.996094 validation accuracy: 0.936257 Steps 6000, training accuracy: 0.996094 validation accuracy: 0.952575 Steps 7000, training accuracy: 1 validation accuracy: 0.957675 Steps 8000, training accuracy: 1 validation accuracy: 0.956655 Steps 9000, training accuracy: 1 validation accuracy: 0.957165 Training completed with test accuracy : 0.8898898959159851 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0195312 validation accuracy: 0.0229475 Steps 1000, training accuracy: 0.769531 validation accuracy: 0.626211 Steps 2000, training accuracy: 0.941406 validation accuracy: 0.790923 Steps 3000, training accuracy: 0.960938 validation accuracy: 0.879653 Steps 4000, training accuracy: 0.980469 validation accuracy: 0.916369 Steps 5000, training accuracy: 0.984375 validation accuracy: 0.932177 Steps 6000, training accuracy: 0.996094 validation accuracy: 0.946456 Steps 7000, training accuracy: 1 validation accuracy: 0.964814 Steps 8000, training accuracy: 1 validation accuracy: 0.968893 Steps 9000, training accuracy: 1 validation accuracy: 0.975523 Training completed with test accuracy : 0.8928928971290588 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 1000, training accuracy: 0 validation accuracy: 0.00662927 Steps 2000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 3000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Steps 4000, training accuracy: 0 validation accuracy: 0.00662927 Steps 5000, training accuracy: 0 validation accuracy: 0.00662927 Steps 6000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 7000, training accuracy: 0.0117188 validation accuracy: 0.00662927 Steps 8000, training accuracy: 0.0078125 validation accuracy: 0.00662927 Steps 9000, training accuracy: 0.00390625 validation accuracy: 0.00662927 Training completed with test accuracy : 0.01001000963151455 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0351562 validation accuracy: 0.0249873 Steps 1000, training accuracy: 0.832031 validation accuracy: 0.812851 Steps 2000, training accuracy: 0.960938 validation accuracy: 0.920959 Steps 3000, training accuracy: 0.992188 validation accuracy: 0.951045 Steps 4000, training accuracy: 0.984375 validation accuracy: 0.947476 Steps 5000, training accuracy: 1 validation accuracy: 0.974503 Steps 6000, training accuracy: 0.992188 validation accuracy: 0.967873 Steps 7000, training accuracy: 1 validation accuracy: 0.978582 Steps 8000, training accuracy: 1 validation accuracy: 0.982152 Steps 9000, training accuracy: 1 validation accuracy: 0.978072 Training completed with test accuracy : 0.8498498201370239 Training model with Four layers started for step: 10000 batchsize: 256 learning_rate: 0.0001 Steps 0, training accuracy: 0.0078125 validation accuracy: 0.018358 Steps 1000, training accuracy: 0.890625 validation accuracy: 0.753697 Steps 2000, training accuracy: 0.9375 validation accuracy: 0.889342 Steps 3000, training accuracy: 0.96875 validation accuracy: 0.931158 Steps 4000, training accuracy: 0.988281 validation accuracy: 0.943396 Steps 5000, training accuracy: 1 validation accuracy: 0.960734 Steps 6000, training accuracy: 1 validation accuracy: 0.969403 Steps 7000, training accuracy: 1 validation accuracy: 0.977562 Steps 8000, training accuracy: 1 validation accuracy: 0.978072 Steps 9000, training accuracy: 1 validation accuracy: 0.973993 Training completed with test accuracy : 0.8778778910636902 Best Validation Accuracy : 0.9519519209861755 Best parameters: steps:10000 batches:128 learning_rate:0.001 keep_prob:0.5
Model Four Training log with additional data: Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 keep_prob: 0.5 Steps 0, training accuracy: 0.1875 validation accuracy: 0.0336563 Steps 1000, training accuracy: 0 validation accuracy: 0.054564 Steps 2000, training accuracy: 0 validation accuracy: 0.0601734 Steps 3000, training accuracy: 0 validation accuracy: 0.0520143 Steps 4000, training accuracy: 0.125 validation accuracy: 0.0520143 Steps 5000, training accuracy: 0 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0 validation accuracy: 0.0520143 Steps 7000, training accuracy: 0.09375 validation accuracy: 0.0530342 Steps 8000, training accuracy: 0 validation accuracy: 0.0474248 Steps 9000, training accuracy: 0 validation accuracy: 0.0520143 Training completed with test accuracy : 0.057057056576013565 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 keep_prob: 0.65 Steps 0, training accuracy: 0.234375 validation accuracy: 0.0377358 Steps 1000, training accuracy: 0.15625 validation accuracy: 0.054564 Steps 2000, training accuracy: 0.109375 validation accuracy: 0.0479347 Steps 3000, training accuracy: 0.109375 validation accuracy: 0.0622132 Steps 4000, training accuracy: 0 validation accuracy: 0.0601734 Steps 5000, training accuracy: 0 validation accuracy: 0.0622132 Steps 6000, training accuracy: 0 validation accuracy: 0.0622132 Steps 7000, training accuracy: 0.09375 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.09375 validation accuracy: 0.0601734 Steps 9000, training accuracy: 0.203125 validation accuracy: 0.0520143 Training completed with test accuracy : 0.054054051637649536 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.01 keep_prob: 0.8 Steps 0, training accuracy: 0.375 validation accuracy: 0.0596634 Steps 1000, training accuracy: 0 validation accuracy: 0.0622132 Steps 2000, training accuracy: 0 validation accuracy: 0.063233 Steps 3000, training accuracy: 0.234375 validation accuracy: 0.0530342 Steps 4000, training accuracy: 0.078125 validation accuracy: 0.0601734 Steps 5000, training accuracy: 0.09375 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0.125 validation accuracy: 0.0622132 Steps 7000, training accuracy: 0 validation accuracy: 0.0622132 Steps 8000, training accuracy: 0.109375 validation accuracy: 0.0520143 Steps 9000, training accuracy: 0 validation accuracy: 0.0622132 Training completed with test accuracy : 0.052052054554224014 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 keep_prob: 0.5 Steps 0, training accuracy: 0.25 validation accuracy: 0.027537 Steps 1000, training accuracy: 0.21875 validation accuracy: 0.18052 Steps 2000, training accuracy: 0.25 validation accuracy: 0.534931 Steps 3000, training accuracy: 0.21875 validation accuracy: 0.785824 Steps 4000, training accuracy: 0.515625 validation accuracy: 0.864865 Steps 5000, training accuracy: 0.421875 validation accuracy: 0.883223 Steps 6000, training accuracy: 0.46875 validation accuracy: 0.91229 Steps 7000, training accuracy: 0.71875 validation accuracy: 0.922998 Steps 8000, training accuracy: 0.765625 validation accuracy: 0.921469 Steps 9000, training accuracy: 0.84375 validation accuracy: 0.941866 Training completed with test accuracy : 0.9119119048118591 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 keep_prob: 0.65 Steps 0, training accuracy: 0.296875 validation accuracy: 0.0290668 Steps 1000, training accuracy: 0.0625 validation accuracy: 0.116777 Steps 2000, training accuracy: 0.375 validation accuracy: 0.623151 Steps 3000, training accuracy: 0.203125 validation accuracy: 0.780214 Steps 4000, training accuracy: 0.359375 validation accuracy: 0.830189 Steps 5000, training accuracy: 0.75 validation accuracy: 0.844467 Steps 6000, training accuracy: 0.609375 validation accuracy: 0.898011 Steps 7000, training accuracy: 0.828125 validation accuracy: 0.885263 Steps 8000, training accuracy: 0.609375 validation accuracy: 0.917389 Steps 9000, training accuracy: 0.78125 validation accuracy: 0.933707 Training completed with test accuracy : 0.857857882976532 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.001 keep_prob: 0.8 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0382458 Steps 1000, training accuracy: 0.046875 validation accuracy: 0.17746 Steps 2000, training accuracy: 0.21875 validation accuracy: 0.657318 Steps 3000, training accuracy: 0.359375 validation accuracy: 0.831209 Steps 4000, training accuracy: 0.453125 validation accuracy: 0.863335 Steps 5000, training accuracy: 0.75 validation accuracy: 0.880673 Steps 6000, training accuracy: 0.859375 validation accuracy: 0.914329 Steps 7000, training accuracy: 0.890625 validation accuracy: 0.935747 Steps 8000, training accuracy: 0.796875 validation accuracy: 0.956655 Steps 9000, training accuracy: 0.765625 validation accuracy: 0.936257 Training completed with test accuracy : 0.8988988995552063 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 keep_prob: 0.5 Steps 0, training accuracy: 0 validation accuracy: 0.0260071 Steps 1000, training accuracy: 0.234375 validation accuracy: 0.129016 Steps 2000, training accuracy: 0.03125 validation accuracy: 0.231515 Steps 3000, training accuracy: 0.234375 validation accuracy: 0.36512 Steps 4000, training accuracy: 0.1875 validation accuracy: 0.419174 Steps 5000, training accuracy: 0.25 validation accuracy: 0.486487 Steps 6000, training accuracy: 0.390625 validation accuracy: 0.566038 Steps 7000, training accuracy: 0.453125 validation accuracy: 0.627231 Steps 8000, training accuracy: 0.328125 validation accuracy: 0.659867 Steps 9000, training accuracy: 0.375 validation accuracy: 0.72463 Training completed with test accuracy : 0.7497497797012329 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 keep_prob: 0.65 Steps 0, training accuracy: 0 validation accuracy: 0.017848 Steps 1000, training accuracy: 0.109375 validation accuracy: 0.202958 Steps 2000, training accuracy: 0.21875 validation accuracy: 0.335033 Steps 3000, training accuracy: 0.296875 validation accuracy: 0.448751 Steps 4000, training accuracy: 0.265625 validation accuracy: 0.522183 Steps 5000, training accuracy: 0.203125 validation accuracy: 0.63692 Steps 6000, training accuracy: 0.34375 validation accuracy: 0.704233 Steps 7000, training accuracy: 0.25 validation accuracy: 0.739929 Steps 8000, training accuracy: 0.375 validation accuracy: 0.753187 Steps 9000, training accuracy: 0.296875 validation accuracy: 0.796022 Training completed with test accuracy : 0.7727727890014648 Training model with Four layers started for step: 10000 batchsize: 64 learning_rate: 0.0001 keep_prob: 0.8 Steps 0, training accuracy: 0 validation accuracy: 0.0336563 Steps 1000, training accuracy: 0.234375 validation accuracy: 0.190209 Steps 2000, training accuracy: 0.171875 validation accuracy: 0.352371 Steps 3000, training accuracy: 0.15625 validation accuracy: 0.448241 Steps 4000, training accuracy: 0.40625 validation accuracy: 0.507904 Steps 5000, training accuracy: 0.34375 validation accuracy: 0.620092 Steps 6000, training accuracy: 0.078125 validation accuracy: 0.676696 Steps 7000, training accuracy: 0.25 validation accuracy: 0.698113 Steps 8000, training accuracy: 0.3125 validation accuracy: 0.72259 Steps 9000, training accuracy: 0.1875 validation accuracy: 0.801122 Training completed with test accuracy : 0.7447447776794434 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 keep_prob: 0.5 Steps 0, training accuracy: 0.09375 validation accuracy: 0.0469148 Steps 1000, training accuracy: 0 validation accuracy: 0.0601734 Steps 2000, training accuracy: 0.0546875 validation accuracy: 0.0509944 Steps 3000, training accuracy: 0.046875 validation accuracy: 0.0622132 Steps 4000, training accuracy: 0.117188 validation accuracy: 0.0622132 Steps 5000, training accuracy: 0.0859375 validation accuracy: 0.0622132 Steps 6000, training accuracy: 0.09375 validation accuracy: 0.0622132 Steps 7000, training accuracy: 0.109375 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0625 validation accuracy: 0.0622132 Steps 9000, training accuracy: 0.171875 validation accuracy: 0.0474248 Training completed with test accuracy : 0.06306306272745132 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 keep_prob: 0.65 Steps 0, training accuracy: 0.109375 validation accuracy: 0.036716 Steps 1000, training accuracy: 0 validation accuracy: 0.0560938 Steps 2000, training accuracy: 0.0390625 validation accuracy: 0.063233 Steps 3000, training accuracy: 0 validation accuracy: 0.0530342 Steps 4000, training accuracy: 0 validation accuracy: 0.0622132 Steps 5000, training accuracy: 0.0546875 validation accuracy: 0.0520143 Steps 6000, training accuracy: 0.148438 validation accuracy: 0.054564 Steps 7000, training accuracy: 0 validation accuracy: 0.0520143 Steps 8000, training accuracy: 0.0546875 validation accuracy: 0.0622132 Steps 9000, training accuracy: 0.234375 validation accuracy: 0.0520143 Training completed with test accuracy : 0.052052054554224014 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.01 keep_prob: 0.8 Steps 0, training accuracy: 0.125 validation accuracy: 0.0305966 Steps 1000, training accuracy: 0.0859375 validation accuracy: 0.0535441 Steps 2000, training accuracy: 0 validation accuracy: 0.0525242 Steps 3000, training accuracy: 0 validation accuracy: 0.0530342 Steps 4000, training accuracy: 0.0234375 validation accuracy: 0.0622132 Steps 5000, training accuracy: 0.101562 validation accuracy: 0.0601734 Steps 6000, training accuracy: 0.0390625 validation accuracy: 0.0601734 Steps 7000, training accuracy: 0.0390625 validation accuracy: 0.054564 Steps 8000, training accuracy: 0 validation accuracy: 0.0622132 Steps 9000, training accuracy: 0.140625 validation accuracy: 0.0520143 Training completed with test accuracy : 0.052052050828933716 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 keep_prob: 0.5 Steps 0, training accuracy: 0.1875 validation accuracy: 0.0469148 Steps 1000, training accuracy: 0.179688 validation accuracy: 0.397246 Steps 2000, training accuracy: 0.445312 validation accuracy: 0.764916 Steps 3000, training accuracy: 0.578125 validation accuracy: 0.871494 Steps 4000, training accuracy: 0.625 validation accuracy: 0.884243 Steps 5000, training accuracy: 0.710938 validation accuracy: 0.901581 Steps 6000, training accuracy: 0.773438 validation accuracy: 0.913309 Steps 7000, training accuracy: 0.789062 validation accuracy: 0.938807 Steps 8000, training accuracy: 0.929688 validation accuracy: 0.951555 Steps 9000, training accuracy: 0.945312 validation accuracy: 0.962264 Training completed with test accuracy : 0.8928928971290588 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 keep_prob: 0.65 Steps 0, training accuracy: 0.148438 validation accuracy: 0.0387557 Steps 1000, training accuracy: 0.1875 validation accuracy: 0.527282 Steps 2000, training accuracy: 0.46875 validation accuracy: 0.794493 Steps 3000, training accuracy: 0.578125 validation accuracy: 0.866395 Steps 4000, training accuracy: 0.617188 validation accuracy: 0.903111 Steps 5000, training accuracy: 0.765625 validation accuracy: 0.90821 Steps 6000, training accuracy: 0.875 validation accuracy: 0.932687 Steps 7000, training accuracy: 0.867188 validation accuracy: 0.942886 Steps 8000, training accuracy: 0.90625 validation accuracy: 0.936767 Steps 9000, training accuracy: 0.875 validation accuracy: 0.943906 Training completed with test accuracy : 0.8878878951072693 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.001 keep_prob: 0.8 Steps 0, training accuracy: 0.15625 validation accuracy: 0.0377358 Steps 1000, training accuracy: 0.195312 validation accuracy: 0.553799 Steps 2000, training accuracy: 0.445312 validation accuracy: 0.831719 Steps 3000, training accuracy: 0.71875 validation accuracy: 0.885773 Steps 4000, training accuracy: 0.664062 validation accuracy: 0.91076 Steps 5000, training accuracy: 0.632812 validation accuracy: 0.930648 Steps 6000, training accuracy: 0.828125 validation accuracy: 0.939827 Steps 7000, training accuracy: 0.828125 validation accuracy: 0.933197 Steps 8000, training accuracy: 0.78125 validation accuracy: 0.951555 Steps 9000, training accuracy: 0.882812 validation accuracy: 0.954615 Training completed with test accuracy : 0.8848848938941956 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 keep_prob: 0.5 Steps 0, training accuracy: 0.0859375 validation accuracy: 0.0198878 Steps 1000, training accuracy: 0.132812 validation accuracy: 0.18613 Steps 2000, training accuracy: 0.21875 validation accuracy: 0.321265 Steps 3000, training accuracy: 0.195312 validation accuracy: 0.447731 Steps 4000, training accuracy: 0.390625 validation accuracy: 0.5436 Steps 5000, training accuracy: 0.289062 validation accuracy: 0.613972 Steps 6000, training accuracy: 0.3125 validation accuracy: 0.682305 Steps 7000, training accuracy: 0.398438 validation accuracy: 0.753697 Steps 8000, training accuracy: 0.351562 validation accuracy: 0.788373 Steps 9000, training accuracy: 0.34375 validation accuracy: 0.776135 Training completed with test accuracy : 0.815815806388855 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 keep_prob: 0.65 Steps 0, training accuracy: 0.046875 validation accuracy: 0.0188679 Steps 1000, training accuracy: 0.109375 validation accuracy: 0.27537 Steps 2000, training accuracy: 0.117188 validation accuracy: 0.433962 Steps 3000, training accuracy: 0.242188 validation accuracy: 0.532381 Steps 4000, training accuracy: 0.3125 validation accuracy: 0.624171 Steps 5000, training accuracy: 0.375 validation accuracy: 0.722081 Steps 6000, training accuracy: 0.3125 validation accuracy: 0.736869 Steps 7000, training accuracy: 0.351562 validation accuracy: 0.804691 Steps 8000, training accuracy: 0.320312 validation accuracy: 0.827639 Steps 9000, training accuracy: 0.421875 validation accuracy: 0.857726 Training completed with test accuracy : 0.8238238096237183 Training model with Four layers started for step: 10000 batchsize: 128 learning_rate: 0.0001 keep_prob: 0.8 Steps 0, training accuracy: 0 validation accuracy: 0.00917899 Steps 1000, training accuracy: 0.1875 validation accuracy: 0.303417 Steps 2000, training accuracy: 0.25 validation accuracy: 0.486996 Steps 3000, training accuracy: 0.242188 validation accuracy: 0.618562 Steps 4000, training accuracy: 0.421875 validation accuracy: 0.704233 Steps 5000, training accuracy: 0.320312 validation accuracy: 0.753697 Steps 6000, training accuracy: 0.234375 validation accuracy: 0.781234 Steps 7000, training accuracy: 0.359375 validation accuracy: 0.840387 Steps 8000, training accuracy: 0.359375 validation accuracy: 0.881183 Steps 9000, training accuracy: 0.429688 validation accuracy: 0.873534 Training completed with test accuracy : 0.8328328132629395 Best Validation Accuracy : 0.9119119048118591 Best parameters: steps:10000 batches:64 learning_rate:0.001 keep_prob:0.5

In [ ]:
# This is common code to train all the defined models
curr_path = os.getcwd()
model_path = curr_path +'/models/model_multi.ckpt'

# Define train or test model
train = True
# Use additional data for training
use_jitter = False
# Num of training iterations
steps = 100000
# Batch processing size
batch_size = 96
# Drop off probability
k_prob = 0.5
# Learning rate
learning_rate = 0.001
# Data preparation 
if not use_jitter:
    norm_train_features = preprocess_images(train_features)
else:
    norm_train_features = train_features
    
norm_valid_features = preprocess_images(valid_features)
norm_test_features = preprocess_images(X_test)
n_test = norm_test_features.shape[0]
    
valid_feed_dict = {features: norm_valid_features, labels : valid_labels, keep_prob: k_prob, lr_value: learning_rate}

# Change this line to train the different models
# with tf.Session(graph=graph_model_simple) as sess:
# with tf.Session(graph=graph_model_multi) as sess:
with tf.Session(graph=graph_model_multi2) as sess:
    if train:
        sess.run(init)
        for step in range(steps):
            batch_start = np.random.choice(norm_train_features.shape[0],batch_size)
            batch_features = norm_train_features[batch_start]
            batch_labels = train_labels[batch_start]

            if use_jitter:
                batch_features, batch_labels = jitter_image_data(batch_features, batch_labels, batch_size)
                # Run optimizer
            loss_value = sess.run(train_op, feed_dict={features: batch_features, labels: batch_labels, keep_prob: k_prob, lr_value: learning_rate})

            if step%1000 == 0:
                train_accuracy = sess.run([acc], feed_dict={features:batch_features, labels: batch_labels, keep_prob: 1, lr_value: learning_rate})
                valid_accuracy = sess.run([acc], feed_dict=valid_feed_dict)
                batch_count = int(math.ceil(n_test/batch_size))
                total = 0
                for i in range(batch_count):
                    batch_start = i*batch_size
                    test_batch_features = norm_test_features[batch_start:batch_start + batch_size]
                    test_batch_labels = test_labels[batch_start:batch_start + batch_size]
                    total += sess.run(acc, feed_dict={features:test_batch_features, labels: test_batch_labels, keep_prob: 1, lr_value: learning_rate})
                test_accuracy = total / batch_count
                print("Steps {}, training accuracy: {}  validation accuracy: {} test accuracy: {}".format(step, train_accuracy, valid_accuracy, test_accuracy))
                
                # This is just a hack to do decay learning rate
                if(test_accuracy > 0.96):
                    learning_rate = 0.0001

            if ((step == (steps-1)) or (test_accuracy > 0.98)):
                batch_count = int(math.ceil(n_test/batch_size))
                total = 0
                for i in range(batch_count):
                    batch_start = i*batch_size
                    test_batch_features = norm_test_features[batch_start:batch_start + batch_size]
                    test_batch_labels = test_labels[batch_start:batch_start + batch_size]
                    total += sess.run(acc, feed_dict={features:test_batch_features, labels: test_batch_labels, keep_prob: 1, lr_value:learning_rate})
                test_accuracy = total / batch_count
                print('Final test accuracy: {}'.format(test_accuracy))
                save_path = saver.save(sess,model_path)
                print("Model saved.")
    else:
        # Here's where you're restoring the variables w and b.
        # Note that the graph is exactly as it was when the variables were
        # saved in a prior training run.
        #ckpt = tf.train.get_checkpoint_state(check_dir)
        #if ckpt and ckpt.model_checkpoint_path:
        #saver.restore(sess, ckpt.model_checkpoint_path)
        #print('Model Restored..')
        saver.restore(sess,model_path)
        batch_count = int(math.ceil(n_test/batch_size))
        total = 0
        for i in range(batch_count):
            batch_start = i*batch_size
            test_batch_features = norm_test_features[batch_start:batch_start + batch_size]
            test_batch_labels = test_labels[batch_start:batch_start + batch_size]
            total += sess.run(acc, feed_dict={features:test_batch_features, labels: test_batch_labels, keep_prob: 1, lr_value: learning_rate})
        test_accuracy = total / batch_count
        print('Restored test accuracy: {}'.format(test_accuracy))
**Multi scale model training log** Steps 0, training accuracy: [0.10416667] validation accuracy: [0.033146352] test accuracy: 0.05853675827242886 Steps 1000, training accuracy: [0.90625] validation accuracy: [0.7317695] test accuracy: 0.8414264470338821 Steps 2000, training accuracy: [0.96875006] validation accuracy: [0.9071902] test accuracy: 0.9226904777866421 Steps 3000, training accuracy: [0.97916669] validation accuracy: [0.94390613] test accuracy: 0.9377280067313801 Steps 4000, training accuracy: [1.0] validation accuracy: [0.95869452] test accuracy: 0.9473730750156172 Steps 5000, training accuracy: [1.0] validation accuracy: [0.97348279] test accuracy: 0.9520114700902592 Steps 6000, training accuracy: [1.0] validation accuracy: [0.96940327] test accuracy: 0.9488373714866061 Steps 7000, training accuracy: [1.0] validation accuracy: [0.96736354] test accuracy: 0.9479517738024393 Steps 8000, training accuracy: [1.0] validation accuracy: [0.97705257] test accuracy: 0.9518711788184715 Steps 9000, training accuracy: [1.0] validation accuracy: [0.97807229] test accuracy: 0.9544578072699633 Steps 10000, training accuracy: [1.0] validation accuracy: [0.98062217] test accuracy: 0.9553697059551874 Steps 11000, training accuracy: [1.0] validation accuracy: [0.98113203] test accuracy: 0.9566323310136795 Steps 12000, training accuracy: [1.0] validation accuracy: [0.98164195] test accuracy: 0.9572811763394963 Steps 13000, training accuracy: [1.0] validation accuracy: [0.98317182] test accuracy: 0.9593329497359016 Steps 14000, training accuracy: [1.0] validation accuracy: [0.97909224] test accuracy: 0.9566937070904356 Steps 15000, training accuracy: [1.0] validation accuracy: [0.98725128] test accuracy: 0.9588419280268929 Steps 16000, training accuracy: [1.0] validation accuracy: [0.98215199] test accuracy: 0.9597888936599096 Steps 17000, training accuracy: [1.0] validation accuracy: [0.98215187] test accuracy: 0.9599642627166979 Steps 18000, training accuracy: [1.0] validation accuracy: [0.98164201] test accuracy: 0.9644448337229815 Steps 19000, training accuracy: [1.0] validation accuracy: [0.99235088] test accuracy: 0.9682151733925848 Steps 20000, training accuracy: [1.0] validation accuracy: [0.99235076] test accuracy: 0.968136263164607 Steps 21000, training accuracy: [1.0] validation accuracy: [0.99286073] test accuracy: 0.967031460368272 Steps 22000, training accuracy: [1.0] validation accuracy: [0.99133086] test accuracy: 0.9680573443571726 Steps 23000, training accuracy: [1.0] validation accuracy: [0.99337071] test accuracy: 0.9694777989026272 Steps 24000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.9682151733925848 Steps 25000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.9693199721249667 Steps 26000, training accuracy: [1.0] validation accuracy: [0.99286067] test accuracy: 0.9692410573814855 Steps 27000, training accuracy: [1.0] validation accuracy: [0.99031103] test accuracy: 0.9695742523128336 Steps 28000, training accuracy: [1.0] validation accuracy: [0.99337065] test accuracy: 0.9693199689641143 Steps 29000, training accuracy: [1.0] validation accuracy: [0.99337065] test accuracy: 0.9713717364903652 Steps 30000, training accuracy: [1.0] validation accuracy: [0.99286073] test accuracy: 0.9698723708138322 Steps 31000, training accuracy: [1.0] validation accuracy: [0.99439061] test accuracy: 0.9677416885441 Steps 32000, training accuracy: [1.0] validation accuracy: [0.99031103] test accuracy: 0.9704247699542479 Steps 33000, training accuracy: [1.0] validation accuracy: [0.99235082] test accuracy: 0.9718452249512528 Steps 34000, training accuracy: [1.0] validation accuracy: [0.99286073] test accuracy: 0.9704247699542479 Steps 35000, training accuracy: [1.0] validation accuracy: [0.99439055] test accuracy: 0.969319969867215 Steps 36000, training accuracy: [1.0] validation accuracy: [0.99286067] test accuracy: 0.971292827165488 Steps 37000, training accuracy: [1.0] validation accuracy: [0.99490052] test accuracy: 0.9710560806772925 Steps 38000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9702844768762589 Steps 39000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.97066151192694 Steps 40000, training accuracy: [1.0] validation accuracy: [0.9954105] test accuracy: 0.9711349945176732 Steps 41000, training accuracy: [1.0] validation accuracy: [0.99541038] test accuracy: 0.9704247708573486 Steps 42000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.970126649647048 Steps 43000, training accuracy: [1.0] validation accuracy: [0.99490052] test accuracy: 0.971152530023546 Steps 44000, training accuracy: [1.0] validation accuracy: [0.99439061] test accuracy: 0.9697320790904941 Steps 45000, training accuracy: [1.0] validation accuracy: [0.9964304] test accuracy: 0.9702055648420796 Steps 46000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.9694164214712201 Steps 47000, training accuracy: [1.0] validation accuracy: [0.99337065] test accuracy: 0.9708193391561508 Steps 48000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9707404284766226 Steps 49000, training accuracy: [1.0] validation accuracy: [0.99439061] test accuracy: 0.9704247690511473 Steps 50000, training accuracy: [1.0] validation accuracy: [0.99694031] test accuracy: 0.970661510572289 Steps 51000, training accuracy: [1.0] validation accuracy: [0.99490047] test accuracy: 0.9712928253592867 Steps 52000, training accuracy: [1.0] validation accuracy: [0.99592042] test accuracy: 0.9703458561138674 Steps 53000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9705036851492795 Steps 54000, training accuracy: [1.0] validation accuracy: [0.99439055] test accuracy: 0.9678206041906819 Steps 55000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9678995180310626 Steps 56000, training accuracy: [1.0] validation accuracy: [0.9954105] test accuracy: 0.9686097444006891 Steps 57000, training accuracy: [1.0] validation accuracy: [0.99592042] test accuracy: 0.9690043158603437 Steps 58000, training accuracy: [1.0] validation accuracy: [0.99745017] test accuracy: 0.9694777989026272 Steps 59000, training accuracy: [1.0] validation accuracy: [0.9954105] test accuracy: 0.9698723685560804 Steps 60000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9713717405543183 Steps 61000, training accuracy: [1.0] validation accuracy: [0.99745023] test accuracy: 0.9723976249947692 Steps 62000, training accuracy: [1.0] validation accuracy: [0.99439055] test accuracy: 0.9708982584151354 Steps 63000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.972064430063421 Steps 64000, training accuracy: [1.0] validation accuracy: [0.99490058] test accuracy: 0.9702669409188357 Steps 65000, training accuracy: [1.0] validation accuracy: [0.99592036] test accuracy: 0.972239794604706 Steps 66000, training accuracy: [1.0] validation accuracy: [0.99286079] test accuracy: 0.9687061973593452 Steps 67000, training accuracy: [1.0] validation accuracy: [0.9954105] test accuracy: 0.9688640214277037 Steps 68000, training accuracy: [1.0] validation accuracy: [0.99643028] test accuracy: 0.9706615110238394 Steps 69000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9705825985381098 Steps 70000, training accuracy: [1.0] validation accuracy: [0.9964304] test accuracy: 0.9698723717169329 Steps 71000, training accuracy: [1.0] validation accuracy: [0.99592042] test accuracy: 0.9701880225629518 Steps 72000, training accuracy: [1.0] validation accuracy: [0.99388069] test accuracy: 0.9703458538561156 Steps 73000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9693988855137969 Steps 74000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9695567127430078 Steps 75000, training accuracy: [1.0] validation accuracy: [0.99592042] test accuracy: 0.9703458538561156 Steps 76000, training accuracy: [1.0] validation accuracy: [0.9964304] test accuracy: 0.9697934578765522 Steps 77000, training accuracy: [1.0] validation accuracy: [0.99745023] test accuracy: 0.9705212161396489 Steps 78000, training accuracy: [1.0] validation accuracy: [0.99592042] test accuracy: 0.9706615137331414 Steps 79000, training accuracy: [1.0] validation accuracy: [0.99439061] test accuracy: 0.9714331139217723 Steps 80000, training accuracy: [1.0] validation accuracy: [0.99643028] test accuracy: 0.9694777966448755 Steps 81000, training accuracy: [1.0] validation accuracy: [0.99745023] test accuracy: 0.9713541996298414 Steps 82000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9709596317825895 Steps 83000, training accuracy: [1.0] validation accuracy: [0.9959203] test accuracy: 0.9718452231450514 Steps 84000, training accuracy: [1.0] validation accuracy: [0.99592042] test accuracy: 0.9708193405108019 Steps 85000, training accuracy: [1.0] validation accuracy: [0.99439055] test accuracy: 0.9735024205662988 Steps 86000, training accuracy: [1.0] validation accuracy: [0.99694031] test accuracy: 0.9716260207421852 Steps 87000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.9694777975479761 Steps 88000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9704247699542479 Steps 89000, training accuracy: [1.0] validation accuracy: [0.99592036] test accuracy: 0.9703458525014647 Steps 90000, training accuracy: [1.0] validation accuracy: [0.99541056] test accuracy: 0.9712928249077364 Steps 91000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9723187075419859 Steps 92000, training accuracy: [1.0] validation accuracy: [0.99694031] test accuracy: 0.9717663097562212 Steps 93000, training accuracy: [1.0] validation accuracy: [0.99235082] test accuracy: 0.9712139124220068 Steps 94000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9708193346406474 Steps 95000, training accuracy: [1.0] validation accuracy: [0.99592036] test accuracy: 0.9723187039295832 Steps 96000, training accuracy: [1.0] validation accuracy: [0.99694031] test accuracy: 0.9705825926679553 Steps 97000, training accuracy: [1.0] validation accuracy: [0.99490052] test accuracy: 0.9709157898570552 Steps 98000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9714506507822962 Steps 99000, training accuracy: [1.0] validation accuracy: [0.99388063] test accuracy: 0.9714506525884975 Final test accuracy: 0.9727132785500903 Model saved.
**Multi scale model training log with additional jitter data ** Steps 0, training accuracy: [0.072916672] validation accuracy: [0.043855175] test accuracy: 0.04409546980803663 Steps 1000, training accuracy: [0.36458334] validation accuracy: [0.35237125] test accuracy: 0.42724994547439343 Steps 2000, training accuracy: [0.67708337] validation accuracy: [0.70984191] test accuracy: 0.7681415130694708 Steps 3000, training accuracy: [0.86458337] validation accuracy: [0.8327384] test accuracy: 0.8737812480240157 Steps 4000, training accuracy: [0.91666669] validation accuracy: [0.90005094] test accuracy: 0.918358973029888 Steps 5000, training accuracy: [0.91666669] validation accuracy: [0.91024983] test accuracy: 0.9272499657941587 Steps 6000, training accuracy: [0.94791669] validation accuracy: [0.9418664] test accuracy: 0.9419542993559982 Steps 7000, training accuracy: [0.89583337] validation accuracy: [0.94441605] test accuracy: 0.9404110881415281 Steps 8000, training accuracy: [0.97916669] validation accuracy: [0.94849563] test accuracy: 0.9366056706869241 Steps 9000, training accuracy: [0.94791669] validation accuracy: [0.96277404] test accuracy: 0.9460139924829657 Steps 10000, training accuracy: [0.96875] validation accuracy: [0.95818454] test accuracy: 0.9411651595975413 Steps 11000, training accuracy: [0.97916675] validation accuracy: [0.97246289] test accuracy: 0.950415647391117 Steps 12000, training accuracy: [0.9375] validation accuracy: [0.96634364] test accuracy: 0.9403935512810042 Steps 13000, training accuracy: [0.97916669] validation accuracy: [0.97195297] test accuracy: 0.9517045811270223 Steps 14000, training accuracy: [0.92708337] validation accuracy: [0.96889335] test accuracy: 0.9557291960174387 Steps 15000, training accuracy: [0.94791669] validation accuracy: [0.97909224] test accuracy: 0.95700936516126 Steps 16000, training accuracy: [0.98958337] validation accuracy: [0.97348297] test accuracy: 0.9546594791340105 Steps 17000, training accuracy: [0.96875] validation accuracy: [0.97246301] test accuracy: 0.9548173077178724 Steps 18000, training accuracy: [0.98958337] validation accuracy: [0.97756243] test accuracy: 0.95876301102566 Steps 19000, training accuracy: [0.98958337] validation accuracy: [0.98062199] test accuracy: 0.96146362703858 Steps 20000, training accuracy: [0.94791669] validation accuracy: [0.98215187] test accuracy: 0.9614899325551409 Steps 21000, training accuracy: [1.0] validation accuracy: [0.98164195] test accuracy: 0.9669350074096159 Steps 22000, training accuracy: [0.95833337] validation accuracy: [0.98164195] test accuracy: 0.9632435797741918 Steps 23000, training accuracy: [0.96875] validation accuracy: [0.98572159] test accuracy: 0.9612093482053641 Steps 24000, training accuracy: [0.96875006] validation accuracy: [0.98521161] test accuracy: 0.963261119795568 Steps 25000, training accuracy: [0.96875] validation accuracy: [0.98215187] test accuracy: 0.9616477624936537 Steps 26000, training accuracy: [0.98958337] validation accuracy: [0.98419166] test accuracy: 0.9622176926244389 Steps 27000, training accuracy: [0.94791675] validation accuracy: [0.9790923] test accuracy: 0.9546419391126344 Steps 28000, training accuracy: [1.0] validation accuracy: [0.97909218] test accuracy: 0.9635943093083121 Steps 29000, training accuracy: [0.97916675] validation accuracy: [0.98419166] test accuracy: 0.9611918117963907 Steps 30000, training accuracy: [0.98958337] validation accuracy: [0.98521155] test accuracy: 0.9653742624954744 Steps 31000, training accuracy: [1.0] validation accuracy: [0.98725134] test accuracy: 0.9659266629905412 Steps 32000, training accuracy: [0.95833337] validation accuracy: [0.98011214] test accuracy: 0.9628490110238394 Steps 33000, training accuracy: [1.0] validation accuracy: [0.98572153] test accuracy: 0.9697145426815207 Steps 34000, training accuracy: [0.97916669] validation accuracy: [0.98674142] test accuracy: 0.9672068253611074 Steps 35000, training accuracy: [0.97916675] validation accuracy: [0.98776132] test accuracy: 0.960955075693853 Steps 36000, training accuracy: [0.95833337] validation accuracy: [0.99031103] test accuracy: 0.9693024370706442 Steps 37000, training accuracy: [1.0] validation accuracy: [0.9836818] test accuracy: 0.9639713457136443 Steps 38000, training accuracy: [0.98958337] validation accuracy: [0.98929107] test accuracy: 0.9671717516400598 Steps 39000, training accuracy: [0.96875] validation accuracy: [0.98827124] test accuracy: 0.9666544271237922 Steps 40000, training accuracy: [0.97916669] validation accuracy: [0.98521161] test accuracy: 0.9634978617682601 Steps 41000, training accuracy: [1.0] validation accuracy: [0.9872514] test accuracy: 0.9659617358084881 Steps 42000, training accuracy: [0.95833337] validation accuracy: [0.98521161] test accuracy: 0.9684343789563035 Steps 43000, training accuracy: [0.98958337] validation accuracy: [0.9872514] test accuracy: 0.9655320906277859 Steps 44000, training accuracy: [0.94791669] validation accuracy: [0.98725134] test accuracy: 0.9662247878132444 Steps 45000, training accuracy: [0.98958337] validation accuracy: [0.98368174] test accuracy: 0.9640502627148773 Steps 46000, training accuracy: [1.0] validation accuracy: [0.99031103] test accuracy: 0.9663212335470951 Steps 47000, training accuracy: [0.98958337] validation accuracy: [0.98980105] test accuracy: 0.9674260291186246 Steps 48000, training accuracy: [0.96875006] validation accuracy: [0.99082094] test accuracy: 0.9700477344520164 Steps 49000, training accuracy: [1.0] validation accuracy: [0.98623145] test accuracy: 0.9697145413268696 Steps 50000, training accuracy: [1.0] validation accuracy: [0.98878109] test accuracy: 0.9693375071792891 Steps 51000, training accuracy: [0.97916675] validation accuracy: [0.98776126] test accuracy: 0.9710560815803932 Steps 52000, training accuracy: [0.96875006] validation accuracy: [0.98776126] test accuracy: 0.9640327240481521 Steps 53000, training accuracy: [1.0] validation accuracy: [0.98827124] test accuracy: 0.9666193520480936 Steps 54000, training accuracy: [0.97916675] validation accuracy: [0.99082094] test accuracy: 0.9679608923016172 Steps 55000, training accuracy: [0.97916669] validation accuracy: [0.9872514] test accuracy: 0.9694602656545062 Steps 56000, training accuracy: [0.97916669] validation accuracy: [0.98878115] test accuracy: 0.9695391781402357 Steps 57000, training accuracy: [0.98958337] validation accuracy: [0.9882713] test accuracy: 0.9678995135155591 Steps 58000, training accuracy: [0.96875] validation accuracy: [0.98878121] test accuracy: 0.9700301953337409 Steps 59000, training accuracy: [0.98958337] validation accuracy: [0.9882713] test accuracy: 0.9600081019329302 Steps 60000, training accuracy: [0.98958337] validation accuracy: [0.99286073] test accuracy: 0.9687851075873231 Steps 61000, training accuracy: [0.96875006] validation accuracy: [0.98980111] test accuracy: 0.9657074565237219 Steps 62000, training accuracy: [0.98958337] validation accuracy: [0.98623145] test accuracy: 0.9654531799482576 Steps 63000, training accuracy: [0.98958337] validation accuracy: [0.98623145] test accuracy: 0.9686272799065618 Steps 64000, training accuracy: [0.97916675] validation accuracy: [0.99235082] test accuracy: 0.971389273350889 Steps 65000, training accuracy: [0.96875] validation accuracy: [0.99031097] test accuracy: 0.9690832337646773 Steps 66000, training accuracy: [0.98958337] validation accuracy: [0.990821] test accuracy: 0.9697145413268696 Steps 67000, training accuracy: [1.0] validation accuracy: [0.98980111] test accuracy: 0.968785106232672 Steps 68000, training accuracy: [0.94791675] validation accuracy: [0.99133092] test accuracy: 0.9712928217468839 Steps 69000, training accuracy: [0.98958337] validation accuracy: [0.98929107] test accuracy: 0.9686886555317676 Steps 70000, training accuracy: [0.98958337] validation accuracy: [0.98878115] test accuracy: 0.972020587234786 Steps 71000, training accuracy: [0.98958337] validation accuracy: [0.98623151] test accuracy: 0.9696356216163347 Steps 72000, training accuracy: [1.0] validation accuracy: [0.99235082] test accuracy: 0.971748772444147 Steps 73000, training accuracy: [1.0] validation accuracy: [0.99082094] test accuracy: 0.9691007647550467 Steps 74000, training accuracy: [0.98958337] validation accuracy: [0.98878115] test accuracy: 0.972459000619975 Steps 75000, training accuracy: [0.98958337] validation accuracy: [0.99388063] test accuracy: 0.9730289339116125 Steps 76000, training accuracy: [0.98958337] validation accuracy: [0.99235082] test accuracy: 0.9615250098885912 Steps 77000, training accuracy: [0.96875006] validation accuracy: [0.99337065] test accuracy: 0.9691007647550467 Steps 78000, training accuracy: [0.98958337] validation accuracy: [0.98878115] test accuracy: 0.9636556930614241 Steps 79000, training accuracy: [1.0] validation accuracy: [0.99286073] test accuracy: 0.9701091100772222 Steps 80000, training accuracy: [0.96875] validation accuracy: [0.98623151] test accuracy: 0.9690043117963907 Steps 81000, training accuracy: [0.98958337] validation accuracy: [0.99082088] test accuracy: 0.9720819642146429 Steps 82000, training accuracy: [0.96875] validation accuracy: [0.98980105] test accuracy: 0.9733445924339872 Steps 83000, training accuracy: [0.98958337] validation accuracy: [0.99388063] test accuracy: 0.9715120286652537 Steps 84000, training accuracy: [1.0] validation accuracy: [0.99184084] test accuracy: 0.9723187084450866 Steps 85000, training accuracy: [0.95833337] validation accuracy: [0.98878121] test accuracy: 0.9686711200258948 Steps 86000, training accuracy: [1.0] validation accuracy: [0.99337071] test accuracy: 0.975063163674239 Steps 87000, training accuracy: [1.0] validation accuracy: [0.99490047] test accuracy: 0.9752209945158525 Steps 88000, training accuracy: [0.96875006] validation accuracy: [0.99643028] test accuracy: 0.9748264239592985 Steps 89000, training accuracy: [1.0] validation accuracy: [0.99490052] test accuracy: 0.9749053346388268 Steps 90000, training accuracy: [0.98958337] validation accuracy: [0.99439061] test accuracy: 0.9756155650724064 Steps 91000, training accuracy: [1.0] validation accuracy: [0.99337071] test accuracy: 0.9748264235077482 Steps 92000, training accuracy: [0.98958337] validation accuracy: [0.99337077] test accuracy: 0.9754577382947459 Steps 93000, training accuracy: [1.0] validation accuracy: [0.99541038] test accuracy: 0.9741161998474237 Steps 94000, training accuracy: [0.98958337] validation accuracy: [0.9954105] test accuracy: 0.9752999106139848 Steps 95000, training accuracy: [1.0] validation accuracy: [0.99643034] test accuracy: 0.9743529400139144 Steps 96000, training accuracy: [1.0] validation accuracy: [0.99592036] test accuracy: 0.9748264257654999 Steps 97000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.974510768146226 Steps 98000, training accuracy: [0.98958337] validation accuracy: [0.99490052] test accuracy: 0.974510768146226 Steps 99000, training accuracy: [1.0] validation accuracy: [0.99541044] test accuracy: 0.9749053355419275 Final test accuracy: 0.9744318506934426 Model saved.

Question 4

How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)

Answer:

With both simple and multiscale model defined, next step is the training. Usually training happen for steps or epoches. Here in order to keep randomized data, steps has been consider rather than epoch. Also, training happen for longer time and get the hyperparameters right is challenging. In order to avoid that, initial training with smaller step size is done varying hyperparameters. Even though this might not give best observation (especially for small learning rate), this method was used to get some inituation about hyperparameters.

Based on above approach, hyperparameters were selected individual models with and without additioanal data.

Mostly these were the parameters used:

Optimizer: Adam
Batch Size: 64 or 128
epoch/steps : 1,00,0000
learning rate: 0.001
Dropout prob: 0.5 or 0.65

Question 5

What approach did you take in coming up with a solution to this problem?

Answer:

First I implemented a simple two layer convolutional model. Used a small testing with various hyperparameters and finalized best hyperparameters. Later trained models for longer iterations. With these initial step I was able to get around 94% accuracy on test set.

In order to further improve, I used additional data as mentioned in Lecun's paper. This produced result with 96%.

Later, I implemented multi-scale model to improve the accuracy further to 97%


In [ ]:
#If you encounter error, please run graph_model_simple definition cell before runnning this cell
norm_test_features = preprocess_images(X_test)
n_test = norm_test_features.shape[0]
batch_size = 128
best_test_accuracy = 0.0
best_model = []
for model_name in glob.glob('models/*.ckpt'):
    if 'simple' in model_name:
        sess = tf.Session(graph=graph_model_simple)
    
        sess.run(init)
        saver.restore(sess,model_name)
        batch_count = int(math.ceil(n_test/batch_size))
        total = 0
        for i in range(batch_count):
            batch_start = i*batch_size
            test_batch_features = norm_test_features[batch_start:batch_start + batch_size]
            test_batch_labels = test_labels[batch_start:batch_start + batch_size]
            total += sess.run(acc, feed_dict={features:test_batch_features, labels: test_batch_labels, keep_prob: 1})
        test_accuracy = total / batch_count
        print('Final Test Accuracy of {} is {}'.format(model_name,test_accuracy))
        if test_accuracy > best_test_accuracy:
            best_test_accuracy = test_accuracy
            best_model = model_name
        sess.close()
        
print('Best Test Accuracy is {} using {}'.format(best_test_accuracy, best_model))

In [13]:
#If you encounter error, please run graph_model_multi2 definition cell before runnning this cell.
# This seems to an issue, since varaibles are given same name, even with session separated out, you cannot run without model definition
norm_test_features = preprocess_images(X_test)
n_test = norm_test_features.shape[0]
batch_size = 128
best_test_accuracy = 0.0
best_model = []
for model_name in glob.glob('models/*.ckpt'):
    if 'multi' in model_name:
        sess = tf.Session(graph=graph_model_multi2)
    
        sess.run(init)
        saver.restore(sess,model_name)
        batch_count = int(math.ceil(n_test/batch_size))
        total = 0
        for i in range(batch_count):
            batch_start = i*batch_size
            test_batch_features = norm_test_features[batch_start:batch_start + batch_size]
            test_batch_labels = test_labels[batch_start:batch_start + batch_size]
            total += sess.run(acc, feed_dict={features:test_batch_features, labels: test_batch_labels, keep_prob: 1})
        test_accuracy = total / batch_count
        print('Final Test Accuracy of {} is {}'.format(model_name,test_accuracy))
        if test_accuracy > best_test_accuracy:
            best_test_accuracy = test_accuracy
            best_model = model_name
        sess.close()
        
print('Best Test Accuracy is {} using {}'.format(best_test_accuracy, best_model))


Final Test Accuracy of models/model_multi_jitter_small_iter.ckpt is 0.9493811660342746
Final Test Accuracy of models/model_multi.ckpt is 0.9727360816917034
Final Test Accuracy of models/model_multi_jitter.ckpt is 0.9743932786614004
Best Test Accuracy is 0.9743932786614004 using models/model_multi_jitter.ckpt

Step 3: Test a Model on New Images

Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.


In [14]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
#http://stackoverflow.com/questions/10388462/matplotlib-different-size-subplots
import glob
import skimage.io as sk_io
from matplotlib import gridspec
name = pd.read_csv('signnames.csv')
curr_path = os.getcwd()
model_path = curr_path +'/models/model_multi_jitter.ckpt'
def read_new_images():
    processed_images = []
    for image_name in glob.glob('images/*.jpg'):
        image = sk_io.imread(image_name)
        resize_image = skimage_tf.resize(image,(32,32))
        processed_images.append(resize_image)

    return preprocess_images(np.float32(np.array(processed_images))), np.array(processed_images)
    
new_test_images, original_test_images = read_new_images()
with tf.Session(graph=graph_model_multi2) as sess:
    sess.run(init)
    saver.restore(sess,model_path)
    pred_value, pred_class_value = sess.run([pred,pred_class],feed_dict={features:new_test_images,keep_prob:1})
    #print(pred_class_value)
    top_3_classes = tf.nn.top_k(pred_value,k=3)
    top_3_classes_value = sess.run(top_3_classes)
    #print((top_3_classes_value))


num_new_test_images = pred_value.shape[0]
num_class_labels = pred_value.shape[1]
# Due to memory constraint, displaying only 5 images
num_display = 7
#rand_idx = np.random.choice(num_new_test_images,num_display)
rand_idx = [0,3,1,8,10,14]
sample_prob = pred_value[rand_idx]
sample_class = pred_class_value.indices[rand_idx]
sample_test_images = new_test_images[rand_idx]
sample_orig_images = original_test_images[rand_idx]
sample_top3_class = top_3_classes_value.indices[rand_idx]
index = np.arange(num_class_labels)
for idx in range(num_display-1):
    fig = plt.figure(figsize=(10, 10)) 
    gs = gridspec.GridSpec(1, 2, height_ratios=[1, 5])
    ax0 = plt.subplot(gs[0])
    #ax0.set_title('Predicted ClassId %d : %s'%(sample_class[idx][0], name.SignName[sample_class[idx][0]]))
    #ax0.set_xlabel('Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][1],name.SignName[sample_top3_class[idx][1]]))
    ax0.text(-30,40,'Predicted ClassId %d : %s'%(sample_class[idx][0], name.SignName[sample_class[idx][0]]))
    ax0.text(-30,45,'Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][1],name.SignName[sample_top3_class[idx][1]]))
    ax0.text(-30,50,'Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][2],name.SignName[sample_top3_class[idx][2]]))
    ax0.imshow(sample_orig_images[idx])
    ax0.set_xticks([])
    ax0.set_yticks([])
    ax1 = plt.subplot(gs[1])
    ax1.bar(index, sample_prob[idx])
plt.tight_layout()


Question 6

Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It would be helpful to plot the images in the notebook.

Answer:

Images used for testing has been obtained from Traffic Signs UAH Dataset (http://agamenon.tsc.uah.es/Investigacion/gram/traffic_signs.html). Since these images contains traffic sign as a part of an image, manual cropping has been done to extract the signs.

These images might contains following difficulties while performing classification

1) Change in Illumination 2) Occlusion 3) Shadow 4) Deformation


In [ ]:
### Run the predictions here.
### Feel free to use as many code cells as needed.

Question 7

Is your model able to perform equally well on captured pictures or a live camera stream when compared to testing on the dataset?

Answer:

With limited testing done so far, overall the model was able to perform well on new images. This can be seen with seen image 2 and image 3 in the display above

Image 2 --> Double curve --> Detected with high probability
Image 3 --> Yield --> Detected with high probability even in occluded condition

The problem with model is that, for those images not available in the class label, it still predicts a higher probability.

Interesting observations:
Image 4 --> Although it resembles "keep left" (Keep left has slant white line in the middle) the model predicted as "no entry". This was due to horizontal line in both the cases. But background color of the sign is actually blue for "keep left" and red for "no entry". If color informations can be utilized properly, then we should get much better model.

For images taken at night, the model did not predicted properly as seen in image 1 and 2

Note on Performance Since there is a need to manually label the data to analyse the performance which is hectic, visualization has been used to study the performance.


In [15]:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
norm_test_features = preprocess_images(X_test)
n_test = norm_test_features.shape[0]
batch_size = 128
best_test_accuracy = 0.0
best_model = []
is_correct_list = []
model_name = 'models/model_multi_jitter.ckpt'
sess = tf.Session(graph=graph_model_multi2)
sess.run(init)
saver.restore(sess,model_name)
feed_dict={features:norm_test_features, labels: test_labels, keep_prob: 1}
is_correct = sess.run(tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)), feed_dict=feed_dict)
misclassified_idxs = np.arange(len(norm_test_features))[~is_correct]
sess.close()

In [16]:
model_name = 'models/model_multi_jitter.ckpt'
misclassified_images_ = X_test[misclassified_idxs]
misclassified_labels = test_labels[misclassified_idxs]
misclassified_images = preprocess_images(misclassified_images_)
with tf.Session(graph=graph_model_multi2) as sess:
    sess.run(init)
    saver.restore(sess,model_name)
    pred_value, pred_class_value = sess.run([pred,pred_class],feed_dict={features:misclassified_images,keep_prob:1})
    top_3_classes = tf.nn.top_k(pred_value,k=3)
    top_3_classes_value = sess.run(top_3_classes)
num_display = 6
num_new_test_images = misclassified_images.shape[0]
rand_idx = np.random.choice(num_new_test_images,num_display)
rand_idx = [55, 61, 110, 113, 121]
sample_prob = pred_value[rand_idx]
sample_class = pred_class_value.indices[rand_idx]
actual_class = np.argmax(misclassified_labels,1)[rand_idx]
sample_test_images = misclassified_images[rand_idx]
sample_orig_images = misclassified_images_[rand_idx]
sample_top3_class = top_3_classes_value.indices[rand_idx]
index = np.arange(43)
for idx in range(num_display-1):
    fig = plt.figure(figsize=(10, 10)) 
    gs = gridspec.GridSpec(1, 2, height_ratios=[1, 5])
    ax0 = plt.subplot(gs[0])
    #ax0.set_title('Predicted ClassId %d : %s'%(sample_class[idx][0], name.SignName[sample_class[idx][0]]))
    #ax0.set_xlabel('Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][1],name.SignName[sample_top3_class[idx][1]]))
    ax0.text(-30,35,'Actual ClassId %d : %s'%(actual_class[idx], name.SignName[actual_class[idx]]))
    ax0.text(-30,40,'Predicted ClassId %d : %s'%(sample_class[idx][0], name.SignName[sample_class[idx][0]]))
    ax0.text(-30,45,'Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][1],name.SignName[sample_top3_class[idx][1]]))
    ax0.text(-30,50,'Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][2],name.SignName[sample_top3_class[idx][2]]))
    ax0.imshow(sample_orig_images[idx])
    ax0.set_xticks([])
    ax0.set_yticks([])
    ax1 = plt.subplot(gs[1])
    ax1.bar(index, sample_prob[idx])
plt.tight_layout()


Question 8

Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)

Answer:

Since for all the collected images belonging to trained label, the model was able to get top prediction. So, in order to answer this question, images from test sets which were wrongly classified were taken and top prediction has been displayed. As seen from the sample, even though model failed for some cases, it was still able to make prediction within top three values

Question 9

If necessary, provide documentation for how an interface was built for your model to load and classify newly-acquired images.


In [ ]:
name = pd.read_csv('signnames.csv')
curr_path = os.getcwd()
model_path = curr_path +'/models/model_multi_jitter.ckpt'
def read_new_images():
    processed_images = []
    for image_name in glob.glob('test_images/*.jpg'):
        image = sk_io.imread(image_name)
        resize_image = skimage_tf.resize(image,(32,32))
        processed_images.append(resize_image)

    return preprocess_images(np.float32(np.array(processed_images))), np.array(processed_images)
    
new_test_images, original_test_images = read_new_images()
with tf.Session(graph=graph_model_multi2) as sess:
    sess.run(init)
    saver.restore(sess,model_path)
    pred_value, pred_class_value = sess.run([pred,pred_class],feed_dict={features:new_test_images,keep_prob:1})
    #print(pred_class_value)
    top_3_classes = tf.nn.top_k(pred_value,k=3)
    top_3_classes_value = sess.run(top_3_classes)
    #print((top_3_classes_value))


num_new_test_images = pred_value.shape[0]
num_class_labels = pred_value.shape[1]
# Due to memory constraint, displaying only 5 images
num_display = 7
rand_idx = np.random.choice(num_new_test_images,num_display)
sample_prob = pred_value[rand_idx]
sample_class = pred_class_value.indices[rand_idx]
sample_test_images = new_test_images[rand_idx]
sample_orig_images = original_test_images[rand_idx]
sample_top3_class = top_3_classes_value.indices[rand_idx]
index = np.arange(num_class_labels)
for idx in range(num_display-1):
    fig = plt.figure(figsize=(10, 10)) 
    gs = gridspec.GridSpec(1, 2, height_ratios=[1, 5])
    ax0 = plt.subplot(gs[0])
    #ax0.set_title('Predicted ClassId %d : %s'%(sample_class[idx][0], name.SignName[sample_class[idx][0]]))
    #ax0.set_xlabel('Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][1],name.SignName[sample_top3_class[idx][1]]))
    ax0.text(-30,40,'Predicted ClassId %d : %s'%(sample_class[idx][0], name.SignName[sample_class[idx][0]]))
    ax0.text(-30,45,'Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][1],name.SignName[sample_top3_class[idx][1]]))
    ax0.text(-30,50,'Next Best Prediction ClassId %d : %s'%(sample_top3_class[idx][2],name.SignName[sample_top3_class[idx][2]]))
    ax0.imshow(sample_orig_images[idx])
    ax0.set_xticks([])
    ax0.set_yticks([])
    ax1 = plt.subplot(gs[1])
    ax1.bar(index, sample_prob[idx])
plt.tight_layout()

Answer:

In order to evaluate model, the interface code has been reproduced above. To test the model, copy images in test_images folder and the run the above code. The code is restricted to display 5 images, but if required it can be changed.

Model has been trained and store on the disk. Later the model is restore using model name. Later by using this restored model, predictions has been performed.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Conclusion

These are the conclusion derived while working with this traffic sign classification problem

  1. Y channel performed better than color channel.
  2. Additional image seems to improve the performance but in smaller range of [1,2]%
  3. Addition of more layers does not improve the performance. It was seen when two layer model outperformed four layer convolutional model. May be for smaller image problem like this, smaller network architecture would suffice
  4. Effectiveness of Convolution Layers can be easily observed. It can reach more 90% with few iterations

Future Work

  1. Need to further analyse image preprocessing. This seems to be the real different between Lecun's paper and this implementation. Especially mean subtraction might play and major role.
  2. Color image can be carefully analysed, to see if improvements can be acheived. HSV can be a good starting point.
  3. Training ran the complete iterations rather than stopping early if the accuracy decreased to avoid overfitting. This should be handled and also decaying learning rate has to be implemented

References

http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
https://www.tensorflow.org/versions/r0.11/api_docs/index.html
http://stackoverflow.com/questions/33759623/tensorflow-how-to-restore-a-previously-saved-model-python
http://stackoverflow.com/questions/34727431/tensorflow-on-jupyter-cant-restore-variables
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/mnist
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/models/image/cifar10


In [ ]: