Deep Learning

Assignment 1

The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.

This notebook uses the notMNIST dataset for the Python experiments that follow. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.


In [1]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle

First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.


In [2]:
url = 'http://yaroslavvb.com/upload/notMNIST/'

def maybe_download(filename, expected_bytes, force=False):
  """Download a file if not present, and make sure it's the right size."""
  if force or not os.path.exists(filename):
    filename, _ = urlretrieve(url + filename, filename)
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print('Found and verified', filename)
  else:
    raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)


Found and verified notMNIST_large.tar.gz
Found and verified notMNIST_small.tar.gz

Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.


In [3]:
num_classes = 10
np.random.seed(133)

def maybe_extract(filename, force=False):
  root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
  if os.path.isdir(root) and not force:
    # You may override by setting force=True.
    print('%s already present - Skipping extraction of %s.' % (root, filename))
  else:
    print('Extracting data for %s. This may take a while. Please wait.' % root)
    tar = tarfile.open(filename)
    sys.stdout.flush()
    tar.extractall()
    tar.close()
  data_folders = [
    os.path.join(root, d) for d in sorted(os.listdir(root))
    if os.path.isdir(os.path.join(root, d))]
  if len(data_folders) != num_classes:
    raise Exception(
      'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
  print(data_folders)
  return data_folders
  
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)


notMNIST_large already present - Skipping extraction of notMNIST_large.tar.gz.
['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J']
notMNIST_small already present - Skipping extraction of notMNIST_small.tar.gz.
['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J']

Problem 1

Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
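
A minimal sketch of one way to do this, assuming the extracted class folders from maybe_extract contain the raw PNG files; show_sample_images and the choice of two images per class are just illustrative:

def show_sample_images(folders, images_per_class=2):
    """Display a couple of raw PNGs from each class folder."""
    for folder in folders:
        image_files = os.listdir(folder)[:images_per_class]
        for image_file in image_files:
            display(Image(filename=os.path.join(folder, image_file)))

show_sample_images(train_folders)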


Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk, and curate them independently. Later we'll merge them into a single dataset of manageable size.

We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.

A few images might not be readable; we'll just skip them.


In [4]:
image_size = 28  # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.

def load_letter(folder, min_num_images):
  """Load the data for a single letter label."""
  image_files = os.listdir(folder)
  dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                         dtype=np.float32)
  image_index = 0
  print(folder)
  for image in os.listdir(folder):
    image_file = os.path.join(folder, image)
    try:
      image_data = (ndimage.imread(image_file).astype(float) - 
                    pixel_depth / 2) / pixel_depth
      if image_data.shape != (image_size, image_size):
        raise Exception('Unexpected image shape: %s' % str(image_data.shape))
      dataset[image_index, :, :] = image_data
      image_index += 1
    except IOError as e:
      print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
    
  num_images = image_index
  dataset = dataset[0:num_images, :, :]
  if num_images < min_num_images:
    raise Exception('Many fewer images than expected: %d < %d' %
                    (num_images, min_num_images))
    
  print('Full dataset tensor:', dataset.shape)
  print('Mean:', np.mean(dataset))
  print('Standard deviation:', np.std(dataset))
  return dataset
        
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
  dataset_names = []
  for folder in data_folders:
    set_filename = folder + '.pickle'
    dataset_names.append(set_filename)
    if os.path.exists(set_filename) and not force:
      # You may override by setting force=True.
      print('%s already present - Skipping pickling.' % set_filename)
    else:
      print('Pickling %s.' % set_filename)
      dataset = load_letter(folder, min_num_images_per_class)
      try:
        with open(set_filename, 'wb') as f:
          pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
      except Exception as e:
        print('Unable to save data to', set_filename, ':', e)
  
  return dataset_names

train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)


notMNIST_large/A.pickle already present - Skipping pickling.
notMNIST_large/B.pickle already present - Skipping pickling.
notMNIST_large/C.pickle already present - Skipping pickling.
notMNIST_large/D.pickle already present - Skipping pickling.
notMNIST_large/E.pickle already present - Skipping pickling.
notMNIST_large/F.pickle already present - Skipping pickling.
notMNIST_large/G.pickle already present - Skipping pickling.
notMNIST_large/H.pickle already present - Skipping pickling.
notMNIST_large/I.pickle already present - Skipping pickling.
notMNIST_large/J.pickle already present - Skipping pickling.
notMNIST_small/A.pickle already present - Skipping pickling.
notMNIST_small/B.pickle already present - Skipping pickling.
notMNIST_small/C.pickle already present - Skipping pickling.
notMNIST_small/D.pickle already present - Skipping pickling.
notMNIST_small/E.pickle already present - Skipping pickling.
notMNIST_small/F.pickle already present - Skipping pickling.
notMNIST_small/G.pickle already present - Skipping pickling.
notMNIST_small/H.pickle already present - Skipping pickling.
notMNIST_small/I.pickle already present - Skipping pickling.
notMNIST_small/J.pickle already present - Skipping pickling.

Problem 2

Let's verify that the data still looks good. Display a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.



In [5]:
import matplotlib.cm as cm

import string
import random

#http://stackoverflow.com/questions/25812255/row-and-column-headers-in-matplotlibs-subplots

def pick_random_samples():
    """
    Read each of the pickled datasets and select 10 random images
    per label. Returns a dictionary keyed by label:
    { "A": [image1,image2...image10], "B": [image1,image2...image10], ... "J": [image1,image2...image10] }
    """
    label_list = string.ascii_uppercase[:10]
    label_image_samples = {}
    for l in label_list:
        with open("notMNIST_large/{}.pickle".format(l), "rb") as f:
            data = pickle.load(f)
        label_image_samples[l] = []
        # random.randrange avoids the off-by-one of randint(1, data.shape[0]).
        for i in [random.randrange(data.shape[0]) for _ in range(10)]:
            # Undo the normalization so pixels are back in the 0-255 range.
            image = data[i] * 255 + 255 / 2.0
            label_image_samples[l].append(image)
    return label_image_samples
        
def plot_samples(sample_dict):
    """
    Display a sample of the labels and images from the ndarray.
    """
    fig, axes = plt.subplots(nrows=10, ncols=10, figsize=(10, 10))

    row = 0
    label_list = []
    for label_name, images in sample_dict.items():
        col = 0
        label_list.append(label_name)
        for image in images:
            axes[row, col].imshow(image, cmap=cm.Greys_r)
            axes[row, col].xaxis.set_ticklabels([])
            axes[row, col].yaxis.set_ticklabels([])
            col += 1
        row += 1
    # Label each row of subplots with its class.
    for ax, label_name in zip(axes[:, 0], label_list):
        ax.set_ylabel(label_name, rotation=0, size='xx-large')
    fig.tight_layout()
    plt.show()
        
sample_dict = pick_random_samples()
plot_samples(sample_dict)



Problem 3

Another check: we expect the data to be balanced across classes. Verify that.



In [6]:
def balanceCheck(dataset):
    """
    Check the number of images under each class or label
    and plot a bar chart to see if the dataset is balanced.
    """
    size_list = []
    for f in dataset:
        with open(f, "rb") as pf:
            data = pickle.load(pf)
        print("Size of data for {} is {}".format(f, len(data)))
        size_list.append(len(data))

    fig, axes = plt.subplots(nrows=1, ncols=1)
    axes.bar(range(10), size_list)
    labels = string.ascii_uppercase[:10]
    axes.set_xticks([x + 0.5 for x in range(10)])
    axes.set_xticklabels(labels)
    axes.set_ylabel('Image Count')
    axes.set_xlabel('Image Classes')
    axes.set_title('Class balance of the dataset')
    plt.show()
                      
balanceCheck(train_datasets)


Size of data for notMNIST_large/A.pickle is 52909
Size of data for notMNIST_large/B.pickle is 52911
Size of data for notMNIST_large/C.pickle is 52912
Size of data for notMNIST_large/D.pickle is 52911
Size of data for notMNIST_large/E.pickle is 52912
Size of data for notMNIST_large/F.pickle is 52912
Size of data for notMNIST_large/G.pickle is 52912
Size of data for notMNIST_large/H.pickle is 52912
Size of data for notMNIST_large/I.pickle is 52912
Size of data for notMNIST_large/J.pickle is 52911

Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored in a separate array of integers 0 through 9.

Also create a validation dataset for hyperparameter tuning.


In [7]:
def make_arrays(nb_rows, img_size):
  if nb_rows:
    dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
    labels = np.ndarray(nb_rows, dtype=np.int32)
  else:
    dataset, labels = None, None
  return dataset, labels

def merge_datasets(pickle_files, train_size, valid_size=0):
  num_classes = len(pickle_files)
  valid_dataset, valid_labels = make_arrays(valid_size, image_size)
  train_dataset, train_labels = make_arrays(train_size, image_size)
  vsize_per_class = valid_size // num_classes
  tsize_per_class = train_size // num_classes
    
  start_v, start_t = 0, 0
  end_v, end_t = vsize_per_class, tsize_per_class
  end_l = vsize_per_class+tsize_per_class
  for label, pickle_file in enumerate(pickle_files):    
    try:
      with open(pickle_file, 'rb') as f:
        letter_set = pickle.load(f)
        # let's shuffle the letters to have random validation and training set
        np.random.shuffle(letter_set)
        if valid_dataset is not None:
          valid_letter = letter_set[:vsize_per_class, :, :]
          valid_dataset[start_v:end_v, :, :] = valid_letter
          valid_labels[start_v:end_v] = label
          start_v += vsize_per_class
          end_v += vsize_per_class
                    
        train_letter = letter_set[vsize_per_class:end_l, :, :]
        train_dataset[start_t:end_t, :, :] = train_letter
        train_labels[start_t:end_t] = label
        start_t += tsize_per_class
        end_t += tsize_per_class
    except Exception as e:
      print('Unable to process data from', pickle_file, ':', e)
      raise
    
  return valid_dataset, valid_labels, train_dataset, train_labels
            
            
train_size = 200000
valid_size = 10000
test_size = 10000

valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
  train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)

print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)


Training: (200000, 28, 28) (200000,)
Validation: (10000, 28, 28) (10000,)
Testing: (10000, 28, 28) (10000,)

Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.


In [8]:
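# A quick sanity check: np.random.permutation(10) returns the integers 0-9 in
# random order, the same mechanism randomize() below applies to the full datasets.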
permutation = np.random.permutation(10)

print(permutation)


[1 7 5 4 3 6 9 0 8 2]

In [9]:
def randomize(dataset, labels):
  permutation = np.random.permutation(labels.shape[0])
  shuffled_dataset = dataset[permutation,:,:]
  shuffled_labels = labels[permutation]
  return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)

Problem 4

Convince yourself that the data is still good after shuffling!



In [10]:
def random_check(dataset, dataset_labels):
    """
    Collect up to 10 samples per label from the shuffled data so we can
    eyeball that the images and labels still line up after randomization.
    """
    random_sample_dict = {}
    sample_count = 0
    for dat, lab in zip(dataset, dataset_labels):
        if lab in random_sample_dict:
            # Keep at most 10 samples per label.
            if len(random_sample_dict[lab]) < 10:
                random_sample_dict[lab].append(dat)
                sample_count += 1
        else:
            random_sample_dict[lab] = [dat]
            sample_count += 1
        # Stop once all 10 labels have 10 samples each.
        if sample_count == 100:
            break
    return random_sample_dict
    
plot_samples(random_check(train_dataset, train_labels))


Finally, let's save the data for later reuse:


In [11]:
pickle_file = 'notMNIST.pickle'

try:
  f = open(pickle_file, 'wb')
  save = {
    'train_dataset': train_dataset,
    'train_labels': train_labels,
    'valid_dataset': valid_dataset,
    'valid_labels': valid_labels,
    'test_dataset': test_dataset,
    'test_labels': test_labels,
    }
  pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
  f.close()
except Exception as e:
  print('Unable to save data to', pickle_file, ':', e)
  raise

In [12]:
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)


Compressed pickle size: 690800441

Problem 5

By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it. Measure how much overlap there is between training, validation and test samples.

Optional questions:

  • What about near duplicates between datasets? (images that are almost identical)
  • Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.


In [13]:
import hashlib
import time

def comp_l1(im1, im2):
    """
    L1 distance between two images.
    """
    distance = np.sum(np.abs(im1 - im2))
    return distance
    
def comp_l2(im1,im2):
    """
    Using L2 distance
    """
    distance = np.sqrt(np.sum(np.square(im1 - im2)))
    return distance

def unpickle_file():
    """
    Unpickle notMNIST.pickle and return the dictionary of datasets.
    """
    with open(pickle_file, 'rb') as f:
        all_data = pickle.load(f)
    return all_data
        
def check_dups():
    """
    Checking overlapping images between training, testing 
    and validation data using L1 and L2 distances
    
    This is a time-intensive method, but it can also be used to find
    close matches (by comparing the distance against a small threshold
    instead of exactly zero).
    """
    count = 0    
    all_data = unpickle_file()
    start_time = time.time()
    for im_1 in all_data['train_dataset'][:10000]:
        for im_2 in all_data['test_dataset'][:10000]:
            if comp_l1(im_1,im_2) == 0.0:
                count += 1
    print("Dup count : {} in {} secs".format(count,time.time() - start_time ))
    #return count
    
def check_dups_using_hash():
    """
    Checking overlapping images between training, testing 
    and validation data using hash
    References:
        - https://discussions.udacity.com/t/assignment-1-problem-5/45657/10?u=mkumar2301
        - https://docs.python.org/2/library/hashlib.html
    """
    
    all_data = unpickle_file()
    all_data['train_dataset'].flags.writeable=False
    all_data['test_dataset'].flags.writeable=False
    all_data['valid_dataset'].flags.writeable=False
    
    start_time = time.time()
    hash_training = set([hash(ima1.data) for ima1 in all_data['train_dataset']])
    hash_testing = set([hash(ima2.data) for ima2 in all_data['test_dataset']])
    hash_validation = set([hash(ima2.data) for ima2 in all_data['valid_dataset']])
    
    print("Training and Testing overlap count : {} in {} secs".format(len(set.intersection(hash_training,hash_testing)),time.time() - start_time ))
    print("Training and Validation overlap count : {} in {} secs".format(len(set.intersection(hash_training,hash_validation)),time.time() - start_time ))
    print("Validation and Testing overlap count count : {} in {} secs".format(len(set.intersection(hash_testing,hash_validation)),time.time() - start_time ))
    
    
                
check_dups_using_hash()            
#check_dups()


Training and Testing overlap count : 1153 in 0.816904067993 secs
Training and Validation overlap count : 953 in 0.817692041397 secs
Validation and Testing overlap count : 55 in 0.817934989929 secs
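
As a rough sketch for the optional sanitization question, the same buffer-hashing trick can be reused to drop exact duplicates of training images from the validation and test sets. sanitize is a hypothetical helper, and it only removes exact matches; near-duplicates would need a distance threshold, e.g. with comp_l1/comp_l2 above.

def sanitize(dataset, labels, forbidden_hashes):
    """Return copies of dataset/labels with images whose hash appears
    in forbidden_hashes (e.g. the training-set hashes) removed."""
    dataset.flags.writeable = False
    keep = np.array([hash(img.data) not in forbidden_hashes for img in dataset])
    return dataset[keep], labels[keep]

# Example usage (recomputing the pieces from check_dups_using_hash):
# all_data = unpickle_file()
# all_data['train_dataset'].flags.writeable = False
# hash_training = set(hash(img.data) for img in all_data['train_dataset'])
# valid_clean, valid_labels_clean = sanitize(
#     all_data['valid_dataset'], all_data['valid_labels'], hash_training)
# test_clean, test_labels_clean = sanitize(
#     all_data['test_dataset'], all_data['test_labels'], hash_training)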

Problem 6

Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.

Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.

Optional question: train an off-the-shelf model on all the data!
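
A minimal sketch of such a baseline, assuming the flattened 28x28 images are fed straight to sklearn's LogisticRegression with default settings; train_and_eval is just an illustrative helper, and the sample sizes follow the problem statement:

def train_and_eval(num_samples):
    """Fit a logistic regression on the first num_samples training images
    and report its accuracy on the full test set."""
    n_features = image_size * image_size
    X_train = train_dataset[:num_samples].reshape(num_samples, n_features)
    y_train = train_labels[:num_samples]
    X_test = test_dataset.reshape(test_dataset.shape[0], n_features)
    clf = LogisticRegression()
    clf.fit(X_train, y_train)
    return clf.score(X_test, test_labels)

for n in [50, 100, 1000, 5000]:
    print('Test accuracy with %d training samples: %.3f' % (n, train_and_eval(n)))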