Deep Learning

Assignment 1

The objective of this assignment is to introduce simple data curation practices and to familiarize you with some of the data we'll be reusing later.

This notebook uses the notMNIST dataset for the Python experiments that follow. The dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.


In [15]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import cPickle as pickle

First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.


In [5]:
url = 'http://yaroslavvb.com/upload/notMNIST/'

def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
  if not os.path.exists(filename):
    filename, _ = urllib.urlretrieve(url + filename, filename)
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print 'Found and verified', filename
  else:
    raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)


Found and verified notMNIST_large.tar.gz
Found and verified notMNIST_small.tar.gz

Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.


In [3]:
num_classes = 10

def extract(filename):
  tar = tarfile.open(filename)
  tar.extractall()
  tar.close()
  root = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
  data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
  if len(data_folders) != num_classes:
    raise Exception(
      'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
  print data_folders
  return data_folders
  
train_folders = extract(train_filename)
test_folders = extract(test_filename)


['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J']
['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J']

Problem 1

Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.



In [59]:
!nvidia-smi


Sat Jan 30 11:52:51 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 352.63     Driver Version: 352.63         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K620         Off  | 0000:03:00.0      On |                  N/A |
| 34%   36C    P8     1W /  30W |   1882MiB /  2047MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1317    G   /usr/bin/X                                     211MiB |
|    0      6688    G   compiz                                          96MiB |
|    0     12035    C   /usr/bin/python                               1565MiB |
+-----------------------------------------------------------------------------+

In [10]:
path = "notMNIST_large/A"
list_of_images = os.listdir(path)
print list_of_images[0:10]


['U2FsdGVyIE1lZGl1bS50dGY=.png', 'U2hvdGd1bkJsYW5rc0JULVJlZ3VsYXIub3Rm.png', 'Q2xpZmZvcmRTaXgtUm9tYW5MRi5vdGY=.png', 'QXNpYSBFeHRlbmRlZCBCb2xkLnR0Zg==.png', 'SmVzc2ljYS1NZWRpdW0ub3Rm.png', 'Qm9vayBKYWNrZXQudHRm.png', 'TWFpYW5kcmEgRGVtaSBCb2xkLnBmYg==.png', 'U3RpY2tlZS1JdGFsaWMudHRm.png', 'UG9wcGxQb250aWZleEJRLUJvbGQub3Rm.png', 'MTIgQ29uY29yZGUgQm9sZCAwNzMxOS50dGY=.png']

In [13]:
for imageName in list_of_images[0:10]:
    display(Image(filename="{0}/{1}".format(path, imageName)))


Now let's load the data in a more manageable format.

We'll convert the entire dataset into a 3D array (image index, x, y) of floating-point values, normalized to have approximately zero mean and a standard deviation of ~0.5, to make training easier down the road. The labels will be stored in a separate array of integers 0 through 9.

A few images might not be readable; we'll just skip them.


In [6]:
image_size = 28  # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.

def load(data_folders, min_num_images, max_num_images):
  dataset = np.ndarray(
    shape=(max_num_images, image_size, image_size), dtype=np.float32)
  labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
  label_index = 0
  image_index = 0
  for folder in data_folders:
    print folder
    for image in os.listdir(folder):
      if image_index >= max_num_images:
        raise Exception('More images than expected: %d >= %d' % (
          image_index, max_num_images))
      image_file = os.path.join(folder, image)
      try:
        image_data = (ndimage.imread(image_file).astype(float) -
                      pixel_depth / 2) / pixel_depth
        if image_data.shape != (image_size, image_size):
          raise Exception('Unexpected image shape: %s' % str(image_data.shape))
        dataset[image_index, :, :] = image_data
        labels[image_index] = label_index
        image_index += 1
      except IOError as e:
        print 'Could not read:', image_file, ':', e, '- it\'s ok, skipping.'
    label_index += 1
  num_images = image_index
  dataset = dataset[0:num_images, :, :]
  labels = labels[0:num_images]
  if num_images < min_num_images:
    raise Exception('Many fewer images than expected: %d < %d' % (
        num_images, min_num_images))
  print 'Full dataset tensor:', dataset.shape
  print 'Mean:', np.mean(dataset)
  print 'Standard deviation:', np.std(dataset)
  print 'Labels:', labels.shape
  return dataset, labels
train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)


notMNIST_large/A
Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping.
Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping.
Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping.
notMNIST_large/B
Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping.
notMNIST_large/C
notMNIST_large/D
Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping.
notMNIST_large/E
notMNIST_large/F
notMNIST_large/G
notMNIST_large/H
notMNIST_large/I
notMNIST_large/J
Full dataset tensor: (529114, 28, 28)
Mean: -0.0816596
Standard deviation: 0.454233
Labels: (529114,)
notMNIST_small/A
Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping.
notMNIST_small/B
notMNIST_small/C
notMNIST_small/D
notMNIST_small/E
notMNIST_small/F
Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping.
notMNIST_small/G
notMNIST_small/H
notMNIST_small/I
notMNIST_small/J
Full dataset tensor: (18724, 28, 28)
Mean: -0.0746363
Standard deviation: 0.458622
Labels: (18724,)

Problem 2

Let's verify that the data still looks good. Display a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.



In [22]:
import random

%matplotlib inline

def showProcessedRandom(dataset, labels, n):  # n is the number of random samples to display
    indices = random.sample(range(0, labels.shape[0]), n)
    fig = plt.figure()
    for i in range(n):
        a = fig.add_subplot(1, n, i + 1)
        plt.imshow(dataset[indices[i], :, :])
        a.set_title(chr(labels[indices[i]] + ord('A')))
        a.axes.get_xaxis().set_visible(False)
        a.axes.get_yaxis().set_visible(False)
    plt.show()

showProcessedRandom(train_dataset, train_labels, 5)
showProcessedRandom(test_dataset, test_labels, 5)


Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.


In [23]:
np.random.seed(133)
def randomize(dataset, labels):
  permutation = np.random.permutation(labels.shape[0])
  shuffled_dataset = dataset[permutation,:,:]
  shuffled_labels = labels[permutation]
  return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)

Problem 3

Convince yourself that the data is still good after shuffling!



In [36]:
import pandas as pd 
train_labels_series = pd.Series(train_labels)
train_labels_series.value_counts()


Out[36]:
8    52912
7    52912
6    52912
5    52912
4    52912
2    52912
9    52911
3    52911
1    52911
0    52909
dtype: int64

In [30]:
train_labels_series.value_counts().plot(kind='bar')


Out[30]:
<matplotlib.axes.AxesSubplot at 0x7f91543f4210>
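
Counting labels shows the class balance survived the shuffle. To check that the images still match their labels, one option is to reuse the showProcessedRandom helper from Problem 2 on the shuffled arrays:

In [ ]:
showProcessedRandom(train_dataset, train_labels, 5)
showProcessedRandom(test_dataset, test_labels, 5)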

Problem 4

Another check: we expect the data to be balanced across classes. Verify that.


  • Count the number of labels for each dataset
  • Plot the histogram

In [27]:
from collections import Counter
Counter(train_labels), Counter(test_labels)


Out[27]:
(Counter({0: 52909,
          1: 52911,
          2: 52912,
          3: 52911,
          4: 52912,
          5: 52912,
          6: 52912,
          7: 52912,
          8: 52912,
          9: 52911}),
 Counter({0: 1872,
          1: 1873,
          2: 1873,
          3: 1873,
          4: 1873,
          5: 1872,
          6: 1872,
          7: 1872,
          8: 1872,
          9: 1872}))

Prune the training data as needed: depending on your computer setup, you might not be able to fit it all in memory, so tune train_size accordingly.

Also create a validation dataset for hyperparameter tuning.


In [37]:
train_size = 200000
valid_size = 10000

valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print 'Training', train_dataset.shape, train_labels.shape
print 'Validation', valid_dataset.shape, valid_labels.shape


Training (200000, 28, 28) (200000,)
Validation (10000, 28, 28) (10000,)

Finally, let's save the data for later reuse:


In [38]:
pickle_file = 'notMNIST.pickle'

try:
  f = open(pickle_file, 'wb')
  save = {
    'train_dataset': train_dataset,
    'train_labels': train_labels,
    'valid_dataset': valid_dataset,
    'valid_labels': valid_labels,
    'test_dataset': test_dataset,
    'test_labels': test_labels,
    }
  pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
  f.close()
except Exception as e:
  print 'Unable to save data to', pickle_file, ':', e
  raise

In [39]:
statinfo = os.stat(pickle_file)
print 'Compressed pickle size:', statinfo.st_size


Compressed pickle size: 718193801

Problem 5

By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never any overlap, but it is actually ok if you expect to see training samples recur when you use it. Measure how much overlap there is between training, validation and test samples. Optional questions:

  • What about near duplicates between datasets? (images that are almost identical)
  • Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.


In [ ]:

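As a starting point, here is a minimal sketch that counts exact duplicates by hashing the raw pixel bytes of each image, assuming the train_dataset, valid_dataset and test_dataset arrays from the cells above are still in memory (the helper name image_hashes is just for illustration). Near-duplicates would need a fuzzier comparison, e.g. a thresholded pixel-wise distance, and are not handled here.

In [ ]:
import hashlib

def image_hashes(dataset):
  # Hash the raw bytes of each 28x28 float32 image; identical images hash identically.
  return set(hashlib.sha1(image.tobytes()).hexdigest() for image in dataset)

train_hashes = image_hashes(train_dataset)
valid_hashes = image_hashes(valid_dataset)
test_hashes = image_hashes(test_dataset)

print 'Train/validation overlap:', len(train_hashes & valid_hashes)
print 'Train/test overlap:', len(train_hashes & test_hashes)
print 'Validation/test overlap:', len(valid_hashes & test_hashes)

# A sanitized test set keeps only images whose hash is absent from the
# training set (the same idea applies to the validation set).
test_mask = np.array([hashlib.sha1(image.tobytes()).hexdigest() not in train_hashes
                      for image in test_dataset])
sanitized_test_dataset = test_dataset[test_mask]
sanitized_test_labels = test_labels[test_mask]
print 'Sanitized test set:', sanitized_test_dataset.shape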

Problem 6

Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.

Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.

Optional question: train an off-the-shelf model on all the data!



In [52]:
num_samples = 5000
n_classes = 10

(samples, width, height) = train_dataset.shape
X = np.reshape(train_dataset,(samples, width*height))[0:num_samples]
y = train_labels[0:num_samples]
print X
print y


[[-0.5        -0.5        -0.5        ..., -0.5        -0.5        -0.5       ]
 [-0.5        -0.5        -0.5        ..., -0.49215686 -0.5        -0.5       ]
 [-0.02156863  0.5         0.48431373 ..., -0.5        -0.5        -0.5       ]
 ..., 
 [ 0.00588235  0.11176471  0.17843138 ...,  0.5         0.28823531
  -0.00196078]
 [ 0.5         0.5         0.5        ..., -0.49607843 -0.5        -0.5       ]
 [-0.5        -0.5        -0.5        ..., -0.5        -0.5        -0.49607843]]
[1 9 5 ..., 8 1 2]

In [53]:
# This gives a nice image of a letter
example = X.reshape(num_samples, width, height)[2]
plt.imshow(example)


Out[53]:
<matplotlib.image.AxesImage at 0x7f915958f550>

In [55]:
lr = LogisticRegression()
lr.fit(X, y)

(samples, width, height) = test_dataset.shape
X_test = np.reshape(test_dataset, (samples, width*height))
y_test = test_labels
lr.score(X_test, y_test)


Out[55]:
0.84960478530228578

In [58]:
print X_test, y_test


[[-0.0254902   0.37843138  0.31568629 ..., -0.5        -0.5        -0.5       ]
 [-0.5        -0.5        -0.5        ..., -0.5        -0.5        -0.5       ]
 [-0.5        -0.5        -0.5        ..., -0.5        -0.5        -0.5       ]
 ..., 
 [-0.38627452  0.21372549  0.46862745 ...,  0.46862745  0.18235295
  -0.4137255 ]
 [-0.5        -0.5        -0.5        ..., -0.49607843 -0.5        -0.5       ]
 [-0.5        -0.5        -0.5        ..., -0.5        -0.5        -0.5       ]] [3 9 6 ..., 1 9 8]

In [57]:
# This gives a totally random-looking image, but I would expect it to
# look like a fuzzy kind of letter.
filter_a = lr.coef_.reshape(n_classes, width, height)[2]
plt.imshow(filter_a)


Out[57]:
<matplotlib.image.AxesImage at 0x7f915940c250>
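
For the sample-size sweep asked for in Problem 6, here is a rough sketch along the lines of the cells above. It assumes train_dataset, train_labels, X_test and y_test from earlier cells are still in memory; X_all and y_all are just illustrative names, and the exact accuracies will depend on the shuffle. The optional full-dataset run uses SGDClassifier with a logistic loss as a cheaper stand-in for LogisticRegression (keyword names match 2016-era scikit-learn; newer releases use max_iter and loss='log_loss').

In [ ]:
from sklearn.linear_model import SGDClassifier

(samples, width, height) = train_dataset.shape
X_all = np.reshape(train_dataset, (samples, width * height))
y_all = train_labels

# Accuracy as a function of training-set size.
for n in [50, 100, 1000, 5000]:
  clf = LogisticRegression()
  clf.fit(X_all[:n], y_all[:n])
  print '%5d training samples: test accuracy %.3f' % (n, clf.score(X_test, y_test))

# Optional: train on all the data. Plain LogisticRegression is slow at this
# scale, so use stochastic gradient descent with a logistic loss instead.
sgd = SGDClassifier(loss='log', n_iter=5)
sgd.fit(X_all, y_all)
print 'All %d training samples: test accuracy %.3f' % (samples, sgd.score(X_test, y_test))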