The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset for the Python experiments that follow. The dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
In [3]:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500,000 labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
In [4]:
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
In [5]:
print(type(train_filename))
Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.
In [6]:
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
In [7]:
#Problem 1, Tong's solution -> done
def display_image(folder_index = 0, image_index = 0):
try:
sample_folder = train_folders[folder_index]
image_files = os.listdir(sample_folder)
sample_image = os.path.join(sample_folder, image_files[image_index])
print('Displaying image: ', sample_image)
display(Image(filename = sample_image ))
except IndexError:
print('Index out of range.')
display_image(1, 5)
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk, and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable; we'll just skip them.
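As a quick sanity check on that normalization, the transform (pixel - pixel_depth/2) / pixel_depth maps raw values in [0, 255] to roughly [-0.5, 0.5]. A minimal illustration (the array below is just made-up sample values):
In [ ]:
# Sanity check of the normalization used in the loader below:
# (value - 255/2) / 255 maps [0, 255] onto [-0.5, 0.5].
raw = np.array([0.0, 127.5, 255.0])
print((raw - 255.0 / 2) / 255.0)  # expected: [-0.5  0.   0.5]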
In [8]:
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
In [9]:
#Problem 2, Tong's solution -> done
def display_nd_image(folder_index = 0, image_index = 0):
try:
folder = train_datasets[folder_index]
print("Display image in folder: ", folder)
with open(folder, 'rb') as f:
sample_dataset = pickle.load(f)
img = sample_dataset[image_index, :, :]
plt.imshow(img, cmap = "Greys")
plt.show()
except Exception as e:
print('Could not display image:', e)
display_nd_image(1, 5)
In [10]:
#Problem 3, Tong's solution -> done
print(train_datasets)
sizes = []
for dataset in train_datasets:
with open(dataset, 'rb') as f:
data = pickle.load(f)
sizes.append(data.shape[0])
print("The samples sizes for each class are: ")
print(sizes)
print("Average: ", np.average(sizes))
print("Stdev: ", np.std(sizes))
print("Sum: ", np.sum(sizes))
#Very balanced
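Optionally, the per-class counts above can be visualized as a quick bar chart; a minimal sketch that reuses the sizes list from the previous cell (letters is just a label list introduced here):
In [ ]:
# Optional: visualize the class balance computed above.
letters = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
plt.bar(np.arange(len(sizes)), sizes)
plt.xticks(np.arange(len(sizes)), letters)
plt.xlabel('Class')
plt.ylabel('Number of samples')
plt.show()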
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored in a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
In [11]:
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
In [12]:
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
In [13]:
#Problem 4, Tong's solution -> done
#Print some random images and labels from each set, see if they match
def check_data(dataset, labels, index=0):
labelset = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
img = dataset[index, :, :]
label = labelset[labels[index]]
print("Image:")
plt.imshow(img, cmap = "Greys")
plt.show()
print('Label: ', label)
check_data(train_dataset, train_labels, index = 1001)
check_data(valid_dataset, valid_labels, index = 11)
check_data(test_dataset, test_labels, index = 9)
#LGTM
In [22]:
print(train_labels[1:100])
Finally, let's save the data for later reuse:
In [14]:
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
In [15]:
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
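For later reuse, here is a minimal sketch of how the saved file can be loaded back (assuming the dump above succeeded; reloaded is just an illustrative name):
In [ ]:
# Reload the pickled datasets in a later session.
with open(pickle_file, 'rb') as f:
    reloaded = pickle.load(f)
print(reloaded['train_dataset'].shape, reloaded['train_labels'].shape)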
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never any overlap, but it is actually ok if you expect to see training samples recur when you use it. Measure how much overlap there is between training, validation and test samples.
Optional questions:
In [17]:
#Problem 5, Tong's solution -> done
#Why is there overlap??!
train_indices = train_dataset[0]
print(train_dataset.shape[0])
print(train_dataset.item(100, 27, 6))
#Brute-force checking how many rows are identical between train and valid
def overlap_rate(a_dataset, b_dataset, sample_size = 1000):
identical_count = 0
test_size = min(a_dataset.shape[0], sample_size)
for i in range(test_size):
a_record = a_dataset[i, :, :]
for j in range(b_dataset.shape[0]):
b_record = b_dataset[j, :, :]
if np.array_equal(a_record, b_record):
identical_count += 1
print('Sample size:', str(test_size))
print('Fraction of a_dataset that also appears in b_dataset:', str(identical_count*1.0/test_size))
overlap_rate(train_dataset, valid_dataset) #39%, surprisingly high!
overlap_rate(train_dataset, test_dataset) #58%, even higher
"""
Optioanl questions:
-consider using np.allclose for near duplicates
-sanitized validation and test set: leave for later.."""
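The brute-force comparison above is O(n·m) and only samples 1000 rows. A faster exact-duplicate count can hash every image and intersect the hash sets; a minimal sketch (image_hashes is a helper introduced here, and it only catches byte-identical duplicates, not the near-duplicates np.allclose would find):
In [ ]:
# Count exact duplicates by hashing the raw bytes of each image.
import hashlib
def image_hashes(dataset):
    return set(hashlib.sha1(dataset[i].tobytes()).hexdigest()
               for i in range(dataset.shape[0]))
train_hashes = image_hashes(train_dataset)
print('valid images also in train:', len(image_hashes(valid_dataset) & train_hashes))
print('test images also in train:', len(image_hashes(test_dataset) & train_hashes))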
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
In [ ]:
%%capture
#Learn reshape
#l = np.ndarray(range(27), shape=(3, 3, 3))
a = np.arange(27).reshape((3, 3, 3))
b = a.reshape(3, 9)
print(a)
print(b)
In [ ]:
#Problem 6, Tong's solution Version 1: no tuning of hyperparameters
#Take subset of training data, reshape for regression
train_size = 1000
train = train_dataset[:train_size, :, :]
test = test_dataset.reshape(test_dataset.shape[0], image_size * image_size)
X = train.reshape(train_size, image_size * image_size)
Y = train_labels[:train_size]
#Build regression graph
logreg = LogisticRegression(C=1.0)
#Fit the model
logreg.fit(X, Y)
#Test predictions on test set
Z = logreg.predict(test)
#Evaluate
np.mean(Z == test_labels) #Accuracy 85%
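The problem statement asks for 50, 100, 1000 and 5000 training samples; a minimal sketch that repeats the same fit for each size (it reuses test and test_labels from the cell above; X_n, Y_n and clf are just illustrative names):
In [ ]:
# Repeat the logistic-regression fit for several training-set sizes.
for n in [50, 100, 1000, 5000]:
    X_n = train_dataset[:n].reshape(n, image_size * image_size)
    Y_n = train_labels[:n]
    clf = LogisticRegression(C=1.0)
    clf.fit(X_n, Y_n)
    print('train size %d: test accuracy %.3f' % (n, np.mean(clf.predict(test) == test_labels)))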
In [ ]:
#V2: tune hyperparameters with the validation set. First do this 'by hand'
valid = valid_dataset.reshape(valid_dataset.shape[0], image_size * image_size)
Cs = np.logspace(0.001, 10, num=50)
accuracies = []
for C in Cs:
logregC = LogisticRegression(C=C)
logregC.fit(X, Y)
pred = logregC.predict(valid)
acc = np.mean(pred == valid_labels)
accuracies.append(acc)
In [ ]:
accuracies = np.array(accuracies)
plt.plot(Cs, accuracies)
#Looks like changing C doesn't matter all that much. Why?
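One likely explanation: np.logspace takes exponents, so the grid above runs from about 10^0.001 (roughly 1) up to 10^10, i.e. every value tried corresponds to fairly weak regularization. A minimal sketch with a grid that also covers strong regularization, reusing X, Y, valid and valid_labels from above (Cs_wide and accs_wide are new names introduced here):
In [ ]:
# np.logspace takes exponents, so the earlier grid spans roughly 1 to 1e10.
# Try a grid that also includes strong regularization (small C).
Cs_wide = np.logspace(-4, 4, num=9)
accs_wide = []
for C in Cs_wide:
    clf = LogisticRegression(C=C)
    clf.fit(X, Y)
    accs_wide.append(np.mean(clf.predict(valid) == valid_labels))
plt.semilogx(Cs_wide, accs_wide)
plt.xlabel('C')
plt.ylabel('Validation accuracy')
plt.show()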