In this notebook we will take a look at some indicative applications of machine learning techniques. We will cover content from learning.py, for chapter 18 of Stuart Russell's and Peter Norvig's book Artificial Intelligence: A Modern Approach. Execute the cell below to get started:
In [94]:
from learning import *
from probabilistic_learning import *
from notebook import *
The MNIST Digits database, available from this page, is a large database of handwritten digits that is commonly used for training and testing/validating in machine learning.
The dataset has 60,000 training images each of size 28x28 pixels with labels and 10,000 testing images of size 28x28 pixels with labels.
In this section, we will use this database to compare performances of different learning algorithms.
It is estimated that humans have an error rate of about 0.2% on this problem. Let's see how our algorithms perform!
NOTE: We will be using external libraries to load and visualize the dataset smoothly (numpy for loading and matplotlib for visualization). You do not need prior experience with these libraries to follow along.
In [95]:
train_img, train_lbl, test_img, test_lbl = load_MNIST()
Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
Each 28x28-pixel image is flattened to a 784-element array, and we should have 60,000 of them in the training data. Similarly, we should have 10,000 such 784-element arrays in the testing data.
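The flattening step can be sketched with NumPy, using a dummy all-zero 28x28 array in place of a real MNIST image:

```python
import numpy as np

# a dummy 28x28 "image" of zeros, standing in for one MNIST digit
image = np.zeros((28, 28))

# flattening turns the 2D grid of pixels into a single 784-element vector
flat = image.reshape(784)
print(flat.shape)  # (784,)
```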
In [96]:
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Testing labels size:", test_lbl.shape)
In [97]:
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img)
In [98]:
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img)
Let's have a look at the average of all the images of training and testing data.
In [99]:
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img)
In [100]:
print(train_img.shape, train_lbl.shape)
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
print(training_examples.shape)
Now, we will initialize a DataSet with our training examples, so we can use it in our algorithms.
In [101]:
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
Moving forward, we can use MNIST_DataSet to test our algorithms.
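The Manhattan distance used as the dataset's distance metric is just the sum of absolute differences between corresponding pixel values. A minimal sketch (a local helper named manhattan, standing in for the library's manhattan_distance):

```python
import numpy as np

def manhattan(a, b):
    # sum of absolute differences between corresponding components
    return int(np.sum(np.abs(np.asarray(a) - np.asarray(b))))

# |0-2| + |3-1| + |5-5| = 4
print(manhattan([0, 3, 5], [2, 1, 5]))  # 4
```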
In [102]:
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
In [103]:
%matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
Out[103]:
It is obvious that this learner is not very accurate. In fact, it will guess correctly on only 1135/10000 of the samples, roughly 10%. It is very fast though, so it might have its use as a quick first guess.
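A plurality learner simply memorizes the most frequent label in the training data and returns it for every query, ignoring the input entirely. A minimal sketch with a toy labels list:

```python
from collections import Counter

labels = [1, 1, 0, 2, 1, 0]  # toy training labels; 1 is the most common
most_common = Counter(labels).most_common(1)[0][0]

def plurality_predict(example):
    # the input is ignored: the same class is always predicted
    return most_common

print(plurality_predict("anything"))  # 1
```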
In [104]:
# takes ~45 seconds to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous=False)
print(nBD(test_img[0]))
To make sure that the output we got is correct, let's plot that image along with its label.
In [105]:
%matplotlib inline
print("Actual class of test image:", test_lbl[0])
plt.imshow(test_img[0].reshape((28,28)))
Out[105]:
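Conceptually, a discrete naive Bayes classifier picks the class that maximizes the class prior times the per-feature likelihoods. A toy from-scratch sketch on two binary features with made-up data (an illustration of the idea, not the learning.py implementation):

```python
import math
from collections import defaultdict

# toy training data: (features, label), with binary pixel-like features
data = [((1, 0), 'A'), ((1, 1), 'A'), ((0, 1), 'B'), ((0, 0), 'B')]

# count class frequencies and per-class (feature index, value) frequencies
class_counts = defaultdict(int)
feature_counts = defaultdict(lambda: defaultdict(int))
for features, label in data:
    class_counts[label] += 1
    for i, v in enumerate(features):
        feature_counts[label][(i, v)] += 1

def predict(features):
    best, best_score = None, -math.inf
    for label, n in class_counts.items():
        # log prior plus log likelihoods, with add-one smoothing
        score = math.log(n / len(data))
        for i, v in enumerate(features):
            score += math.log((feature_counts[label][(i, v)] + 1) / (n + 2))
        if score > best_score:
            best, best_score = label, score
    return best

print(predict((1, 0)))  # A
```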
In [106]:
# takes ~20 seconds to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
To make sure that the output we got is correct, let's plot that image along with its label.
In [107]:
%matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
Out[107]:
Hurray! We've got it correct. Don't worry if the algorithm predicted a wrong class for you; with this technique we only reach about 97% accuracy on this dataset.
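k-nearest neighbors finds the k training examples closest to the query (here under Manhattan distance) and takes a majority vote over their labels. A toy sketch with made-up 2D points:

```python
from collections import Counter

# toy training set: (point, label)
train = [((0, 0), 'a'), ((0, 1), 'a'), ((5, 5), 'b'), ((6, 5), 'b')]

def knn_predict(query, k=3):
    # sort training points by Manhattan distance to the query
    dist = lambda p: sum(abs(x - y) for x, y in zip(p, query))
    nearest = sorted(train, key=lambda ex: dist(ex[0]))[:k]
    # majority vote among the k nearest labels
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((1, 0)))  # a
```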
Another dataset in the same format is MNIST Fashion. Instead of digits, this dataset contains types of apparel (t-shirts, trousers and others). As with the Digits dataset, it is split into training and testing images, with labels from 0 to 9 for each of the ten types of apparel present in the dataset. The table below shows what each label means:
Label | Description |
---|---|
0 | T-shirt/top |
1 | Trouser |
2 | Pullover |
3 | Dress |
4 | Coat |
5 | Sandal |
6 | Shirt |
7 | Sneaker |
8 | Bag |
9 | Ankle boot |
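For convenience, the table above can be captured as a plain dictionary for translating predicted labels into readable names (a helper we define here; it is not part of learning.py):

```python
# label-to-description mapping for MNIST Fashion
fashion_labels = {
    0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat',
    5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle boot',
}

print(fashion_labels[1])  # Trouser
```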
Since both the MNIST datasets follow the same format, the code we wrote for loading and visualizing the Digits dataset will work for Fashion too! The only difference is that we have to let the functions know which dataset we're using, via the fashion argument. Let's start by loading the training and testing images:
In [108]:
train_img, train_lbl, test_img, test_lbl = load_MNIST(fashion=True)
In [109]:
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img, fashion=True)
In [110]:
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img, fashion=True)
Let's now see how many times each class appears in the training and testing data, along with the average image for each class:
In [111]:
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img, fashion=True)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img, fashion=True)
Unlike in Digits, in Fashion every class appears the same number of times.
In [112]:
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
In [113]:
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
In [114]:
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
In [115]:
%matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
Out[115]:
In [120]:
# takes ~45 seconds to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous=False)
print(nBD(test_img[24]))
Let's check if we got the right output.
In [121]:
%matplotlib inline
print("Actual class of test image:", test_lbl[24])
plt.imshow(test_img[24].reshape((28,28)))
Out[121]:
In [122]:
# takes ~20 seconds to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
The output is 1, which means the item at index 211 is predicted to be a trouser. Let's see if the prediction is correct:
In [123]:
%matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
Out[123]:
Indeed, the item was a trouser! The algorithm classified the item correctly.