This notebook demonstrates SystemML's deep learning functionality by mapping images of single digits to their corresponding numeric labels. See Getting Started with Deep Learning and Python for an explanation of the deep learning concepts and assumptions used here.
The downloaded MNIST dataset contains labeled images of handwritten digits. Each example is a 28x28 pixel grayscale image with values in the range [0,255], stretched out as 784 pixels, and each label is one of the 10 digits in [0,9]. We download 60,000 training examples and 10,000 test examples in the format "label, pixel_1, pixel_2, ..., pixel_n", and train a SystemML LeNet model. The trained model reaches an accuracy of approximately 98 percent on the test data.
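As a minimal sketch of this row layout (using a hypothetical all-zero example row rather than actual MNIST data), the first entry is the label and the remaining 784 entries reshape into a 28x28 image:
In [ ]:
import numpy as np
# Hypothetical row in the "label, pixel_1, ..., pixel_784" format (all zeros, for illustration only)
row = np.zeros(785)
label = int(row[0])                # the digit label in [0, 9]
image = row[1:].reshape(28, 28)    # 784 grayscale values in [0, 255] form a 28x28 image
print(label, image.shape)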
In [ ]:
!pip show systemml
In [ ]:
from systemml import MLContext, dml
ml = MLContext(sc)
print ("Spark Version:" + sc.version)
print ("SystemML Version:" + ml.version())
print ("SystemML Built-Time:" + ml.buildTime())
In [ ]:
import warnings
warnings.filterwarnings("ignore")
from sklearn import datasets
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was renamed to sklearn.model_selection
from sklearn.metrics import classification_report
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#import matplotlib.image as mpimg
%matplotlib inline
Create data directory.
In [ ]:
%%sh
mkdir -p data/mnist/
cd data/mnist/
Download the MNIST data from the MLData repository, split it into training and test sets, and save them as CSV files.
In [ ]:
mnist = datasets.fetch_mldata("MNIST Original")
print ("Mnist data features:" + str(mnist.data.shape))
print ("Mnist data label:" + str(mnist.target.shape))
trainX, testX, trainY, testY = train_test_split(mnist.data, mnist.target.astype("int0"), test_size = 0.142857)
trainD = np.concatenate((trainY.reshape(trainY.size, 1), trainX),axis=1)
testD = np.concatenate((testY.reshape (testY.size, 1), testX),axis=1)
print ("Images for training:" + str(trainD.shape))
print ("Images used for testing:" + str(testD.shape))
pix = int(np.sqrt(trainD.shape[1] - 1))  # exclude the label column
print ("Each image is: " + str(pix) + " by " + str(pix) + " pixels")
np.savetxt('data/mnist/mnist_train.csv', trainD, fmt='%u', delimiter=",")
np.savetxt('data/mnist/mnist_test.csv', testD, fmt='%u', delimiter=",")
Alternatively, download the pre-split CSV files directly. (Uncomment the curl commands in the following cell to use this approach.)
In [ ]:
%%sh
cd data/mnist
# curl -O https://pjreddie.com/media/files/mnist_train.csv
# curl -O https://pjreddie.com/media/files/mnist_test.csv
wc -l mnist*
In [ ]:
trainData = np.genfromtxt('data/mnist/mnist_train.csv', delimiter=",")
testData = np.genfromtxt('data/mnist/mnist_test.csv', delimiter=",")
print ("Training data: " + str(trainData.shape))
print ("Test data: " + str(testData.shape))
In [ ]:
pd.set_option('display.max_columns', 200)
pd.DataFrame(testData[1:10,],dtype='uint')
In [ ]:
# Download the SystemML `nn` library (layers and example networks) used by the training script below
!svn --force export https://github.com/apache/incubator-systemml/trunk/scripts/nn
Train the SystemML LeNet model (on a MacBook, this takes approximately 5-6 minutes for 1 epoch).
In [ ]:
script = """
source("nn/examples/mnist_lenet.dml") as mnist_lenet
# Bind training data
n = nrow(data)
# Extract images and labels
images = data[,2:ncol(data)]
labels = data[,1]
# Scale images to [-1,1], and one-hot encode the labels
images = (images / 255.0) * 2 - 1
labels = table(seq(1, n), labels+1, n, 10)
# Split into training (55,000 examples) and validation (5,000 examples)
X = images[5001:nrow(images),]
X_val = images[1:5000,]
y = labels[5001:nrow(images),]
y_val = labels[1:5000,]
# Train the model using channel, height, and width to produce weights/biases.
[W1, b1, W2, b2, W3, b3, W4, b4] = mnist_lenet::train(X, y, X_val, y_val, C, Hin, Win, epochs)
"""
rets = ('W1', 'b1','W2','b2','W3','b3','W4','b4')
script = (dml(script).input(data=trainData, epochs=1, C=1, Hin=28, Win=28)
.output(*rets))
W1, b1, W2, b2, W3, b3, W4, b4 = (ml.execute(script).get(*rets))
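Optionally, inspect the shapes of the trained parameters; a small sketch that assumes the returned matrix handles expose toNumPy(), as used later in this notebook:
In [ ]:
# Print the shape of each trained weight/bias matrix (illustrative only)
for name, mat in zip(rets, (W1, b1, W2, b2, W3, b3, W4, b4)):
    print(name + ": " + str(mat.toNumPy().shape))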
Use the trained model to make predictions on the test data, and evaluate the quality of the predictions for each digit.
In [ ]:
scriptPredict = """
source("nn/examples/mnist_lenet.dml") as mnist_lenet
# Separate images from labels and scale images to [-1,1]
X_test = data[,2:ncol(data)]
X_test = (X_test / 255.0) * 2 - 1
# Predict
probs = mnist_lenet::predict(X_test, C, Hin, Win, W1, b1, W2, b2, W3, b3, W4, b4)
predictions = rowIndexMax(probs) - 1
"""
script = (dml(scriptPredict).input(data=testData, C=1, Hin=28, Win=28, W1=W1, b1=b1, W2=W2, b2=b2, W3=W3, b3=b3, W4=W4, b4=b4)
.output("predictions"))
predictions = ml.execute(script).get("predictions").toNumPy()
print (classification_report(testData[:,0], predictions))
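A confusion matrix complements the per-digit report by showing which digits are mistaken for which; a small sketch using sklearn.metrics.confusion_matrix (not imported above):
In [ ]:
from sklearn.metrics import confusion_matrix
# Rows are actual digits, columns are predicted digits
cm = confusion_matrix(testData[:,0], np.ravel(predictions))
pd.DataFrame(cm, index=range(10), columns=range(10))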
Define helper functions that display a test image and score it with the trained model; then randomly select a test image and evaluate it.
In [ ]:
img_size = int(np.sqrt(testData.shape[1] - 1))
def displayImage(i):
    # Reshape the 784 pixel values of test image i into a 28x28 array and display it in grayscale
    image = (testData[i,1:]).reshape((img_size, img_size)).astype("uint8")
    imgplot = plt.imshow(image, cmap='gray')
In [ ]:
def predictImage(i):
    # Score test image i (the label column is stripped off inside the DML script)
    image = testData[i,:].reshape(1,testData.shape[1])
    prog = dml(scriptPredict).input(data=image, C=1, Hin=28, Win=28, W1=W1, b1=b1, W2=W2, b2=b2, W3=W3, b3=b3, W4=W4, b4=b4) \
               .output("predictions")
    result = ml.execute(prog)
    return (result.get("predictions").toNumPy())[0]
In [ ]:
i = np.random.choice(np.arange(0, len(testData)), size = (1,))
p = predictImage(i)
print("Image " + str(i) + "\nPredicted digit: " + str(p) + "\nActual digit: " + str(testData[i,0]) + "\nResult: " + str(p == testData[i,0]))
displayImage(i)
In [ ]:
pd.set_option('display.max_columns', 28)
pd.DataFrame((testData[i,1:]).reshape(img_size, img_size),dtype='uint')
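To see where the model fails, list a few misclassified test images; a small sketch that reuses the predictions and testData arrays computed above:
In [ ]:
# Indices of test images whose predicted digit differs from the true label
wrong = np.where(np.ravel(predictions) != testData[:,0])[0]
print("Number of misclassified test images: " + str(len(wrong)))
for j in wrong[:5]:
    print("Image " + str(j) + ": predicted " + str(int(np.ravel(predictions)[j])) + ", actual " + str(int(testData[j,0])))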