MXNet

MXNet is a deep learning framework designed for both efficiency and flexibility. It lets you mix symbolic and imperative programming to maximize efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly, and a graph-optimization layer on top of it makes symbolic execution fast and memory-efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
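
As a quick illustration of the two styles (a minimal sketch; exact behaviour depends on your MXNet.jl version), imperative NDArray operations execute as they are written, while symbolic nodes only describe a graph that is bound to data and executed later:

using MXNet

# Imperative style: NDArray operations run eagerly.
a = mx.ones((2, 3))
b = a + a            # scheduled and executed immediately
println(copy(b))     # copy back to a Julia Array to inspect the values

# Symbolic style: describe the computation now, bind data later.
x = mx.Variable(:x)
y = 2 * x            # a SymbolicNode; nothing is computed yet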

Some key takeaways:

  • MXNet.jl brings flexible, efficient GPU computing and state-of-the-art deep learning to Julia.
  • Flexible symbolic manipulation lets you compose and construct state-of-the-art deep learning models.
  • GPUs deliver substantial performance gains, as the timings in the Performance section below show.

Digit Recognition on MNIST:

The MNIST dataset consists of 60,000 labelled training images of the digits 0-9; the test set comprises 10,000 labelled images. The digits have all been size-normalised and centered in a fixed-size 28x28 image.

To view the images:

The Julia package MNIST.jl makes it convenient to load and view all the images in the dataset. The training data comprises trainX (images) and trainY (labels).


In [40]:
using MNIST, Images
trainX, trainY = traindata()
testX, testY = testdata()

# Reshape the 784-element column for digit d into a 28x28 grayscale image.
function showtestdigit(d::Int64)
    grayim(reshape(testX[:, d], 28, 28))
end

function showtraindigit(d::Int64)
    grayim(reshape(trainX[:, d], 28, 28))
end

To view a training image:

The training set contains the 60,000 labelled digit images loaded above; pass an index n to showtraindigit to display the n-th digit along with its label.


In [42]:
# 1 <= n <= 60000
n=60000
@show trainY[n]
showtraindigit(n)


trainY[n] = 8.0
Out[42]:

In [11]:
# GPU-specific configuration: point MXNet.jl at the locally built libmxnet
ENV["MXNET_HOME"] = joinpath(Pkg.dir("MXNet"), "deps", "usr", "lib")
Base.compilecache("MXNet")   # recompile the package so the new path takes effect
using MXNet

Create a placeholder for the data.


In [12]:
data = mx.Variable(:data)


Out[12]:
MXNet.mx.SymbolicNode(MXNet.mx.MX_SymbolHandle(Ptr{Void} @0x0000000006d6bf20))

This is a 3-layer fully-connected network (a multi-layer perceptron). The architecture looks like:

Input --> 128 units (ReLU) --> 64 units (ReLU) --> 10 units
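
With 28x28 = 784 inputs, the three layers hold (784*128 + 128) + (128*64 + 64) + (64*10 + 10) = 109,386 trainable parameters.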


In [13]:
fc1  = mx.FullyConnected(data = data, name=:fc1, num_hidden=128)
act1 = mx.Activation(data = fc1, name=:relu1, act_type=:relu)
fc2  = mx.FullyConnected(data = act1, name=:fc2, num_hidden=64)
act2 = mx.Activation(data = fc2, name=:relu2, act_type=:relu)
fc3  = mx.FullyConnected(data = act2, name=:fc3, num_hidden=10)


Out[13]:
MXNet.mx.SymbolicNode(MXNet.mx.MX_SymbolHandle(Ptr{Void} @0x0000000006eea650))

We then add a final SoftmaxOutput operation to turn the 10-dimensional prediction into proper probability values for the 10 classes.
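
Concretely, for the raw scores z_1, ..., z_10 produced by fc3, the softmax computes p_i = exp(z_i) / sum_j exp(z_j); during training, SoftmaxOutput also supplies the cross-entropy gradient with respect to the labels.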


In [14]:
mlp  = mx.SoftmaxOutput(data = fc3, name=:softmax)


Out[14]:
MXNet.mx.SymbolicNode(MXNet.mx.MX_SymbolHandle(Ptr{Void} @0x0000000006ee0a50))

As we can see, the MLP is just a chain of layers, so we can also use the mx.chain macro. The same architecture can be defined as:


In [15]:
mlp = @mx.chain mx.Variable(:data)             =>
  mx.FullyConnected(name=:fc1, num_hidden=128) =>
  mx.Activation(name=:relu1, act_type=:relu)   =>
  mx.FullyConnected(name=:fc2, num_hidden=64)  =>
  mx.Activation(name=:relu2, act_type=:relu)   =>
  mx.FullyConnected(name=:fc3, num_hidden=10)  =>
  mx.SoftmaxOutput(name=:softmax)


Out[15]:
MXNet.mx.SymbolicNode(MXNet.mx.MX_SymbolHandle(Ptr{Void} @0x000000000763afb0))
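
Each => feeds the node on its left in as the data argument of the layer on its right, so the chain reads top to bottom exactly like the architecture sketch above.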

After defining the architecture, we are ready to load the MNIST data. MXNet.jl provides built-in data providers for the MNIST dataset, which will automatically download the dataset into Pkg.dir("MXNet")/data/mnist if necessary.


In [16]:
batch_size = 100
include(Pkg.dir("MXNet", "examples", "mnist", "mnist-data.jl"))
train_provider, eval_provider = get_mnist_providers(batch_size)


Out[16]:
(MXNet.mx.MXDataProvider(MXNet.mx.MX_DataIterHandle(Ptr{Void} @0x00000000076ad770),Tuple{Symbol,Tuple}[(:data,(784,100))],Tuple{Symbol,Tuple}[(:softmax_label,(100,))],100,true,true),MXNet.mx.MXDataProvider(MXNet.mx.MX_DataIterHandle(Ptr{Void} @0x00000000076fb640),Tuple{Symbol,Tuple}[(:data,(784,100))],Tuple{Symbol,Tuple}[(:softmax_label,(100,))],100,true,true))

Given the architecture and data, we can instantiate a model to do the actual training. mx.FeedForward is the built-in model suitable for most feed-forward architectures. When constructing the model, we also specify the context on which the computation should be carried out.


In [17]:
model = mx.FeedForward(mlp, context=mx.gpu())


Out[17]:
MXNet.mx.FeedForward(MXNet.mx.SymbolicNode(MXNet.mx.MX_SymbolHandle(Ptr{Void} @0x000000000763afb0)),[GPU0],#undef,#undef,#undef)
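
If no GPU is available, the same model runs on the CPU, and passing a vector of contexts splits each batch across several devices (a minimal sketch; the device indices are hypothetical):

model_cpu   = mx.FeedForward(mlp, context=mx.cpu())
model_multi = mx.FeedForward(mlp, context=[mx.gpu(0), mx.gpu(1)])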

The last thing we need to specify is the optimization algorithm (a.k.a. optimizer) to use. We use basic SGD with a fixed learning rate of 0.1, momentum of 0.9, and a small weight decay:


In [18]:
optimizer = mx.SGD(lr=0.1, momentum=0.9, weight_decay=0.00001)


Out[18]:
MXNet.mx.SGD(MXNet.mx.SGDOptions(0.1,0.9,0,1.0e-5,MXNet.mx.LearningRate.Fixed(0.1),MXNet.mx.Momentum.Fixed(0.9)),#undef)
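
Up to implementation details, each parameter theta is updated through a velocity term: v <- momentum * v - lr * (grad + weight_decay * theta), then theta <- theta + v; the momentum of 0.9 smooths the gradient direction, while the small weight decay regularizes the weights.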

Now we can do the training. Here the n_epoch parameter specifies that we want to train for 20 epochs, and we supply eval_data to monitor accuracy on the validation set after each epoch.


In [19]:
@time mx.fit(model, optimizer, train_provider, n_epoch=20, eval_data=eval_provider)


INFO: Start training on [GPU0]
INFO: Initializing parameters...
INFO: Creating KVStore...
INFO: TempSpace: Total 0 MB allocated on GPU0
INFO: Start training...
INFO: == Epoch 001 ==========
INFO: ## Training summary
INFO:           accuracy = 0.7599
INFO:               time = 1.5975 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9536
INFO: == Epoch 002 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9586
INFO:               time = 1.0707 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9662
INFO: == Epoch 003 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9726
INFO:               time = 1.0556 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9702
INFO: == Epoch 004 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9782
INFO:               time = 1.0560 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9712
INFO: == Epoch 005 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9812
INFO:               time = 1.0217 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9721
INFO: == Epoch 006 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9842
INFO:               time = 1.0574 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9658
INFO: == Epoch 007 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9864
INFO:               time = 0.9816 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9723
INFO: == Epoch 008 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9869
INFO:               time = 1.0509 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9742
INFO: == Epoch 009 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9889
INFO:               time = 1.1034 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9754
INFO: == Epoch 010 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9904
INFO:               time = 1.1009 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9732
INFO: == Epoch 011 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9903
INFO:               time = 1.0905 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9763
INFO: == Epoch 012 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9915
INFO:               time = 1.0924 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9759
INFO: == Epoch 013 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9927
INFO:               time = 1.1022 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9775
INFO: == Epoch 014 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9925
INFO:               time = 1.1519 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9731
INFO: == Epoch 015 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9937
INFO:               time = 1.1751 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9753
INFO: == Epoch 016 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9938
INFO:               time = 1.1087 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9771
INFO: == Epoch 017 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9940
INFO:               time = 1.0956 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9762
INFO: == Epoch 018 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9942
INFO:               time = 1.0902 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9788
INFO: == Epoch 019 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9944
INFO:               time = 1.0624 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9790
INFO: == Epoch 020 ==========
INFO: ## Training summary
INFO:           accuracy = 0.9952
INFO:               time = 1.0821 seconds
INFO: ## Validation summary
INFO:           accuracy = 0.9739
 30.557070 seconds (28.37 M allocations: 1.023 GB, 2.41% gc time)
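
To keep the learned parameters around, mx.fit also accepts callbacks; for instance (a sketch assuming MXNet.jl's callback API, with a hypothetical "mnist-mlp" prefix), mx.do_checkpoint saves the architecture and weights after every epoch:

mx.fit(model, optimizer, train_provider, n_epoch=20,
       eval_data=eval_provider,
       callbacks=[mx.do_checkpoint("mnist-mlp")])  # hypothetical prefix; saves symbol + params each epoch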

In [20]:
probs = mx.predict(model, eval_provider)


INFO: TempSpace: Total 0 MB allocated on GPU0
Out[20]:
10x10000 Array{Float32,2}:
 4.36324e-17  1.13791e-24  8.0871e-12   …  6.42151e-22  6.37284e-17
 1.41083e-13  4.56133e-13  0.999887        1.50884e-31  1.53987e-22
 9.61103e-14  1.0          7.29982e-8      4.43302e-26  4.23287e-21
 3.73706e-13  8.51213e-21  4.75446e-11     8.80196e-19  4.81301e-24
 1.26437e-13  4.20521e-22  4.38132e-7      8.93717e-31  2.82327e-20
 3.51822e-17  2.69263e-21  4.78921e-9   …  1.0          7.11326e-14
 3.13054e-21  5.0751e-25   1.00955e-8      6.54754e-22  1.0        
 1.0          2.82068e-13  2.68171e-5      4.11324e-24  2.01875e-30
 8.37736e-18  8.08167e-24  8.53508e-5      5.2885e-15   6.72185e-17
 2.00951e-12  9.15443e-29  4.69886e-9      3.26214e-22  1.57289e-18
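
Each column of probs is a probability distribution over the ten classes for one test image, so the predicted digit for image i is indmax(probs[:, i]) - 1.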

In [21]:
# collect all labels from eval data
labels = Array[]
for batch in eval_provider
    push!(labels, copy(mx.get(eval_provider, batch, :softmax_label)))
end
labels = cat(1, labels...)


Out[21]:
10000-element Array{Float32,1}:
 7.0
 2.0
 1.0
 0.0
 4.0
 1.0
 4.0
 9.0
 5.0
 9.0
 0.0
 6.0
 9.0
 ⋮  
 5.0
 6.0
 7.0
 8.0
 9.0
 0.0
 1.0
 2.0
 3.0
 4.0
 5.0
 6.0

In [22]:
# Now we compute the accuracy
correct = 0
for i = 1:length(labels)
    # labels are 0...9
    if indmax(probs[:,i]) == labels[i]+1
        correct += 1
    end
end
accuracy = 100correct/length(labels)
println(mx.format("Accuracy on eval set: {1:.2f}%", accuracy))


Accuracy on eval set: 97.39%
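
The same computation can be written in two lines (a sketch equivalent to the loop above):

preds = [indmax(probs[:, i]) - 1 for i in 1:size(probs, 2)]
accuracy = 100 * sum(preds .== labels) / length(labels)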

Let's see how good it actually is:

labels holds the ground-truth digits; cross-verify the model's output by printing the label at index n and comparing it with the actual image using the function showtestdigit(n), where n is the index.


In [37]:
n = 144
println(Int(labels[n]))   # ground-truth label for test image n
showtestdigit(n)          # display the image for visual comparison


1
Out[37]:

Performance:
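
Training on the GPU took 23 seconds against 1639 seconds on the CPU, roughly a 70x speedup at comparable accuracy (97.69% vs 97.31%).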


In [39]:
# Training time in seconds: CPU vs GPU
t = [1639, 23]
using DataFrames
df = DataFrame(Names=["CPU", "GPU"], Time=t, Accuracy=[97.31, 97.69])
using Gadfly
p1 = Gadfly.plot(x=df[:Names], y=df[:Time], Guide.ylabel("Time in sec"), Geom.bar, Guide.title("Performance."))
#p2 = Gadfly.plot(x=df[:Names], y=df[:Accuracy], Guide.ylabel("Accuracy (%)"), Geom.bar, Guide.title("Accuracy measure."))


Out[39]:
[Bar chart: training time in seconds for CPU vs GPU. Y-axis: "Time in sec"; title: "Performance."]
