Machine Learning at Scale, Part I

KMeans clustering at scale

Training models only on data that fits in memory is very limiting. Minibatch learners, however, can easily work with data streamed directly from disk.

We'll use the MNIST8M data set, which has 8 million images (about 17 GB). The dataset has been partitioned into groups of 100k images (using the Unix split command) and saved in compressed lz4 files. Because it is so large, this dataset is not loaded by default by getdata.sh; you have to fetch it explicitly by calling getmnist.sh from the scripts directory. The script automatically splits the data into files that are small enough to be loaded into memory.

Let's load BIDMat/BIDMach


In [1]:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,Image,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,HMat,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.Plotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA}
import BIDMach.datasources.{DataSource,MatDS,FilesDS,SFilesDS}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}

Mat.checkMKL
Mat.checkCUDA
if (Mat.hasCUDA > 0) GPUmem


1 CUDA device found, CUDA version 6.5
Out[1]:
(0.8701763,3737378816,4294967296)

And define the root directory for this dataset.


In [2]:
val mdir = "../data/MNIST8M/parts/"



Out[2]:
../data/MNIST8M/parts/

The files we need are named "alls00.fmat.lz4", "alls01.fmat.lz4", etc. We can create a learner using a pattern for accessing these files:


In [3]:
val (mm, opts) = KMeans.learner(mdir+"alls%02d.fmat.lz4",1024)



Out[3]:
BIDMach.models.KMeans$fsopts@50b0399b

The string "%02d" is a C/Scala format specifier that expands into a two-digit, zero-padded decimal number, matching the numbering of the data files.
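
For instance, the expansion is easy to check in any Scala REPL (a hypothetical one-liner, not part of the notebook):

val fname = "alls%02d.fmat.lz4" format 7    // yields "alls07.fmat.lz4"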

There are several new options for tailoring a files datasource, but we'll mostly use the defaults. One thing we will do is set the last file to use for training (number 70). This leaves us some held-out files to use for testing.


In [4]:
opts.nend = 70



Out[4]:
70

Note that the training data include both image data and labels (0-9). K-Means is an unsupervised algorithm, and if we used the image data alone it would often build clusters containing different digit images. To produce cleaner clusters, and to facilitate classification later on, the alls data includes the labels in the first 10 rows and the image data in the remaining rows. The label features are scaled by a large constant factor, so that images of different digits are far apart in feature space. This effectively prevents different digits from occurring in the same cluster.
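
To see why the scaling works, here is a back-of-the-envelope check (a sketch; it assumes the label scale of 10000 visible in the model matrix later on, and raw pixel values in 0-255). Two examples with different digit labels differ by (c-0)^2 + (0-c)^2 = 2*c^2 in squared distance from the label rows alone:

val c = 10000f                          // assumed label scale factor
val labelGap = 2f * c * c               // squared-distance gap from the label rows alone: 2e8
val pixelGap = 784f * 255f * 255f       // worst case the 784 pixel features can contribute: ~5.1e7
println(labelGap > pixelGap)            // true: the labels dominate the distance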

Tuning Options

The following options are the important ones for tuning. For KMeans, batchSize has no effect on accuracy, since the algorithm uses all the data instances to perform each update. So you're free to tune it for best speed. Generally larger is better, as long as you don't use too much GPU RAM.

npasses is the number of passes over the dataset. Larger is typically better, but the model may overfit at some point.


In [5]:
opts.batchSize = 20000
opts.npasses = 4



Out[5]:
4

You invoke the learner the same way as before. You can change the options above after each run to optimize performance.


In [6]:
mm.train


pass= 0
First pass random centroid initialization
 1.00%, ll=0.00000, gf=43.144, secs=0.8, GB=0.25, MB/s=336.08, GPUmem=0.78
 3.00%, ll=0.00000, gf=47.791, secs=1.4, GB=0.83, MB/s=604.95, GPUmem=0.77
 6.00%, ll=0.00000, gf=47.045, secs=2.1, GB=1.52, MB/s=732.92, GPUmem=0.77
10.00%, ll=0.00000, gf=45.990, secs=2.8, GB=2.22, MB/s=783.64, GPUmem=0.77
13.00%, ll=0.00000, gf=45.851, secs=3.6, GB=2.92, MB/s=821.46, GPUmem=0.77
16.00%, ll=0.00000, gf=46.093, secs=4.2, GB=3.62, MB/s=852.72, GPUmem=0.77
19.00%, ll=0.00000, gf=46.062, secs=5.0, GB=4.32, MB/s=871.37, GPUmem=0.77
22.00%, ll=0.00000, gf=46.177, secs=5.7, GB=5.02, MB/s=888.00, GPUmem=0.77
25.00%, ll=0.00000, gf=46.318, secs=6.3, GB=5.72, MB/s=901.99, GPUmem=0.77
28.00%, ll=0.00000, gf=46.182, secs=7.1, GB=6.42, MB/s=908.33, GPUmem=0.77
32.00%, ll=0.00000, gf=46.267, secs=7.8, GB=7.11, MB/s=917.37, GPUmem=0.77
35.00%, ll=0.00000, gf=46.327, secs=8.4, GB=7.81, MB/s=924.72, GPUmem=0.77
38.00%, ll=0.00000, gf=46.247, secs=9.2, GB=8.51, MB/s=928.31, GPUmem=0.78
41.00%, ll=0.00000, gf=46.249, secs=9.9, GB=9.21, MB/s=932.79, GPUmem=0.78
44.00%, ll=0.00000, gf=46.294, secs=10.6, GB=9.91, MB/s=937.56, GPUmem=0.78
47.00%, ll=0.00000, gf=46.345, secs=11.3, GB=10.61, MB/s=942.00, GPUmem=0.78
50.00%, ll=0.00000, gf=46.438, secs=11.9, GB=11.31, MB/s=946.87, GPUmem=0.78
53.00%, ll=0.00000, gf=46.539, secs=12.6, GB=12.01, MB/s=951.59, GPUmem=0.78
57.00%, ll=0.00000, gf=46.608, secs=13.3, GB=12.70, MB/s=955.40, GPUmem=0.78
60.00%, ll=0.00000, gf=46.425, secs=14.1, GB=13.40, MB/s=953.79, GPUmem=0.78
63.00%, ll=0.00000, gf=46.506, secs=14.7, GB=14.10, MB/s=957.39, GPUmem=0.78
66.00%, ll=0.00000, gf=46.516, secs=15.4, GB=14.80, MB/s=959.37, GPUmem=0.78
69.00%, ll=0.00000, gf=46.540, secs=16.1, GB=15.50, MB/s=961.47, GPUmem=0.78
72.00%, ll=0.00000, gf=46.545, secs=16.8, GB=16.20, MB/s=963.05, GPUmem=0.78
76.00%, ll=0.00000, gf=46.595, secs=17.5, GB=16.90, MB/s=965.45, GPUmem=0.78
79.00%, ll=0.00000, gf=46.498, secs=18.2, GB=17.60, MB/s=964.69, GPUmem=0.76
82.00%, ll=0.00000, gf=46.495, secs=18.9, GB=18.29, MB/s=965.78, GPUmem=0.76
85.00%, ll=0.00000, gf=46.508, secs=19.6, GB=18.99, MB/s=967.13, GPUmem=0.76
88.00%, ll=0.00000, gf=46.527, secs=20.3, GB=19.69, MB/s=968.53, GPUmem=0.76
91.00%, ll=0.00000, gf=46.507, secs=21.0, GB=20.39, MB/s=969.06, GPUmem=0.76
94.00%, ll=0.00000, gf=46.506, secs=21.7, GB=21.09, MB/s=969.90, GPUmem=0.76
97.00%, ll=0.00000, gf=46.480, secs=22.5, GB=21.79, MB/s=970.18, GPUmem=0.76
100.00%, ll=0.00000, gf=47.203, secs=22.8, GB=22.23, MB/s=974.92, GPUmem=0.76
pass= 1
 1.00%, ll=-1528.99402, gf=55.761, secs=23.4, GB=22.49, MB/s=961.03, GPUmem=0.75
 3.00%, ll=-1530.00891, gf=76.531, secs=24.3, GB=23.06, MB/s=949.19, GPUmem=0.75
 6.00%, ll=-1527.56726, gf=100.185, secs=25.4, GB=23.76, MB/s=935.59, GPUmem=0.75
10.00%, ll=-1528.47314, gf=121.867, secs=26.5, GB=24.46, MB/s=923.05, GPUmem=0.75
13.00%, ll=-1532.21558, gf=141.811, secs=27.6, GB=25.15, MB/s=911.47, GPUmem=0.75
16.00%, ll=-1527.96777, gf=160.212, secs=28.7, GB=25.85, MB/s=900.73, GPUmem=0.75
19.00%, ll=-1529.12122, gf=177.277, secs=29.8, GB=26.55, MB/s=890.93, GPUmem=0.75
22.00%, ll=-1531.83630, gf=193.110, secs=30.9, GB=27.25, MB/s=881.74, GPUmem=0.75
25.00%, ll=-1525.00244, gf=207.877, secs=32.0, GB=27.95, MB/s=873.29, GPUmem=0.75
28.00%, ll=-1528.44751, gf=221.650, secs=33.1, GB=28.65, MB/s=865.35, GPUmem=0.75
32.00%, ll=-1528.99402, gf=234.522, secs=34.2, GB=29.35, MB/s=857.88, GPUmem=0.75
35.00%, ll=-1530.00891, gf=246.556, secs=35.3, GB=30.04, MB/s=850.75, GPUmem=0.75
38.00%, ll=-1527.56726, gf=257.815, secs=36.4, GB=30.74, MB/s=843.91, GPUmem=0.75
41.00%, ll=-1528.47314, gf=268.499, secs=37.5, GB=31.44, MB/s=837.77, GPUmem=0.75
44.00%, ll=-1532.21558, gf=278.552, secs=38.6, GB=32.14, MB/s=831.92, GPUmem=0.75
47.00%, ll=-1527.96777, gf=288.068, secs=39.7, GB=32.84, MB/s=826.45, GPUmem=0.75
50.00%, ll=-1529.12122, gf=297.086, secs=40.8, GB=33.54, MB/s=821.32, GPUmem=0.76
53.00%, ll=-1531.83630, gf=305.617, secs=41.9, GB=34.24, MB/s=816.42, GPUmem=0.76
57.00%, ll=-1525.00244, gf=313.725, secs=43.0, GB=34.94, MB/s=811.80, GPUmem=0.76
60.00%, ll=-1528.44751, gf=321.423, secs=44.1, GB=35.63, MB/s=807.40, GPUmem=0.76
63.00%, ll=-1528.99402, gf=328.753, secs=45.2, GB=36.33, MB/s=803.23, GPUmem=0.76
66.00%, ll=-1530.00891, gf=335.736, secs=46.3, GB=37.03, MB/s=799.26, GPUmem=0.76
69.00%, ll=-1527.56726, gf=342.410, secs=47.4, GB=37.73, MB/s=795.51, GPUmem=0.76
72.00%, ll=-1528.47314, gf=348.767, secs=48.5, GB=38.43, MB/s=791.89, GPUmem=0.76
76.00%, ll=-1532.21558, gf=354.843, secs=49.6, GB=39.13, MB/s=788.43, GPUmem=0.76
79.00%, ll=-1527.96777, gf=360.627, secs=50.7, GB=39.83, MB/s=785.06, GPUmem=0.76
82.00%, ll=-1529.12122, gf=366.214, secs=51.8, GB=40.53, MB/s=781.94, GPUmem=0.76
85.00%, ll=-1531.83630, gf=371.514, secs=52.9, GB=41.22, MB/s=778.83, GPUmem=0.76
88.00%, ll=-1525.00244, gf=376.633, secs=54.0, GB=41.92, MB/s=775.92, GPUmem=0.76
91.00%, ll=-1528.44751, gf=381.574, secs=55.1, GB=42.62, MB/s=773.19, GPUmem=0.76
94.00%, ll=-1528.99402, gf=386.296, secs=56.2, GB=43.32, MB/s=770.50, GPUmem=0.76
97.00%, ll=-1530.00891, gf=390.823, secs=57.3, GB=44.02, MB/s=767.89, GPUmem=0.76
100.00%, ll=-1525.00244, gf=393.503, secs=58.0, GB=44.46, MB/s=766.46, GPUmem=0.76
pass= 2
 1.00%, ll=-1248.15735, gf=391.952, secs=58.8, GB=44.72, MB/s=760.20, GPUmem=0.76
 3.00%, ll=-1243.15515, gf=395.215, secs=59.7, GB=45.29, MB/s=758.10, GPUmem=0.76
 6.00%, ll=-1237.85303, gf=399.109, secs=60.9, GB=45.99, MB/s=755.47, GPUmem=0.76
10.00%, ll=-1242.98755, gf=402.563, secs=62.1, GB=46.69, MB/s=752.38, GPUmem=0.76
13.00%, ll=-1242.79675, gf=406.099, secs=63.2, GB=47.39, MB/s=749.79, GPUmem=0.76
16.00%, ll=-1240.73352, gf=409.624, secs=64.3, GB=48.08, MB/s=747.50, GPUmem=0.76
19.00%, ll=-1241.05334, gf=412.996, secs=65.5, GB=48.78, MB/s=745.24, GPUmem=0.76
22.00%, ll=-1241.83521, gf=416.253, secs=66.6, GB=49.48, MB/s=743.05, GPUmem=0.76
25.00%, ll=-1240.82996, gf=419.420, secs=67.7, GB=50.18, MB/s=740.97, GPUmem=0.76
28.00%, ll=-1243.81873, gf=422.452, secs=68.9, GB=50.88, MB/s=738.90, GPUmem=0.76
32.00%, ll=-1238.86072, gf=425.422, secs=70.0, GB=51.58, MB/s=736.97, GPUmem=0.76
35.00%, ll=-1241.27917, gf=428.256, secs=71.1, GB=52.28, MB/s=735.02, GPUmem=0.76
38.00%, ll=-1245.20618, gf=431.024, secs=72.3, GB=52.98, MB/s=733.18, GPUmem=0.76
41.00%, ll=-1241.23145, gf=433.690, secs=73.4, GB=53.67, MB/s=731.36, GPUmem=0.76
44.00%, ll=-1239.37659, gf=436.274, secs=74.5, GB=54.37, MB/s=729.60, GPUmem=0.76
47.00%, ll=-1244.69141, gf=438.816, secs=75.7, GB=55.07, MB/s=727.94, GPUmem=0.76
50.00%, ll=-1242.65308, gf=441.271, secs=76.8, GB=55.77, MB/s=726.32, GPUmem=0.76
53.00%, ll=-1236.02136, gf=443.638, secs=77.9, GB=56.47, MB/s=724.72, GPUmem=0.76
57.00%, ll=-1244.31873, gf=445.926, secs=79.1, GB=57.17, MB/s=723.14, GPUmem=0.76
60.00%, ll=-1243.58557, gf=448.182, secs=80.2, GB=57.87, MB/s=721.67, GPUmem=0.76
63.00%, ll=-1239.40906, gf=450.370, secs=81.3, GB=58.57, MB/s=720.22, GPUmem=0.76
66.00%, ll=-1243.23572, gf=452.498, secs=82.4, GB=59.26, MB/s=718.82, GPUmem=0.76
69.00%, ll=-1242.51355, gf=454.580, secs=83.6, GB=59.96, MB/s=717.47, GPUmem=0.76
72.00%, ll=-1243.47778, gf=456.579, secs=84.7, GB=60.66, MB/s=716.11, GPUmem=0.76
76.00%, ll=-1245.33215, gf=458.530, secs=85.8, GB=61.36, MB/s=714.80, GPUmem=0.76
79.00%, ll=-1242.01514, gf=460.447, secs=87.0, GB=62.06, MB/s=713.54, GPUmem=0.76
82.00%, ll=-1241.60693, gf=462.299, secs=88.1, GB=62.76, MB/s=712.30, GPUmem=0.76
85.00%, ll=-1244.99585, gf=464.129, secs=89.2, GB=63.46, MB/s=711.12, GPUmem=0.76
88.00%, ll=-1240.08557, gf=465.899, secs=90.4, GB=64.16, MB/s=709.96, GPUmem=0.76
91.00%, ll=-1236.23792, gf=467.624, secs=91.5, GB=64.85, MB/s=708.82, GPUmem=0.76
94.00%, ll=-1244.45532, gf=469.303, secs=92.6, GB=65.55, MB/s=707.70, GPUmem=0.76
97.00%, ll=-1243.38721, gf=470.956, secs=93.8, GB=66.25, MB/s=706.63, GPUmem=0.76
100.00%, ll=-1244.94849, gf=471.929, secs=94.5, GB=66.70, MB/s=706.06, GPUmem=0.76
pass= 3
 1.00%, ll=-1227.92102, gf=470.342, secs=95.3, GB=66.95, MB/s=702.77, GPUmem=0.76
 3.00%, ll=-1223.15125, gf=471.671, secs=96.2, GB=67.52, MB/s=702.09, GPUmem=0.76
 6.00%, ll=-1220.60315, gf=473.329, secs=97.3, GB=68.22, MB/s=701.26, GPUmem=0.76
10.00%, ll=-1225.07642, gf=474.925, secs=98.4, GB=68.92, MB/s=700.41, GPUmem=0.76
13.00%, ll=-1225.36536, gf=476.490, secs=99.5, GB=69.62, MB/s=699.59, GPUmem=0.76
16.00%, ll=-1220.97375, gf=478.016, secs=100.6, GB=70.32, MB/s=698.79, GPUmem=0.76
19.00%, ll=-1221.56299, gf=479.504, secs=101.7, GB=71.02, MB/s=697.99, GPUmem=0.76
22.00%, ll=-1223.73022, gf=480.941, secs=102.9, GB=71.71, MB/s=697.18, GPUmem=0.76
25.00%, ll=-1218.80872, gf=482.379, secs=104.0, GB=72.41, MB/s=696.44, GPUmem=0.76
28.00%, ll=-1224.81958, gf=483.791, secs=105.1, GB=73.11, MB/s=695.72, GPUmem=0.76
32.00%, ll=-1221.86414, gf=485.170, secs=106.2, GB=73.81, MB/s=695.01, GPUmem=0.76
35.00%, ll=-1222.98792, gf=486.496, secs=107.3, GB=74.51, MB/s=694.28, GPUmem=0.76
38.00%, ll=-1225.75171, gf=487.828, secs=108.4, GB=75.21, MB/s=693.61, GPUmem=0.76
41.00%, ll=-1223.07959, gf=489.096, secs=109.5, GB=75.91, MB/s=692.90, GPUmem=0.76
44.00%, ll=-1225.36536, gf=490.334, secs=110.7, GB=76.61, MB/s=692.20, GPUmem=0.76
47.00%, ll=-1220.97375, gf=491.570, secs=111.8, GB=77.30, MB/s=691.55, GPUmem=0.76
50.00%, ll=-1221.56299, gf=492.781, secs=112.9, GB=78.00, MB/s=690.91, GPUmem=0.76
53.00%, ll=-1223.73022, gf=493.951, secs=114.0, GB=78.70, MB/s=690.25, GPUmem=0.76
57.00%, ll=-1218.80872, gf=495.120, secs=115.1, GB=79.40, MB/s=689.64, GPUmem=0.76
60.00%, ll=-1223.39124, gf=496.279, secs=116.2, GB=80.10, MB/s=689.06, GPUmem=0.76
63.00%, ll=-1220.53271, gf=497.386, secs=117.4, GB=80.80, MB/s=688.45, GPUmem=0.76
66.00%, ll=-1225.93494, gf=498.486, secs=118.5, GB=81.50, MB/s=687.87, GPUmem=0.76
69.00%, ll=-1223.09875, gf=499.581, secs=119.6, GB=82.19, MB/s=687.32, GPUmem=0.76
72.00%, ll=-1224.59460, gf=500.636, secs=120.7, GB=82.89, MB/s=686.76, GPUmem=0.76
76.00%, ll=-1225.36536, gf=501.675, secs=121.8, GB=83.59, MB/s=686.21, GPUmem=0.76
79.00%, ll=-1220.97375, gf=502.536, secs=123.0, GB=84.29, MB/s=685.45, GPUmem=0.76
82.00%, ll=-1221.56299, gf=503.536, secs=124.1, GB=84.99, MB/s=684.92, GPUmem=0.76
85.00%, ll=-1223.73022, gf=504.505, secs=125.2, GB=85.69, MB/s=684.37, GPUmem=0.76
88.00%, ll=-1218.80872, gf=505.469, secs=126.3, GB=86.39, MB/s=683.86, GPUmem=0.76
91.00%, ll=-1219.96936, gf=506.445, secs=127.4, GB=87.09, MB/s=683.39, GPUmem=0.76
94.00%, ll=-1225.16943, gf=507.355, secs=128.6, GB=87.78, MB/s=682.87, GPUmem=0.76
97.00%, ll=-1223.77039, gf=508.282, secs=129.7, GB=88.48, MB/s=682.39, GPUmem=0.76
100.00%, ll=-1218.80872, gf=508.809, secs=130.4, GB=88.93, MB/s=682.15, GPUmem=0.76
Time=130.3660 secs, gflops=508.80

Now let's extract the model as a floating-point matrix (FMat). Recall that we included the category features in the clustering so that each cluster is a subset of the images for one digit.


In [7]:
val modelmat = FMat(mm.modelmat)



Out[7]:
      0  10000      0      0      0      0      0      0      0      0...
      0      0      0      0      0      0      0      0      0  10000...
      0      0      0  10000      0      0      0      0      0      0...
      0  10000      0      0      0      0      0      0      0      0...
      0      0      0      0      0      0      0      0  10000      0...
      0      0  10000      0      0      0      0      0      0      0...
      0      0      0      0      0      0      0      0      0  10000...
      0  10000      0      0      0      0      0      0      0      0...
     ..     ..     ..     ..     ..     ..     ..     ..     ..     ..

Next we build a 30 x 10 array of images to view the first 300 cluster centers as images.


In [8]:
val nx = 30
val ny = 10
val im = zeros(28,28)
val allim = zeros(28*nx,28*ny)
for (i <- 0 until nx) {
    for (j <- 0 until ny) {
        val slice = modelmat(i+nx*j, 10->794)                // skip the 10 label rows, keep the 784 pixels
        im(?) = slice(?)                                     // reshape the slice into a 28x28 image
        allim((28*i)->(28*(i+1)), (28*j)->(28*(j+1))) = im   // place it in the tiled image
    }
}
Image.show(allim kron ones(2,2))                             // kron with ones(2,2) doubles the image size



Out[8]:
javax.swing.JFrame[frame0,0,0,1696x599,layout=java.awt.BorderLayout,title=Image 0,resizable,normal,defaultCloseOperation=HIDE_ON_CLOSE,rootPane=javax.swing.JRootPane[,8,31,1680x560,layout=javax.swing.JRootPane$RootLayout,alignmentX=0.0,alignmentY=0.0,border=,flags=16777673,maximumSize=,minimumSize=,preferredSize=],rootPaneCheckingEnabled=true]

We'll predict using the closest cluster (1-NN over the centroids, if you like). First we load some test data directly into memory. We could also evaluate directly from disk, but that would usually be overkill.


In [9]:
val test = loadFMat(mdir+"alls70.fmat.lz4")   // Load a test data file
val testdata = test.copy                      // copy it
testdata(0->10,?) = 0                         // and remove the digit labels
val preds = izeros(1, test.ncols)             // make a container to hold the predictions
1                                             // avoids a monster data cell being printed



Out[9]:
1

Next we define a predictor from the just-computed model and the testdata, with the preds matrix to catch the predictions.


In [10]:
val (pp, popts) = KMeans.predictor(mm.model, testdata, preds)



Out[10]:
BIDMach.models.KMeans$memopts@3853e5ab

Let's run the predictor:


In [11]:
pp.predict


Predicting
 3.00%, ll=-10076.46875, gf=33.567, secs=0.2, GB=0.01, MB/s=65.45, GPUmem=0.86
 6.00%, ll=-10076.96875, gf=61.099, secs=0.2, GB=0.02, MB/s=119.12, GPUmem=0.86
10.00%, ll=-10079.02344, gf=83.659, secs=0.2, GB=0.03, MB/s=163.11, GPUmem=0.86
13.00%, ll=-10075.34473, gf=103.087, secs=0.2, GB=0.04, MB/s=200.99, GPUmem=0.86
16.00%, ll=-10076.34277, gf=119.251, secs=0.2, GB=0.05, MB/s=232.50, GPUmem=0.86
20.00%, ll=-10075.40137, gf=133.718, secs=0.2, GB=0.06, MB/s=260.71, GPUmem=0.86
23.00%, ll=-10075.05762, gf=145.843, secs=0.3, GB=0.07, MB/s=284.35, GPUmem=0.86
26.00%, ll=-10077.00195, gf=157.619, secs=0.3, GB=0.08, MB/s=307.31, GPUmem=0.86
30.00%, ll=-10077.29102, gf=167.605, secs=0.3, GB=0.10, MB/s=326.78, GPUmem=0.86
33.00%, ll=-10077.83984, gf=176.553, secs=0.3, GB=0.11, MB/s=344.22, GPUmem=0.86
36.00%, ll=-10077.64258, gf=184.618, secs=0.3, GB=0.12, MB/s=359.95, GPUmem=0.86
40.00%, ll=-10076.89453, gf=191.924, secs=0.3, GB=0.13, MB/s=374.19, GPUmem=0.86
43.00%, ll=-10077.04785, gf=198.017, secs=0.4, GB=0.14, MB/s=386.07, GPUmem=0.86
46.00%, ll=-10075.60059, gf=204.650, secs=0.4, GB=0.15, MB/s=399.00, GPUmem=0.86
50.00%, ll=-10078.08008, gf=210.226, secs=0.4, GB=0.16, MB/s=409.88, GPUmem=0.86
53.00%, ll=-10077.05566, gf=215.360, secs=0.4, GB=0.17, MB/s=419.89, GPUmem=0.86
56.00%, ll=-10076.69824, gf=220.629, secs=0.4, GB=0.18, MB/s=430.16, GPUmem=0.86
60.00%, ll=-10078.28906, gf=225.014, secs=0.4, GB=0.19, MB/s=438.71, GPUmem=0.86
63.00%, ll=-10076.39355, gf=229.089, secs=0.5, GB=0.20, MB/s=446.65, GPUmem=0.86
66.00%, ll=-10077.28125, gf=232.884, secs=0.5, GB=0.21, MB/s=454.05, GPUmem=0.86
70.00%, ll=-10079.35547, gf=236.919, secs=0.5, GB=0.22, MB/s=461.92, GPUmem=0.86
73.00%, ll=-10075.48633, gf=240.226, secs=0.5, GB=0.23, MB/s=468.37, GPUmem=0.86
76.00%, ll=-10075.48340, gf=243.802, secs=0.5, GB=0.24, MB/s=475.34, GPUmem=0.86
80.00%, ll=-10075.16699, gf=246.708, secs=0.5, GB=0.25, MB/s=481.00, GPUmem=0.86
83.00%, ll=-10075.18457, gf=248.986, secs=0.5, GB=0.27, MB/s=485.45, GPUmem=0.86
86.00%, ll=-10076.59961, gf=251.573, secs=0.6, GB=0.28, MB/s=490.49, GPUmem=0.86
90.00%, ll=-10077.46777, gf=254.457, secs=0.6, GB=0.29, MB/s=496.11, GPUmem=0.86
93.00%, ll=-10077.57422, gf=257.195, secs=0.6, GB=0.30, MB/s=501.45, GPUmem=0.86
96.00%, ll=-10077.71875, gf=259.798, secs=0.6, GB=0.31, MB/s=506.53, GPUmem=0.86
100.00%, ll=-10077.27441, gf=260.132, secs=0.6, GB=0.32, MB/s=507.18, GPUmem=0.85
Time=0.6270 secs, gflops=260.13

The preds matrix now contains the indices of the best-matching cluster centers. We still need to look up the category label for each of them, which lives in the first 10 rows of the model matrix, and the true category for each of the test inputs.


In [12]:
val (vmax, predcat) = maxi2(modelmat(preds,0->10).t)   // Lookup the cat for the matching cluster
val (wmax, truecat) = maxi2(test(0->10,?))             // Reference cats for test items
val inds = predcat.t \ truecat.t                       // Concatenate them into a two-column matrix



Out[12]:
   7   7
   7   7
   0   0
   0   0
   1   1
   3   3
   9   9
   1   1
  ..  ..

From the actual and predicted categories, we can compute a confusion matrix:


In [13]:
val conf = accum(inds, 1f, 10, 10)  // accumulate the (predicted, true) label pairs into counts
conf ~ conf / sum(conf)             // normalize each column (one per true category) to sum to 1



Out[13]:
     0.97358           0   0.0072713   0.0025349   0.0015432   0.0057694...
  0.00030254     0.99089   0.0089881   0.0036073    0.015021   0.0038833...
   0.0036305   0.0038386     0.94587    0.011894   0.0029835   0.0013314...
   0.0014119  0.00062489   0.0044435     0.90904  0.00010288    0.022745...
  0.00090762  0.00089270   0.0017168  0.00019499     0.89259   0.0011095...
   0.0023195  8.9270e-05   0.0014139    0.029541  0.00041152     0.92833...
   0.0099839  0.00080343   0.0020198   0.0012674   0.0058642    0.015866...
   0.0019161   0.0010712    0.016158   0.0093595   0.0065844   0.0024409...
          ..          ..          ..          ..          ..          ..
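
As an aside on accum's semantics, here is a toy sketch (it assumes BIDMat's \ horizontal and on vertical concatenation operators): each row of the index matrix is an (i,j) coordinate, and accum adds 1f at that position, so repeated coordinates count up.

val toy = (0\0) on (1\2) on (1\2)   // three (row,col) coordinate pairs
val m = accum(toy, 1f, 3, 3)        // 3x3 matrix of counts
// m(0,0) == 1f, m(1,2) == 2f, all other entries are 0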

Now let's create an image by multiplying each confusion matrix cell by a white square:


In [14]:
Image.show((conf * 250f) kron ones(64,64))



Out[14]:
javax.swing.JFrame[frame1,0,0,656x679,layout=java.awt.BorderLayout,title=Image 1,resizable,normal,defaultCloseOperation=HIDE_ON_CLOSE,rootPane=javax.swing.JRootPane[,8,31,640x640,layout=javax.swing.JRootPane$RootLayout,alignmentX=0.0,alignmentY=0.0,border=,flags=16777673,maximumSize=,minimumSize=,preferredSize=],rootPaneCheckingEnabled=true]

It's useful to isolate the correct classification rate by digit, which is the diagonal of the confusion matrix:


In [15]:
val dacc = getdiag(conf).t



Out[15]:
0.97358,0.99089,0.94587,0.90904,0.89259,0.92833,0.97600,0.93618,0.91028,0.90734

We can take the mean of the diagonal accuracies to get an overall accuracy for this model. (Since the digit classes are roughly balanced, this macro-average is close to the overall sample accuracy.)


In [16]:
mean(dacc)



Out[16]:
0.93701

Run the experiment again with a larger number of clusters (3000, then 30000). You should keep the batchSize option at 20000 or lower to avoid memory problems.

Include the training time output by the call to mm.train, but not the evaluation time (the evaluation code above does not use the GPU). Rerun and fill out the table below:

KMeans Clusters    Training time    Avg. gflops    Accuracy
300                ...              ...            ...
3000               ...              ...            ...
30000              ...              ...            ...
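
For reference, one such re-run might look like this (a sketch that reuses only calls shown above; the variable names are new):

val (mm3k, opts3k) = KMeans.learner(mdir+"alls%02d.fmat.lz4", 3000)
opts3k.nend = 70
opts3k.batchSize = 20000    // keep this modest to avoid GPU memory problems
opts3k.npasses = 4
mm3k.train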