Training models only on data that fits in memory is very limiting, but minibatch learners can easily work with data streamed directly from disk.
We'll use the MNIST8M data set, which has 8 million images (about 17 GB). The dataset has been partitioned into groups of 100k images (using the unix split command) and saved in compressed lz4 files. Because it is so large, this dataset is not downloaded by default by getdata.sh. You have to fetch it explicitly by running getmnist.sh from the scripts directory. That script automatically splits the data into files that are small enough to be loaded into memory.
Let's load BIDMat/BIDMach
In [1]:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,Image,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,HMat,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.Plotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA}
import BIDMach.datasources.{DataSource,MatDS,FilesDS,SFilesDS}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}
Mat.checkMKL
Mat.checkCUDA
if (Mat.hasCUDA > 0) GPUmem
Out[1]:
And define the root directory for this dataset.
In [2]:
val mdir = "../data/MNIST8M/parts/"
Out[2]:
The files we need are named "alls00.fmat.lz4", "alls01.fmat.lz4", etc. We can create a learner using a filename pattern for accessing these files:
In [3]:
val (mm, opts) = KMeans.learner(mdir+"alls%02d.fmat.lz4",1024)
Out[3]:
The string "%02d" is a C/Scala format string that expands into a zero-padded, two-digit decimal number, matching the numbering of the part files.
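To see how the pattern expands, you can evaluate it directly (just an illustration; the datasource performs this expansion internally):

(0 until 3).map(i => mdir + ("alls%02d.fmat.lz4" format i))   // -> alls00.fmat.lz4, alls01.fmat.lz4, alls02.fmat.lz4 under mdir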
There are several new options for tailoring a files datasource, but we'll mostly use the defaults. One thing we will do is set the number of files to use for training (70, i.e. files 00 through 69). This leaves us with some held-out files to use for testing.
In [4]:
opts.nend = 70
Out[4]:
Note that the training data include both image data and labels (0-9). K-Means is an unsupervised algorithm, and if we used the image data alone it would often build clusters containing images of different digits. To produce cleaner clusters, and to facilitate classification later on, the alls data includes the labels in the first 10 rows and the image data in the remaining rows. The label features are scaled by a large constant factor, which pushes images of different digits far apart in feature space and effectively prevents different digits from occurring in the same cluster.
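To make that layout concrete, here is roughly how such a matrix could be assembled from a one-hot label matrix and an image matrix. This is only a sketch: labels, images and the scale factor are illustrative placeholders, not the values used to build the actual files.

// Sketch: build an "alls"-style matrix from one-hot labels and raw images.
// Assumes labels is 10 x n and images is 784 x n; labelWeight is an illustrative constant.
val labelWeight = 10000f
val allsSketch = (labels * labelWeight) on images   // "on" = vertical concatenation, giving 794 x n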
The following options are the important ones for tuning. For KMeans, batchSize has no effect on accuracy, since the algorithm uses all the data instances to perform each update, so you're free to tune it for best speed. Generally larger is better, as long as you don't use too much GPU RAM.
npasses is the number of passes over the dataset. Larger is typically better, but the model may overfit at some point.
In [5]:
opts.batchSize = 20000
opts.npasses = 4
Out[5]:
You invoke the learner the same way as before. You can change the options above after each run to optimize performance.
In [6]:
mm.train
Now let's extract the model as a floating-point matrix. We included the category features in clustering to make sure that each cluster is a subset of images of a single digit.
In [7]:
val modelmat = FMat(mm.modelmat)
Out[7]:
Next we build a 30 x 10 array of images to view the first 300 cluster centers as images.
In [8]:
val nx = 30
val ny = 10
val im = zeros(28,28)                // buffer for one 28x28 digit image
val allim = zeros(28*nx,28*ny)       // montage holding nx x ny images
for (i <- 0 until nx) {
  for (j <- 0 until ny) {
    val slice = modelmat(i+nx*j, 10->794)                 // pixel columns of one cluster center (columns 0-9 are the label features)
    im(?) = slice(?)                                      // reshape the 784 values into the 28x28 buffer
    allim((28*i)->(28*(i+1)), (28*j)->(28*(j+1))) = im    // paste it into the montage
  }
}
Image.show(allim kron ones(2,2))     // upscale by 2x and display
Out[8]:
We'll predict using the closest cluster center (1-NN against the centroids, if you like). First we read some test data directly into memory. We could also evaluate directly from disk, but that would usually be overkill.
In [9]:
val test = loadFMat(mdir+"alls70.fmat.lz4") // Load a test data file
val testdata = test.copy // copy it
testdata(0->10,?) = 0 // and remove the digit labels
val preds = izeros(1, test.ncols) // make a container to hold the predictions
1 // avoids a monster data cell being printed
Out[9]:
Next we define a predictor from the just-computed model and the testdata, with the preds matrix to catch the predictions.
In [10]:
val (pp, popts) = KMeans.predictor(mm.model, testdata, preds)
Out[10]:
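Under the hood, the predictor assigns each test column to its nearest cluster center. Here is a hand-rolled sketch of that computation, purely for intuition; the predict call below is what we actually use, and the sketch assumes mini2, the min analogue of the maxi2 call used later.

// Nearest-centroid assignment: argmin_i ||c_i - x||^2 for each test column x.
// ||x||^2 is the same for every center, so minimizing ||c_i||^2 - 2 * c_i . x is enough.
val centers = FMat(mm.modelmat)                                              // k x 794 cluster centers
val cnorms  = sum(centers *@ centers, 2)                                     // k x 1 squared norms of the centers
val scores  = cnorms * ones(1, testdata.ncols) - (centers * testdata) * 2f   // k x n, smaller = closer
val (smin, assign) = mini2(scores)                                           // best center index for each column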
Let's run the predictor:
In [11]:
pp.predict
The preds matrix now contains the indices of the best-matching cluster centers. We still need to look up the category label for each of these, and also the true category for each of the test inputs.
In [12]:
val (vmax, predcat) = maxi2(modelmat(preds,0->10).t) // Lookup the cat for the matching cluster
val (wmax, truecat) = maxi2(test(0->10,?)) // Reference cats for test items
val inds = predcat.t \ truecat.t // Concatenate them into a two-column matrix
Out[12]:
From the actual and predicted categories, we can compute a confusion matrix:
In [13]:
val conf = accum(inds, 1f, 10, 10) // accumulate the (estimate,exact) ids into a matrix
conf ~ conf / sum(conf) // normalize
Out[13]:
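If accum is unfamiliar: each row of inds holds a (predicted, true) index pair, and accum adds the value 1f into that cell of a 10 x 10 matrix. A tiny, hand-checkable illustration with made-up indices:

// accum sums the given value into the cells addressed by each row of the index matrix.
val toyInds = 0\0 on 0\1 on 0\0        // three (row,col) pairs: (0,0), (0,1), (0,0)
accum(toyInds, 1f, 2, 2)               // 2x2 result: 2 at (0,0), 1 at (0,1), 0 elsewhere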
Now let's create an image by multiplying each confusion matrix cell by a white square:
In [14]:
Image.show((conf * 250f) ⊗ ones(64,64))
Out[14]:
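If the Kronecker product (kron, or the ⊗ operator above) is unfamiliar, it simply expands each cell of the left matrix into a constant block. A toy example, purely for illustration:

val toy = FMat(1\2 on 3\4)             // a 2x2 matrix
toy kron ones(2,2)                     // 4x4 result: each cell expands into a 2x2 block of that value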
It's useful to isolate the correct-classification rate for each digit, which is:
In [15]:
val dacc = getdiag(conf).t
Out[15]:
We can take the mean of the diagonal accuracies to get an overall accuracy for this model.
In [16]:
mean(dacc)
Out[16]:
Run the experiment again with a larger number of clusters (3000, then 30000). Keep the batchSize option at 20000 or lower to avoid memory problems.
Record the training time reported by the call to mm.train, but not the evaluation time (the evaluation code above does not use the GPU). Rerun and fill out the table below:
KMeans Clusters | Training time | Avg. gflops | Accuracy |
---|---|---|---|
300 | ... | ... | ... |
3000 | ... | ... | ... |
30000 | ... | ... | ... |
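For reference, here is one way to set up a rerun, reusing the same calls as above (a sketch; the cluster count is the value you are varying, and the other options match the settings used earlier):

// Rebuild the learner with a different number of clusters and train again.
val (mm2, opts2) = KMeans.learner(mdir+"alls%02d.fmat.lz4", 3000)   // or 30000
opts2.nend = 70
opts2.batchSize = 20000
opts2.npasses = 4
mm2.train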