BIDMach: parameter tuning

In this notebook we'll explore automated parameter tuning by grid search.


In [1]:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,HMat,Image,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.Plotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,NMF,RandomForest,SFA}
import BIDMach.datasources.{MatDS,FilesDS,SFilesDS}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}

Mat.checkMKL
Mat.checkCUDA
if (Mat.hasCUDA > 0) GPUmem


1 CUDA device found, CUDA version 6.5
Out[1]:
(0.8837719,3795771392,4294967296)

Dataset: Reuters RCV1 V2

The dataset is the widely used Reuters news article dataset RCV1 V2. This dataset and several others are loaded by running the script getdata.sh from the BIDMach/scripts directory. The data include both train and test subsets, and train and test labels (cats).


In [2]:
var dir = "../data/rcv1/"             // adjust to point to the BIDMach/data/rcv1 directory
tic
val train = loadSMat(dir+"docs.smat.lz4")
val cats = loadFMat(dir+"cats.fmat.lz4")
val test = loadSMat(dir+"testdocs.smat.lz4")
val tcats = loadFMat(dir+"testcats.fmat.lz4")
toc



Out[2]:
1.811
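
As a quick sanity check (a sketch, not part of the original script), we can confirm that the four matrices line up: train and test share a vocabulary, and there is one label column per document.

size(train)                          // (nfeats, ndocs)
assert(train.nrows == test.nrows)    // same vocabulary for train and test
assert(cats.ncols  == train.ncols)   // one label column per training doc
assert(tcats.ncols == test.ncols)    // one label column per test doc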

First let's enumerate some parameter combinations for the learning rate (lrate) and the time exponent of the optimizer (texp):


In [3]:
val lrates = col(0.03f, 0.1f, 0.3f, 1f)        // 4 values
val texps = col(0.3f, 0.4f, 0.5f, 0.6f, 0.7f)  // 5 values



Out[3]:
  0.30000
  0.40000
  0.50000
  0.60000
  0.70000

The next step is to enumerate all pairs of parameters. We can do this with the kron operator (⊗) for now; it will eventually be a custom function. Kronecker-multiplying a column of ones by lrates stacks the four learning rates five times, while texps ⊗ ones repeats each texp value four times, so the two 20-element columns together enumerate all 20 pairs:


In [4]:
val lrateparams = ones(texps.nrows, 1) ⊗ lrates
val texpparams = texps ⊗ ones(lrates.nrows, 1)
lrateparams \ texpparams



Out[4]:
  0.030000   0.30000
   0.10000   0.30000
   0.30000   0.30000
         1   0.30000
  0.030000   0.40000
   0.10000   0.40000
   0.30000   0.40000
         1   0.40000
        ..        ..

Here's the learner again:


In [5]:
val (mm, opts) = GLM.learner(train, cats, GLM.logistic)



Out[5]:
BIDMach.models.GLM$LearnOptions@12f17837

To keep things simple, we'll focus on just one category and train many models for it. The "targmap" option specifies a mapping from the actual base categories to the model categories. We'll map base category 6 to all of our models:


In [6]:
val nparams = lrateparams.length
val targmap = zeros(nparams, 103)
targmap(?,6) = 1



Out[6]:
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
   0   0   0   0   0   0   1   0   0   0   0   0   0   0   0   0   0   0...
  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..  ..

In [7]:
opts.targmap = targmap
opts.lrate = lrateparams
opts.texp = texpparams



Out[7]:
  0.30000
  0.30000
  0.30000
  0.30000
  0.40000
  0.40000
  0.40000
  0.40000
       ..

In [8]:
mm.train


corpus perplexity=5582.125391
pass= 0
 2.00%, ll=-0.69315, gf=0.855, secs=0.4, GB=0.02, MB/s=59.99, GPUmem=0.84
16.00%, ll=-0.40513, gf=4.028, secs=0.7, GB=0.13, MB/s=196.88, GPUmem=0.84
30.00%, ll=-0.37952, gf=5.155, secs=1.0, GB=0.25, MB/s=242.45, GPUmem=0.84
44.00%, ll=-0.31513, gf=5.813, secs=1.3, GB=0.36, MB/s=270.14, GPUmem=0.84
58.00%, ll=-0.33632, gf=5.274, secs=2.0, GB=0.48, MB/s=243.84, GPUmem=0.84
72.00%, ll=-0.23368, gf=5.638, secs=2.3, GB=0.59, MB/s=259.20, GPUmem=0.84
87.00%, ll=-0.29033, gf=5.885, secs=2.6, GB=0.70, MB/s=269.99, GPUmem=0.84
100.00%, ll=-0.23384, gf=6.076, secs=2.9, GB=0.81, MB/s=276.65, GPUmem=0.84
pass= 1
 2.00%, ll=-0.28232, gf=6.104, secs=3.0, GB=0.83, MB/s=279.56, GPUmem=0.84
16.00%, ll=-0.22606, gf=6.230, secs=3.3, GB=0.94, MB/s=284.92, GPUmem=0.84
30.00%, ll=-0.27916, gf=6.333, secs=3.6, GB=1.05, MB/s=289.11, GPUmem=0.84
44.00%, ll=-0.28040, gf=6.436, secs=4.0, GB=1.17, MB/s=293.60, GPUmem=0.84
58.00%, ll=-0.23271, gf=6.545, secs=4.3, GB=1.28, MB/s=298.47, GPUmem=0.83
72.00%, ll=-0.19472, gf=6.642, secs=4.6, GB=1.39, MB/s=302.53, GPUmem=0.83
87.00%, ll=-0.28296, gf=6.716, secs=4.9, GB=1.51, MB/s=305.86, GPUmem=0.83
100.00%, ll=-0.21991, gf=6.745, secs=5.3, GB=1.61, MB/s=306.17, GPUmem=0.83
Time=5.2660 secs, gflops=6.75

In [9]:
val preds = zeros(targmap.nrows, tcats.ncols)       // An array to hold the predictions
val (pp, popts) = GLM.predictor(mm.model, test, preds)



Out[9]:
BIDMach.models.GLM$LearnOptions@771ad286

And invoke the predict method on the predictor:


In [10]:
pp.predict


corpus perplexity=65579.335560
Predicting
 3.00%, ll=-4.08879, gf=0.020, secs=0.3, GB=0.00, MB/s= 1.91, GPUmem=0.87
 6.00%, ll=-1.78565, gf=0.038, secs=0.3, GB=0.00, MB/s= 3.66, GPUmem=0.87
10.00%, ll=-3.69535, gf=0.055, secs=0.3, GB=0.00, MB/s= 5.19, GPUmem=0.87
13.00%, ll=-3.79439, gf=0.074, secs=0.3, GB=0.00, MB/s= 7.06, GPUmem=0.87
16.00%, ll=-2.77067, gf=0.092, secs=0.3, GB=0.00, MB/s= 8.71, GPUmem=0.87
20.00%, ll=-2.79940, gf=0.109, secs=0.3, GB=0.00, MB/s=10.25, GPUmem=0.87
23.00%, ll=-3.64225, gf=0.125, secs=0.3, GB=0.00, MB/s=11.75, GPUmem=0.87
26.00%, ll=-3.11155, gf=0.142, secs=0.3, GB=0.00, MB/s=13.29, GPUmem=0.87
30.00%, ll=-3.26986, gf=0.159, secs=0.3, GB=0.00, MB/s=14.88, GPUmem=0.87
33.00%, ll=-2.60778, gf=0.176, secs=0.3, GB=0.01, MB/s=16.53, GPUmem=0.87
36.00%, ll=-2.68311, gf=0.192, secs=0.3, GB=0.01, MB/s=18.15, GPUmem=0.87
40.00%, ll=-2.45453, gf=0.208, secs=0.3, GB=0.01, MB/s=19.60, GPUmem=0.87
43.00%, ll=-2.99318, gf=0.224, secs=0.3, GB=0.01, MB/s=21.09, GPUmem=0.87
46.00%, ll=-2.64993, gf=0.242, secs=0.3, GB=0.01, MB/s=23.08, GPUmem=0.86
50.00%, ll=-3.15696, gf=0.257, secs=0.3, GB=0.01, MB/s=24.54, GPUmem=0.86
53.00%, ll=-2.48460, gf=0.272, secs=0.3, GB=0.01, MB/s=25.92, GPUmem=0.86
56.00%, ll=-3.76540, gf=0.287, secs=0.3, GB=0.01, MB/s=27.40, GPUmem=0.86
60.00%, ll=-2.61050, gf=0.301, secs=0.3, GB=0.01, MB/s=28.73, GPUmem=0.86
63.00%, ll=-2.89073, gf=0.316, secs=0.3, GB=0.01, MB/s=30.03, GPUmem=0.86
66.00%, ll=-3.84462, gf=0.331, secs=0.3, GB=0.01, MB/s=31.47, GPUmem=0.86
70.00%, ll=-3.13115, gf=0.345, secs=0.3, GB=0.01, MB/s=32.81, GPUmem=0.86
73.00%, ll=-2.31032, gf=0.360, secs=0.3, GB=0.01, MB/s=34.23, GPUmem=0.86
76.00%, ll=-3.60105, gf=0.373, secs=0.3, GB=0.01, MB/s=35.35, GPUmem=0.86
80.00%, ll=-2.51561, gf=0.388, secs=0.3, GB=0.01, MB/s=36.94, GPUmem=0.86
83.00%, ll=-2.95237, gf=0.401, secs=0.3, GB=0.01, MB/s=38.22, GPUmem=0.86
86.00%, ll=-4.13474, gf=0.415, secs=0.3, GB=0.01, MB/s=39.51, GPUmem=0.86
90.00%, ll=-3.39915, gf=0.428, secs=0.3, GB=0.01, MB/s=40.69, GPUmem=0.86
93.00%, ll=-1.85190, gf=0.425, secs=0.4, GB=0.01, MB/s=40.51, GPUmem=0.86
96.00%, ll=-3.48102, gf=0.438, secs=0.4, GB=0.02, MB/s=41.79, GPUmem=0.86
100.00%, ll=-1.80929, gf=0.450, secs=0.4, GB=0.02, MB/s=42.99, GPUmem=0.86
Time=0.3710 secs, gflops=0.45

Although ll values are printed above, they are not meaningful (there is no target to compare the predictions against).

We can now compare the accuracy of predictions (preds matrix) with ground truth (the tcats matrix).
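
For each model, the per-example likelihood is the predicted probability when the (virtual) label is 1 and one minus it when the label is 0, so the row means below compute

ll = mean_j ln( vcats(j) * preds(j) + (1 - vcats(j)) * (1 - preds(j)) )

with a small constant (1e-7f) added inside the log for numerical safety.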


In [11]:
val vcats = targmap * tcats                                          // create some virtual cats
val lls = mean(ln(1e-7f + vcats ∘ preds + (1-vcats) ∘ (1-preds)),2)  // actual logistic likelihood
mean(lls)



Out[11]:
-0.23868

A more thorough measure is the area under the ROC curve (AUC):


In [12]:
val rocs = roc2(preds, vcats, 1-vcats, 100)   // Compute ROC curves for all categories



Out[12]:
        0        0        0        0        0        0        0        0...
  0.84498  0.83089  0.70786  0.68339  0.83812  0.83970  0.76794  0.72687...
  0.88921  0.88448  0.82171  0.80113  0.88726  0.88800  0.85490  0.82922...
  0.91925  0.91368  0.87920  0.86826  0.91730  0.91980  0.89681  0.88874...
  0.93529  0.93241  0.90859  0.90460  0.93325  0.93492  0.92101  0.91841...
  0.94632  0.94252  0.93065  0.92722  0.94484  0.94548  0.93427  0.93325...
  0.95299  0.95031  0.94252  0.93677  0.95216  0.95281  0.94669  0.94261...
  0.95800  0.95698  0.95160  0.94474  0.95744  0.95782  0.95337  0.94854...
       ..       ..       ..       ..       ..       ..       ..       ..

In [13]:
plot(rocs)



Out[13]:
(a Ptolemy plot window opens, showing the ROC curves for all 20 models)

In [14]:
val aucs = mean(rocs)



Out[14]:
0.97690,0.97633,0.97336,0.97058,0.97659,0.97681,0.97469,0.97249,0.97606,0.97700,0.97553,0.97189,0.97517,0.97694,0.97613,0.97292,0.97389,0.97664,0.97639,0.97353

The maxi2 function finds the maximum value and its index:


In [15]:
val (bestv, besti) = maxi2(aucs)



Out[15]:
9

And using the best index we can find the optimal parameters:


In [16]:
texpparams(besti) \ lrateparams(besti)



Out[16]:
0.50000,0.10000

Write the optimal values in the cell below:

Note: although our parameters lie on a regular grid, we could have enumerated any sequence of pairs, and we could have searched over more parameters. The learner infrastructure also supports more intelligent model optimization (e.g. Bayesian methods).
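
For example, a randomized search over the same two parameters is a small change to the setup above (a sketch; the sampling ranges are illustrative assumptions, not values from this notebook):

// A sketch of random search over the same two parameters.
// With 20 samples, the 20-row targmap built earlier still applies.
val npairs = 20
val span = math.log(1.0 / 0.03).toFloat
val lrates2 = 0.03f * exp(rand(npairs, 1) * span)   // log-uniform on [0.03, 1]
val texps2  = 0.3f + 0.4f * rand(npairs, 1)         // uniform on [0.3, 0.7]
opts.lrate = lrates2
opts.texp  = texps2
// then retrain (mm.train) and score the models as before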