COCOA Method Based on L-BFGS Optimization

In this tutorial, we'll explore the training and evaluation of L-BFGS-based logistic regression classifiers.

To start, we import the standard BIDMach class definitions.


In [1]:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,FMat,GMat,GIMat,GSMat,HMat,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.Plotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,LDA,LDAgibbs,NMF,SFA}
import BIDMach.datasources.{MatDS,FilesDS,SFilesDS}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}

Mat.checkMKL
Mat.checkCUDA
if (Mat.hasCUDA > 0) GPUmem


Cant find native HDF5 library
Couldnt load JCuda
Out[1]:
()

Now we load some training and test data, and some category labels. The data come from a Reuters news collection and form a "classic" test set for classification. Each article belongs to one or more of 103 categories. The articles are represented as Bag-of-Words (BoW) column vectors: for a data matrix A, element A(i,j) holds the count of word i in document j.

The category matrices have 103 rows; a category matrix C has a one in position C(i,j) if document j is tagged with category i, and a zero otherwise.

To reduce the computing time and memory footprint, the training data have been sampled. The full collection has about 700k documents. Our training set has 60k.

Since the document matrices contain word counts, we use a min function to cap each count at 1, because Naive Bayes requires binary features.
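
For example, binarizing a count matrix a in place uses BIDMat's three-argument min, where the last argument receives the output (a minimal sketch; a is any FMat of counts):

min(a, 1, a)    // cap every word count at 1, writing the result back into a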


In [3]:
val dict = "/Users/Anna/workspace/BIDMach/data/"   // input data directory
val rpath = "/Users/Anna/workspace/BIDMach/data/"  // output (results) directory

var index = 79;                                    // file index of the first category
var index1 = 110;                                  // file index of the second category
var tnum = 1;
var tnum1 = 1;
//for(index <- 79 to 79){
var aa = loadFMat(dict+index+"_train"+".txt")      // feature matrix for the first category
// var c = loadFMat(dict+index+"_test"+".txt")
var b = loadFMat(dict+index1+"_train.txt")         // feature matrix for the second category

//var a = aa.t
var atrain = aa(?,(239->aa.ncols))                 // keep columns from 239 onward
//var atest = c(?,(239->c.ncols))
var atrain1 = b(?,(239->b.ncols))

tnum = (aa.nrows+1)/2                              // split each category's rows in half
var ttrain = aa(0->tnum,(239->aa.ncols))           // first half for training
var ttest = aa(tnum->aa.nrows,(239->aa.ncols))     // second half for testing
tnum1 = (b.nrows+1)/2
var ttrain1 = b(0->tnum1,(239->b.ncols))
var ttest1 = b(tnum1->b.nrows,(239->b.ncols))

var trainx = zeros(ttrain.nrows+ttrain1.nrows, 4096)     // stack both categories' training rows
trainx(0->ttrain.nrows, ?) = ttrain
trainx(ttrain.nrows->(ttrain.nrows+ttrain1.nrows), ?) = ttrain1

var testx = zeros(ttest.nrows+ttest1.nrows, 4096)        // stack both categories' test rows
testx(0->ttest.nrows, ?) = ttest
testx(ttest.nrows->(ttest.nrows+ttest1.nrows), ?) = ttest1

var ctrain1 = zeros(2, ttrain.nrows+ttrain1.nrows)       // one-hot training labels, one row per class
ctrain1(0, 0->ttrain.nrows) = 1
ctrain1(1, ttrain.nrows->(ttrain.nrows+ttrain1.nrows)) = 1
//ctrain1(1, 0->ttrain.nrows) = -1
//ctrain1(0, ttrain.nrows->(ttrain.nrows+ttrain1.nrows)) = -1

var ctest = zeros(2, ttest.nrows+ttest1.nrows)           // one-hot test labels
ctest(0, 0->ttest.nrows) = 1
ctest(1, ttest.nrows->(ttest.nrows+ttest1.nrows)) = 1
//ctest(1, 0->ttest.nrows) = -1
//ctest(0, ttest.nrows->(ttest.nrows+ttest1.nrows)) = -1

//max(atrain, 0.001, atrain)            // clamp: first argument is input, last is output
tnum = testx.nrows
val cx = zeros(2, testx.nrows)          // container for the model's test-set predictions
var maxx = maxi(maxi(trainx,1),2);      // global max/min over the training features
var minx = mini(mini(trainx,1),2);
trainx = (trainx-minx)/(maxx-minx);     // min-max normalize train and test with training statistics
testx = (testx-minx)/(maxx-minx);
val (mm, mopts, nn, nopts) = GLM.LBFGSlearner(trainx.t, ctrain1, testx.t, cx)

mopts.autoReset=false
mopts.useGPU=false
mopts.lrate=0.1
mopts.batchSize=50
mopts.dim=256
mopts.startBlock=0
mopts.npasses=40
mopts.updateAll=false

//problem:memory leak
mm.train;

nn.predict;

saveFMat(rpath+"r"+index+".txt",cx/tnum);
//}
 
min(cx, 1, cx)                       // clamp predictions to [0,1]: first argument is input, last is output
max(cx, 0, cx)
val p = ctest *@ cx + (1-ctest) *@ (1-cx)   // per-element probability of predicting the correct label
var meanp = mean(p,2)                       // mean accuracy per class
//saveFMat(rpath+"meanp"+index+".txt",meanp);
meanp


corpus perplexity=3454.134799
(2,4096)
pass= 0
(1,2)
(0.4494004249572754,    0.0013738
   -0.0022747
   -0.0014853
    0.0028352
    0.0040063
   0.00040735
  -9.7686e-05
    0.0015630
           ..
)
100.00%, ll=-1.20408, gf=1.805, secs=0.1, GB=0.00, MB/s=40.82
pass= 1
(1,2)
(0.13582171499729156,  -0.00083846
  -0.00096879
  -0.00038762
  -0.00017293
   -0.0063735
   -0.0014488
   -0.0011377
   5.1636e-05
           ..
)
100.00%, ll=-1.15363, gf=1.821, secs=0.1, GB=0.00, MB/s=41.18
pass= 2
(1,2)
(0.10177785158157349,  -0.00064910
  -0.00074999
  -0.00030008
  -0.00013387
   -0.0040396
  -0.00022718
   1.3670e-05
   3.9974e-05
           ..
)
100.00%, ll=-1.04952, gf=1.837, secs=0.2, GB=0.01, MB/s=41.55
pass= 3
(1,2)
(0.08384242653846741,  -0.00076133
  -0.00064364
  -0.00026043
  -0.00012276
   -0.0035598
  -0.00026415
  -0.00015153
   3.1218e-05
           ..
)
100.00%, ll=-0.96572, gf=1.854, secs=0.2, GB=0.01, MB/s=41.92
pass= 4
(1,2)
(0.07361403852701187,  -0.00069415
  -0.00056927
  -0.00022959
  -0.00022628
   -0.0037102
  -0.00025731
  -0.00014351
   2.1922e-05
           ..
)
100.00%, ll=-0.88716, gf=1.780, secs=0.3, GB=0.01, MB/s=40.25
pass= 5
(1,2)
(0.06625429540872574,  -3.8724e-06
  -0.00051308
  -0.00020761
  -9.7820e-05
   -0.0028376
  -0.00021056
  -0.00012079
   2.4887e-05
           ..
)
100.00%, ll=-0.81714, gf=1.754, secs=0.4, GB=0.01, MB/s=39.66
pass= 6
(1,2)
(0.060841139405965805,  -3.5610e-06
  -0.00047183
  -0.00019091
  -8.9954e-05
   -0.0026094
  -0.00019363
  -0.00011108
   2.2886e-05
           ..
)
100.00%, ll=-0.75846, gf=1.735, secs=0.4, GB=0.02, MB/s=39.24
pass= 7
(1,2)
(0.04746568948030472,   1.4183e-05
  -0.00041046
  -0.00010252
  -0.00013639
   -0.0018075
  -9.7930e-05
   2.9472e-05
   6.6703e-05
           ..
)
100.00%, ll=-0.71654, gf=1.715, secs=0.5, GB=0.02, MB/s=38.78
pass= 8
(1,2)
(0.04462983086705208,  -1.2259e-05
  -0.00038797
  -0.00012232
  -0.00026577
   -0.0020151
  -9.6101e-05
  -5.6293e-06
   3.3637e-05
           ..
)
100.00%, ll=-0.67233, gf=1.750, secs=0.5, GB=0.02, MB/s=39.58
pass= 9
(1,2)
(0.042011283338069916,   1.2602e-05
  -0.00036452
  -9.1039e-05
  -0.00012109
   -0.0016052
  -8.6969e-05
   2.6182e-05
   5.9244e-05
           ..
)
100.00%, ll=-0.63581, gf=1.738, secs=0.6, GB=0.02, MB/s=39.30
pass=10
(1,2)
(0.03719072788953781,   2.5574e-05
  -0.00032604
  -7.1105e-05
  -6.5141e-05
  -0.00089255
  -1.7536e-05
   6.2129e-05
   7.2703e-05
           ..
)
100.00%, ll=-0.60575, gf=1.768, secs=0.6, GB=0.03, MB/s=39.99
pass=11
(1,2)
(0.03507523611187935,  -5.7456e-05
  -0.00032190
  -9.5668e-05
  -0.00018120
   -0.0014317
  -4.9728e-05
  -3.1073e-05
  -1.7042e-05
           ..
)
100.00%, ll=-0.56851, gf=1.774, secs=0.7, GB=0.03, MB/s=40.11
pass=12
(1,2)
(0.034032877534627914,   1.6135e-05
  -0.00029966
  -8.6614e-05
  -0.00010236
  -0.00073635
  -4.8058e-06
   3.5424e-05
   4.6065e-05
           ..
)
100.00%, ll=-0.54505, gf=1.781, secs=0.8, GB=0.03, MB/s=40.27
pass=13
(1,2)
(0.032601725310087204,  -0.00011097
  -0.00030424
  -0.00013051
  -0.00016719
   -0.0015290
  -4.7339e-05
  -0.00014362
  -2.7424e-05
           ..
)
100.00%, ll=-0.51166, gf=1.814, secs=0.8, GB=0.03, MB/s=41.02
pass=14
(1,2)
(0.02952096424996853,   2.5115e-06
  -0.00025881
  -2.7834e-05
   5.7144e-05
  -0.00042910
  -9.1653e-07
   8.7167e-05
   2.7193e-05
           ..
)
100.00%, ll=-0.48724, gf=1.797, secs=0.9, GB=0.03, MB/s=40.63
pass=15
(1,2)
(0.027476126328110695,  -8.2974e-05
  -0.00025990
  -8.5634e-05
  -0.00011032
   -0.0011525
  -1.8472e-05
  -5.2984e-05
  -9.1072e-06
           ..
)
100.00%, ll=-0.46848, gf=1.797, secs=0.9, GB=0.04, MB/s=40.64
pass=16
(1,2)
(0.020297421142458916,   3.6148e-05
  -0.00021182
  -3.3674e-05
   8.9169e-05
  -0.00018126
   0.00019112
   0.00030066
   6.7775e-05
           ..
)
100.00%, ll=-0.45805, gf=1.799, secs=1.0, GB=0.04, MB/s=40.69
pass=17
(1,2)
(0.0196317657828331,   3.5094e-05
  -0.00020564
  -3.2692e-05
   8.6569e-05
  -0.00017597
   0.00018554
   0.00029189
   6.5799e-05
           ..
)
100.00%, ll=-0.44768, gf=1.807, secs=1.0, GB=0.04, MB/s=40.86
pass=18
(1,2)
(0.019119499251246452,   3.4127e-05
  -0.00019997
  -3.1791e-05
   8.4183e-05
  -0.00017112
   0.00018043
   0.00028384
   6.3985e-05
           ..
)
100.00%, ll=-0.43758, gf=1.803, secs=1.1, GB=0.04, MB/s=40.78
pass=19
(1,2)
(0.017947904765605927,   2.8854e-05
  -0.00021644
  -5.9883e-05
   6.6206e-05
  -0.00050825
   0.00014836
  -0.00017604
   2.9038e-05
           ..
)
100.00%, ll=-0.42498, gf=1.819, secs=1.1, GB=0.05, MB/s=41.14
pass=20
(1,2)
(0.0174519382417202,   2.8137e-05
  -0.00021106
  -5.8395e-05
   6.4561e-05
  -0.00049562
   0.00014468
   0.00014068
   2.8317e-05
           ..
)
100.00%, ll=-0.41656, gf=1.839, secs=1.2, GB=0.05, MB/s=41.58
pass=21
(1,2)
(0.01706722192466259,   2.7471e-05
  -0.00020606
  -5.7012e-05
   6.3032e-05
  -0.00048388
   0.00014125
  -0.00016765
   2.7646e-05
           ..
)
100.00%, ll=-0.40958, gf=1.849, secs=1.2, GB=0.05, MB/s=41.82
pass=22
(1,2)
(0.016667943447828293,   2.6849e-05
  -0.00020140
  -5.5722e-05
   6.1606e-05
  -0.00047293
   0.00013805
   0.00013428
   2.7021e-05
           ..
)
100.00%, ll=-0.40269, gf=1.856, secs=1.3, GB=0.05, MB/s=41.97
pass=23
(1,2)
(0.016294963657855988,   2.6268e-05
  -0.00019704
  -5.4516e-05
   6.0272e-05
  -0.00046269
   0.00013506
  -0.00016035
   2.6435e-05
           ..
)
100.00%, ll=-0.39603, gf=1.872, secs=1.3, GB=0.06, MB/s=42.33
pass=24
(1,2)
(0.015976089984178543,   2.5723e-05
  -0.00019295
  -5.3384e-05
   5.9021e-05
  -0.00045309
   0.00013226
   0.00012869
   2.5887e-05
           ..
)
100.00%, ll=-0.39159, gf=1.860, secs=1.4, GB=0.06, MB/s=42.06
pass=25
(1,2)
(0.014259561896324158,   0.00010124
  -0.00017630
  -2.1711e-05
   5.9500e-05
  -0.00022346
   0.00013997
  -0.00014550
   2.5370e-05
           ..
)
100.00%, ll=-0.38919, gf=1.842, secs=1.5, GB=0.06, MB/s=41.66
pass=26
(1,2)
(0.01397932693362236,   9.9295e-05
  -0.00017292
  -2.1294e-05
   5.8358e-05
  -0.00021917
   0.00013729
   0.00013201
   2.4884e-05
           ..
)
100.00%, ll=-0.38857, gf=1.832, secs=1.5, GB=0.06, MB/s=41.44
pass=27
(1,2)
(0.013728009536862373,   9.7459e-05
  -0.00016972
  -2.0900e-05
   5.7279e-05
  -0.00021512
  -0.00013493
  -0.00014011
   2.4424e-05
           ..
)
100.00%, ll=-0.38774, gf=1.834, secs=1.6, GB=0.07, MB/s=41.47
pass=28
(1,2)
(0.013436269015073776,   9.5721e-05
  -0.00016669
  -2.0528e-05
   5.6258e-05
  -0.00021128
   0.00013238
   0.00012729
   2.3988e-05
           ..
)
100.00%, ll=-0.38698, gf=1.852, secs=1.6, GB=0.07, MB/s=41.88
pass=29
(1,2)
(0.012738805264234543,   0.00010320
  -0.00015582
  -1.5725e-05
   0.00013445
   6.1910e-05
  -8.2682e-05
  -1.7871e-05
   3.5862e-05
           ..
)
100.00%, ll=-0.38785, gf=1.872, secs=1.6, GB=0.07, MB/s=42.33
pass=30
(1,2)
(0.012483472935855389,   0.00010148
  -0.00015323
  -1.5464e-05
   0.00013221
   6.0879e-05
   0.00017477
   0.00023850
   3.5264e-05
           ..
)
100.00%, ll=-0.38855, gf=1.879, secs=1.7, GB=0.07, MB/s=42.50
pass=31
(1,2)
(0.012241481803357601,  -0.00015214
  -0.00015076
  -1.5214e-05
   0.00013008
   5.9897e-05
  -8.0026e-05
  -1.7322e-05
   3.4696e-05
           ..
)
100.00%, ll=-0.38932, gf=1.899, secs=1.7, GB=0.07, MB/s=42.94
pass=32
(1,2)
(0.012043232098221779,   9.8312e-05
  -0.00014840
  -1.4976e-05
   0.00012804
   5.8961e-05
  -7.8775e-05
  -1.7051e-05
   3.4153e-05
           ..
)
100.00%, ll=-0.38838, gf=1.918, secs=1.8, GB=0.08, MB/s=43.38
pass=33
(1,2)
(0.011883669532835484,  -0.00014752
  -0.00014615
  -1.4749e-05
   0.00012610
   5.8067e-05
   0.00016676
  -1.6793e-05
   3.3636e-05
           ..
)
100.00%, ll=-0.38761, gf=1.934, secs=1.8, GB=0.08, MB/s=43.73
pass=34
(1,2)
(0.011717581190168858,   9.5425e-05
  -0.00014400
  -1.4532e-05
   0.00012425
   5.7212e-05
  -7.6469e-05
  -1.6546e-05
   3.3141e-05
           ..
)
100.00%, ll=-0.38687, gf=1.953, secs=1.8, GB=0.08, MB/s=44.16
pass=35
(1,2)
(0.011508142575621605,   9.4061e-05
  -0.00014194
  -1.4324e-05
   0.00012247
   5.6394e-05
  -7.5375e-05
  -1.6309e-05
   3.2667e-05
           ..
)
100.00%, ll=-0.38605, gf=1.972, secs=1.9, GB=0.08, MB/s=44.60
pass=36
(1,2)
(0.011358734220266342,  -0.00014133
  -0.00013996
  -1.4125e-05
   0.00012077
   5.5610e-05
  -7.4326e-05
  -1.6082e-05
   3.2212e-05
           ..
)
100.00%, ll=-0.38530, gf=1.988, secs=1.9, GB=0.09, MB/s=44.95
pass=37
(1,2)
(0.010829311795532703,   5.7009e-05
  -0.00019805
  -1.6955e-05
   9.8849e-05
  -2.7822e-05
   0.00013943
  -5.0797e-05
   2.2973e-05
           ..
)
100.00%, ll=-0.38299, gf=2.003, secs=2.0, GB=0.09, MB/s=45.29
pass=38
(1,2)
(0.010604741051793098,  -0.00017166
  -0.00019544
  -1.6732e-05
   9.7546e-05
  -2.7455e-05
  -9.0330e-05
  -5.0127e-05
   2.2670e-05
           ..
)
100.00%, ll=-0.38093, gf=2.018, secs=2.0, GB=0.09, MB/s=45.64
pass=39
(1,2)
(0.010461576282978058,   5.5560e-05
  -0.00019293
  -1.6517e-05
   9.6292e-05
  -2.7102e-05
   0.00013585
  -4.9483e-05
   2.2379e-05
           ..
)
100.00%, ll=-0.37885, gf=2.034, secs=2.0, GB=0.09, MB/s=46.00
Time=2.0230 secs, gflops=2.03
corpus perplexity=4056.873397
Predicting
 3.00%, ll=-1.31470, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
 7.00%, ll=-0.99733, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
11.00%, ll=-1.22826, gf=0.098, secs=0.0, GB=0.00, MB/s=202.78
14.00%, ll=-1.29930, gf=0.131, secs=0.0, GB=0.00, MB/s=273.26
18.00%, ll=-0.98444, gf=0.164, secs=0.0, GB=0.00, MB/s=336.98
22.00%, ll=-1.11491, gf=0.197, secs=0.0, GB=0.00, MB/s=401.00
25.00%, ll=-1.05693, gf=0.115, secs=0.0, GB=0.00, MB/s=244.46
29.00%, ll=-1.23920, gf=0.131, secs=0.0, GB=0.00, MB/s=280.49
33.00%, ll=-1.26194, gf=0.148, secs=0.0, GB=0.00, MB/s=313.34
37.00%, ll=-1.29142, gf=0.164, secs=0.0, GB=0.00, MB/s=350.53
40.00%, ll=-1.10857, gf=0.120, secs=0.0, GB=0.00, MB/s=262.25
44.00%, ll=-1.15771, gf=0.131, secs=0.0, GB=0.00, MB/s=285.78
48.00%, ll=-1.36074, gf=0.142, secs=0.0, GB=0.00, MB/s=303.89
51.00%, ll=-1.12294, gf=0.153, secs=0.0, GB=0.00, MB/s=330.42
55.00%, ll=-1.01137, gf=0.123, secs=0.0, GB=0.00, MB/s=272.17
59.00%, ll=-1.04847, gf=0.131, secs=0.0, GB=0.00, MB/s=290.97
62.00%, ll=-1.04026, gf=0.140, secs=0.0, GB=0.00, MB/s=313.44
66.00%, ll=-1.25139, gf=0.148, secs=0.0, GB=0.00, MB/s=334.81
70.00%, ll=-1.06949, gf=0.125, secs=0.0, GB=0.00, MB/s=286.08
74.00%, ll=-1.03980, gf=0.131, secs=0.0, GB=0.00, MB/s=304.44
77.00%, ll=-1.01378, gf=0.138, secs=0.0, GB=0.00, MB/s=317.27
81.00%, ll=-1.01423, gf=0.144, secs=0.0, GB=0.00, MB/s=336.24
85.00%, ll=-1.22062, gf=0.126, secs=0.0, GB=0.00, MB/s=295.68
88.00%, ll=-1.08572, gf=0.131, secs=0.0, GB=0.00, MB/s=311.88
92.00%, ll=-1.03444, gf=0.137, secs=0.0, GB=0.00, MB/s=328.01
96.00%, ll=-1.00704, gf=0.142, secs=0.0, GB=0.00, MB/s=344.00
100.00%, ll=-1.04893, gf=0.127, secs=0.0, GB=0.00, MB/s=307.44
Time=0.0070 secs, gflops=0.13
Out[3]:
  0.94644
  0.87480

Get the word and document counts from the data. This turns out to be equivalent to a matrix multiply. For a data matrix A and category matrix C, we want all (cat, word) pairs (i,j) such that C(i,k) and A(j,k) are both 1 - this means that document k contains word j, and is also tagged with category i. Summing over all documents gives us

$${\rm wordcatCounts}(i,j) = \sum_{k=1}^N C(i,k)\,A(j,k) = (C A^T)(i,j)$$

Because we build an independent binary classifier for each class, we also need to construct the counts for words not in the class (negwcounts).

Finally, we add a smoothing count 0.5 to counts that could be zero.
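
A minimal sketch of these counts, assuming a words-by-docs FMat a and a cats-by-docs label FMat c as described above, and BIDMat's *^ (multiply-by-transpose) operator; all variable names here are illustrative:

val wordcatCounts = c *^ a + 0.5     // C * A^T plus smoothing: count of word j in category i
val negwcounts    = (1-c) *^ a + 0.5 // the same counts over the complement of each category
val catCounts     = sum(c, 2)        // number of documents tagged with each category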


In [45]:
val cx=zeros(ctest.nrows,ctest.ncols)
val (mm,mopts,nn,nopts)=GLM.learner(atrain,ctrain,atest,cx,0)
mopts.autoReset=false
mopts.useGPU=false
mopts.lrate=0.1
mopts.batchSize=2
mopts.dim=256
mopts.startBlock=0
mopts.npasses=10
mopts.updateAll=false
//mopts.what



Out[45]:
false

Now compute the probabilities; a short sketch follows the list.

  • pwordcat = probability of a word, given the cat.
  • pwordncat = probability of a word, given the complement of the cat.
  • pcat = probability that a doc is in a given cat.
  • spcat = sum of pcat probabilities (> 1 because docs can be in multiple cats)
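
A sketch of these quantities, built from the hypothetical count matrices above (these are illustrative names, not BIDMach API):

val ndocs     = c.ncols                                 // number of training documents
val pwordcat  = wordcatCounts / sum(wordcatCounts, 2)   // Pr(word | cat), normalized within each cat
val pwordncat = negwcounts / sum(negwcounts, 2)         // Pr(word | not cat)
val pcat      = catCounts / ndocs                       // Pr(cat) from label frequencies
val spcat     = sum(pcat)                               // > 1 because docs can be in multiple cats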

In [11]:
//mm.train
//nn.predict
tnum1



Out[11]:
27

Now take the logs of those probabilities, using the standard formula that matches Naive Bayes to Logistic Regression for independent data.

For each word, we compute the log of the ratio of the complementary word probability over the in-class word probability.

For each category, we compute the log of the ratio of the complementary category probability over the current category probability.

lpwordcat(j,i) represents $\log\left(\frac{{\rm Pr}(X_i|\neg c_j)}{{\rm Pr}(X_i|c_j)}\right)$

while lpcat(j) represents $\log\left(\frac{{\rm Pr}(\neg c_j)}{{\rm Pr}(c_j)}\right)$
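
Continuing the sketch with the probabilities above:

val lpwordcat = ln(pwordncat / pwordcat)   // log( Pr(X_i | not c_j) / Pr(X_i | c_j) ), one entry per (cat, word)
val lpcat     = ln((1-pcat) / pcat)        // log( Pr(not c) / Pr(c) ), one entry per cat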


In [22]:
val cx1=cx
min(cx1, 1, cx1)                     // clamp predictions to [0,1]: first argument is input, last is output
max(cx1, 0, cx1)
val p = ctest *@ cx1 + (1-ctest) *@ (1-cx1)
mean(p,2)
//saveFMat(rpath+index+".txt",cx)
cx



java.lang.RuntimeException: dims incompatible
    BIDMat.DenseMat$mcF$sp.ggMatOpStrictv$mcF$sp(DenseMat.scala:963)
    BIDMat.DenseMat$mcF$sp.ggMatOpv$mcF$sp(DenseMat.scala:951)
    BIDMat.FMat.ffMatOpv(FMat.scala:206)
    BIDMat.FMat.$times$at(FMat.scala:799)

Here's where we apply Naive Bayes. The formula we're using is:

$${\rm Pr}(c|X_1,\ldots,X_k) = \frac{1}{1 + \frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\prod_{i=1}^k\frac{{\rm Pr}(X_i|\neg c)}{{\rm Pr}(X_i|c)}}$$

and we can rewrite

$$\frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\prod_{i=1}^k\frac{{\rm Pr}(X_i|\neg c)}{{\rm Pr}(X_i|c)}$$

as

$$\exp\left(\log\left(\frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\right) + \sum_{i=1}^k\log\left(\frac{{\rm Pr}(X_i|\neg c)}{{\rm Pr}(X_i|c)}\right)\right) = \exp({\rm lpcat(j)} + {\rm lpwordcat(j,?)} * X)$$

for class number j and an input column $X$. This follows because an input column $X$ is a sparse vector with ones in the positions of the input features. The product ${\rm lpwordcat(j,?)} * X$ picks out the features occurring in the input document and adds the corresponding logs from lpwordcat.

Finally, we take the exponential above and fold it into the formula $P(c_j|X_1,\ldots,X_k) = 1/(1+\exp(\cdots))$. This gives us a matrix of predictions. preds(i,j) = prediction of membership in category i for test document j.
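
Assembled into predictions (a sketch; atest here is a words-by-docs test matrix, and the scalar-matrix arithmetic assumes BIDMat's elementwise operators):

val logodds = lpcat + lpwordcat * atest   // lpcat(j) + lpwordcat(j,?) * X for every test column at once
val preds   = 1 / (1 + exp(logodds))      // Pr(c_j | X_1,...,X_k), one row per category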


In [6]:
val lacc = (cx1 ∙→ ctest + (1-cx1) ∙→ (1-ctest))/cx1.ncols
lacc.t
mean(lacc)



Out[6]:
0.76865

To measure the accuracy of the predictions above, we can compute the probability that the classifier outputs the right label. We used this formula in class for the expected accuracy for logistic regression. The "dot arrow" operator takes the dot product along rows:
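
For a prediction matrix preds and label matrix ctest, a sketch of that expected accuracy, one value per category:

val acc = (preds ∙→ ctest + (1-preds) ∙→ (1-ctest)) / ctest.ncols   // row-wise dot products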


In [7]:
val model = mm.model
val (nn1, nopts1) = GLM.LBFGSpredictor(model, atest, cx)



Out[7]:
BIDMach.models.GLM$LearnLBFGSOptions@8eb62c4

Raw accuracy is not a good measure in most cases. When positives are rare (few instances in the class relative to its complement), optimizing accuracy drives down the false-positive rate at the expense of the false-negative rate. In the worst case, the learner may always predict "no" and still achieve high accuracy.

ROC curves and ROC Area Under the Curve (AUC) are much better. Here we compute the ROC curves from the predictions above. We need:

  • scores - the predicted scores from the formula above.
  • good - 1 for positive instances, 0 for negative instances.
  • bad - complement of good.
  • npoints (100) - specifies the number of X-axis points for the ROC plot.

itest specifies which of the categories to plot for. We chose itest=6 because that category has one of the highest positive rates, and gives the most stable accuracy plots.
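
A sketch of the ROC computation, assuming the roc(scores, good, bad, npoints) helper used in the standard BIDMach tutorials and the hypothetical preds from above:

val itest  = 6
val scores = preds(itest, ?)               // predicted scores for category itest
val good   = ctest(itest, ?)               // 1 for positive instances
val bad    = 1 - good                      // complement of good
val rr     = roc(scores, good, bad, 100)   // ROC y-values at 100 evenly-spaced x-values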


In [8]:
nn1.predict


corpus perplexity=1.970211
Predicting
 4.00%, ll=-0.88782, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
 8.00%, ll=-0.87371, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.19
12.00%, ll=-0.94405, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.14
16.00%, ll=-0.97745, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.19
20.00%, ll=-0.91692, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.24
24.00%, ll=-0.95906, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.19
28.00%, ll=-0.91178, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.22
32.00%, ll=-0.80047, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.26
36.00%, ll=-0.92667, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.22
40.00%, ll=-0.91742, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.24
44.00%, ll=-0.92421, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.26
48.00%, ll=-0.94550, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.23
51.00%, ll=-0.81677, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.25
56.00%, ll=-0.91331, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.27
60.00%, ll=-0.93091, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.24
64.00%, ll=-0.85525, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.26
68.00%, ll=-0.87134, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.27
72.00%, ll=-0.94563, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.25
76.00%, ll=-0.78031, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.26
80.00%, ll=-0.94823, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.27
83.00%, ll=-0.95096, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.29
88.00%, ll=-0.86657, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.26
92.00%, ll=-0.97759, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.28
96.00%, ll=-0.75901, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.29
100.00%, ll=-0.93326, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.27
Time=0.0090 secs, gflops=0.00

TODO 1: In the cell below, write an expression to derive the ROC Area under the curve (AUC) given the curve rr. rr gives the ROC curve y-coordinates at 100 evenly-spaced X-values from 0 to 1.0.
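
Since rr samples the curve at evenly-spaced x-values, one sketch of the area is a Riemann sum:

val auc = mean(rr)   // average of the ROC y-values approximates the area under the curve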


In [9]:
val cx1=cx*10
min(cx1, 1, cx1)                     // clamp predictions to [0,1]: first argument is input, last is output
max(cx1, 0, cx1)
val p = ctest *@ cx1 + (1-ctest) *@ (1-cx1)
mean(p,2)
val lacc = (cx1 ∙→ ctest + (1-cx1) ∙→ (1-ctest))/cx1.ncols
lacc.t
mean(lacc)



Out[9]:
0.81119

TODO 2: In the cell below, write the value of AUC returned by the expression above.


In [10]:
cx1



Out[10]:
        1  0.97167  0.32712        1        1        0  0.19741        1...
        0        0        0        0        0        1        0        0...

In [11]:
saveFMat(dict+"moonresult.fmat.txt",cx)



Logistic Regression

Now let's train a logistic classifier on the same data. BIDMach has an umbrella classifier called GLM, for Generalized Linear Model. GLM includes linear regression, logistic regression (with log-accuracy or direct accuracy optimization), and SVM.

The learner function accepts these arguments:

  • traindata: the training data in the same format as for Naive Bayes
  • traincats: the training category labels
  • testdata: the test input data
  • predcats: a container for the predictions generated by the model
  • modeltype (GLM.logistic here): an integer that specifies the type of model (0=linear, 1=logistic log accuracy, 2=logistic accuracy, 3=SVM).

We'll construct the learner and then look at its options:


In [2]:
val dict = "/Users/Anna/workspace/BIDMach/data/wildlife/"
val aa = loadFMat(dict+"1.txt")
val c = loadFMat(dict+"c1.txt")
val b = loadFMat("/Users/Anna/workspace/BIDMach/data/wltestall.txt")
val d = loadFMat("/Users/Anna/workspace/BIDMach/data/wltestcatall.txt")
//val dict = "/Users/Anna/workspace/BIDMach_1.0.0-full-linux-x86_64/data/uci/"
//val aa = loadFMat(dict+"arabic.fmat.lz4")
//val c = loadFMat(dict+"arabic_cats.fmat.lz4")
val a = aa *10  
val atrain = a //a(?,(100->a.ncols))
val atest =  a //a(?,(0->100))
val ctrain = c //c(?,(100->a.ncols))
val ctest = c //c(?,(0->100))
//max(atrain, 0.001, atrain)         // clamp: first argument is input, last is output
//max(atest, 0.001, atest)
atrain



Out[2]:
    -0.33386    -0.33914    -0.29299    -0.40609    -0.43935    -0.55155...
     0.28568     0.18359  -0.0037000    -0.17297   -0.095350   -0.013070...
   -0.098630    -0.13269     0.41184     0.40252     0.24642     0.26601...
   -0.031890   -0.038550     0.38608     0.35432     0.21913     0.31026...
     0.17352     0.25341     0.37047     0.53088     0.40656     0.43656...
     0.45181     0.47072     0.50574     0.36032     0.33288     0.26337...
     0.24997    -0.11709     0.54020     0.33701     0.30424     0.15961...
     0.10230    0.082690    0.017330     0.46488     0.24725   -0.025730...
          ..          ..          ..          ..          ..          ..

The most important options are:

  • lrate: the learning rate
  • batchSize: the minibatch size
  • npasses: the number of passes over the dataset

We'll use the following parameters for this training run.


In [7]:
val cx=zeros(2,ctest.ncols)
val (mm,mopts,nn,nopts)=GLM.SVMlearner(atrain,ctrain,atest,cx)
mopts.autoReset=false
mopts.useGPU=false
mopts.lrate=1
mopts.batchSize=2
mopts.dim=256
mopts.startBlock=0
mopts.npasses=10
mopts.updateAll=false
mopts.what


Option Name       Type          Value
===========       ====          =====
addConstFeat      boolean       false
autoReset         boolean       false
batchSize         int           2
dim               int           256
doubleScore       boolean       false
epsilon           float         1.0E-5
evalStep          int           11
featThreshold     Mat           null
featType          int           1
hashFeatures      boolean       false
initsumsq         float         1.0E-5
iweight           FMat          null
lim               float         0.0
links             IMat             3
   3

lrate             FMat          1
mask              FMat          null
npasses           int           10
nzPerColumn       int           0
pstep             float         0.01
putBack           int           -1
r1nmats           int           1
r2nmats           int           1
reg1weight        FMat          1.0000e-07
reg2weight        FMat          1
resFile           String        null
rmask             FMat          null
sample            float         1.0
sizeMargin        float         3.0
startBlock        int           0
targets           FMat          null
targmap           FMat          null
texp              FMat          0.50000
updateAll         boolean       false
useCache          boolean       true
useDouble         boolean       false
useGPU            boolean       false
vexp              FMat          0.50000
waitsteps         int           2

In [8]:
mm.train
nn.predict
val cx1=cx
//min(cx1, 1, cx1)                   // clamp: first argument is input, last is output
//max(cx1, 0, cx1) 
//val p=ctest *@cx1 +(1-ctest) *@(1-cx1)
//mean(p,2)
cx


corpus perplexity=NaN
pass= 0
 6.00%, ll=-1.00000, gf=0.011, secs=0.0, GB=0.00, MB/s=12.00
41.00%, ll=0.00000, gf=0.010, secs=0.1, GB=0.00, MB/s= 5.11
77.00%, ll=0.00000, gf=0.018, secs=0.1, GB=0.00, MB/s= 8.73
100.00%, ll=0.00000, gf=0.023, secs=0.1, GB=0.00, MB/s=10.78
pass= 1
 6.00%, ll=0.00000, gf=0.023, secs=0.1, GB=0.00, MB/s=11.31
41.00%, ll=0.00000, gf=0.026, secs=0.1, GB=0.00, MB/s=12.14
77.00%, ll=0.00000, gf=0.031, secs=0.1, GB=0.00, MB/s=14.35
100.00%, ll=0.00000, gf=0.026, secs=0.1, GB=0.00, MB/s=12.30
pass= 2
 6.00%, ll=0.00000, gf=0.026, secs=0.1, GB=0.00, MB/s=12.49
41.00%, ll=0.00000, gf=0.030, secs=0.1, GB=0.00, MB/s=13.95
77.00%, ll=0.00000, gf=0.029, secs=0.2, GB=0.00, MB/s=13.76
100.00%, ll=0.00000, gf=0.031, secs=0.2, GB=0.00, MB/s=14.59
pass= 3
 6.00%, ll=0.00000, gf=0.031, secs=0.2, GB=0.00, MB/s=14.80
41.00%, ll=0.00000, gf=0.034, secs=0.2, GB=0.00, MB/s=16.00
77.00%, ll=0.00000, gf=0.034, secs=0.2, GB=0.00, MB/s=15.95
100.00%, ll=0.00000, gf=0.035, secs=0.2, GB=0.00, MB/s=16.53
pass= 4
 6.00%, ll=0.00000, gf=0.036, secs=0.2, GB=0.00, MB/s=16.80
41.00%, ll=0.00000, gf=0.038, secs=0.2, GB=0.00, MB/s=17.77
77.00%, ll=-0.21902, gf=0.038, secs=0.2, GB=0.00, MB/s=17.67
100.00%, ll=0.00000, gf=0.039, secs=0.2, GB=0.00, MB/s=18.15
pass= 5
 6.00%, ll=0.00000, gf=0.039, secs=0.2, GB=0.00, MB/s=18.38
41.00%, ll=0.00000, gf=0.041, secs=0.2, GB=0.00, MB/s=19.11
77.00%, ll=-0.14903, gf=0.043, secs=0.2, GB=0.00, MB/s=19.89
100.00%, ll=0.00000, gf=0.040, secs=0.2, GB=0.00, MB/s=18.45
pass= 6
 6.00%, ll=0.00000, gf=0.040, secs=0.2, GB=0.00, MB/s=18.57
41.00%, ll=0.00000, gf=0.041, secs=0.3, GB=0.00, MB/s=19.10
77.00%, ll=-0.10156, gf=0.042, secs=0.3, GB=0.01, MB/s=19.76
100.00%, ll=0.00000, gf=0.043, secs=0.3, GB=0.01, MB/s=20.19
pass= 7
 6.00%, ll=0.00000, gf=0.043, secs=0.3, GB=0.01, MB/s=20.29
41.00%, ll=0.00000, gf=0.042, secs=0.3, GB=0.01, MB/s=19.78
77.00%, ll=0.00000, gf=0.043, secs=0.3, GB=0.01, MB/s=20.22
100.00%, ll=0.00000, gf=0.044, secs=0.3, GB=0.01, MB/s=20.52
pass= 8
 6.00%, ll=0.00000, gf=0.044, secs=0.3, GB=0.01, MB/s=20.62
41.00%, ll=0.00000, gf=0.045, secs=0.3, GB=0.01, MB/s=21.16
77.00%, ll=0.00000, gf=0.045, secs=0.3, GB=0.01, MB/s=20.86
100.00%, ll=0.00000, gf=0.045, secs=0.3, GB=0.01, MB/s=21.19
pass= 9
 6.00%, ll=0.00000, gf=0.046, secs=0.3, GB=0.01, MB/s=21.27
41.00%, ll=0.00000, gf=0.047, secs=0.3, GB=0.01, MB/s=21.76
77.00%, ll=0.00000, gf=0.048, secs=0.3, GB=0.01, MB/s=22.17
100.00%, ll=0.00000, gf=0.048, secs=0.3, GB=0.01, MB/s=22.48
Time=0.3310 secs, gflops=0.05
corpus perplexity=NaN
Predicting
 4.00%, ll=-5.35505, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
 9.00%, ll=-5.56336, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
14.00%, ll=-5.39651, gf=0.036, secs=0.0, GB=0.00, MB/s=108.00
19.00%, ll=-3.47791, gf=0.048, secs=0.0, GB=0.00, MB/s=144.00
24.00%, ll=-4.42609, gf=0.060, secs=0.0, GB=0.00, MB/s=180.00
29.00%, ll=-5.00045, gf=0.036, secs=0.0, GB=0.00, MB/s=108.00
33.00%, ll=-4.28123, gf=0.042, secs=0.0, GB=0.00, MB/s=126.00
38.00%, ll=-4.85835, gf=0.048, secs=0.0, GB=0.00, MB/s=144.00
43.00%, ll=-4.95342, gf=0.036, secs=0.0, GB=0.00, MB/s=108.00
48.00%, ll=-5.43300, gf=0.040, secs=0.0, GB=0.00, MB/s=120.00
53.00%, ll=-4.59199, gf=0.044, secs=0.0, GB=0.00, MB/s=132.00
58.00%, ll=-3.04440, gf=0.036, secs=0.0, GB=0.00, MB/s=108.00
62.00%, ll=-4.28866, gf=0.039, secs=0.0, GB=0.00, MB/s=117.00
67.00%, ll=-4.49006, gf=0.042, secs=0.0, GB=0.00, MB/s=126.00
72.00%, ll=-6.24142, gf=0.036, secs=0.0, GB=0.00, MB/s=108.00
77.00%, ll=-5.79272, gf=0.039, secs=0.0, GB=0.00, MB/s=115.20
82.00%, ll=-3.15073, gf=0.041, secs=0.0, GB=0.00, MB/s=122.40
87.00%, ll=-4.36058, gf=0.044, secs=0.0, GB=0.00, MB/s=129.60
91.00%, ll=-7.38732, gf=0.038, secs=0.0, GB=0.00, MB/s=114.00
96.00%, ll=-10.42975, gf=0.040, secs=0.0, GB=0.00, MB/s=120.00
100.00%, ll=-6.19474, gf=0.042, secs=0.0, GB=0.00, MB/s=124.00
Time=0.0070 secs, gflops=0.04
Out[8]:
  -7.7981  -10.713  -10.620  -11.079  -10.499  -8.8026  -12.241  -12.910...
   7.7981   10.713   10.620   11.079   10.499   8.8026   12.241   12.910...

Since we have the accuracy scores for both Naive Bayes and Logistic regression, we can plot both of them on the same axes. Naive Bayes is red, Logistic regression is blue. The x-axis is the category number from 0 to 102. The y-axis is the absolute accuracy of the predictor for that category.
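
A sketch of such a plot with BIDMat's plot, where nbacc and lracc are hypothetical 103-element accuracy vectors for the two models:

plot(nbacc, lracc)   // one curve per model, category number on the x-axis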


In [7]:
val lacc = (cx1 ∙→ ctest + (1-cx1) ∙→ (1-ctest))/cx1.ncols
lacc.t
mean(lacc)



java.lang.NoClassDefFoundError: Could not initialize class 

TODO 3: With the full training set (700k training documents), Logistic Regression is noticeably more accurate than Naive Bayes in every category. What do you observe in the plot above? Why do you think this is?

Next we'll compute the ROC plot and ROC area (AUC) for Logistic regression for category itest.


In [40]:
saveFMat(dict+"wildliferesults.fmat.txt",cx)



We computed the ROC curve for Naive Bayes earlier, so now we can plot them on the same axes. Naive Bayes is once again in red, Logistic regression in blue.


In [51]:
cx1



Out[51]:
  1.1398e-13  1.0409e-13  3.1763e-16  6.3486e-17  3.7628e-18  6.3368e-16...
  1.0488e-14  3.4730e-14  1.2205e-16  1.6546e-17  7.8444e-19  1.5367e-16...
  4.9068e-14  1.3641e-12  5.0059e-15  1.0316e-15  4.3107e-18  2.8967e-15...
  3.2656e-14  5.3654e-14  1.1086e-16  1.9517e-17  8.0842e-19  3.9087e-17...
  8.4080e-11  2.6030e-12  1.4020e-14  1.3571e-15  2.5275e-15  8.0566e-13...
  1.8410e-13  2.3721e-13  2.6570e-16  6.7162e-17  5.4204e-18  8.2176e-16...
  6.5161e-11  1.3945e-10  2.1820e-12  7.0878e-14  1.5250e-15  2.8615e-12...
  1.6085e-13  1.9523e-13  3.2423e-16  5.6627e-17  7.7785e-18  1.7874e-15...
          ..          ..          ..          ..          ..          ..

TODO 4: In the cell below, compute and plot lift curves from the ROC curves for Naive Bayes and Logistic regression. The lift curves should show the ratio of ROC y-values over a unit slope diagonal line (Y=X). The X-values should be the same as for the ROC plots, except that X=0 will be omitted since the lift will be undefined.
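
A sketch of the lift computation, assuming a hypothetical rr_lr holding 101 ROC y-values for X = 0.00, 0.01, ..., 1.00:

val xvals = row(1 to 100) / 100f      // the nonzero X-values 0.01 .. 1.00
val lift  = rr_lr(1 -> 101).t / xvals // ROC y-values divided by the diagonal Y=X
plot(lift)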


In [2]:
val dict = "/Users/Anna/workspace/BIDMach_1.0.0-full-linux-x86_64/data/"
//val a = loadFMat(dict+"arabic.fmat.lz4")
//val c = loadFMat(dict+"arabic_cats.fmat.lz4")
val aa = loadFMat(dict+"a.txt")
val c = loadFMat(dict+"alabel.txt")
val a = aa + 0.5 
val atrain =a(?,(100->a.ncols))
val atest =a(?,(0->100))
val ctrain =c(?,(100->a.ncols))
val ctest =c(?,(0->100))
max(atrain, 0.001, atrain)           // clamp: first argument is input, last is output
max(atest, 0.001, atest)
atest



Out[2]:
   0.075172    0.42627    0.64251    0.36696    0.19400    0.58963...
    0.64074    0.42242    0.41518    0.87276    0.77227  0.0010000...

TODO 5: Experiment with different values for learning rate and batchSize to get the best performance for absolute accuracy and ROC area on category 6. Write your optimal values below:


In [41]:
val cx=zeros(ctest.nrows,ctest.ncols)
val (mm,mopts,nn,nopts)=GLM.SVMlearner(atrain,ctrain,atest,cx)
mopts.autoReset=false
mopts.useGPU=false
mopts.lrate=0.1
mopts.batchSize=2
mopts.dim=256
mopts.startBlock=0
mopts.npasses=10
mopts.updateAll=false
mopts.what


Option Name       Type          Value
===========       ====          =====
addConstFeat      boolean       false
autoReset         boolean       false
batchSize         int           2
dim               int           256
doubleScore       boolean       false
epsilon           float         1.0E-5
evalStep          int           11
featThreshold     Mat           null
featType          int           1
hashFeatures      boolean       false
initsumsq         float         1.0E-5
iweight           FMat          null
lim               float         0.0
links             IMat             3
   3

lrate             FMat          0.10000
mask              FMat          null
npasses           int           10
nzPerColumn       int           0
pstep             float         0.01
putBack           int           -1
r1nmats           int           1
r2nmats           int           1
reg1weight        FMat          1.0000e-07
reg2weight        FMat          1
resFile           String        null
rmask             FMat          null
sample            float         1.0
sizeMargin        float         3.0
startBlock        int           0
targets           FMat          null
targmap           FMat          null
texp              FMat          0.50000
updateAll         boolean       false
useCache          boolean       true
useDouble         boolean       false
useGPU            boolean       false
vexp              FMat          0.50000
waitsteps         int           2

In [42]:
mm.train
nn.predict


corpus perplexity=992.808970
pass= 0
 6.00%, ll=-1.00000, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
41.00%, ll=-0.28270, gf=0.079, secs=0.0, GB=0.00, MB/s=39.00
77.00%, ll=-0.03527, gf=0.081, secs=0.0, GB=0.00, MB/s=38.40
100.00%, ll=-0.20491, gf=0.083, secs=0.0, GB=0.00, MB/s=39.16
pass= 1
 6.00%, ll=-1.20125, gf=0.078, secs=0.0, GB=0.00, MB/s=37.71
41.00%, ll=-0.30951, gf=0.083, secs=0.0, GB=0.00, MB/s=39.11
77.00%, ll=-0.24529, gf=0.085, secs=0.0, GB=0.00, MB/s=40.00
100.00%, ll=-0.13970, gf=0.086, secs=0.0, GB=0.00, MB/s=40.22
pass= 2
 6.00%, ll=-1.47287, gf=0.085, secs=0.0, GB=0.00, MB/s=40.42
41.00%, ll=-0.22679, gf=0.087, secs=0.0, GB=0.00, MB/s=40.91
77.00%, ll=-0.39799, gf=0.090, secs=0.0, GB=0.00, MB/s=42.12
100.00%, ll=-0.17308, gf=0.090, secs=0.1, GB=0.00, MB/s=42.11
pass= 3
 6.00%, ll=-1.47907, gf=0.090, secs=0.1, GB=0.00, MB/s=42.22
41.00%, ll=-0.30906, gf=0.092, secs=0.1, GB=0.00, MB/s=43.12
77.00%, ll=-0.49180, gf=0.093, secs=0.1, GB=0.00, MB/s=43.20
100.00%, ll=-0.23331, gf=0.094, secs=0.1, GB=0.00, MB/s=43.76
pass= 4
 6.00%, ll=-1.46579, gf=0.093, secs=0.1, GB=0.00, MB/s=43.83
41.00%, ll=-0.37264, gf=0.094, secs=0.1, GB=0.00, MB/s=43.84
77.00%, ll=-0.55793, gf=0.094, secs=0.1, GB=0.00, MB/s=43.85
100.00%, ll=-0.28842, gf=0.095, secs=0.1, GB=0.00, MB/s=44.29
pass= 5
 6.00%, ll=-1.44678, gf=0.095, secs=0.1, GB=0.00, MB/s=44.33
41.00%, ll=-0.42261, gf=0.095, secs=0.1, GB=0.00, MB/s=44.31
77.00%, ll=-0.60707, gf=0.095, secs=0.1, GB=0.00, MB/s=44.29
100.00%, ll=-0.33395, gf=0.096, secs=0.1, GB=0.00, MB/s=44.64
pass= 6
 6.00%, ll=-1.42705, gf=0.095, secs=0.1, GB=0.00, MB/s=44.67
41.00%, ll=-0.46288, gf=0.097, secs=0.1, GB=0.00, MB/s=45.06
77.00%, ll=-0.64504, gf=0.097, secs=0.1, GB=0.01, MB/s=45.40
100.00%, ll=-0.37222, gf=0.098, secs=0.1, GB=0.01, MB/s=45.68
pass= 7
 6.00%, ll=-1.40822, gf=0.098, secs=0.1, GB=0.01, MB/s=45.70
41.00%, ll=-0.49608, gf=0.099, secs=0.1, GB=0.01, MB/s=46.39
77.00%, ll=-0.67528, gf=0.099, secs=0.1, GB=0.01, MB/s=46.27
100.00%, ll=-0.40490, gf=0.100, secs=0.1, GB=0.01, MB/s=46.50
pass= 8
 6.00%, ll=-1.39079, gf=0.100, secs=0.1, GB=0.01, MB/s=46.51
41.00%, ll=-0.52395, gf=0.100, secs=0.1, GB=0.01, MB/s=46.75
77.00%, ll=-0.69995, gf=0.101, secs=0.1, GB=0.01, MB/s=46.96
100.00%, ll=-0.43319, gf=0.101, secs=0.1, GB=0.01, MB/s=47.15
pass= 9
 6.00%, ll=-1.37481, gf=0.101, secs=0.1, GB=0.01, MB/s=47.16
41.00%, ll=-0.54772, gf=0.102, secs=0.1, GB=0.01, MB/s=47.67
77.00%, ll=-0.72047, gf=0.103, secs=0.2, GB=0.01, MB/s=47.84
100.00%, ll=-0.45795, gf=0.103, secs=0.2, GB=0.01, MB/s=48.00
Time=0.1550 secs, gflops=0.10
corpus perplexity=985.317557
Predicting
 4.00%, ll=-1.00000, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
 9.00%, ll=-1.00000, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
14.00%, ll=-1.00000, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
19.00%, ll=-1.00000, gf=Infinity, secs=0.0, GB=0.00, MB/s=Infinity
24.00%, ll=-1.00000, gf=0.060, secs=0.0, GB=0.00, MB/s=180.00
29.00%, ll=-1.00000, gf=0.073, secs=0.0, GB=0.00, MB/s=216.00
33.00%, ll=-1.00000, gf=0.085, secs=0.0, GB=0.00, MB/s=252.00
38.00%, ll=-1.00000, gf=0.097, secs=0.0, GB=0.00, MB/s=288.00
43.00%, ll=-1.00000, gf=0.109, secs=0.0, GB=0.00, MB/s=324.00
48.00%, ll=-1.00000, gf=0.121, secs=0.0, GB=0.00, MB/s=360.00
53.00%, ll=-1.00000, gf=0.066, secs=0.0, GB=0.00, MB/s=198.00
58.00%, ll=-1.00000, gf=0.073, secs=0.0, GB=0.00, MB/s=216.00
62.00%, ll=-1.00000, gf=0.079, secs=0.0, GB=0.00, MB/s=234.00
67.00%, ll=-1.00000, gf=0.085, secs=0.0, GB=0.00, MB/s=251.99
72.00%, ll=-1.00000, gf=0.091, secs=0.0, GB=0.00, MB/s=269.99
77.00%, ll=-1.00000, gf=0.097, secs=0.0, GB=0.00, MB/s=287.99
82.00%, ll=-1.00000, gf=0.069, secs=0.0, GB=0.00, MB/s=204.00
87.00%, ll=-1.00000, gf=0.073, secs=0.0, GB=0.00, MB/s=216.00
91.00%, ll=-1.00000, gf=0.077, secs=0.0, GB=0.00, MB/s=228.00
96.00%, ll=-1.00000, gf=0.081, secs=0.0, GB=0.00, MB/s=240.00
100.00%, ll=-1.00000, gf=0.083, secs=0.0, GB=0.00, MB/s=248.00
Time=0.0030 secs, gflops=0.08

In [ ]:
val cx1=cx*5
min(cx1, 1, cx1)                     // clamp predictions to [0,1]: first argument is input, last is output
max(cx1, 0, cx1) 
val p=ctest *@cx1 +(1-ctest) *@(1-cx1)
mean(p,2)

In [6]:
cx1



Out[6]:
         1   0.48584   0.16356         1         1         0  0.098707...
         0         0         0         0         0   0.82548         0...

In [7]:
val lacc = (cx1 ∙→ ctest + (1-cx1) ∙→ (1-ctest))/cx1.ncols
lacc.t
mean(lacc)



Out[7]:
0.76865

In [8]:
val model = mm.model
val (nn1, nopts1) = GLM.LBFGSpredictor(model, atest, cx)



Out[8]:
BIDMach.models.GLM$LearnLBFGSOptions@3fde0ffe

In [9]:
nn1.predict


corpus perplexity=1.970211
Predicting
 4.00%, ll=-0.88782, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.10
 8.00%, ll=-0.87371, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.19
12.00%, ll=-0.94405, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.29
16.00%, ll=-0.97745, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.38
20.00%, ll=-0.91692, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.24
24.00%, ll=-0.95906, gf=0.000, secs=0.0, GB=0.00, MB/s= 0.29
28.00%, ll=-0.91178, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.34
32.00%, ll=-0.80047, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.38
36.00%, ll=-0.92667, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.43
40.00%, ll=-0.91742, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.48
44.00%, ll=-0.92421, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.53
48.00%, ll=-0.94550, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.38
51.00%, ll=-0.81677, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.42
56.00%, ll=-0.91331, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.45
60.00%, ll=-0.93091, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.48
64.00%, ll=-0.85525, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.51
68.00%, ll=-0.87134, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.54
72.00%, ll=-0.94563, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.58
76.00%, ll=-0.78031, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.46
80.00%, ll=-0.94823, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.48
83.00%, ll=-0.95096, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.50
88.00%, ll=-0.86657, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.53
92.00%, ll=-0.97759, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.55
96.00%, ll=-0.75901, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.58
100.00%, ll=-0.93326, gf=0.001, secs=0.0, GB=0.00, MB/s= 0.60
Time=0.0050 secs, gflops=0.00

In [10]:
val cx1=cx*10
min(cx1, 1, cx1)                     // clamp predictions to [0,1]: first argument is input, last is output
max(cx1, 0, cx1) 
val p=ctest *@cx1 +(1-ctest) *@(1-cx1)
mean(p,2)
val lacc = (cx1 ∙→ ctest + (1-cx1) ∙→ (1-ctest))/cx1.ncols
lacc.t
mean(lacc)



Out[10]:
0.81119

In [11]:
cx1



Out[11]:
        1  0.97167  0.32712        1        1        0  0.19741        1...
        0        0        0        0        0        1        0        0...

In [23]:
saveFMat(dict+"moonresult.fmat.txt",cx)



