Machine Learning at Scale, Part II

For this tutorial, we'll dig deeper into BIDMach's learning architecture. The examples so far have used convenience functions that assemble a Data Source, Learner, Model, Updater and Mixin classes into a trainable model. This time we'll separate out those components and see how they can be customized.

The dataset is from UCI and comprises PubMed abstracts. It is about 7.3 GB in text form. We'll compute an LDA topic model for this dataset.

First, let's initialize BIDMach again.


In [1]:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,Image,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,
               HMat,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.JPlotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA,SVD}
import BIDMach.datasources.{DataSource,MatSource,FileSource,SFileSource}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}

Mat.checkMKL
Mat.checkCUDA
Mat.setInline
if (Mat.hasCUDA > 0) GPUmem


1 CUDA device found, CUDA version 7.0
Out[1]:
(0.99132067,11974557696,12079398912)

Check the GPU memory again, and make sure you don't have any dangling processes.

Large-scale Topic Models

A topic model is a representation of a bag-of-words corpus as several factors or topics. Each topic should represent a theme that recurs in the corpus. Concretely, the output of the topic model will be an (ntopics x nfeatures) matrix we will call tmodel. Each row of that matrix represents a topic, and the elements of that row are word probabilities for the topic (i.e. the rows sum to 1). There is more about topic models on Wikipedia.

The element tmodel(i,j) holds the probability that word j belongs to topic i. Later we will examine the topics directly and try to make sense of them.

Let's construct a learner with a files data source. Most model classes will accept a String argument and assume it is a pattern for accessing a collection of files. To create the learner, we pass this pattern, which is instantiated with format(i) to enumerate the i-th filename.


In [2]:
val mdir = "../data/uci/pubmed_parts/";
val (nn, opts) = LDA.learner(mdir+"part%02d.smat.lz4")



Out[2]:
BIDMach.models.LDA$FileOpts@4f6fbde5
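For example, the pattern expands to individual file names when formatted with an index (a quick sketch you could try in the REPL):

(mdir + "part%02d.smat.lz4") format 0    // "../data/uci/pubmed_parts/part00.smat.lz4"
(mdir + "part%02d.smat.lz4") format 9    // "../data/uci/pubmed_parts/part09.smat.lz4"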

Note that this dataset is quite large, and isn't one of the ones loaded by getdata.sh in the scripts directory. You need to run the script getpubmed.sh separately (and plan a long walk or bike ride while you wait...).

This datasource uses just this sequence of files, and each matrix has 141043 rows. A number of options that control the files datasource are listed below. Most of these don't need to be set (you'll notice they're just set to their default values), but it's useful to know about them when customizing data sources.


In [3]:
opts.nstart = 0;                 // Starting file number
opts.nend = 10;                  // Ending file number
opts.order = 0;                  // (0) sample order, 0=linear, 1=random
opts.lookahead = 2;              // (2) number of prefetch threads
opts.featType = 1;               // (1) feature type, 0=binary, 1=linear
// These are specific to SFileSource:
opts.fcounts = icol(141043);     // how many rows to pull from each input matrix 
opts.eltsPerSample = 300         // how many non-zero elements to allocate per sample



Out[3]:
300
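As an illustration only (hypothetical settings, not needed for this tutorial), you could make the datasource visit the files in random order and prefetch more aggressively:

opts.order = 1          // visit files in random order instead of linearly
opts.lookahead = 4      // use four prefetch threads instead of two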

We're ready to go. LDA is a popular topic model, described on Wikipedia.

We use a fast version of LDA which uses an incremental multiplicative update described by Hoffman, Blei and Bach in their online LDA paper.
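A simplified sketch of that online scheme (not necessarily the exact form of BIDMach's updater): each minibatch yields an estimate $\tilde\lambda$ of the topic-word parameters, which is blended into the running model with a decaying weight $\rho_t$:

$\lambda \leftarrow (1 - \rho_t)\,\lambda + \rho_t\,\tilde\lambda, \qquad \rho_t = (\tau_0 + t)^{-\kappa}$

where $t$ is the minibatch index and $\tau_0$, $\kappa$ control how quickly older minibatches are down-weighted.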

Tuning Options

Add tuning options for minibatch size (say 100k), number of passes (4) and dimension (dim = 256).


In [4]:
opts.batchSize=50000
opts.npasses=2
opts.dim=256



Out[4]:
256

You invoke the learner the same way as before. You can change the options above after each run to optimize performance.


In [5]:
nn.train


corpus perplexity=9554.363735
pass= 0
 1.00%, ll=-8.62910, gf=19.682, secs=1.6, GB=0.04, MB/s=26.00, GPUmem=0.915805
 7.00%, ll=-7.21284, gf=30.997, secs=6.8, GB=0.27, MB/s=39.59, GPUmem=0.915805
14.00%, ll=-7.07065, gf=31.161, secs=12.5, GB=0.50, MB/s=39.69, GPUmem=0.915805
21.00%, ll=-6.99135, gf=31.853, secs=17.9, GB=0.72, MB/s=40.53, GPUmem=0.915805
28.00%, ll=-6.99923, gf=32.446, secs=23.1, GB=0.95, MB/s=41.27, GPUmem=0.915805
34.00%, ll=-6.97575, gf=32.613, secs=28.4, GB=1.18, MB/s=41.46, GPUmem=0.915805
41.00%, ll=-6.91192, gf=32.848, secs=33.7, GB=1.41, MB/s=41.76, GPUmem=0.915805
48.00%, ll=-6.95156, gf=32.909, secs=39.1, GB=1.64, MB/s=41.83, GPUmem=0.915805
54.00%, ll=-6.95275, gf=32.837, secs=44.6, GB=1.86, MB/s=41.73, GPUmem=0.915805
61.00%, ll=-6.88738, gf=32.913, secs=50.0, GB=2.09, MB/s=41.82, GPUmem=0.915805
68.00%, ll=-6.93559, gf=33.058, secs=55.2, GB=2.32, MB/s=42.00, GPUmem=0.915805
75.00%, ll=-6.92956, gf=33.061, secs=60.6, GB=2.55, MB/s=42.00, GPUmem=0.915805
81.00%, ll=-6.88030, gf=33.178, secs=65.9, GB=2.78, MB/s=42.15, GPUmem=0.915805
88.00%, ll=-6.92671, gf=33.192, secs=71.2, GB=3.00, MB/s=42.17, GPUmem=0.915805
95.00%, ll=-6.91851, gf=33.268, secs=76.4, GB=3.23, MB/s=42.26, GPUmem=0.915805
100.00%, ll=-6.93309, gf=33.316, secs=80.3, GB=3.40, MB/s=42.33, GPUmem=0.915805
pass= 1
 1.00%, ll=-6.88018, gf=32.976, secs=82.1, GB=3.44, MB/s=41.92, GPUmem=0.915805
 7.00%, ll=-6.90460, gf=33.061, secs=87.2, GB=3.67, MB/s=42.02, GPUmem=0.915805
14.00%, ll=-6.89758, gf=33.135, secs=92.5, GB=3.89, MB/s=42.11, GPUmem=0.915805
21.00%, ll=-6.86670, gf=33.195, secs=97.7, GB=4.12, MB/s=42.19, GPUmem=0.915805
28.00%, ll=-6.90028, gf=33.249, secs=103.0, GB=4.35, MB/s=42.25, GPUmem=0.915805
34.00%, ll=-6.89525, gf=33.306, secs=108.1, GB=4.58, MB/s=42.32, GPUmem=0.915805
41.00%, ll=-6.84500, gf=33.314, secs=113.6, GB=4.81, MB/s=42.33, GPUmem=0.915805
48.00%, ll=-6.89398, gf=33.365, secs=118.7, GB=5.03, MB/s=42.40, GPUmem=0.915805
54.00%, ll=-6.90187, gf=33.413, secs=123.9, GB=5.26, MB/s=42.46, GPUmem=0.915805
61.00%, ll=-6.84352, gf=33.452, secs=129.2, GB=5.49, MB/s=42.50, GPUmem=0.915805
68.00%, ll=-6.89512, gf=33.492, secs=134.4, GB=5.72, MB/s=42.55, GPUmem=0.915805
75.00%, ll=-6.89321, gf=33.531, secs=139.5, GB=5.94, MB/s=42.60, GPUmem=0.915805
81.00%, ll=-6.84816, gf=33.537, secs=144.9, GB=6.17, MB/s=42.61, GPUmem=0.915805
88.00%, ll=-6.89672, gf=33.574, secs=150.1, GB=6.40, MB/s=42.65, GPUmem=0.915805
95.00%, ll=-6.89115, gf=33.598, secs=155.3, GB=6.63, MB/s=42.68, GPUmem=0.915805
100.00%, ll=-6.90773, gf=33.614, secs=159.1, GB=6.80, MB/s=42.71, GPUmem=0.915805
Time=159.1470 secs, gflops=33.61

Each training run creates a results matrix, which is essentially a graph of log likelihood vs. the number of input samples. The first row holds the likelihood values, the second the corresponding number of input samples processed. We can plot the results here:


In [6]:
plot(nn.results(1,?), nn.results(0,?))



Out[6]:
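Since the first row holds log-likelihood values and the second holds sample counts, you can also read off the final values directly (a small sketch, assuming that layout):

val lastcol = nn.results.ncols - 1
nn.results(0, lastcol)     // final log-likelihood value
nn.results(1, lastcol)     // total number of samples processed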

Evaluation

To evaluate the model, we save the model matrix itself, and also load a dictionary of the terms in the corpus.


In [7]:
val tmodel = FMat(nn.modelmat)
val dict = Dict(loadSBMat(mdir+"../pubmed.term.sbmat.lz4"))



Out[7]:
BIDMat.Dict@7a7e6283
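As a quick sanity check (a sketch, assuming the rows of tmodel are normalized topic distributions as described earlier), the row sums should all be close to 1:

val rowsums = sum(tmodel, 2)      // column vector of per-topic row sums
(mini(rowsums), maxi(rowsums))    // both should be approximately 1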

The dictionary allows us to look up terms by their index, e.g. dict(1000), by their string representation, e.g. dict("book"), and by matrices of these, e.g. dict(ii) where ii is an IMat. Try a few such queries on the dict here:


In [ ]:
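For example, queries like these should work (a sketch; exactly which terms are in the dictionary depends on the corpus):

dict(1000)                // the term stored at index 1000
dict(dict(1000))          // map that term back to its index (round trip)
dict(icol(0,1,2,3,4))     // the first five terms in the dictionary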

Next we evaluate the entropy of each dimension of the model. Recall that the entropy of a discrete probability distribution is $E = -\sum_{i=1}^n p_i \ln(p_i)$. The rows of the matrix are the topic probabilities. Lower entropy means a topic concentrates its probability on fewer words; the maximum possible value here, for a uniform distribution over all 141043 features, is $\ln(141043) \approx 11.9$.
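For example, a three-word topic with probabilities $(0.5, 0.25, 0.25)$ has entropy $E = -(0.5\ln 0.5 + 0.25\ln 0.25 + 0.25\ln 0.25) \approx 1.04$ nats, slightly below the maximum $\ln 3 \approx 1.10$ for three words.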

Compute the entropies for each topic:


In [8]:
val ent = -(tmodel dotr ln(tmodel))  // row-wise dot product of p and ln(p), negated
ent.t                                // transpose to display the entropies as a row vector



Out[8]:
5.1114,5.2976,4.6267,5.2456,4.3975,5.1331,4.5964,5.2893,4.7173,5.4398,5.8625,4.1256,4.5969,3.5420,4.7882,5.2300,4.4556,2.7039,3.1350,5.6754,3.5055,5.6651,5.3590,3.8807,2.3031,4.4372,3.9389,5.7394,4.4904,5.1992,6.1797,3.9195,2.3029,5.4791,4.9826,2.7610,5.2653,5.5982,1.4048,3.4634,4.4630,5.7733,2.2171,4.0891,4.4084,5.4493,6.0265,4.1399,5.8458,4.9230,5.2576,5.4123,5.9085,5.4099,3.9270,5.8947,4.5297,4.0587,5.6131,5.0017,4.4420,5.7174,5.1036,4.2589,5.6449,5.0591,3.8745,1.8975,5.8035,5.0307,4.4231,1.3162,5.2058,2.5560,5.0222,5.3514,5.6441,4.8174,2.9534,5.2721,5.2945,5.0691,5.0058,2.9747,5.8789,5.6344,5.1950,4.5837,5.4611,5.2889,5.3010,4.7849,6.6542,6.6387,5.3950,5.3466,4.6309,1.9564,4.5986,4.9530,5.0361,6.4518,3.5818,2.7613,4.5762,5.4378,4.5264,4.8327,5.6308,5.1502,5.7634,5.4617,4.0019,5.1501,4.0902,5.9191,5.6831,5.2341,2.7303,4.6649,5.2866,5.1677,4.9706,5.2402,5.1934,3.7595,5.6273,4.0434,4.0109,4.7617,5.1109,5.5546,4.8784,4.1887,3.7054,5.4612,5.2764,4.8451,5.8668,4.8190,4.7739,5.2897,3.9158,5.7511,4.9064,6.3467,5.1945,5.7916,4.3248,6.3391,5.3953,4.3254,4.0941,3.6181,4.9485,5.8006,5.3474,3.7264,4.6339,6.9143,4.5528,4.4066,4.4110,5.1522,5.0870,5.0149,5.6603,4.5497,4.3591,4.6512,5.4001,5.1460,5.8381,4.8695,5.8047,5.2049,4.6524,5.3800,6.0958,4.6190,5.4246,4.5809,4.9653,4.3739,4.2059,3.8247,4.2678,4.6395,4.4611,3.1075,2.8976,5.3807,4.6884,4.8215,5.2609,5.5787,4.9535,5.3556,4.3266,3.6062,5.0851,5.7960,5.7255,4.5949,5.4241,5.8773,5.5728,3.7411,4.9058,5.0009,5.5011,4.2409,4.5751,1.3331,3.6396,4.5907,5.0229,5.0792,5.8104,5.7106,3.6759,6.0295,3.8709,5.5567,3.2505,3.5519,2.4837,5.1221,6.2520,3.4415,5.6699,2.3538,3.5475,5.3747,4.5216,5.0225,5.0951,4.7469,5.2699,5.1190,4.4600,4.4932,3.9057,4.8174,2.8009,4.4684,2.4636,5.0251,4.3337,5.0809,5.7879,5.3045,3.3504,5.4780,5.2759,4.8923

Get the mean value (should be positive)


In [9]:
mean(ent)



Out[9]:
4.7454

Find the smallest and largest entropy topic indices (use maxi2 and mini2). Call them elargest and esmallest.


In [10]:
val (vlargest,elargest) = maxi2(ent)
val (vsmallest,esmallest) = mini2(ent)



Out[10]:
71

Now we'll sort the probabilities within each topic to bring the highest-probability terms to the beginning. We sort down (descending order) along dimension 2 (rows) to do this. bestp gets the sorted values and besti gets the sorted indices, which are the feature indices.


In [11]:
val (bestp, besti) = sortdown2(tmodel,2)



Out[11]:
   284   901   716   696   546     2  1342  2119  1120  3749   735...
  1244  1414  2143  2309  3013  2058  2541  4052  4272   649  6756...
   127   398   397   401  1109   178   977   115  2521  1605  1998...
   466  1117  2414  2266  4043  2892  3263  3104  5788  4002    43...
   675  1387  2321  3212  5436  4943  5288  3942  4564    19  8242...
   402   533   999  1072   957  2558  3145     2  3532  2680  4939...
  2214  2666  1673  4530  3199  3119  3694  5270  5069  6945  3868...
   494   550   766  1486  2527  6144   267  2929  4068    18  2735...
    ..    ..    ..    ..    ..    ..    ..    ..    ..    ..    ..

Now examine the 100 strongest terms in each of these two topics:


In [12]:
dict(besti(elargest,0->100))



Out[12]:
hospital,program,care,health,cost,services,health_care,need,community,nursing,service,public,decision,plan,policy,benefit,planning,economic,organization,new,nurses,impact,management,national,goal,facilities,resources,issues,personnel,law,institution,development,home,process,article,mental_health,nurse,professional,unit,provide,under,legal,financial,effort,pharmacy,social,demand,provision,quality,strategies,medicare,project,utilization,federal,implementation,payment,staff,act,center,setting,effective,requirement,provider,market,public_health,government,ethical,health_services,private,prevention,major,individual,intervention,meet,policies,responsibility,objectives,regulation,medical,standard,court,provided,facility,insurance,current,delivery,medical_care,part,product,implication,person,based,concern,providing,research,future,agencies,saving,homes,change

In [13]:
dict(besti(esmallest,0->100))



Out[13]:
protein,basic,binding_protein,acidic,cap,mbp,major,s-100,amount,myelin,bound,scrapie,bind,ubiquitin,proteolipid,cellular,retinol-binding,biochemical,specifically,protein-bound,protein_level,known,acid-binding,milligram,function,appear,s100,including,constituent,involved,pmol/mg,prion,gfa,saf,flagellin,prealbumin,unknown,abundant,immunoblot,kilodalton,macromolecules,general,common,nmol/mg,lowry,show,immunologically,identification,product,variety,possible,rich,g_protein,scrapie-infected,hsp90,immunochemical,origin,abundance,fut-175,suggesting,previously,p55,identify,include,encephalitogenic,2-glycoprotein,least,unique,association,c4b-binding,cysteine-rich,called,properties,intrinsic,comparison,immunoblotting,absent,anti-mbp,share,scrapie-associated,tightly,exception,analyzed,counterpart,protein-containing,protein-free,closely,bacterially,protein_expression,protein-depleted,biuret,70-kda,globular,protein-specific,dm-20,constitute,belong,individual,identical,termed

Do you notice any difference in the coherence of these two topics?

TODO: Fill in your answer here

By sorting the entropies, find the 2nd and 3rd smallest entropy topics. Give the top 100 terms in each topic below:


In [14]:
val (sent, ient) = sort2(ent)
// words for 2nd lowest entropy topic
dict(besti(ient(1),0->100))



Out[14]:
system,reference,special,variability,laboratories,tf,identification,component,api,micro,collaborative,functioning,interlaboratory,subsystem,pmc,biochemical,consist,connected,capable,20e,operating,variabilities,generating,inter-,participating,apparatus,link,automicrobic,consisting,same-day,robot,commercial,md.,robotic,system&quot,candidate,misidentification,controlling,inter-laboratory,utilizes,20s,cusum,developed,microscan,autobac,vitek,analytab,bbl,coupled,zym,product,operate,essential,controlled,invest,lab,minitek,studying,equivalency,intralaboratory,n.y.,dual,mo.,clin,parallel,concerned,aea,comparative,mentioned,stingray,misidentified,ensures,sop,interrelated,serve,tfd,inc.,functioned,serves,regulated,participated,manufacturer,built-in,model_system,microsystem,inflexible,keyed,micro-id,ampule,hazelwood,cockeysville,dli,specialized,realized,accordance,turned,ensured,non-commercial,self-regulating,assigned

In [15]:
// words for 3rd lowest entropy topic
dict(besti(ient(2),0->100))



Out[15]:
activity,activities,enzyme_activity,enzymes,measured,enzyme_activities,exhibited,assayed,highest,lower,biochemical,homogenates,correlated,enzymatic,enzymatic_activity,exhibit,tested,displayed,regulation,restored,functional,transferase,high_level,markedly,detected,relation,possessed,measuring,intrinsic,possess,show,enzymic,detectable,characterized,lowest,slightly,hand,devoid,parallel,weak,comparable,paralleled,u/mg,regulated,relative,relationship,marked,residual,significantly_higher,possesses,maximum,biochemically,reflect,tpp,dependent,unchanged,low_level,strong,showing,hydrolytic,remained,possessing,exhibiting,10-fold,4-fold,responsible,maximal,2-fold,affected,lowered,considerable,estimated,biologic,lysates,greatest,activity.abstract,correlate,regard,mu/mg,non-specific,respect,correlates,nearly,modulated,lacked,relatively_high,significant_increase,moderate,3-fold,comparison,strongly,fourfold,significant_decrease,decreasing,demonstrable,depended,respective,indicator,distinctly,5-fold

Running fewer topics

What would you expect to happen to the average topic entropy if you run fewer topics?

TODO: answer here

Change the opts.dim argument above and try it. First note the entropy at dim = 256 below. Then run again with dim=64 and put the new value below:

dim    mean entropy
 64    ...
256    ...