Machine Learning at Scale, Part II

For this tutorial, we'll dig deeper into BIDMach's learning architecture. The examples so far have used convenience functions that assemble a DataSource, Learner, Model, Updater and Mixin classes into a trainable model. This time we'll separate out those components and see how they can be customized.

The dataset is from UCI and comprises PubMed abstracts. It is about 7.3GB in text form. We'll compute an LDA topic model for this dataset.

First, let's initialize BIDMach again.


In [ ]:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,Image,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,HMat,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.Plotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA}
import BIDMach.datasources.{DataSource,MatDS,FilesDS,SFilesDS}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}

Mat.checkMKL
Mat.checkCUDA
if (Mat.hasCUDA > 0) GPUmem

Check the GPU memory again, and make sure you don't have any dangling processes.

Large-scale Topic Models

A topic model is a representation of a bag-of-words corpus as several factors or topics. Each topic should represent a theme that recurs in the corpus. Concretely, the output of the topic model will be an (ntopics x nfeatures) matrix we will call tmodel. Each row of that matrix represents a topic, and the elements of that row are word probabilities for the topic (i.e. the rows sum to 1). There is more about topic models on Wikipedia.

The element tmodel(i,j) holds the probability that word j belongs to topic i. Later we will examine the topics directly and try to make sense of them.

Let's construct a learner with a files data source. Most model classes will accept a String argument and assume it is a pattern for accessing a collection of files. To create the learner, we pass this pattern, which the data source invokes with format(i) to enumerate each filename.


In [ ]:
val mdir = "../data/uci/pubmed_parts/";
val (nn, opts) = LDA.learner(mdir+"part%02d.smat.lz4", 256)
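
To see which filenames the pattern generates, you can expand it directly with format (just a quick check in plain Scala; it assumes the part files are numbered from 00 upward):

In [ ]:
// Expand the filename pattern for the first few file numbers (sanity check only)
(0 until 3).map(i => (mdir + "part%02d.smat.lz4") format i)
// should list part00.smat.lz4, part01.smat.lz4, part02.smat.lz4 under ../data/uci/pubmed_parts/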

Note that this dataset is quite large, and isn't one of the ones loaded by getdata.sh in the scripts directory. You need to run the script getpubmed.sh separately (and plan a long walk or bike ride while you wait...).

This datasource uses just this sequence of files, and each matrix has 141043 rows. A number of options that control the files datasource are listed below. Most of these don't need to be set (you'll notice they're just set to their default values), but it's useful to know about them for customizing data sources.


In [ ]:
opts.nstart = 0;                 // Starting file number
opts.nend = 10;                  // Ending file number
opts.order = 0;                  // (0) sample order, 0=linear, 1=random
opts.lookahead = 2;              // (2) number of prefetch threads
opts.featType = 1;               // (1) feature type, 0=binary, 1=linear
// These are specific to SFilesDS:
opts.fcounts = icol(141043);     // how many rows to pull from each input matrix 
opts.eltsPerSample = 300         // how many elements (non-zeros) to allocate per sample

We're ready to go. LDA is a popular topic model, described on Wikipedia.

We use a fast version of LDA which uses an incremental multiplicative update described by Hoffman, Blei and Bach in their online LDA paper.
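
As a reminder of what the model computes (standard topic-model background, not specific to BIDMach's implementation): the probability of seeing word $w$ in document $d$ is approximated as a mixture over the $K$ topics,

$$P(w \mid d) \approx \sum_{k=1}^{K} P(w \mid k)\,P(k \mid d),$$

where the $P(w \mid k)$ terms are the rows of the tmodel matrix described above, and the $P(k \mid d)$ terms are per-document topic weights inferred during training.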

Tuning Options

Add tuning options for the minibatch size (say 100k), the number of passes (say 4) and the model dimension (dim = 256). The values in the cell below are just a starting point; you can adjust them and re-run.


In [ ]:
opts.batchSize=50000
opts.npasses=2
opts.dim=256

You invoke the learner the same way as before. You can change the options above after each run to optimize performance.


In [ ]:
nn.train

Each training run creates a results matrix, which is essentially a graph of the log likelihood vs. the number of input samples. The first row holds the likelihood values, the second the corresponding number of input samples processed. We can plot the results here:


In [ ]:
plot(nn.results(1,?), nn.results(0,?))

Evaluation

To evaluate the model, we extract the model matrix itself (as an FMat), and also load a dictionary of the terms in the corpus.


In [ ]:
val tmodel = FMat(nn.modelmat)
val dict = Dict(loadSBMat(mdir+"../pubmed.term.sbmat.lz4"))
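
As a quick sanity check (optional, and just a sketch using BIDMat's Matlab-style sum over dimension 2), each row of tmodel is a probability distribution over the vocabulary, so the row sums should all be close to 1:

In [ ]:
// Row sums of the topic-word matrix; each entry should be approximately 1
sum(tmodel, 2).t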

The dictionary allows us to look up terms by their index, e.g. dict(1000), by their string representation, e.g. dict("book"), and by matrices of these, e.g. dict(ii) where ii is an IMat. Try a few such queries on the dict here:


In [ ]:

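For example, queries along these lines should work (the word "book" is just an illustration and may or may not be in this vocabulary):

In [ ]:
dict(1000)                    // the term with index 1000
dict("book")                  // the index of the term "book", if present
dict(icol(0, 1, 2, 3, 4))     // the terms for a small IMat of indices
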
Next we evaluate the entropy of each dimension of the model. Recall that the entropy of a discrete probability distribution is $E = -\sum_{i=1}^n p_i \ln(p_i)$. The rows of the matrix are the topic probabilities.
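
For intuition (a standard fact about entropy, not something the notebook computes): a topic that is uniform over $n$ words has

$$E = -\sum_{i=1}^{n} \frac{1}{n} \ln\!\left(\frac{1}{n}\right) = \ln(n),$$

the maximum possible entropy, while a topic concentrated on a handful of words has much lower entropy. Lower entropy therefore indicates a more sharply focused topic, which is worth keeping in mind when you compare topic coherence below.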

Compute the entropies for each topic:


In [ ]:
val ent = -(tmodel dotr ln(tmodel))  // dotr takes row-wise dot products, so ent(i) = -sum_j p_ij * ln(p_ij)
ent.t // transpose to a row for easier viewing

Get the mean value (it should be positive):


In [ ]:
mean(ent)

Find the smallest and largest entropy topic indices (use maxi2 and mini2). Call them elargest and esmallest.


In [ ]:
val (vlargest,elargest) = maxi2(ent)
val (vsmallest,esmallest) = mini2(ent)

Now we'll sort the probabilities within each topic to bring the highest-probability terms to the front. We sort down (descending order) along dimension 2 (rows) to do this. bestp gets the sorted values and besti gets the sorted indices, which are the feature indices.


In [ ]:
val (bestp, besti) = sortdown2(tmodel,2)

Now examine the 100 strongest terms in each of these two topics:


In [ ]:
dict(besti(elargest,0->100))

In [ ]:
dict(besti(esmallest,0->100))

Do you notice any difference in the coherence of these two topics?

TODO: Fill in your answer here

By sorting the entropies, find the 2nd and 3rd smallest entropy topics. Give the top 100 terms in each topic below:


In [ ]:
val (sent, ient) = sort2(ent)
// words for 2nd lowest entropy topic
dict(besti(ient(1),0->100))

In [ ]:
// words for 3rd lowest entropy topic
dict(besti(ient(2),0->100))

Varying the number of topics

What would you expect to happen to the average topic entropy if you run fewer topics?

TODO: answer here

Change the opts.dim option above and try it. First note the mean entropy at dim = 256 below. Then run again with dim = 64 and put the new value below:

dim    mean entropy
64     ...
256    ...