This ipython notebook will teach you the basics of how softmax regression works, and show you how to do softmax regression in pylearn2.
To do this, we will go over several concepts:
Part 1: What pylearn2 is doing for you in this example
What softmax regression is, and the math of how it works
The basic theory of how softmax regression training works
Part 2: How to use pylearn2 to do softmax regression
How to load data in pylearn2, and specifically how to load the MNIST dataset
How to configure the pylearn2 SoftmaxRegression model
How to set up a pylearn2 training algorithm
How to run training with the pylearn2 train script, and interpret its output
How to analyze the results of training
Note that this won't explain in detail how the individual classes are implemented. The classes follow pretty good naming conventions and have pretty good docstrings, but if you have trouble understanding them, write to me and I might add a part 3 explaining how some of the parts work under the hood.
Please write to pylearn-dev@googlegroups.com if you encounter any problem with this tutorial.
Before running this notebook, you must have installed pylearn2. Follow the download and installation instructions if you have not yet done so.
In this part, we won't get into any specifics of pylearn2 yet. We'll just discuss how to train a softmax regression model. If you already know about softmax regression, feel free to skip straight to part 2, where we show how to do all of this in pylearn2.
Softmax regression is a type of classification model (so the "regression" in the name is really a misnomer), which means it is a pattern recognition algorithm that maps input patterns to categories. In this tutorial, the input patterns will be images of handwritten digits, and the output category will be the identity of the digit (0-9). In other words, we will use softmax regression to solve a simple optical character recognition problem.
You may have heard of logistic regression. Logistic regression is a special case of softmax regression. Specifically, it is the case where there are only two possible output categories. Softmax regression is a generalization of logistic regression to multiple categories.
Let's define some basic terms. First, we'll use the variable $x$ to represent the input to the softmax regression model. We'll use the variable $y$ to represent the output category. Let $y$ be a non-negative integer, such that $0 \leq y < k$ , where $k$ is the number of categories $x$ may belong to. In our example, we are classifying handwritten digits ranging in value from 0 to 9, so the value of y is very easy to interpret. When $y = 7$, the category identified is 7. In most applications, we interpret $y$ as being a numeric code identifying a category, e.g., 0 = cat, 1 = dog, 2 = airplane, etc.
The job of the softmax regression classifier is to predict the probability of $x$ belonging to each class, i.e., we want to be able to compute $p(y = i \mid x)$ for all $k$ possible values of $i$.
The role of a parametric model like softmax regression is to define a set of parameters and describe how they map to functions $f$ defining $p(y \mid x)$. In the case of softmax regression, the model assumes that the log probability of $y=i$ is an affine function of the input $x$, up to some constant $c(x)$. $c(x)$ is defined to be whatever constant is needed to make the distribution add up to 1.
To make this more formal, let $p(y)$ be written as a vector $[ p(y=0), p(y=1), \dots, p(y=k-1) ]^T$. Assume that $x$ can be represented as a vector of numbers (in this example, we will regard each pixel of a grayscale image as being represented by a number in [0,1], and we will turn the 2D array of the image into a vector by using numpy's reshape method). Then the assumption that softmax regression makes is that
$$\log p(y \mid x) = x^T W + b + c(x) $$

where $W$ is a matrix and $b$ is a vector. Note that $c(x)$ is just a scalar but here I am adding it to a vector. I'm using numpy broadcasting rules in my math here, so this means adding $c(x)$ to every element of the vector. I'll use numpy broadcasting rules throughout this tutorial.
$W$ and $b$ are the parameters of the model, and determine how inputs are mapped to output categories. We usually call $W$ the "weights" and $b$ the "biases."
By doing some algebra, using the constraint that $p(y)$ must add up to 1, we get
$$ p(y \mid x) = \frac { \exp( x^T W + b ) } { \sum_i \exp(x^T W + b)_i } = \text{softmax}( x^T W + b) $$

where $\text{softmax}$ is the softmax activation function.
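To make the formula concrete, here is a minimal NumPy sketch (not pylearn2 code) of the model's forward pass for a single example, with shapes matching this tutorial's setup of 784 inputs and 10 classes:

import numpy as np

# A sketch of softmax(x^T W + b) for one input vector x.
def softmax(z):
    # Subtracting the max is a standard stability trick; it only changes
    # the arbitrary constant c(x), not the resulting probabilities.
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.random.uniform(0., 1., size=28 * 28)   # a flattened "image" of random pixels
W = np.zeros((784, 10))                       # weights, here all zero
b = np.zeros(10)                              # biases
p = softmax(np.dot(x, W) + b)
print(p.sum())   # sums to 1 (up to floating point rounding)
print(p)         # with W and b all zero, every class gets probability 0.1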
Of course, the softmax model will only assign $x$ to the right category if its parameters have been adjusted to make them specify the right mapping. To do this we need to train the model.
The basic idea is that we have a collection of training examples, $\mathcal{D}$. Each example is an (x, y) tuple. We will fit the model to the training set, so that when run on the training data, it outputs a good estimate of the probability distribution over $y$ for all of the $x$s.
One way to fit the model is maximum likelihood estimation. Suppose we draw a category variable $\hat{y}$ from our model's distribution $p(y \mid x)$ for every training example independently. We want to maximize the probability of all of those labels being correct. To do this, we maximize the function
$$ J( \mathcal{D}, W, b) = \prod_{x,y \in \mathcal{D} } p(y \mid x ). $$

That function involves lots of multiplication, of possibly very small numbers (note that the softmax activation function guarantees none of them will ever be exactly zero). Multiplying together many small numbers can result in numerical underflow. In practice, we usually take the logarithm of this function to avoid underflow. Since the logarithm is a monotonically increasing function, it doesn't change which parameter value is optimal. It does get rid of the multiplication though:
$$ J( \mathcal{D}, W, b) = \sum_{x,y \in \mathcal{D} } \log p(y \mid x ). $$

Many different algorithms can maximize $J$. In this tutorial, we will use an algorithm called nonlinear conjugate gradient descent to minimize $-J$. In the case of softmax regression, maximizing $J$ is a convex optimization problem so any optimization algorithm should find the same solution. The choice of nonlinear conjugate gradient is mostly to demonstrate that feature of pylearn2.
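As an aside, the underflow issue mentioned above is easy to demonstrate. This tiny NumPy snippet is just an illustration, not part of the tutorial's pipeline: it multiplies 500 probabilities of 0.001 together, which is representable in log space but not as a raw product.

import numpy as np

probs = np.ones(500) * 1e-3       # 500 fairly small probabilities
print(np.prod(probs))             # 0.0 -- the product underflows in float64
print(np.sum(np.log(probs)))      # about -3453.9 -- the log likelihood is fine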
One problem with maximum likelihood estimation is that it can suffer from overfitting. The basic intuition is that the model can memorize patterns in the training set that are specific to the training examples, i.e. patterns that are spurious and not indicative of the correct way to categorize new, previously unseen inputs. One way to prevent this is to use early stopping. Most optimization methods are iterative, in that they try out several values of $W$ and $b$, gradually looking for the best one. Early stopping refers to stopping this search before finding the absolute best values on the training set. If we start with $W$ close to the origin, then stopping early means that $W$ will not travel as far from the origin as it would if we ran the optimization procedure to completion. Early stopping corresponds to assuming that the correlations between input features and output categories are not as strong as pure maximum likelihood estimation would determine them to be.
In order to pick the right point in time to stop, we divide the training set into two subsets: one that we will actually train on, and one that we use to see how well the model is generalizing to new data, called the "validation set." The idea is to return the model that does the best at classifying the validation set, rather than the model that assigns the highest probability to the training set.
Now that we've described the theory of what we're going to do, it's time to do it! This part describes how to use pylearn2 to run the algorithms described above.
To train a model in pylearn2, we need to construct several objects specifying how to train it. There are two ways to do this. One is to explicitly construct them as python objects. The other is to specify them using YAML strings. The latter option is better supported at present, so we will use that.
In this ipython notebook, we will construct YAML strings in python. Most of the time when I use pylearn2, I write the YAML string out to disk, then run pylearn2's train.py script on that YAML file. In the format of this tutorial, an ipython notebook, it's easier to just do everything in python.
YAML allows the definition of custom, application-specific tags that specify how the YAML string should be deserialized, and pylearn2 defines a few of those. One of them is the !obj tag, which specifies that what follows is a full specification of a python callable that returns an object. Usually this will just be a class name.
In this tutorial, we will train our model on the MNIST dataset. In order to load that, we use an !obj tag to construct an instance of pylearn2's MNIST class, found in the pylearn2.datasets.mnist python module.
We can pass arguments to the MNIST class's init method by defining a dictionary mapping argument names to their values.
The MNIST dataset is split into a training set and a test set. Since the object we are constructing now will be used as the training set, we must specify that we want to load the training data. We can use the 'which_set' argument to do this.
The 'one_hot' argument is a boolean variable. We want to set it to true, to specify that the 'y' variable should be represented with a 'one-hot' representation, where $y$ is a vector of dimension $k$, and category $i$ is represented by setting $y_i=1$ and all other elements of $y$ to 0. This is a relatively arbitrary formatting detail to specify, and to set it correctly you need to know that this is the format that pylearn2's MLP code expects. In future versions of pylearn2 we plan for it to be unnecessary to specify this detail.
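As a quick illustration of the format (this is just NumPy, not pylearn2's own code), the label 7 with $k = 10$ classes becomes:

import numpy as np

y = 7
k = 10
one_hot = np.zeros(k)
one_hot[y] = 1.
print(one_hot)   # a length-10 vector of zeros with the 1 in position 7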
Finally, as described above, we will use early stopping, so we shouldn't train on the entire training set. The MNIST training set contains 60,000 examples. We use the 'start' and 'stop' arguments to train on the first 50,000 of them.
In [2]:
dataset = """!obj:pylearn2.datasets.mnist.MNIST {
which_set: 'train',
one_hot: 1,
start: 0,
stop: 50000
}"""
Next, we need to specify an object representing the model to be trained. To do this, we need to make an instance of the SoftmaxRegression class defined in pylearn2.models.softmax_regression. We need to specify a few details of how to configure the model.
The "nvis" argument stands for "number of visible units." In neural network terminology, the "visible units" are the pieces of data that the model gets to observe. This argument is asking for the dimension of $x$. If we didn't want $x$ to be a vector, there is another more flexible way of configuring the input of the model, but for vector-based models, "nvis" is the easiest piece of the API to use. The MNIST dataset contains 28x28 grayscale images, not vectors, so the SoftmaxRegression model will ask pylearn2 to flatten the images into vectors. That means it will receive a vector with 28*28=784 elements.
We also need to specify how many categories or classes there are with the "n_classes" argument.
Finally, the matrix $W$ will be randomly initialized. There are a few different initialization schemes in pylearn2. Specifying the "irange" argument will make each element of $W$ be initialized from $U(-\text{irange}, \text{irange})$. Since softmax regression training is a convex optimization problem, we can set irange to 0 to initialize all of $W$ to 0. (Some other models require that the different columns of $W$ differ from each other initially in order for them to train correctly)
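For reference, the initialization scheme described here amounts to the following (a sketch, not pylearn2's own initialization code):

import numpy as np

irange = 0.
rng = np.random.RandomState(0)
W = rng.uniform(-irange, irange, size=(784, 10))   # U(-irange, irange), elementwise
print(W.any())   # False: with irange = 0, W starts out all zeros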
In [3]:
model = """!obj:pylearn2.models.softmax_regression.SoftmaxRegression {
n_classes: 10,
irange: 0.,
nvis: 784,
}"""
Next, we need to specify a training algorithm to maximize the log likelihood with. (Actually, we will minimize the negative log likelihood, because all of pylearn2's optimization algorithms are written in terms of minimizing a cost function. Theano will optimize away any double negation that results, so this has no effect on the runtime of the algorithm.)
We can use an !obj tag to load pylearn2's BGD class. BGD stands for batch gradient descent. It is a class designed to train models by moving in the direction of the gradient of the objective function applied to large batches of examples.
The "batch_size" argument determines how many examples the BGD class will act on at one time. This should be a fairly large number so that the updates are more likely to generalize to other batches.
Setting "line_search_mode" to exhaustive means that the BGD class will try to binary search for the best possible point along the direction of the gradient of the cost function, rather than just trying out a few pre-selected step sizes. This implements the method of steepest descent.
"conjugate" is a boolean flag. By setting it to 1, we make BGD modify the gradient directions to preserve conjugacy prior to doing the line search. This implements nonlinear conjugate gradient descent.
During training, we will keep track of several different quantities of interest to the experimenter, such as the number of examples that are classified correctly, the objective function value, etc. The quantities to track are determined by the model class and by the training algorithm class. These quantities are referred to as "channels" and the act of tracking them is called "monitoring" in pylearn2 terms. In order to track them, we need to specify a monitoring dataset. In this case, we use a dictionary to make multiple, named monitoring datasets.
We use "train" to define the training set. The is YAML syntax saying to reference an object defined elsewhere in the YAML file. Later, when we specify which dataset to train on, we will define this reference.
Finally, the BGD algorithm needs to know when to stop training. We therefore give it a "termination criterion." In this case, we use a monitor-based termination criterion that says to stop when too little progress is being made at reducing the value tracked by one of the monitoring channels. In this case, we use "valid_y_misclass", which is the rate at which the model mislabels examples on the validation set. MonitorBased has some other arguments that we don't bother to specify here, and just use the defaults. These defaults will result in the training algorithm running for a while after the lowest value of the validation error has been reached, to make sure that we don't stop too soon just because the validation error randomly bounced upward for a few epochs.
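The logic behind this kind of criterion is roughly the following sketch. The variable names and the channel values are made up for illustration; this is not the actual MonitorBased implementation, whose arguments and defaults differ:

# made-up validation misclassification values, one per epoch, for illustration
valid_misclass_per_epoch = [0.12, 0.10, 0.09, 0.092, 0.091, 0.093, 0.094]

best = float('inf')
epochs_since_best = 0
patience = 2    # keep going this many epochs past the best value seen so far

for valid_misclass in valid_misclass_per_epoch:
    if valid_misclass < best:
        best = valid_misclass
        epochs_since_best = 0
    else:
        epochs_since_best += 1
    if epochs_since_best > patience:
        break    # too little progress on the monitored channel: stop training
print(best)   # 0.09, the lowest validation error seen before stopping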
You might expect the BGD algorithm to need to be told what objective function to minimize. It turns out that if the user doesn't say what objective function to minimize, BGD will ask the model for a default objective function, by calling the model's "get_default_cost" method. In this case, the SoftmaxRegression model provides the negative log likelihood as the default objective function.
In [4]:
algorithm = """!obj:pylearn2.training_algorithms.bgd.BGD {
batch_size: 10000,
line_search_mode: 'exhaustive',
conjugate: 1,
monitoring_dataset:
{
'train' : *train,
'valid' : !obj:pylearn2.datasets.mnist.MNIST {
which_set: 'train',
one_hot: 1,
start: 50000,
stop: 60000
},
'test' : !obj:pylearn2.datasets.mnist.MNIST {
which_set: 'test',
one_hot: 1,
}
},
termination_criterion: !obj:pylearn2.termination_criteria.MonitorBased {
channel_name: "valid_y_misclass"
}
}"""
We now use a pylearn2 Train object to represent the training problem.
We use "&train" here to define the reference used with the "*train" line in the algorithm section.
We use the python %(varname)s syntax and the locals() dictionary to paste the dataset, model, and algorithm strings from the earlier sections into this final string here.
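If that substitution mechanism is unfamiliar, here is a tiny self-contained example of it (the variable names are made up for illustration):

greeting = 'hello'
template = "%(greeting)s, world"
print(template % locals())   # prints: hello, world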
As specified in the previous section, the model will keep training for a while after the lowest validation error is reached, just to make sure that it won't start going down again. However, the final model we would like to return is the one with the lowest validation error. We add an "extension" to the Train object here. Extensions are objects with callbacks that get triggered at different points in time, such as the end of a training epoch. In this case, we use the MonitorBasedSaveBest extension. Whenever the monitoring channels are updated, MonitorBasedSaveBest will check if a specific channel decreased, and if so, it will save a copy of the model. This way, the best model is saved at the end. Here we save the model with the lowest validation set error to "softmax_regression_best.pkl."
In [5]:
train = """!obj:pylearn2.train.Train {
dataset: &train %(dataset)s,
model: %(model)s,
algorithm: %(algorithm)s,
extensions: [
!obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
channel_name: 'valid_y_misclass',
save_path: "softmax_regression_best.pkl"
},
],
save_path: "softmax_regression.pkl",
save_freq: 1
}""" % locals()
Execute the cell below to see the final YAML string.
In [6]:
print train
Now, we use pylearn2's yaml_parse.load to construct the Train object, and run its main loop. The same thing could be accomplished by running pylearn2's train.py script on a file containing the yaml string.
Execute the next cell to train the model. This will take a few minutes, and it will print out output periodically as it runs.
In [7]:
from pylearn2.config import yaml_parse
train = yaml_parse.load(train)
train.main_loop()
As the model trained, it should have printed out progress messages. Most of these are the values of the various channels being monitored throughout training.
We can use the print_monitor script to print the last monitoring entry of a saved model. By running it on "softmax_regression_best.pkl", we can see the performance of the model at the point where it did the best on the validation set. We see by executing the next cell (the ! mark tells ipython to run a shell command) that the test set misclassification rate is 0.0747, obtained after training for 9 epochs.
In [8]:
!print_monitor.py softmax_regression_best.pkl
Another common way of analyzing trained models is to look at their weights. Here we use the show_weights script to visualize $W$:
In [8]:
!show_weights.py softmax_regression_best.pkl
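If you prefer to poke at the result from python rather than the command line, you can load the saved pickle with pylearn2's serial.load. The get_weights call below is an assumption on my part about the model's API (it is what pylearn2's MLP-style models expose, and what show_weights.py relies on), so check the docstrings if it differs in your version:

from pylearn2.utils import serial

model = serial.load('softmax_regression_best.pkl')
W = model.get_weights()   # assumed to return the weight matrix W
print(W.shape)            # expected: (784, 10), one column of pixel weights per digit class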
You can find more information on softmax regression from the following sources:
LISA lab's Deep Learning Tutorials: Classifying MNIST digits using Logistic Regression
Stanford's Unsupervised Feature Learning and Deep Learning wiki: Softmax Regression
This is by no means a complete list.