pylearn2 tutorial: Softmax regression

by Ian Goodfellow

Introduction

This ipython notebook will teach you the basics of how softmax regression works, and show you how to do softmax regression in pylearn2.

To do this, we will go over several concepts:

Part 1: What pylearn2 is doing for you in this example

  • What softmax regression is, and the math of how it works

  • The basic theory of how softmax regression training works

Part 2: How to use pylearn2 to do softmax regression

  • How to load data in pylearn2, and specifically how to load the MNIST dataset

  • How to configure the pylearn2 SoftmaxRegression model

  • How to set up a pylearn2 training algorithm

  • How to run training with the pylearn2 train script, and interpret its output

  • How to analyze the results of training

Note that this won't explain in detail how the individual classes are implemented. The classes follow pretty good naming conventions and have pretty good docstrings, but if you have trouble understanding them, write to me and I might add a part 3 explaining how some of the parts work under the hood.

Please write to pylearn-dev@googlegroups.com if you encounter any problem with this tutorial.

Requirements

Before running this notebook, you must have installed pylearn2. Follow the download and installation instructions if you have not yet done so.

Part 1: What pylearn2 is doing for you in this example

In this part, we won't get into any specifics of pylearn2 yet. We'll just discuss how to train a softmax regression model. If you already know about softmax regression, feel free to skip straight to part 2, where we show how to do all of this in pylearn2.

What softmax regression is, and the math of how it works

Softmax regression is a type of classification model (so the "regression" in the name is really a misnomer), which means it is a pattern recognition algorithm that maps input patterns to categories. In this tutorial, the input patterns will be images of handwritten digits, and the output category will be the identity of the digit (0-9). In other words, we will use softmax regression to solve a simple optical character recognition problem.

You may have heard of logistic regression. Logistic regression is a special case of softmax regression. Specifically, it is the case where there are only two possible output categories. Softmax regression is a generalization of logistic regression to multiple categories.

Let's define some basic terms. First, we'll use the variable $x$ to represent the input to the softmax regression model. We'll use the variable $y$ to represent the output category. Let $y$ be a non-negative integer, such that $0 \leq y < k$, where $k$ is the number of categories $x$ may belong to. In our example, we are classifying handwritten digits ranging in value from 0 to 9, so the value of $y$ is very easy to interpret. When $y = 7$, the category identified is 7. In most applications, we interpret $y$ as being a numeric code identifying a category, e.g., 0 = cat, 1 = dog, 2 = airplane, etc.

The job of the softmax regression classifier is to predict the probability of $x$ belonging to each class, i.e., we want to be able to compute $p(y = i \mid x)$ for all $k$ possible values of $i$.

The role of a parametric model like softmax regression is to define a set of parameters and describe how they map to functions $f$ defining $p(y \mid x)$. In the case of softmax regression, the model assumes that the log probability of $y=i$ is an affine function of the input $x$, up to some constant $c(x)$. $c(x)$ is defined to be whatever constant is needed to make the distribution add up to 1.

To make this more formal, let $p(y)$ be written as a vector $[ p(y=0), p(y=1), \dots, p(y=k-1) ]^T$. Assume that $x$ can be represented as a vector of numbers (in this example, we will regard each pixel of a grayscale image as being represented by a number in [0,1], and we will turn the 2D array of the image into a vector by using numpy's reshape method). Then the assumption that softmax regression makes is that

$$\log p(y \mid x) = x^T W + b + c(x) $$

where $W$ is a matrix and $b$ is a vector. Note that $c(x)$ is just a scalar but here I am adding it to a vector. I'm using numpy broadcasting rules in my math here, so this means to add $c(x)$ to every element of the vector. I'll use numpy broadcasting rules throughout this tutorial.

$W$ and $b$ are the parameters of the model, and determine how inputs are mapped to output categories. We usually call $W$ the "weights" and $b$ the "biases."

By doing some algebra, using the constraint that $p(y)$ must add up to 1, we get

$$ p(y \mid x) = \frac { \exp( x^T W + b ) } { \sum_i \exp(x^T W + b)_i } = \text{softmax}( x^T W + b) $$

where $\text{softmax}$ is the softmax activation function.
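To make the formula concrete, here is a small numpy sketch of the prediction rule above. This is an illustrative aside, not part of the pylearn2 code in this tutorial, and the particular values of $x$, $W$, and $b$ are made up:

import numpy as np

def softmax(z):
    # subtract the max before exponentiating, for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.random.rand(784)       # a flattened 28x28 "image"
W = np.zeros((784, 10))       # weights, all zero here
b = np.zeros(10)              # biases

p_y_given_x = softmax(np.dot(x, W) + b)
print p_y_given_x             # uniform (0.1 per class) since W and b are zero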

The basic theory of how softmax regression training works

Of course, the softmax model will only assign $x$ to the right category if its parameters have been adjusted to make them specify the right mapping. To do this we need to train the model.

The basic idea is that we have a collection of training examples, $\mathcal{D}$. Each example is an (x, y) tuple. We will fit the model to the training set, so that when run on the training data, it outputs a good estimate of the probability distribution over $y$ for all of the $x$s.

One way to fit the model is maximum likelihood estimation. Suppose we draw a category variable $\hat{y}$ from our model's distribution $p(y \mid x)$ for every training example independently. We want to maximize the probability of all of those labels being correct. To do this, we maximize the function

$$ J( \mathcal{D}, W, b) = \prod_{x,y \in \mathcal{D} } p(y \mid x ). $$

That function involves lots of multiplication, of possibly very small numbers (note that the softmax activation function guarantees none of them will ever be exactly zero). Multiplying together many small numbers can result in numerical underflow. In practice, we usually take the logarithm of this function to avoid underflow. Since the logarithm is a monotonically increasing function, it doesn't change which parameter value is optimal. It does get rid of the multiplication though:

$$ J( \mathcal{D}, W, b) = \sum_{x,y \in \mathcal{D} } \log p(y \mid x ). $$
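To see why the logarithm matters numerically, consider this small numpy illustration (the probabilities here are made up):

import numpy as np

# p(y | x) at the correct label for 1000 hypothetical training examples
probs = np.ones(1000) * 1e-4

print probs.prod()           # underflows to 0.0 in double precision
print np.log(probs).sum()    # about -9210.3, computed without underflow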

Many different algorithms can maximize $J$. In this tutorial, we will use an algorithm called nonlinear conjugate gradient descent to minimize $-J$. For softmax regression, minimizing $-J$ is a convex optimization problem, so any reasonable optimization algorithm should find the same solution. The choice of nonlinear conjugate gradient is mostly to demonstrate that feature of pylearn2.

One problem with maximum likelihood estimation is that it can suffer from a problem called overfitting. The basic intuition is that the model can memorize patterns in the training set that are specific to the training examples, i.e., patterns that are spurious and not indicative of the correct way to categorize new, previously unseen inputs. One way to prevent this is to use early stopping. Most optimization methods are iterative, in that they try out several values of $W$ and $b$, gradually looking for the best one. Early stopping refers to stopping this search before finding the absolute best values on the training set. If we start with $W$ close to the origin, then stopping early means that $W$ will not travel as far from the origin as it would if we ran the optimization procedure to completion. Early stopping corresponds to assuming that the correlations between input features and output categories are not as strong as pure maximum likelihood estimation would determine them to be.

In order to pick the right point in time to stop, we divide the training set into two subsets: one that we will actually train on, and one that we use to see how well the model is generalizing to new data, called the "validation set." The idea is to return the model that does the best at classifying the validation set, rather than the model that assigns the highest probability to the training set.
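As a rough sketch of the procedure just described (illustrative python pseudocode, not the pylearn2 implementation; train_step and valid_error are hypothetical stand-ins for one optimization pass and for measuring validation misclassification):

import copy

def fit_with_early_stopping(params, train_step, valid_error, max_epochs=100):
    best_error = float('inf')
    best_params = copy.deepcopy(params)
    for epoch in range(max_epochs):
        params = train_step(params)        # one optimization pass on the training subset
        error = valid_error(params)        # misclassification rate on the validation set
        if error < best_error:
            best_error = error
            best_params = copy.deepcopy(params)
    return best_params                     # return the best model seen, not the last one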

Part 2: How to use pylearn2 to do softmax regression

Now that we've described the theory of what we're going to do, it's time to do it! This part describes how to use pylearn2 to run the algorithms described above.

How to load data in pylearn2, and specifically how to load the MNIST dataset

To train a model in pylearn2, we need to construct several objects specifying how to train it. There are two ways to do this. One is to explicitly construct them as python objects. The other is to specify them using YAML strings. The latter option is better supported at present, so we will use that.

In this ipython notebook, we will construct YAML strings in python. Most of the time when I use pylearn2, I write the YAML string out to disk, then run pylearn2's train.py script on that YAML file. Since this tutorial is an ipython notebook, though, it's easier to just do everything in python.

YAML allows the definition of custom tags that specify how the YAML string should be deserialized, and pylearn2 defines a few of those. One of them is the !obj tag, which specifies that what follows is a full specification of a python callable that returns an object. Usually this will just be a class name.
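Conceptually, deserializing an !obj tag amounts to importing the named callable and calling it with the YAML mapping as keyword arguments. The following is only a rough sketch of that idea, not pylearn2's actual parser code:

import importlib

def construct(dotted_name, **kwargs):
    # e.g. construct('pylearn2.datasets.mnist.MNIST', which_set='train')
    module_name, attr = dotted_name.rsplit('.', 1)
    callable_obj = getattr(importlib.import_module(module_name), attr)
    return callable_obj(**kwargs)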

In this tutorial, we will train our model on the MNIST dataset. In order to load that, we use an !obj tag to construct an instance of pylearn2's MNIST class, found in the pylearn2.datasets.mnist python module.

We can pass arguments to the MNIST class's constructor by defining a dictionary mapping argument names to their values.

The MNIST dataset is split into a training set and a test set. Since the object we are constructing now will be used as the training set, we must specify that we want to load the training data. We can use the 'which_set' argument to do this.

The 'one_hot' argument is a boolean variable. We want to set it to true, to specify that the 'y' variable should be represented with a 'one-hot' representation, where $y$ is a vector of dimension $k$, and category $i$ is represented by setting $y_i=1$ and all other elements of $y$ to 0. This is a relatively arbitrary formatting detail to specify, and to set it correctly you need to know that that is the format that pylearn2's MLP code expects. In future versions of pylearn2 we plan for it to be unnecessary to specify this detail.
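For example, with $k=10$ classes, the label $y=7$ in one-hot format looks like this (a quick numpy illustration):

import numpy as np

k = 10
y = 7
one_hot = np.zeros(k)
one_hot[y] = 1.
print one_hot    # [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]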

Finally, as described above, we will use early stopping, so we shouldn't train on the entire training set. The MNIST training set contains 60,000 examples. We use the 'start' and 'stop' arguments to train on the first 50,000 of them.


In [2]:
dataset = """!obj:pylearn2.datasets.mnist.MNIST {
        which_set: 'train',
        one_hot: 1,
        start: 0,
        stop: 50000
    }"""

How to configure the pylearn2 SoftmaxRegression model

Next, we need to specify an object representing the model to be trained. To do this, we need to make an instance of the SoftmaxRegression class defined in pylearn2.models.softmax_regression. We need to specify a few details of how to configure the model.

The "nvis" argument stands for "number of visible units." In neural network terminology, the "visible units" are the pieces of data that the model gets to observe. This argument is asking for the dimension of $x$. If we didn't want $x$ to be a vector, there is another more flexible way of configuring the input of the model, but for vector-based models, "nvis" is the easiest piece of the API to use. The MNIST dataset contains 28x28 grayscale images, not vectors, so the SoftmaxRegression model will ask pylearn2 to flatten the images into vectors. That means it will receive a vector with 28*28=784 elements.

We also need to specify how many categories or classes there are with the "n_classes" argument.

Finally, the matrix $W$ will be randomly initialized. There are a few different initialization schemes in pylearn2. Specifying the "irange" argument will make each element of $W$ be initialized from $U(-\text{irange}, \text{irange})$. Since softmax regression training is a convex optimization problem, we can set irange to 0 to initialize all of $W$ to 0. (Some other models require that the different columns of $W$ differ from each other initially in order for them to train correctly.)


In [3]:
model = """!obj:pylearn2.models.softmax_regression.SoftmaxRegression {
    n_classes: 10,
    irange: 0.,
    nvis: 784,
}"""

How to set up a pylearn2 training algorithm

Next, we need to specify a training algorithm to maximize the log likelihood with. (Actually, we will minimize the negative log likelihood, because all of pylearn2's optimization algorithms are written in terms of minimizing a cost function. theano will optimize out any double-negation that results, so this has no effect on the runtime of the algorithm.)

We can use an !obj tag to load pylearn2's BGD class. BGD stands for batch gradient descent. It is a class designed to train models by moving in the direction of the gradient of the objective function applied to large batches of examples.

The "batch_size" argument determines how many examples the BGD class will act on at one time. This should be a fairly large number so that the updates are more likely to generalize to other batches.

Setting "line_search_mode" to exhaustive means that the BGD class will try to binary search for the best possible point along the direction of the gradient of the cost function, rather than just trying out a few pre-selected step sizes. This implements the method of steepest descent.

"conjugate" is a boolean flag. By setting it to 1, we make BGD modify the gradient directions to preserve conjugacy prior to doing the line search. This implements nonlinear conjugate gradient descent.

During training, we will keep track of several different quantities of interest to the experimenter, such as the number of examples that are classified correctly, the objective function value, etc. The quantities to track are determined by the model class and by the training algorithm class. These quantities are referred to as "channels" and the act of tracking them is called "monitoring" in pylearn2 terms. In order to track them, we need to specify a monitoring dataset. In this case, we use a dictionary to make multiple, named monitoring datasets.

We use "train" to define the training set. The is YAML syntax saying to reference an object defined elsewhere in the YAML file. Later, when we specify which dataset to train on, we will define this reference.

Finally, the BGD algorithm needs to know when to stop training. We therefore give it a "termination criterion." In this case, we use a monitor-based termination criterion that says to stop when too little progress is being made at reducing the value tracked by one of the monitoring channels. Specifically, we use "valid_y_misclass", which is the rate at which the model mislabels examples on the validation set. MonitorBased has some other arguments that we don't bother to specify here; we just use the defaults. These defaults will result in the training algorithm running for a while after the lowest value of the validation error has been reached, to make sure that we don't stop too soon just because the validation error randomly bounced upward for a few epochs.

You might expect the BGD algorithm to need to be told what objective function to minimize. It turns out that if the user doesn't say what objective function to minimize, BGD will ask the model for some default objective function, by calling the model's "get_default_cost" method. In this case, the SoftmaxRegression model provides the negative log likelihood as the default objective function.


In [4]:
algorithm = """!obj:pylearn2.training_algorithms.bgd.BGD {
        batch_size: 10000,
        line_search_mode: 'exhaustive',
        conjugate: 1,
        monitoring_dataset:
            {
                'train' : *train,
                'valid' : !obj:pylearn2.datasets.mnist.MNIST {
                              which_set: 'train',
                              one_hot: 1,
                              start: 50000,
                              stop:  60000
                          },
                'test'  : !obj:pylearn2.datasets.mnist.MNIST {
                              which_set: 'test',
                              one_hot: 1,
                          }
            },
        termination_criterion: !obj:pylearn2.termination_criteria.MonitorBased {
            channel_name: "valid_y_misclass"
        }
    }"""

How to run training with the pylearn2 train script, and interpret its output

We now use a pylearn2 Train object to represent the training problem.

We use "&train" here to define the reference used with the "*train" line in the algorithm section.

We use the python %(varname)s syntax and the locals() dictionary to paste the dataset, model, and algorithm strings from the earlier sections into this final string here.

As specified in the previous section, the model will keep training for a while after the lowest validation error is reached, just to make sure that the validation error won't start going down again. However, the final model we would like to return is the one with the lowest validation error. We add an "extension" to the Train object here. Extensions are objects with callbacks that get triggered at different points in time, such as the end of a training epoch. In this case, we use the MonitorBasedSaveBest extension. Whenever the monitoring channels are updated, MonitorBasedSaveBest will check if a specific channel decreased, and if so, it will save a copy of the model. This way, the best model is saved at the end. Here we save the model with the lowest validation set error to "softmax_regression_best.pkl."


In [5]:
train = """!obj:pylearn2.train.Train {
    dataset: &train %(dataset)s,
    model: %(model)s,
    algorithm: %(algorithm)s,
    extensions: [
        !obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
             channel_name: 'valid_y_misclass',
             save_path: "softmax_regression_best.pkl"
        },
    ],
    save_path: "softmax_regression.pkl",
    save_freq: 1
}""" % locals()

Execute the cell below to see the final YAML string.


In [6]:
print train


!obj:pylearn2.train.Train {
    dataset: &train !obj:pylearn2.datasets.mnist.MNIST {
        which_set: 'train',
        one_hot: 1,
        start: 0,
        stop: 50000
    },
    model: !obj:pylearn2.models.softmax_regression.SoftmaxRegression {
    n_classes: 10,
    irange: 0.,
    nvis: 784,
},
    algorithm: !obj:pylearn2.training_algorithms.bgd.BGD {
        batch_size: 10000,
        line_search_mode: 'exhaustive',
        conjugate: 1,
        monitoring_dataset:
            {
                'train' : *train,
                'valid' : !obj:pylearn2.datasets.mnist.MNIST {
                              which_set: 'train',
                              one_hot: 1,
                              start: 50000,
                              stop:  60000
                          },
                'test'  : !obj:pylearn2.datasets.mnist.MNIST {
                              which_set: 'test',
                              one_hot: 1,
                          }
            },
        termination_criterion: !obj:pylearn2.termination_criteria.MonitorBased {
            channel_name: "valid_y_misclass"
        }
    },
    extensions: [
        !obj:pylearn2.train_extensions.best_params.MonitorBasedSaveBest {
             channel_name: 'valid_y_misclass',
             save_path: "softmax_regression_best.pkl"
        },
    ],
    save_path: "softmax_regression.pkl",
    save_freq: 1
}

Now, we use pylearn2's yaml_parse.load to construct the Train object, and run its main loop. The same thing could be accomplished by running pylearn2's train.py script on a file containing the yaml string.
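If you prefer the script route, a minimal sketch (the filename below is an arbitrary choice) is to write the YAML string to disk and then invoke train.py on it:

# write the YAML string built above to a file
with open('softmax_regression.yaml', 'w') as f:
    f.write(train)
# then, from the shell:
#   train.py softmax_regression.yaml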

Execute the next cell to train the model. This will take a few minutes, and it will print out output periodically as it runs.


In [7]:
from pylearn2.config import yaml_parse
train = yaml_parse.load(train)
train.main_loop()


compiling begin_record_entry...
/u/goodfeli/pylearn2/models/mlp.py:36: UserWarning: MLP changing the recursion limit.
  warnings.warn("MLP changing the recursion limit.")
compiling begin_record_entry done. Time elapsed: 0.149942 seconds
Monitored channels: 
	ave_grad_mult
	ave_grad_size
	ave_step_size
	test_objective
	test_y_col_norms_max
	test_y_col_norms_mean
	test_y_col_norms_min
	test_y_max_max_class
	test_y_mean_max_class
	test_y_min_max_class
	test_y_misclass
	test_y_nll
	test_y_row_norms_max
	test_y_row_norms_mean
	test_y_row_norms_min
	train_objective
	train_y_col_norms_max
	train_y_col_norms_mean
	train_y_col_norms_min
	train_y_max_max_class
	train_y_mean_max_class
	train_y_min_max_class
	train_y_misclass
	train_y_nll
	train_y_row_norms_max
	train_y_row_norms_mean
	train_y_row_norms_min
	valid_objective
	valid_y_col_norms_max
	valid_y_col_norms_mean
	valid_y_col_norms_min
	valid_y_max_max_class
	valid_y_mean_max_class
	valid_y_min_max_class
	valid_y_misclass
	valid_y_nll
	valid_y_row_norms_max
	valid_y_row_norms_mean
	valid_y_row_norms_min
Compiling accum...
graph size: 54
graph size: 51
graph size: 51
Compiling accum done. Time elapsed: 2.896270 seconds
Monitoring step:
	Epochs seen: 0
	Batches seen: 0
	Examples seen: 0
	ave_grad_mult: 0.0
	ave_grad_size: 0.0
	ave_step_size: 0.0
	test_objective: 2.3027
	test_y_col_norms_max: 0.0
	test_y_col_norms_mean: 0.0
	test_y_col_norms_min: 0.0
	test_y_max_max_class: 0.1
	test_y_mean_max_class: 0.0999903
	test_y_min_max_class: 0.1
	test_y_misclass: 0.902
	test_y_nll: 2.3027
	test_y_row_norms_max: 0.0
	test_y_row_norms_mean: 0.0
	test_y_row_norms_min: 0.0
	train_objective: 2.3027
	train_y_col_norms_max: 0.0
	train_y_col_norms_mean: 0.0
	train_y_col_norms_min: 0.0
	train_y_max_max_class: 0.1
	train_y_mean_max_class: 0.0999903
	train_y_min_max_class: 0.1
	train_y_misclass: 0.90136
	train_y_nll: 2.3027
	train_y_row_norms_max: 0.0
	train_y_row_norms_mean: 0.0
	train_y_row_norms_min: 0.0
	valid_objective: 2.3027
	valid_y_col_norms_max: 0.0
	valid_y_col_norms_mean: 0.0
	valid_y_col_norms_min: 0.0
	valid_y_max_max_class: 0.1
	valid_y_mean_max_class: 0.0999903
	valid_y_min_max_class: 0.1
	valid_y_misclass: 0.9009
	valid_y_nll: 2.3027
	valid_y_row_norms_max: 0.0
	valid_y_row_norms_mean: 0.0
	valid_y_row_norms_min: 0.0
Time this epoch: 34.597072 seconds
Monitoring step:
	Epochs seen: 1
	Batches seen: 5
	Examples seen: 50000
	ave_grad_mult: 2.52791
	ave_grad_size: 0.69593
	ave_step_size: 1.82697
	test_objective: 0.302061
	test_y_col_norms_max: 3.20579
	test_y_col_norms_mean: 2.89182
	test_y_col_norms_min: 2.18949
	test_y_max_max_class: 0.999996
	test_y_mean_max_class: 0.88234
	test_y_min_max_class: 0.190331
	test_y_misclass: 0.0843
	test_y_nll: 0.302061
	test_y_row_norms_max: 0.887513
	test_y_row_norms_mean: 0.243902
	test_y_row_norms_min: 0.0
	train_objective: 0.313987
	train_y_col_norms_max: 3.20579
	train_y_col_norms_mean: 2.89182
	train_y_col_norms_min: 2.18949
	train_y_max_max_class: 0.999997
	train_y_mean_max_class: 0.87722
	train_y_min_max_class: 0.209529
	train_y_misclass: 0.08792
	train_y_nll: 0.313987
	train_y_row_norms_max: 0.887513
	train_y_row_norms_mean: 0.243902
	train_y_row_norms_min: 0.0
	valid_objective: 0.295637
	valid_y_col_norms_max: 3.20579
	valid_y_col_norms_mean: 2.89182
	valid_y_col_norms_min: 2.18949
	valid_y_max_max_class: 0.999999
	valid_y_mean_max_class: 0.884418
	valid_y_min_max_class: 0.193732
	valid_y_misclass: 0.081
	valid_y_nll: 0.295637
	valid_y_row_norms_max: 0.887513
	valid_y_row_norms_mean: 0.243902
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.010933 seconds
Time this epoch: 34.953455 seconds
Monitoring step:
	Epochs seen: 2
	Batches seen: 10
	Examples seen: 100000
	ave_grad_mult: 2.66792
	ave_grad_size: 0.443053
	ave_step_size: 1.16596
	test_objective: 0.285252
	test_y_col_norms_max: 3.96882
	test_y_col_norms_mean: 3.50975
	test_y_col_norms_min: 2.68422
	test_y_max_max_class: 0.999999
	test_y_mean_max_class: 0.895905
	test_y_min_max_class: 0.175118
	test_y_misclass: 0.0787
	test_y_nll: 0.285252
	test_y_row_norms_max: 1.04548
	test_y_row_norms_mean: 0.305209
	test_y_row_norms_min: 0.0
	train_objective: 0.288429
	train_y_col_norms_max: 3.96882
	train_y_col_norms_mean: 3.50975
	train_y_col_norms_min: 2.68422
	train_y_max_max_class: 0.999999
	train_y_mean_max_class: 0.891604
	train_y_min_max_class: 0.221239
	train_y_misclass: 0.08052
	train_y_nll: 0.288429
	train_y_row_norms_max: 1.04548
	train_y_row_norms_mean: 0.305209
	train_y_row_norms_min: 0.0
	valid_objective: 0.276159
	valid_y_col_norms_max: 3.96882
	valid_y_col_norms_mean: 3.50975
	valid_y_col_norms_min: 2.68422
	valid_y_max_max_class: 0.999999
	valid_y_mean_max_class: 0.898079
	valid_y_min_max_class: 0.219191
	valid_y_misclass: 0.0767
	valid_y_nll: 0.276159
	valid_y_row_norms_max: 1.04548
	valid_y_row_norms_mean: 0.305209
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.011647 seconds
Time this epoch: 34.658676 seconds
Monitoring step:
	Epochs seen: 3
	Batches seen: 15
	Examples seen: 150000
	ave_grad_mult: 2.69474
	ave_grad_size: 0.287667
	ave_step_size: 0.757921
	test_objective: 0.279283
	test_y_col_norms_max: 4.41492
	test_y_col_norms_mean: 3.87252
	test_y_col_norms_min: 3.01571
	test_y_max_max_class: 0.999999
	test_y_mean_max_class: 0.901172
	test_y_min_max_class: 0.225032
	test_y_misclass: 0.0791
	test_y_nll: 0.279283
	test_y_row_norms_max: 1.12576
	test_y_row_norms_mean: 0.341641
	test_y_row_norms_min: 0.0
	train_objective: 0.277765
	train_y_col_norms_max: 4.41492
	train_y_col_norms_mean: 3.87252
	train_y_col_norms_min: 3.01571
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.897146
	train_y_min_max_class: 0.218673
	train_y_misclass: 0.07842
	train_y_nll: 0.277765
	train_y_row_norms_max: 1.12576
	train_y_row_norms_mean: 0.341641
	train_y_row_norms_min: 0.0
	valid_objective: 0.27224
	valid_y_col_norms_max: 4.41492
	valid_y_col_norms_mean: 3.87252
	valid_y_col_norms_min: 3.01571
	valid_y_max_max_class: 0.999999
	valid_y_mean_max_class: 0.90255
	valid_y_min_max_class: 0.227596
	valid_y_misclass: 0.077
	valid_y_nll: 0.27224
	valid_y_row_norms_max: 1.12576
	valid_y_row_norms_mean: 0.341641
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.012716 seconds
Time this epoch: 34.421453 seconds
Monitoring step:
	Epochs seen: 4
	Batches seen: 20
	Examples seen: 200000
	ave_grad_mult: 2.81558
	ave_grad_size: 0.192648
	ave_step_size: 0.51311
	test_objective: 0.277856
	test_y_col_norms_max: 4.74045
	test_y_col_norms_mean: 4.18956
	test_y_col_norms_min: 3.28184
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.904732
	test_y_min_max_class: 0.240714
	test_y_misclass: 0.0789
	test_y_nll: 0.277856
	test_y_row_norms_max: 1.19456
	test_y_row_norms_mean: 0.373561
	test_y_row_norms_min: 0.0
	train_objective: 0.271673
	train_y_col_norms_max: 4.74045
	train_y_col_norms_mean: 4.18956
	train_y_col_norms_min: 3.28184
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.900819
	train_y_min_max_class: 0.223979
	train_y_misclass: 0.07656
	train_y_nll: 0.271673
	train_y_row_norms_max: 1.19456
	train_y_row_norms_mean: 0.373561
	train_y_row_norms_min: 0.0
	valid_objective: 0.267586
	valid_y_col_norms_max: 4.74045
	valid_y_col_norms_mean: 4.18956
	valid_y_col_norms_min: 3.28184
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.906228
	valid_y_min_max_class: 0.243919
	valid_y_misclass: 0.0739
	valid_y_nll: 0.267586
	valid_y_row_norms_max: 1.19456
	valid_y_row_norms_mean: 0.373561
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.012976 seconds
Time this epoch: 34.708221 seconds
Monitoring step:
	Epochs seen: 5
	Batches seen: 25
	Examples seen: 250000
	ave_grad_mult: 2.8394
	ave_grad_size: 0.135578
	ave_step_size: 0.363796
	test_objective: 0.273552
	test_y_col_norms_max: 5.05031
	test_y_col_norms_mean: 4.45481
	test_y_col_norms_min: 3.50675
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.908396
	test_y_min_max_class: 0.199031
	test_y_misclass: 0.0786
	test_y_nll: 0.273552
	test_y_row_norms_max: 1.2757
	test_y_row_norms_mean: 0.399997
	test_y_row_norms_min: 0.0
	train_objective: 0.265903
	train_y_col_norms_max: 5.05031
	train_y_col_norms_mean: 4.45481
	train_y_col_norms_min: 3.50675
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.90448
	train_y_min_max_class: 0.221457
	train_y_misclass: 0.0746
	train_y_nll: 0.265903
	train_y_row_norms_max: 1.2757
	train_y_row_norms_mean: 0.399997
	train_y_row_norms_min: 0.0
	valid_objective: 0.262761
	valid_y_col_norms_max: 5.05031
	valid_y_col_norms_mean: 4.45481
	valid_y_col_norms_min: 3.50675
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.909874
	valid_y_min_max_class: 0.22915
	valid_y_misclass: 0.0735
	valid_y_nll: 0.262761
	valid_y_row_norms_max: 1.2757
	valid_y_row_norms_mean: 0.399997
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.012991 seconds
Time this epoch: 34.953659 seconds
Monitoring step:
	Epochs seen: 6
	Batches seen: 30
	Examples seen: 300000
	ave_grad_mult: 2.86387
	ave_grad_size: 0.0998371
	ave_step_size: 0.271333
	test_objective: 0.27373
	test_y_col_norms_max: 5.3073
	test_y_col_norms_mean: 4.67881
	test_y_col_norms_min: 3.6952
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.909192
	test_y_min_max_class: 0.250221
	test_y_misclass: 0.0765
	test_y_nll: 0.27373
	test_y_row_norms_max: 1.34881
	test_y_row_norms_mean: 0.423036
	test_y_row_norms_min: 0.0
	train_objective: 0.263285
	train_y_col_norms_max: 5.3073
	train_y_col_norms_mean: 4.67881
	train_y_col_norms_min: 3.6952
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.90499
	train_y_min_max_class: 0.22298
	train_y_misclass: 0.0735
	train_y_nll: 0.263285
	train_y_row_norms_max: 1.34881
	train_y_row_norms_mean: 0.423036
	train_y_row_norms_min: 0.0
	valid_objective: 0.264057
	valid_y_col_norms_max: 5.3073
	valid_y_col_norms_mean: 4.67881
	valid_y_col_norms_min: 3.6952
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.910245
	valid_y_min_max_class: 0.248311
	valid_y_misclass: 0.0739
	valid_y_nll: 0.264057
	valid_y_row_norms_max: 1.34881
	valid_y_row_norms_mean: 0.423036
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.010976 seconds
Time this epoch: 35.551351 seconds
Monitoring step:
	Epochs seen: 7
	Batches seen: 35
	Examples seen: 350000
	ave_grad_mult: 2.97239
	ave_grad_size: 0.0791985
	ave_step_size: 0.22217
	test_objective: 0.272212
	test_y_col_norms_max: 5.54306
	test_y_col_norms_mean: 4.90184
	test_y_col_norms_min: 3.89849
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.912612
	test_y_min_max_class: 0.221462
	test_y_misclass: 0.0779
	test_y_nll: 0.272212
	test_y_row_norms_max: 1.41566
	test_y_row_norms_mean: 0.445152
	test_y_row_norms_min: 0.0
	train_objective: 0.261363
	train_y_col_norms_max: 5.54306
	train_y_col_norms_mean: 4.90184
	train_y_col_norms_min: 3.89849
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.908657
	train_y_min_max_class: 0.23405
	train_y_misclass: 0.07314
	train_y_nll: 0.261363
	train_y_row_norms_max: 1.41566
	train_y_row_norms_mean: 0.445152
	train_y_row_norms_min: 0.0
	valid_objective: 0.264774
	valid_y_col_norms_max: 5.54306
	valid_y_col_norms_mean: 4.90184
	valid_y_col_norms_min: 3.89849
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.912274
	valid_y_min_max_class: 0.227835
	valid_y_misclass: 0.074
	valid_y_nll: 0.264774
	valid_y_row_norms_max: 1.41566
	valid_y_row_norms_mean: 0.445152
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.011809 seconds
Time this epoch: 35.177995 seconds
Monitoring step:
	Epochs seen: 8
	Batches seen: 40
	Examples seen: 400000
	ave_grad_mult: 3.02246
	ave_grad_size: 0.0664074
	ave_step_size: 0.189654
	test_objective: 0.270816
	test_y_col_norms_max: 5.77828
	test_y_col_norms_mean: 5.09737
	test_y_col_norms_min: 4.03863
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.911329
	test_y_min_max_class: 0.202355
	test_y_misclass: 0.0767
	test_y_nll: 0.270816
	test_y_row_norms_max: 1.52357
	test_y_row_norms_mean: 0.465021
	test_y_row_norms_min: 0.0
	train_objective: 0.25662
	train_y_col_norms_max: 5.77828
	train_y_col_norms_mean: 5.09738
	train_y_col_norms_min: 4.03863
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.907957
	train_y_min_max_class: 0.224183
	train_y_misclass: 0.07108
	train_y_nll: 0.25662
	train_y_row_norms_max: 1.52357
	train_y_row_norms_mean: 0.465021
	train_y_row_norms_min: 0.0
	valid_objective: 0.26108
	valid_y_col_norms_max: 5.77828
	valid_y_col_norms_mean: 5.09737
	valid_y_col_norms_min: 4.03863
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.912884
	valid_y_min_max_class: 0.238637
	valid_y_misclass: 0.0716
	valid_y_nll: 0.26108
	valid_y_row_norms_max: 1.52357
	valid_y_row_norms_mean: 0.465021
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.023402 seconds
Time this epoch: 34.887850 seconds
Monitoring step:
	Epochs seen: 9
	Batches seen: 45
	Examples seen: 450000
	ave_grad_mult: 3.17279
	ave_grad_size: 0.0580917
	ave_step_size: 0.17244
	test_objective: 0.270429
	test_y_col_norms_max: 6.00532
	test_y_col_norms_mean: 5.31085
	test_y_col_norms_min: 4.22847
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.914839
	test_y_min_max_class: 0.23326
	test_y_misclass: 0.0747
	test_y_nll: 0.270429
	test_y_row_norms_max: 1.62754
	test_y_row_norms_mean: 0.485995
	test_y_row_norms_min: 0.0
	train_objective: 0.255312
	train_y_col_norms_max: 6.00532
	train_y_col_norms_mean: 5.31085
	train_y_col_norms_min: 4.22847
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.911416
	train_y_min_max_class: 0.243246
	train_y_misclass: 0.07112
	train_y_nll: 0.255312
	train_y_row_norms_max: 1.62754
	train_y_row_norms_mean: 0.485995
	train_y_row_norms_min: 0.0
	valid_objective: 0.260996
	valid_y_col_norms_max: 6.00532
	valid_y_col_norms_mean: 5.31085
	valid_y_col_norms_min: 4.22847
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.916162
	valid_y_min_max_class: 0.2421
	valid_y_misclass: 0.0705
	valid_y_nll: 0.260996
	valid_y_row_norms_max: 1.62754
	valid_y_row_norms_mean: 0.485995
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.025014 seconds
Time this epoch: 34.877525 seconds
Monitoring step:
	Epochs seen: 10
	Batches seen: 50
	Examples seen: 500000
	ave_grad_mult: 3.25453
	ave_grad_size: 0.0529143
	ave_step_size: 0.164737
	test_objective: 0.272896
	test_y_col_norms_max: 6.1855
	test_y_col_norms_mean: 5.48418
	test_y_col_norms_min: 4.3501
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.912609
	test_y_min_max_class: 0.239542
	test_y_misclass: 0.0769
	test_y_nll: 0.272896
	test_y_row_norms_max: 1.7055
	test_y_row_norms_mean: 0.503506
	test_y_row_norms_min: 0.0
	train_objective: 0.25444
	train_y_col_norms_max: 6.1855
	train_y_col_norms_mean: 5.48418
	train_y_col_norms_min: 4.3501
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.909344
	train_y_min_max_class: 0.242918
	train_y_misclass: 0.07144
	train_y_nll: 0.25444
	train_y_row_norms_max: 1.7055
	train_y_row_norms_mean: 0.503506
	train_y_row_norms_min: 0.0
	valid_objective: 0.262689
	valid_y_col_norms_max: 6.1855
	valid_y_col_norms_mean: 5.48418
	valid_y_col_norms_min: 4.3501
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.91362
	valid_y_min_max_class: 0.252801
	valid_y_misclass: 0.0731
	valid_y_nll: 0.262689
	valid_y_row_norms_max: 1.7055
	valid_y_row_norms_mean: 0.503506
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.025058 seconds
Time this epoch: 34.460117 seconds
Monitoring step:
	Epochs seen: 11
	Batches seen: 55
	Examples seen: 550000
	ave_grad_mult: 3.27085
	ave_grad_size: 0.0499808
	ave_step_size: 0.158432
	test_objective: 0.270295
	test_y_col_norms_max: 6.39294
	test_y_col_norms_mean: 5.65277
	test_y_col_norms_min: 4.55345
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.914152
	test_y_min_max_class: 0.205881
	test_y_misclass: 0.076
	test_y_nll: 0.270295
	test_y_row_norms_max: 1.7773
	test_y_row_norms_mean: 0.520371
	test_y_row_norms_min: 0.0
	train_objective: 0.251438
	train_y_col_norms_max: 6.39294
	train_y_col_norms_mean: 5.65277
	train_y_col_norms_min: 4.55345
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.910548
	train_y_min_max_class: 0.221637
	train_y_misclass: 0.06986
	train_y_nll: 0.251438
	train_y_row_norms_max: 1.7773
	train_y_row_norms_mean: 0.520371
	train_y_row_norms_min: 0.0
	valid_objective: 0.260282
	valid_y_col_norms_max: 6.39294
	valid_y_col_norms_mean: 5.65277
	valid_y_col_norms_min: 4.55345
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.915041
	valid_y_min_max_class: 0.255092
	valid_y_misclass: 0.0709
	valid_y_nll: 0.260282
	valid_y_row_norms_max: 1.7773
	valid_y_row_norms_mean: 0.520371
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.021758 seconds
Time this epoch: 34.530042 seconds
Monitoring step:
	Epochs seen: 12
	Batches seen: 60
	Examples seen: 600000
	ave_grad_mult: 3.20453
	ave_grad_size: 0.0482614
	ave_step_size: 0.151155
	test_objective: 0.269378
	test_y_col_norms_max: 6.56634
	test_y_col_norms_mean: 5.81159
	test_y_col_norms_min: 4.65254
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.916179
	test_y_min_max_class: 0.251211
	test_y_misclass: 0.074
	test_y_nll: 0.269378
	test_y_row_norms_max: 1.82656
	test_y_row_norms_mean: 0.536196
	test_y_row_norms_min: 0.0
	train_objective: 0.250769
	train_y_col_norms_max: 6.56634
	train_y_col_norms_mean: 5.81159
	train_y_col_norms_min: 4.65254
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.912095
	train_y_min_max_class: 0.230697
	train_y_misclass: 0.07028
	train_y_nll: 0.250769
	train_y_row_norms_max: 1.82656
	train_y_row_norms_mean: 0.536196
	train_y_row_norms_min: 0.0
	valid_objective: 0.261866
	valid_y_col_norms_max: 6.56634
	valid_y_col_norms_mean: 5.81159
	valid_y_col_norms_min: 4.65254
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.916839
	valid_y_min_max_class: 0.23959
	valid_y_misclass: 0.0718
	valid_y_nll: 0.261866
	valid_y_row_norms_max: 1.82656
	valid_y_row_norms_mean: 0.536196
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.023884 seconds
Time this epoch: 35.033513 seconds
Monitoring step:
	Epochs seen: 13
	Batches seen: 65
	Examples seen: 650000
	ave_grad_mult: 3.3601
	ave_grad_size: 0.0464985
	ave_step_size: 0.152934
	test_objective: 0.272599
	test_y_col_norms_max: 6.76432
	test_y_col_norms_mean: 5.97424
	test_y_col_norms_min: 4.79433
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.915948
	test_y_min_max_class: 0.229139
	test_y_misclass: 0.0762
	test_y_nll: 0.272599
	test_y_row_norms_max: 1.9107
	test_y_row_norms_mean: 0.552268
	test_y_row_norms_min: 0.0
	train_objective: 0.249775
	train_y_col_norms_max: 6.76432
	train_y_col_norms_mean: 5.97424
	train_y_col_norms_min: 4.79433
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.912226
	train_y_min_max_class: 0.234442
	train_y_misclass: 0.07028
	train_y_nll: 0.249775
	train_y_row_norms_max: 1.9107
	train_y_row_norms_mean: 0.552268
	train_y_row_norms_min: 0.0
	valid_objective: 0.264216
	valid_y_col_norms_max: 6.76432
	valid_y_col_norms_mean: 5.97424
	valid_y_col_norms_min: 4.79433
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.916849
	valid_y_min_max_class: 0.245958
	valid_y_misclass: 0.073
	valid_y_nll: 0.264216
	valid_y_row_norms_max: 1.9107
	valid_y_row_norms_mean: 0.552268
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.024674 seconds
Time this epoch: 34.735592 seconds
Monitoring step:
	Epochs seen: 14
	Batches seen: 70
	Examples seen: 700000
	ave_grad_mult: 3.40292
	ave_grad_size: 0.0455894
	ave_step_size: 0.153333
	test_objective: 0.270963
	test_y_col_norms_max: 6.95548
	test_y_col_norms_mean: 6.13357
	test_y_col_norms_min: 4.92062
	test_y_max_max_class: 1.0
	test_y_mean_max_class: 0.916953
	test_y_min_max_class: 0.23508
	test_y_misclass: 0.0766
	test_y_nll: 0.270963
	test_y_row_norms_max: 1.97542
	test_y_row_norms_mean: 0.56806
	test_y_row_norms_min: 0.0
	train_objective: 0.247751
	train_y_col_norms_max: 6.95548
	train_y_col_norms_mean: 6.13357
	train_y_col_norms_min: 4.92062
	train_y_max_max_class: 1.0
	train_y_mean_max_class: 0.913252
	train_y_min_max_class: 0.245037
	train_y_misclass: 0.06932
	train_y_nll: 0.247751
	train_y_row_norms_max: 1.97542
	train_y_row_norms_mean: 0.56806
	train_y_row_norms_min: 0.0
	valid_objective: 0.262284
	valid_y_col_norms_max: 6.95548
	valid_y_col_norms_mean: 6.13357
	valid_y_col_norms_min: 4.92062
	valid_y_max_max_class: 1.0
	valid_y_mean_max_class: 0.917285
	valid_y_min_max_class: 0.257865
	valid_y_misclass: 0.0714
	valid_y_nll: 0.262284
	valid_y_row_norms_max: 1.97542
	valid_y_row_norms_mean: 0.56806
	valid_y_row_norms_min: 0.0
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.040456 seconds
Saving to softmax_regression.pkl...
Saving to softmax_regression.pkl done. Time elapsed: 0.020383 seconds

As the model trained, it should have printed out progress messages. Most of these are the values of the various channels being monitored throughout training.

How to analyze the results of training

We can use the print_monitor script to print the last monitoring entry of a saved model. By running it on "softmax_regression_best.pkl", we can see the performance of the model at the point where it did the best on the validation set. We see by executing the next cell (the ! mark tells ipython to run a shell command) that the test set misclassification rate is 0.0747, obtained after training for 9 epochs.


In [8]:
!print_monitor.py softmax_regression_best.pkl


/u/goodfeli/pylearn2/models/mlp.py:36: UserWarning: MLP changing the recursion limit.
  warnings.warn("MLP changing the recursion limit.")
epochs seen:  9
time trained:  337.832020998
ave_grad_mult : 3.17279
ave_grad_size : 0.0580917
ave_step_size : 0.17244
test_objective : 0.270429
test_y_col_norms_max : 6.00532
test_y_col_norms_mean : 5.31085
test_y_col_norms_min : 4.22847
test_y_max_max_class : 1.0
test_y_mean_max_class : 0.914839
test_y_min_max_class : 0.23326
test_y_misclass : 0.0747
test_y_nll : 0.270429
test_y_row_norms_max : 1.62754
test_y_row_norms_mean : 0.485995
test_y_row_norms_min : 0.0
train_objective : 0.255312
train_y_col_norms_max : 6.00532
train_y_col_norms_mean : 5.31085
train_y_col_norms_min : 4.22847
train_y_max_max_class : 1.0
train_y_mean_max_class : 0.911416
train_y_min_max_class : 0.243246
train_y_misclass : 0.07112
train_y_nll : 0.255312
train_y_row_norms_max : 1.62754
train_y_row_norms_mean : 0.485995
train_y_row_norms_min : 0.0
valid_objective : 0.260996
valid_y_col_norms_max : 6.00532
valid_y_col_norms_mean : 5.31085
valid_y_col_norms_min : 4.22847
valid_y_max_max_class : 1.0
valid_y_mean_max_class : 0.916162
valid_y_min_max_class : 0.2421
valid_y_misclass : 0.0705
valid_y_nll : 0.260996
valid_y_row_norms_max : 1.62754
valid_y_row_norms_mean : 0.485995
valid_y_row_norms_min : 0.0

Another common way of analyzing trained models is to look at their weights. Here we use the show_weights script to visualize $W$:


In [8]:
!show_weights.py softmax_regression_best.pkl


making weights report
loading model
loading done
loading dataset...
...done
smallest enc weight magnitude: 0.0
mean enc weight magnitude: 0.122371237619
max enc weight magnitude: 1.47799
min norm:  4.22847
mean norm:  5.31085319519
max norm:  6.00532

Further reading

You can find more information on softmax regression from the following sources:

LISA lab's Deep Learning Tutorials: Classifying MNIST digits using Logistic Regression

Stanford's Unsupervised Feature Learning and Deep Learning wiki: Softmax Regression

This is by no means a complete list.