In [1]:
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics


Using gpu device 0: Tesla K40c
:0: FutureWarning: IPython widgets are experimental and may change in the future.
Welcome to the HoloViews IPython extension! (http://ioam.github.io/holoviews/)
Available magics: %compositor, %opts, %params, %view, %%labels, %%opts, %%view

Monitoring Traces

Plotting the monitoring traces to see if the model is likely to keep improving:


In [2]:
m = pylearn2.utils.serial.load(
    "/disk/scratch/neuroglycerin/models/superclasses_online_recent.pkl")

In [3]:
def make_curves(model, *args):
    """Overlay monitoring channel curves for the given channel names."""
    curves = None
    for c in args:
        channel = model.monitor.channels[c]
        # capitalise the channel name for the group label
        label = c[0].upper() + c[1:]
        if curves is None:
            curves = hl.Curve(zip(channel.example_record, channel.val_record), group=label)
        else:
            curves += hl.Curve(zip(channel.example_record, channel.val_record), group=label)
    return curves

In [9]:
nll_channels = [c for c in m.monitor.channels.keys() if 'nll' in c]

Looks like the NLL for most of these channels is still improving, following a fairly linear drop-off, so it may well keep improving if we keep running it (a rough numerical check follows the plot):


In [11]:
make_curves(m, *nll_channels)


Out[11]:
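
As a rough numerical check (a sketch; it assumes the monitor has a channel named valid_y_nll, so adjust to whatever actually appears in nll_channels), we can fit a straight line to the tail of the validation NLL trace; a negative slope suggests it is still decreasing:


In [ ]:
# Sketch: estimate the recent slope of the validation NLL trace.
# 'valid_y_nll' is an assumed channel name; pick one from nll_channels.
channel = m.monitor.channels['valid_y_nll']
examples = np.array(channel.example_record, dtype=float)
nll = np.array(channel.val_record, dtype=float)
# fit a line to the last quarter of the trace; negative slope = still improving
tail = max(2, len(nll) // 4)
slope = np.polyfit(examples[-tail:], nll[-tail:], 1)[0]
print("NLL slope over the last quarter: {:.3g} per example".format(slope))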

Plotting all the monitoring channels at the same time, in case something interesting shows up:


In [8]:
make_curves(m, *m.monitor.channels.keys())


Out[8]:

Weights Learned

We can compare the weights learned by this model with those of our most recent best model and see how they differ, although the differences may be hard to interpret. Pointing PYLEARN2_VIEWER_COMMAND at our image_hack.sh script makes show_weights.py save its plot to a file we can then display inline.


In [12]:
%env PYLEARN2_VIEWER_COMMAND=/afs/inf.ed.ac.uk/user/s08/s0805516/repos/neukrill-net-work/image_hack.sh
%run ~/repos/pylearn2/pylearn2/scripts/show_weights.py /disk/scratch/neuroglycerin/models/replicate_8aug.pkl


env: PYLEARN2_VIEWER_COMMAND=/afs/inf.ed.ac.uk/user/s08/s0805516/repos/neukrill-net-work/image_hack.sh
making weights report
loading model
loading done
smallest enc weight magnitude: 4.82278937852e-06
mean enc weight magnitude: 0.0964648425579
max enc weight magnitude: 1.09708678722

In [13]:
from IPython.display import Image

In [16]:
def plot_recent_pylearn2():
    # image_hack.sh saves the most recent pylearn2 plot to this path
    pl2plt = Image(filename="/afs/inf.ed.ac.uk/user/s08/s0805516/tmp/pylearnplot.png", width=700)
    return pl2plt
plot_recent_pylearn2()


Out[16]:

In [17]:
%run ~/repos/pylearn2/pylearn2/scripts/show_weights.py /disk/scratch/neuroglycerin/models/superclasses_online.pkl


making weights report
loading model
loading done
smallest enc weight magnitude: 1.07464138637e-05
mean enc weight magnitude: 0.119180940092
max enc weight magnitude: 1.45175945759

In [18]:
plot_recent_pylearn2()


Out[18]:

Both models have clearly learned Gabor-like filters, but the second also seems to have picked up some more regular, grid-like filters.
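
To back up the visual impression with numbers, we could compare simple first-layer weight statistics directly, mirroring the magnitudes printed by show_weights.py. This is only a sketch: it assumes both models are pylearn2 MLPs whose first layer supports get_weights(), which convolutional layers may not.


In [ ]:
# Sketch: compare first-layer weight statistics of the two models.
# Assumes model.layers[0].get_weights() exists; convolutional layers
# may need get_weights_topo() instead.
for path in ["/disk/scratch/neuroglycerin/models/replicate_8aug.pkl",
             "/disk/scratch/neuroglycerin/models/superclasses_online.pkl"]:
    model = pylearn2.utils.serial.load(path)
    W = np.abs(model.layers[0].get_weights())
    print("{}: mean |W| = {:.4f}, max |W| = {:.4f}".format(
        path.split("/")[-1], W.mean(), W.max()))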