Feature Learning for Video Feedback

Victor Shepardson

Dartmouth College Digital Musics

The goal of this project is to discover aesthetically interesting digital video feedback processes by incorporating learned features into a hand-constructed feedback process.

Consider a video feedback process defined by the mapping from images to images $x_t = \Delta_\phi(x_{t-1})$, where $\Delta$ is a transition function, $\phi$ is a parameterization which may be spatially varying or interactively controlled, and $x_t$ is the image at time step $t$.
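
For intuition, here is a minimal toy version of such a process in numpy, where the transition is just a fixed rotation plus a decay toward gray; the real $\Delta_\phi$ would be spatially varying and interactively controlled:

In [ ]:
#toy video feedback process (illustrative sketch, not the model used below)
import numpy as np
import scipy.ndimage

def delta(x, angle=2.0, decay=0.98):
    #rotate the frame in place and decay toward 50% gray
    x = scipy.ndimage.rotate(x, angle, axes=(0, 1), reshape=False, mode='wrap')
    return decay*x + (1 - decay)*0.5

x = np.random.rand(64, 64, 3) #initial frame
for t in range(100):
    x = delta(x)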

Additionally suppose we have a deep autoencoder $\gamma$ for images: $$h^{\ell+1} = \gamma_\ell(h^\ell)$$ $$h^{\ell} \approx \gamma_\ell^{-1}(h^{\ell+1})$$ $$h^0 = x$$

Combining these two concepts, we can define a new feedback process where position in the feature hierarchy acts like another spatial dimension: $$h_t^\ell = \Delta_\phi\left( h_{t-1}^\ell,\; \gamma_{\ell-1}(h_{t-1}^{\ell-1}),\; \gamma_\ell^{-1}(h_{t-1}^{\ell+1}) \right)$$
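
A minimal sketch of this update in Python, assuming hypothetical callables encode[l] implementing $\gamma_\ell$ and decode[l] implementing its approximate inverse, with delta blending a layer's state with the projections of its neighbors:

In [ ]:
#one step of the hierarchical feedback process (illustrative sketch)
def feedback_step(h, encode, decode, delta):
    #h is a list of per-layer states h[0..L]; returns the next list of states
    new_h = []
    for l in range(len(h)):
        from_below = encode[l-1](h[l-1]) if l > 0 else None
        from_above = decode[l](h[l+1]) if l < len(h) - 1 else None
        new_h.append(delta(h[l], from_below, from_above))
    return new_h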

The goal, then, is to learn a deep autoencoder which represents abstract image features and admits layer-wise encoding and decoding as above. I propose a convolutional pooling autoencoder based on the convolutional autoencoders of Masci et al. and the upsampling layers of Long et al.

Below I train a single-layer pooled convolutional autoencoder on the CIFAR-10 dataset using Caffe. The code is available at my GitHub. I use a filter size of 3x3x3 and 2x2 max pooling. For this experiment, the data dimensionality is preserved in the intermediate representation by using 12 filters (3 input colors x a factor of 4 lost to pooling). I trained on the L2 reconstruction error using SGD with momentum and no regularization. Test error decreased consistently from about 100 at random initialization to about 1.3.
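
For reference, this architecture can be written down with Caffe's NetSpec API. The following is a hypothetical sketch only; the actual network definitions live in the prototxt files in the repo, and the data source, input scaling, and decode kernel size here are assumptions:

In [ ]:
#hypothetical NetSpec sketch of the single-layer pooled convolutional autoencoder
import caffe
from caffe import layers as L, params as P

def pooled_conv_autoencoder(lmdb, mean_file, batch_size=100):
    n = caffe.NetSpec()
    n.data = L.Data(source=lmdb, backend=P.Data.LMDB, batch_size=batch_size,
                    transform_param=dict(mean_file=mean_file, scale=1./256))
    #encoder: 3x3 convolution with 12 filters, then 2x2 max pooling
    n.encode1 = L.Convolution(n.data, kernel_size=3, pad=1, num_output=12)
    n.encode1neuron = L.TanH(n.encode1)
    n.pool1 = L.Pooling(n.encode1neuron, kernel_size=2, stride=2,
                        pool=P.Pooling.MAX)
    #decoder: learned 2x upsampling by deconvolution, after Long et al.
    n.decode1 = L.Deconvolution(n.pool1, convolution_param=dict(
        num_output=3, kernel_size=2, stride=2))
    n.decode1neuron = L.TanH(n.decode1)
    n.l2_error = L.EuclideanLoss(n.decode1neuron, n.data)
    return n.to_proto()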


In [1]:
#get caffe and pycaffe set up

import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage
%matplotlib inline

#assuming feature-feedback repo and caffe root are in the same directory
caffe_root = '../../caffe/'
import sys
sys.path.insert(0, caffe_root+'python')

import caffe
from caffe.proto import caffe_pb2
#I have compiled caffe for CPU only (GPU mode requires an NVIDIA GPU)
caffe.set_mode_cpu()

In [ ]:
# L2 reconstruction error for images may not be a fantastic idea in RGB colorspace;
# we may want to preprocess the data by converting to CIELUV or something
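#e.g., a hedged sketch (hypothetical; uses scikit-image if available):
#  from skimage.color import rgb2luv, luv2rgb
#  x_luv = rgb2luv(x_rgb) #train and score the autoencoder in CIELUV instead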

In [ ]:
#run this cell to solve the model defined in the solver_file
solver_file = 'autoencoder-0-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [24]:
#load the model trained by the previous cell
#(and saved elsewhere in the repo) and set it up on test data
model_def_file = 'autoencoder-0.prototxt'
model_file = '../bin/cifar-tanh-20epoch-unregularized.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[24]:
{'l2_error': array(1.4730193614959717, dtype=float32)}

Visualize Reconstruction

We can pull inputs and reconstructions straight out of the caffe Net and manually undo the mean subtraction:


In [2]:
#load the cifar mean into numpy array
blob = caffe_pb2.BlobProto()
data = open('../../caffe/examples/cifar10/mean.binaryproto', 'rb').read()
blob.ParseFromString(data)
mean = caffe.io.blobproto_to_array(blob)[0].transpose([1,2,0])/256

In [3]:
def get_reconstructions(net, mean, n, compare=0):
    inputs = np.hstack([ np.copy(net.blobs['data'].data[i]).transpose([1,2,0])+mean for i in range(n)])
    outputs = np.hstack([ np.copy(net.blobs['decode1neuron'].data[i]).transpose([1,2,0])+mean for i in range(n)])
    #clamp the reconstruction to [0,1]
    #even with tanh activation outputs can be out of bounds once mean is added back
    np.clip(outputs, 0, 1, outputs)
    #compare to cubic resampling through the intermediate spatial resolution
    #this is a good baseline for how well spatial information is stored and 
    #recovered by the convolutional layers
    if compare>0:
        comparisons = np.dsplit(np.copy(inputs), inputs.shape[2])
        comparisons = [scipy.ndimage.zoom(np.squeeze(c), 1./compare, order=3) for c in comparisons]
        comparisons = [scipy.ndimage.zoom(c, compare, order=3) for c in comparisons]
        comparisons = np.dstack(comparisons)
        np.clip(comparisons, 0, 1, comparisons)
        return (inputs, outputs, comparisons)
    return (inputs, outputs)
def vis_reconstructions(rec):
    disp = np.vstack(rec)
    plt.imshow(disp, interpolation='none')

In [60]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)


CIFAR-10 test inputs on top, reconstructions in the middle, cubic interpolation comparison on the bottom. Looks good!

Visualize Filters

Now let's pull our 12 3x3x3 filters out of the model:


In [4]:
def get_filters(net, layer = 'encode1'):
    filters = np.copy(net.params[layer][0].data).transpose([0,2,3,1])
    biases = np.copy(net.params[layer][1].data)
    print(biases)
    return filters
def vis_filters(filters, rows):
    #normalize preserving 0 = 50% gray
    filters/=2*abs(filters).max()
    filters+=.5
    disp = np.hstack([np.pad(f,[(1,1),(1,1),(0,0)],'constant', constant_values=[.5]) for f in filters])
    disp = np.vstack(np.hsplit(disp,rows))
    return disp

In [154]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[-0.36862749 -0.1923824  -0.22088723 -0.12164118 -0.10508166 -0.49331141
 -0.53125817 -0.48567948 -0.44001433 -0.39040077 -0.32423615 -0.1737113 ]
Out[154]:
<matplotlib.image.AxesImage at 0x7d4510d2dc50>

Looks like the network mostly learned localized primary and secondary color detectors. Weird! These aren't the usual edge filters, but they seem to at least have some plausible structure.

Visualize Filter Responses

Now let's see the (max pooled) responses of all 12 filters to a few inputs:


In [5]:
def get_responses(net, layer, filts, n):
    reps = np.hstack([ net.blobs[layer].data[i].transpose([1,2,0]) for i in range(n)])
    # normalize preserving 0 = 50% gray
    reps/=2*abs(reps).max()
    reps+=.5
    reps = np.vstack(np.dsplit(reps, filts))
    return reps.squeeze()    
def vis_responses(reps):
    plt.figure(figsize=(10,10))
    plt.imshow(reps, interpolation='none', cmap='coolwarm')

In [156]:
reps = get_responses(net, 'pool1', 12, 8)
vis_responses(reps)


Out[156]:
<matplotlib.image.AxesImage at 0x7d4510c0da90>

Pooled activations for each of 12 filters. Red is positive response, blue negative.

Dimensionality Reduction

Let's try fewer filters, reducing the dimensionality of the intermediate representation:


In [3]:
solver_file = 'autoencoder-1-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [66]:
model_def_file = 'autoencoder-1.prototxt'
model_file = '../bin/cifar-tanh-20epoch-squeezing.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[66]:
{'l2_error': array(2.5404462814331055, dtype=float32)}

In [67]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)


This time there's some clear loss of detail. The filters are doing something, though; the result looks better than cubic interpolation.


In [24]:
filters = get_filters(net)
disp = vis_filters(filters, 2)
plt.imshow(disp, interpolation='none')


[-0.09201549 -0.12697266 -0.11692226 -0.10681173 -0.09015708 -0.11839788]
Out[24]:
<matplotlib.image.AxesImage at 0x7e93b02c8710>

These filters appear to be learning color gradients in a subtractive color space.


In [31]:
reps = get_responses(net, 'pool1', 6, 8)
vis_responses(reps)


Another Architecture

What about more aggressively reshaping the data within a single layer? How about larger 5x5 filters, 4x4 pooling, and the correspondingly larger number of filters (3x4x4 = 48)?


In [ ]:
solver_file = 'autoencoder-2-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve('autoencoder-2_iter_20000.solverstate')

In [3]:
model_def_file = 'autoencoder-2.prototxt'
#model_file = '../bin/cifar-tanh-20epoch-squeezing-pool3.caffemodel'
model_file = 'autoencoder-2_iter_20000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[3]:
{'l2_error': array(5.954926013946533, dtype=float32)}

In [10]:
rec = get_reconstructions(net, mean, 8, compare=4)
vis_reconstructions(rec)


This looks even worse than the dimensionality-reducing version. By 40 epochs, training had slowed to a crawl.


In [12]:
filters = get_filters(net)
disp = vis_filters(filters, 6)
plt.imshow(disp, interpolation='none')


[-0.5458473  -0.85954159 -0.10250413 -0.11447078 -0.01339798 -0.31032813
 -0.02660202 -0.77891278  0.04860483  0.17118894 -0.18338218  0.03620156
 -0.21440832 -0.74913269 -0.677212   -0.11030301 -0.01261417 -0.31087366
 -0.56522584 -0.5849033  -0.3219969  -0.14499842  0.04703386 -0.71903616
 -0.64787728 -0.12493432 -0.73442465 -0.80459327 -0.23484692 -0.03186548
  0.02129112 -0.08047031 -0.23331846 -0.149628   -0.72716242 -0.1825075
  0.12050974 -0.21958451 -0.36699957 -0.15040933  0.08417189 -0.1206205
 -0.59321886  0.01488551 -0.28670374 -0.62786371  0.13887869 -0.20581132]
Out[12]:
<matplotlib.image.AxesImage at 0x7c728a3d36d0>

These filters look like noisy edge detectors. Something prevented the training from finding a good minimum.


In [14]:
reps = get_responses(net, 'pool1', 18, 8)
vis_responses(reps)


Better Reconstruction

How many parameters do we need to get near-perfect reconstruction with a single layer? Let's step the original 2x2 pooling architecture up to 5x5 filters, leaving the reconstruction filters alone.
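
As a rough sense of scale (an estimate only; the decode kernel sizes are set in the prototxt files and are assumed 2x2 here), the parameter counts stay tiny either way:

In [ ]:
#rough parameter counts for the 3x3 and 5x5 encoder variants (illustrative)
def conv_params(kh, kw, cin, cout):
    return kh*kw*cin*cout + cout #weights plus biases
print(conv_params(3, 3, 3, 12) + conv_params(2, 2, 12, 3)) #3x3 encoder: 483
print(conv_params(5, 5, 3, 12) + conv_params(2, 2, 12, 3)) #5x5 encoder: 1059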


In [ ]:
solver_file = 'autoencoder-6-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve('autoencoder-6_iter_10000.solverstate')

In [10]:
model_def_file = 'autoencoder-6.prototxt'
model_file = 'autoencoder-6_iter_20000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[10]:
{'l2_error': array(1.173688292503357, dtype=float32)}

In [11]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)



In [12]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[-0.13053145 -0.51994026 -0.13268141 -0.24337031 -0.35381192 -0.31960326
 -0.1194208  -0.48448056 -0.23158208 -0.30801979 -0.3582561  -0.30901095]
Out[12]:
<matplotlib.image.AxesImage at 0x7e36041d89d0>

Interesting: these look like 3x3 filters with a random fringe. Curiously, this model learned better than the first architecture above, even though the extra pixels appear to be wasted. Perhaps it got a better random initialization, or the filter noisiness acts as a kind of regularization. It may have learned small filters because the reconstruction filter size was too small. Let's bump that up too:


In [9]:
solver_file = 'autoencoder-7-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve('autoencoder-7_iter_10000.solverstate')

In [5]:
model_def_file = 'autoencoder-7.prototxt'
model_file = 'autoencoder-7_iter_40000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[5]:
{'l2_error': array(0.9563984870910645, dtype=float32)}

In [10]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)



In [27]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[-0.12125222 -0.08644268 -0.33110973 -0.0643307  -0.09994981 -0.36133596
 -0.30555385 -0.06790387 -0.26083776 -0.45829996 -0.09892169 -0.32911256]
Out[27]:
<matplotlib.image.AxesImage at 0x7e3cac0acd50>

The more expressive decoder did reduce the error, and visual fidelity is now very close to perfect. It did not change the noisy-fringed character of the learned filters. The center filters mostly come in pairs which appear to be mirrors, rotations, and/or color inverses. Neat!


In [16]:
reps = get_responses(net, 'pool1', 12, 8)
vis_responses(reps)


We could keep going to 7x7 encoders and 8x8 decoders, but at some point I expect larger filters to have trouble with CIFAR, since the images are so tiny: with 7x7 filters, roughly a third of all convolution windows on a 32x32 image will overlap the padding.
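
A quick check of that fraction (assuming 32x32 inputs, 7x7 kernels, stride 1, and 'same' zero padding of 3):

In [ ]:
#fraction of 7x7 windows on a 32x32 image that overlap the zero padding
n, k, pad = 32, 7, 3
interior = (n - 2*pad)**2 #positions whose windows lie fully inside the image
print(1.0 - interior/float(n*n)) #about 0.34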

Mean Pooling

Let's investigate mean pooling: does it lead to better reconstruction and/or to trivial encodings?
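
One reason to worry about trivial encodings: mean pooling followed by naive upsampling already preserves a lot of the image, so the convolutional layers have less work to do. A quick numpy baseline (an aside, not part of the experiments; random noise is the worst case and natural images fare much better):

In [ ]:
#2x2 mean pooling plus nearest-neighbor upsampling as a reconstruction baseline
import numpy as np
x = np.random.rand(32, 32, 3)
pooled = x.reshape(16, 2, 16, 2, 3).mean(axis=(1, 3))
up = pooled.repeat(2, axis=0).repeat(2, axis=1)
print(np.mean((x - up)**2)) #within-block variance lost to pooling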


In [13]:
solver_file = 'autoencoder-12-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [4]:
model_def_file = 'autoencoder-12.prototxt'
model_file = 'autoencoder-12_iter_40000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[4]:
{'l2_error': array(0.3554781973361969, dtype=float32)}

In [10]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)



In [9]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.00290412  0.00727713  0.00024327 -0.00302645 -0.00094303  0.00105364
 -0.00314995  0.00155391 -0.00411961  0.00496217 -0.00200736  0.00198014]
Out[9]:
<matplotlib.image.AxesImage at 0x7eb867396d10>

In [13]:
filters = get_filters(net, 'decode1')
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.00911919  0.00756832  0.01358831]
Out[13]:
<matplotlib.image.AxesImage at 0x7eb865786c50>

How well can we do with smaller filters?


In [2]:
solver_file = 'autoencoder-13-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [3]:
model_def_file = 'autoencoder-13.prototxt'
model_file = 'autoencoder-13_iter_40000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[3]:
{'l2_error': array(0.46039336919784546, dtype=float32)}

In [9]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)



In [10]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[-0.05559838 -0.05691766 -0.05266549 -0.05314963  0.03955829 -0.02239857
  0.05004988  0.02978159  0.03100051 -0.02727667  0.0315514   0.06429416]
Out[10]:
<matplotlib.image.AxesImage at 0x7d8abda83b90>

In [11]:
filters = get_filters(net, 'decode1')
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.11995886  0.11436817  0.12056205]
Out[11]:
<matplotlib.image.AxesImage at 0x7d8abc1bc610>

In [ ]:
solver_file = 'autoencoder-14-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [2]:
model_def_file = 'autoencoder-14.prototxt'
model_file = 'autoencoder-14_iter_10000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[2]:
{'l2_error': array(1.5061973333358765, dtype=float32)}

In [8]:
rec = get_reconstructions(net, mean, 8, compare=2)
vis_reconstructions(rec)



In [9]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.0088502   0.00910483  0.03193062  0.05518277 -0.0281374  -0.00233442
 -0.02318202 -0.05383765 -0.00050979  0.05317014  0.04517453  0.04385578]
Out[9]:
<matplotlib.image.AxesImage at 0x7e7939127b90>

In [10]:
filters = get_filters(net, 'decode1')
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.03347526  0.04182296  0.05364645]
Out[10]:
<matplotlib.image.AxesImage at 0x7e793806a110>

Dump to Texture

Let's dump the weights from the best 1-layer model to disk so openFrameworks can load them into OpenGL:


In [10]:
def dump_to_img(net, nlayers):
    for l in range(1, nlayers+1):
        encode_name = 'encode'+str(l)
        decode_name = 'decode'+str(l)
        #move source channel to innermost dimension
        filters = np.copy(net.params[encode_name][0].data).transpose([0,2,3,1])
        biases = np.copy(net.params[encode_name][1].data)
        np.save(encode_name+'-filters', filters)
        np.save(encode_name+'-biases', biases)
        filters = np.copy(net.params[decode_name][0].data).transpose([1,2,3,0])
        biases = np.copy(net.params[decode_name][1].data)
        np.save(decode_name+'-filters', filters)
        np.save(decode_name+'-biases', biases)

In [12]:
dump_to_img(net, 1)
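
On the consuming side, the dumped arrays can presumably be repacked into 8-bit textures; a hypothetical sanity check in Python (the real loading happens in openFrameworks):

In [ ]:
#hypothetical check of the dumped filters before handing them to openFrameworks
import numpy as np
f = np.load('encode1-filters.npy') #(num_filters, kh, kw, channels)
#rescale signed weights to [0, 255] preserving 0 = 50% gray
tex = np.clip(f*0.5/np.abs(f).max() + 0.5, 0, 1)*255
tex = np.hstack(list(tex)).astype(np.uint8)
print(tex.shape)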

Deeper Network

Let's try stacking another layer on the last architecture above. We'll freeze the first encoder/decoder pair and treat the first pooling layer as the input to a new autoencoder.


In [ ]:
solver_file = 'autoencoder-8-solver.prototxt'
solver = caffe.get_solver(solver_file)
#initialize the first layer with previously trained weights
#first let's try stacking with the lower weights frozen
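#(the freezing itself is presumably done via lr_mult: 0 on encode1/decode1 in the prototxt)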
pre_net = caffe.Net('autoencoder-7.prototxt', 'autoencoder-7_iter_40000.caffemodel', caffe.TEST)
for layer in ['encode1', 'decode1']:
    solver.net.params[layer][0].data[:] = pre_net.params[layer][0].data
    solver.net.params[layer][1].data[:] = pre_net.params[layer][1].data
solver.solve()

In [8]:
model_def_file = 'autoencoder-8.prototxt'
#model_file = '../bin/cifar-tanh-20epoch-2layer.caffemodel'
model_file = 'autoencoder-8_iter_40000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[8]:
{'l2_error1': array(5.685671329498291, dtype=float32),
 'l2_error2': array(2.0855603218078613, dtype=float32)}

In [9]:
rec = get_reconstructions(net, mean, 8, compare=4)
vis_reconstructions(rec)


The loss for the new layer went quite low, but the overall reconstruction error is high. Let's try fine-tuning all the weights with the original L2 reconstruction error:


In [2]:
solver_file = 'autoencoder-9-solver.prototxt'
solver = caffe.get_solver(solver_file)
#initialize the first layer with previously trained weights
#this time bring over all the parameters
pre_net = caffe.Net('autoencoder-8.prototxt', 'autoencoder-8_iter_40000.caffemodel', caffe.TEST)
for layer in ['encode1', 'decode1', 'encode2', 'decode2']:
    solver.net.params[layer][0].data[:] = pre_net.params[layer][0].data
    solver.net.params[layer][1].data[:] = pre_net.params[layer][1].data
solver.solve()

In [8]:
model_def_file = 'autoencoder-9.prototxt'
model_file = '../bin/cifar-tanh-60epoch-2layer-finetuned-dualobjective.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[8]:
{'l2_error1': array(3.2483773231506348, dtype=float32),
 'l2_error2': array(0.21920974552631378, dtype=float32)}

In [9]:
rec = get_reconstructions(net, mean, 8, compare=4)
vis_reconstructions(rec)


Fine-tuning reduced both parts of the loss, but the result still looks much worse than the single-layer model.


In [7]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.05174831 -0.02071226 -0.15018784  0.01387358  0.00702072 -0.02757885
 -0.11727612  0.00808964  0.01447153 -0.16562356  0.06779021 -0.09179834]
Out[7]:
<matplotlib.image.AxesImage at 0x7e4e24997e10>

Fine-tuning appears to have corrupted the nice first-layer filters we had before.


In [11]:
dump_to_img(net, 2)

In [9]:
#map triples of filters to colors
reps = get_responses(net, 'pool2', 16, 8)
vis_responses(reps)



In [ ]:
solver_file = 'autoencoder-15-solver.prototxt'
solver = caffe.get_solver(solver_file)
#initialize the first layer with previously trained weights
#first let's try stacking with the lower weights frozen
pre_net = caffe.Net('autoencoder-14.prototxt', 'autoencoder-14_iter_10000.caffemodel', caffe.TEST)
for layer in ['encode1', 'decode1']:
    solver.net.params[layer][0].data[:] = pre_net.params[layer][0].data
    solver.net.params[layer][1].data[:] = pre_net.params[layer][1].data
solver.solve()

In [ ]:
#unfreeze the first layer and switch to the end-to-end objective
solver_file = 'autoencoder-15-finetune-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve('autoencoder-15_iter_15000.solverstate')

In [6]:
model_def_file = 'autoencoder-15.prototxt'
model_file = 'autoencoder-15-finetune_iter_30000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[6]:
{'l2_error1': array(4.169836044311523, dtype=float32),
 'l2_error2': array(20.98929214477539, dtype=float32)}

In [7]:
rec = get_reconstructions(net, mean, 8, compare=4)
vis_reconstructions(rec)



In [8]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[-0.00058724 -0.00197034  0.00063191  0.00022017 -0.00355563  0.0013089
 -0.00259641 -0.00204745 -0.00275139  0.00592467  0.00146188  0.00091501]
Out[8]:
<matplotlib.image.AxesImage at 0x7e7d15ddb750>

In [11]:
dump_to_img(net, 2)

More Layers

After feedback experiments with random filters, the layer-wise reconstruction objective seems less appealing. Can we train 3 layers end to end? Let's try with mean pooling:


In [ ]:
solver_file = 'autoencoder-16-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [6]:
model_def_file = 'autoencoder-16.prototxt'
model_file = 'autoencoder-16_iter_15000.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[6]:
{'l2_error1': array(22.6910457611084, dtype=float32)}

In [8]:
rec = get_reconstructions(net, mean, 8, compare=8)
vis_reconstructions(rec)


Again, we are recovering a lot of spatial detail, but deeper is still worse; it will be interesting to see whether heavier training improves the situation.


In [9]:
#map triples of filters to colors
reps = get_responses(net, 'pool3', 64, 8)
vis_responses(reps)



In [10]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
Out[10]:
<matplotlib.image.AxesImage at 0x7e41e9534dd0>

Rectifiers

ReLUs have helped to train very deep networks. For a classifier it's not a problem to have zero-mean inputs but non-negative hidden and output layers; for this application, though, we rely on the hidden layers having the same image properties as the input. Can we get rid of the mean subtraction and use a non-negative image representation with ReLU instead of tanh units? Let's start back at the 1-layer, dimensionality-preserving autoencoder:
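
As a tiny illustration of the range mismatch (an aside): tanh activations are signed, so hidden layers only look image-like after shifting, whereas ReLU activations are non-negative like unnormalized pixel data:

In [ ]:
#activation ranges: tanh is signed, ReLU is non-negative
import numpy as np
z = np.random.randn(1000)
print(np.tanh(z).min(), np.tanh(z).max()) #roughly -1 to 1
print(np.maximum(z, 0).min()) #0.0: non-negative, like [0,1] image data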


In [13]:
solver_file = 'autoencoder-5-solver.prototxt'
solver = caffe.get_solver(solver_file)
solver.solve()

In [14]:
model_def_file = 'autoencoder-5.prototxt'
model_file = '../bin/cifar-relu-20epoch.caffemodel'
net = caffe.Net(model_def_file, model_file, caffe.TEST)
#run a batch
net.forward()


Out[14]:
{'l2_error': array(3.137385845184326, dtype=float32)}

In [15]:
rec = get_reconstructions(net, np.zeros(mean.shape), 8, compare=2)
vis_reconstructions(rec)


The ReLU units train successfully with either a reduced learning rate, or an increased learning rate plus the AdaGrad solver, though still not quite as well as the tanh units.


In [16]:
filters = get_filters(net)
disp = vis_filters(filters, 3)
plt.imshow(disp, interpolation='none')


[-0.00570135 -0.01200322  0.01738192 -0.0238696   0.07345211  0.0205862
  0.02235054 -0.01281478  0.03866118 -0.01633324  0.00320861  0.14468348]
Out[16]:
<matplotlib.image.AxesImage at 0x7e1a9c313810>
