Initial Setup


In [1]:
## Set up the path to our codebase
import sys
sys.path.append('../code/')

In [2]:
%matplotlib inline
import matplotlib.pyplot as plt

Example Data (Centered Quadratic)


In [3]:
import neural_network.simple as simple

In [4]:
data = simple.generate_hill_data(100)

In [5]:
xs = [z[0] for z in data]
ys = [z[1] for z in data]
plt.plot(xs, ys)


Out[5]:
[<matplotlib.lines.Line2D at 0x7c00cf8>]
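`generate_hill_data` lives in the companion codebase and isn't shown here; as a rough sketch of what it might do, assuming a centered quadratic "hill" y = 1 - x² plus a little Gaussian noise (the x-range, noise level, and sorting behavior are all guesses, not the actual implementation):

```python
import random

def generate_hill_data(n, noise=0.05):
    """Return n (x, y) pairs sampled from a noisy centered 'hill' y = 1 - x**2."""
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)                    # inputs centered on 0
        y = 1.0 - x * x + random.gauss(0.0, noise)       # quadratic hill plus noise
        data.append((x, y))
    # sort by x so a line plot of the pairs traces the hill left to right
    data.sort(key=lambda z: z[0])
    return data

data = generate_hill_data(100)
```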

Simple Feed-Forward 1-Layer Neural Networks

OK, let's try a simple 1-layer neural network with 3 hidden nodes


In [6]:
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=3)
simple.plot_fits( nn, data, train_epochs=200 )
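`SimpleScalarF_1Layer` is defined in the codebase, not here; as a sketch of the idea (names, initialization, and training details below are assumptions, not the actual class), a 1-layer scalar network computes y = Σⱼ vⱼ·tanh(wⱼ·x + bⱼ) + c and can be trained with plain gradient descent on mean squared error:

```python
import numpy as np

class TinyScalar1Layer:
    """Scalar-in, scalar-out network with one tanh hidden layer (a sketch)."""
    def __init__(self, hidden_layer_size=3, learning_rate=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=hidden_layer_size)   # input -> hidden weights
        self.b = rng.normal(size=hidden_layer_size)   # hidden biases
        self.v = rng.normal(size=hidden_layer_size)   # hidden -> output weights
        self.c = 0.0                                  # output bias
        self.lr = learning_rate

    def predict(self, xs):
        h = np.tanh(np.outer(xs, self.w) + self.b)    # (n, hidden)
        return h @ self.v + self.c

    def train_epoch(self, xs, ys):
        h = np.tanh(np.outer(xs, self.w) + self.b)
        pred = h @ self.v + self.c
        err = pred - ys
        n = len(xs)
        # gradients of mean squared error (up to a constant factor of 2)
        g_v = h.T @ err / n
        g_c = err.mean()
        g_h = np.outer(err, self.v) * (1.0 - h ** 2)  # backprop through tanh
        g_w = xs @ g_h / n
        g_b = g_h.mean(axis=0)
        self.v -= self.lr * g_v
        self.c -= self.lr * g_c
        self.w -= self.lr * g_w
        self.b -= self.lr * g_b
        return float((err ** 2).mean())

# fit the centered quadratic hill
xs = np.linspace(-1.0, 1.0, 100)
ys = 1.0 - xs ** 2
net = TinyScalar1Layer(hidden_layer_size=10, learning_rate=0.05)
losses = [net.train_epoch(xs, ys) for _ in range(500)]
```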


Let's visualize the inputs to the final layer


In [7]:
nn.visualize_inputs_to_final_layer(data)


<matplotlib.figure.Figure at 0x7ad2eb8>
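What that visualization presumably shows: each hidden unit feeds the final layer a scaled, shifted tanh curve, and the output is their sum. A sketch of plotting those per-unit contributions for made-up weights (the real method's internals aren't shown here, and these weight values are invented for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend so this runs anywhere
import matplotlib.pyplot as plt

xs = np.linspace(-1.0, 1.0, 200)
w = np.array([-4.0, 0.5, 3.0])   # made-up input -> hidden weights
b = np.array([1.0, 0.0, -1.0])   # made-up hidden biases
v = np.array([0.8, -1.2, 0.6])   # made-up hidden -> output weights

# each column is one hidden unit's contribution v_j * tanh(w_j * x + b_j)
contributions = v * np.tanh(np.outer(xs, w) + b)
for j in range(contributions.shape[1]):
    plt.plot(xs, contributions[:, j], label="hidden unit %d" % j)
plt.legend()
plt.title("Inputs to the final layer (sketch)")
```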

Hmm, maybe there aren't enough hidden nodes to capture the nonlinearity; let's try 10


In [8]:
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10)
simple.plot_fits( nn, data, train_epochs=200 )


OK, it learned better and faster. Let's lower the learning rate and use fewer training epochs per plotted fit


In [9]:
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10,learning_rate=0.001)
simple.plot_fits( nn, data, train_epochs=20 )


WTF? ... oh, right: we are using a local optimizer, so we get local optima :-)
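The same effect is easy to reproduce on a toy nonconvex function, separate from the network code: plain gradient descent started at different points settles into different local minima.

```python
def grad_descent(x, steps=200, lr=0.01):
    """Minimize f(x) = x**4 - 3*x**2 + x by plain gradient descent."""
    for _ in range(steps):
        x -= lr * (4 * x ** 3 - 6 * x + 1)   # f'(x)
    return x

left = grad_descent(-2.0)    # settles in the minimum near x ~ -1.30
right = grad_descent(2.0)    # settles in the other minimum near x ~ 1.13
```

Which basin you land in depends entirely on the (random) starting point, which is why rerunning the same training cell can give a noticeably different fit.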


In [10]:
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10,learning_rate=0.001)
simple.plot_fits( nn, data, train_epochs=20 )



In [11]:
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10,learning_rate=0.001)
simple.plot_fits( nn, data, train_epochs=2000 )



In [12]:
nn.visualize_inputs_to_final_layer(data)


<matplotlib.figure.Figure at 0x8db1da0>