In [1]:
import (
    "math/rand"
    "github.com/goml/gobrain"
)

Feed Forward vs. Recurrent Neural Networks

(descriptions adapted from Wikipedia, examples from the gobrain documentation)

Feed Forward Description:

<img style="float: left;" src="Feed_forward_neural_net.gif">

A feedforward neural network is an artificial neural network where connections between the units do not form a cycle. This is different from recurrent neural networks.

The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
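
To make the one-way flow concrete, here is a minimal stand-alone Go sketch of a single forward pass through one hidden layer. This is not gobrain code; the weights and the sigmoid activation are illustrative assumptions.

package main

import (
    "fmt"
    "math"
)

func sigmoid(x float64) float64 { return 1.0 / (1.0 + math.Exp(-x)) }

// forward runs one pass: inputs -> hidden -> outputs, with no cycles.
func forward(inputs []float64, wIH, wHO [][]float64) []float64 {
    // hidden activations: weighted sums of the inputs, squashed by sigmoid
    hidden := make([]float64, len(wIH[0]))
    for j := range hidden {
        sum := 0.0
        for i, in := range inputs {
            sum += in * wIH[i][j]
        }
        hidden[j] = sigmoid(sum)
    }
    // output activations: weighted sums of the hidden layer, squashed by sigmoid
    outputs := make([]float64, len(wHO[0]))
    for k := range outputs {
        sum := 0.0
        for j, h := range hidden {
            sum += h * wHO[j][k]
        }
        outputs[k] = sigmoid(sum)
    }
    return outputs
}

func main() {
    // illustrative hand-picked weights: 2 inputs -> 2 hidden -> 1 output
    wIH := [][]float64{{0.5, -0.3}, {0.8, 0.2}}
    wHO := [][]float64{{0.7}, {-0.6}}
    fmt.Println(forward([]float64{1, 0}, wIH, wHO))
}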

Gobrain Example - Feed Forward Function


In [2]:
// set the random seed to 0
rand.Seed(0)

In [3]:
// create the XOR representation patterns to train the network
patterns := [][][]float64{
  {{0, 0}, {0}},
  {{0, 1}, {1}},
  {{1, 0}, {1}},
  {{1, 1}, {0}},
}

// instantiate the Feed Forward
ff := &gobrain.FeedForward{}


Out[3]:
&gobrain.FeedForward{
  NInputs:           0,
  NHiddens:          0,
  NOutputs:          0,
  Regression:        false,
  InputActivations:  []float64{},
  HiddenActivations: []float64{},
  OutputActivations: []float64{},
  Contexts:          [][]float64{},
  InputWeights:      [][]float64{},
  OutputWeights:     [][]float64{},
  InputChanges:      [][]float64{},
  OutputChanges:     [][]float64{},
}

In [4]:
// initialize the Neural Network;
// the network's structure will contain:
// 2 inputs, 2 hidden nodes, and 1 output.
ff.Init(2, 2, 1)

In [5]:
// train the network using the XOR patterns
// the training will run for 1000 epochs
// the learning rate is set to 0.6 and the momentum factor to 0.4
// use true in the last parameter to receive reports about the learning error
ff.Train(patterns, 1000, 0.6, 0.4, true)
ff.Test(patterns)


Out[5]:
0 0.5524794213542835
[0 0] -> [0.05750394570844524]  :  [0]
[0 1] -> [0.9301006350712102]  :  [1]
[1 0] -> [0.927809966227284]  :  [1]
[1 1] -> [0.09740879532462095]  :  [0]

Here the first values are the inputs, the values after the arrow (->) are the outputs produced by the network, and the values after the colon (:) are the expected outputs.
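
Once trained, the network can also be queried for a single pattern. The snippet below is a sketch that assumes gobrain's FeedForward exposes an Update method (the routine Test uses internally to produce the outputs shown above):

// feed a single input through the trained network;
// for XOR, the input [1 0] should yield a value close to 1
out := ff.Update([]float64{1, 0})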

Recurrent Description:

<img style="float: left;" src="Elman_srnn.png">

A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. This makes them applicable to tasks such as unsegmented connected handwriting recognition or speech recognition.
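
Conceptually, an Elman-style step feeds the previous hidden activations back in as an extra "context" input at the next time step. The following is a minimal, illustrative sketch, not gobrain's internals; the names and weight matrices are assumptions, and it reuses the sigmoid helper from the feed-forward sketch above.

// one Elman-style step: the hidden layer sees both the current input and
// the previous hidden activations (the "context")
func elmanStep(input, context []float64, wIH, wCH [][]float64) []float64 {
    hidden := make([]float64, len(wIH[0]))
    for j := range hidden {
        sum := 0.0
        for i, in := range input {
            sum += in * wIH[i][j] // feed-forward connections from the input
        }
        for c, ctx := range context {
            sum += ctx * wCH[c][j] // recurrent connections from the previous hidden state
        }
        hidden[j] = sigmoid(sum)
    }
    return hidden
}

Over a sequence, each step's hidden activations become the next step's context; that recycled state is the internal memory the description above refers to.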

Gobrain - Recurrent Example

Gobrain implements Elman's Simple Recurrent Network. To take advantage of this, one can use the SetContexts function.


In [6]:
ff.SetContexts(1, nil)


In the example above, a single context will be created and initialized with 0.5.
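
It should also be possible to pass explicit initial values instead of nil. The line below is a sketch that assumes each context vector holds one value per hidden unit (three here, on the assumption that Init(2, 2, 1) allocates two hidden nodes plus a bias node inside gobrain):

// assumed shape: one inner slice per context, one value per hidden unit
ff.SetContexts(1, [][]float64{{0.5, 0.8, 0.1}})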