Data

First we need example inputs and outputs. So let's use a simple real-world system ... an OR logic gate (output is -1 only when both inputs are 0)...


In [ ]:
from itertools import product
from pprint import pprint
import pandas as pd

In [40]:
flat_training_set = [
    [1, i1, i2, 1. if (i1 or i2) else -1]  # bias, input1, input2, OR output encoded as +1/-1
    for i1, i2 in product([0, 1], [0, 1])
]
columns = ['Bias', 'Input1', 'Input2', 'Output', 'Error', 'Err Count', 'W0', 'W1', 'W2']
df = pd.DataFrame(flat_training_set, columns=columns[:4])
print(df)

threshold = 0.5
learning_rate = 0.1
weights = [0, 0, 0]  # one weight for the bias plus one per input, all starting at zero
training_set = [[row[:3], row[-1]] for row in flat_training_set]  # ([bias, i1, i2], target) pairs

def dot_product(values, weights):
    # Weighted sum of the inputs: the neuron's raw activation.
    return sum(value * weight for value, weight in zip(values, weights))

print(training_set)


   Bias  Input1  Input2  Output
0     1       0       0      -1
1     1       0       1       1
2     1       1       0       1
3     1       1       1       1
[[[1, 0, 0], -1], [[1, 0, 1], 1.0], [[1, 1, 0], 1.0], [[1, 1, 1], 1.0]]

Training

Just nudge the weights toward the right answer, one learning-rate-sized step at a time


In [46]:
# Note: this cell swaps in the NAND truth table (output is -1 only when both inputs are 1).
training_set = [[[1, 0, 0], 1], [[1, 0, 1], 1], [[1, 1, 0], 1], [[1, 1, 1], -1]]

ans = []
error_count = len(training_set)  # seeded so the first epoch header has something to show
weights = [0, 0, 0]
for epoch in range(3):
    print('-' * 20 + str(epoch) + '-' * 20 + str(error_count))  # epoch, previous epoch's error count
    error_count = 0
    error_L2 = 0.
    for input_vector, desired_output in training_set:
        print(weights)
        result = dot_product(input_vector, weights) > threshold  # step activation: True/False
        # Careful: desired_output is +1/-1 but result is 0/1, so the -1 row
        # always registers an error and the loop never gets to break.
        error = desired_output - result
        error_L2 += error ** 2.
        if abs(error) > 0.001:
            error_count += 1
            for index, value in enumerate(input_vector):
                weights[index] += learning_rate * error * value  # perceptron update rule
        ans += [input_vector + [desired_output, (error_L2)**.5, error_count] + weights]
    if error_count == 0:
        break

df = pd.DataFrame(ans, columns=columns)
print(df)


--------------------0--------------------4
[0, 0, 0]
[0.1, 0.0, 0.0]
[0.2, 0.0, 0.1]
[0.30000000000000004, 0.1, 0.1]
--------------------1--------------------4
[0.20000000000000004, 0.0, 0.0]
[0.30000000000000004, 0.0, 0.0]
[0.4, 0.0, 0.1]
[0.5, 0.1, 0.1]
--------------------2--------------------4
[0.3, -0.1, -0.1]
[0.4, -0.1, -0.1]
[0.5, -0.1, 0.0]
[0.6, 0.0, 0.0]
    Bias  Input1  Input2  Output     Error  Err Count   W0   W1   W2
0      1       0       0       1  1.000000          1  0.1  0.0  0.0
1      1       0       1       1  1.414214          2  0.2  0.0  0.1
2      1       1       0       1  1.732051          3  0.3  0.1  0.1
3      1       1       1      -1  2.000000          4  0.2  0.0  0.0
4      1       0       0       1  1.000000          1  0.3  0.0  0.0
5      1       0       1       1  1.414214          2  0.4  0.0  0.1
6      1       1       0       1  1.732051          3  0.5  0.1  0.1
7      1       1       1      -1  2.645751          4  0.3 -0.1 -0.1
8      1       0       0       1  1.000000          1  0.4 -0.1 -0.1
9      1       0       1       1  1.414214          2  0.5 -0.1  0.0
10     1       1       0       1  1.732051          3  0.6  0.0  0.0
11     1       1       1      -1  2.645751          4  0.4 -0.2 -0.2


Smarter Learning

How would you improve this learning approach? Are we pushing too hard (learning rate too high)?
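
One possible improvement (a sketch, not the only answer): keep the prediction in the same +1/-1 encoding as the targets, so a correct "no" stops counting as an error, and shrink the learning rate each epoch so the pushes get gentler. Everything below reuses the cells above; the 0.9 decay factor is an arbitrary choice.

In [ ]:
# Sketch: match the target encoding and decay the learning rate.
weights = [0, 0, 0]
rate = learning_rate
for epoch in range(20):
    error_count = 0
    for input_vector, desired_output in training_set:
        result = 1 if dot_product(input_vector, weights) > threshold else -1  # +1/-1, like the targets
        error = desired_output - result  # 0 when right, +2/-2 when wrong
        if error != 0:
            error_count += 1
            for index, value in enumerate(input_vector):
                weights[index] += rate * error * value
    rate *= 0.9  # gentler pushes as training proceeds
    if error_count == 0:
        break
print(epoch, weights)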

Activation Function

A simple threshold (step) function does fine for logic-gate behavior
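
The next cell calls a learn() helper that never gets defined in this notebook. Here is a minimal sketch of what it might look like, reusing dot_product from above; the signature, the defaults, and the fixed three-weight layout are all assumptions.

In [ ]:
# Hypothetical helper, not part of the original notebook: a perceptron
# training loop with a pluggable activation function.
def learn(training_set, activate, epochs=10, rate=0.1):
    weights = [0, 0, 0]  # bias weight plus one per input
    for _ in range(epochs):
        for input_vector, desired_output in training_set:
            error = desired_output - activate(dot_product(input_vector, weights))
            for index, value in enumerate(input_vector):
                weights[index] += rate * error * value
    return weights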


In [ ]:
def activate(i):
    # Step function: fire (1) at or above 0.5, stay quiet (0) below it.
    if i >= 0.5:
        return 1
    return 0

print(learn(training_set, activate=activate))

Your Turn

See if you can train a neuron to behave like a NOR logic gate. (The truth-table data below gives you a starting point; the training loop is up to you.)
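
Here's the NOR truth table in the same list-of-pairs format as the earlier cells; the +1/-1 encoding mirrors those cells and is an assumption, so adjust it to match whatever activation you use.

In [ ]:
# NOR: output is 1 only when both inputs are 0.
nor_training_set = [
    [[1, i1, i2], 1 if not (i1 or i2) else -1]
    for i1, i2 in product([0, 1], [0, 1])
]
pprint(nor_training_set)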


In [ ]:

Recurrent Network

Let's train a NN to beat you at Rock, Paper, Scissors. All we have to do is play against it, and reward or punish it with each good or bad move.
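
One possible scaffold follows; every name and design choice in it is an assumption, not the notebook's code. It's less a neural net than a single table of weights, one per (their last throw, their next throw) pair, but it shows the reward/punish loop: predict the opponent's next throw from their previous one, play the counter, then nudge the weight for that prediction up or down depending on whether it was right.

In [ ]:
MOVES = ['rock', 'paper', 'scissors']
BEATS = {'scissors': 'rock', 'rock': 'paper', 'paper': 'scissors'}  # key is beaten by value

# rps_weights[their_last_throw][their_predicted_next_throw]
rps_weights = {prev: {nxt: 0.0 for nxt in MOVES} for prev in MOVES}

def play_round(their_last, their_next, rate=0.1):
    prediction = max(MOVES, key=lambda m: rps_weights[their_last][m])  # guess their next throw
    our_throw = BEATS[prediction]                                      # play whatever beats the guess
    reward = 1.0 if prediction == their_next else -1.0                 # were we right?
    rps_weights[their_last][prediction] += rate * reward               # reward or punish the guess
    return our_throw

Feed it the human's consecutive throws round by round and it starts countering any habit, like always following rock with paper.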


In [ ]:

Tic-Tac-Toe?

How long would it take to train a neural net to win at tic-tac-toe with this simple training and back-propagation approach?
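
A quick back-of-the-envelope on the size of the problem: with 3 possible marks in each of 9 cells, 3^9 = 19,683 board encodings is a loose upper bound (turn order and finished games cut the legal count well below that). That gives a feel for how much more there is to learn than four rows of a truth table.

In [ ]:
# Loose upper bound on the number of board states the net must handle:
# empty/X/O in each of nine cells. Legal positions are far fewer.
print(3 ** 9)  # 19683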