Widrow-Hoff

a.k.a. the delta rule, ADALINE, or the LMS (least mean squares) filter

Intuition

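The rule updates each weight in proportion to the prediction error and the input that produced it. In standard notation, with actual output y, prediction ŷ, and learning rate η, the update computed by the function below is

Δw = η · (y − ŷ) · x

Since η > 0, the sign of Δw depends only on the signs of the error and the input, which is exactly what the four cases below walk through.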

In [25]:
def widrow_hoff(x, actual, prediction, learning_rate=1):
    # Error: how far the prediction falls short of the actual value
    error = actual - prediction
    # Weight update: scale the error by the input and the learning rate
    delta_w = learning_rate * error * x
    print('x       : {}'.format(x))
    print('Error   : {}'.format(error))
    print('delta_w : {}'.format(delta_w))

In [26]:
widrow_hoff(5, 10, 8)


x       : 5
Error   : 2
delta_w : 10

If actual > predicted (error > 0) and x > 0, then we need a larger weight to make up for the difference.


In [27]:
widrow_hoff(5, 8, 10)


x       : 5
Error   : -2
delta_w : -10

If actual < predicted (error < 0) and x > 0, then we need to reduce the weight.


In [28]:
widrow_hoff(-5, 10, 8)


x       : -5
Error   : 2
delta_w : -10

If actual > predicted (error > 0) and x < 0, then we need to decrease the weight.


In [29]:
widrow_hoff(-5, 8, 10)


x       : -5
Error   : -2
delta_w : 10

If actual < predicted (error < 0) and x < 0, then we need to increase the weight.

In summary:

            x > 0        x < 0
error > 0   increase w   decrease w
error < 0   decrease w   increase w
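
To see the rule do something end to end, here is a minimal sketch (not part of the original cells) that repeatedly applies the same update, w += learning_rate * error * x, to learn a single weight. The target weight true_w = 2.0, the input values in data, the learning rate of 0.01, and the 200 passes are all illustrative assumptions.

# Minimal convergence sketch: learn w such that prediction = w * x
# matches actual = true_w * x. All constants here are illustrative.
true_w = 2.0                   # hypothetical target weight
data = [1.0, -2.0, 3.0, -4.0]  # a few positive and negative inputs
w = 0.0                        # start with no knowledge
learning_rate = 0.01

for step in range(200):
    for x in data:
        actual = true_w * x          # target output for this input
        prediction = w * x           # current model's output
        error = actual - prediction  # same error term as widrow_hoff above
        w += learning_rate * error * x

print('Learned w: {:.4f}'.format(w))  # approaches 2.0

Note that the update pushes w in the right direction regardless of the sign of x, per the table above, so the loop converges to true_w as long as the learning rate is small enough to keep each step from overshooting.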