- Test appending a bias to a 2D matrix, then test the time complexity of flattening and reshaping np.arrays (a timing sketch follows the append cells below).


In [1]:
import numpy as np
from sklearn.preprocessing import normalize


/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/sklearn/utils/fixes.py:64: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  if 'order' in inspect.getargspec(np.copy)[0]:

In [57]:
z = np.zeros((3,3))
z


Out[57]:
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])

In [75]:
ndone = np.array([[1,1,1], [2,2,2], [3,3,3]])
np.concatenate((z, ndone), axis=1)


Out[75]:
array([[ 0.,  0.,  0.,  1.,  1.,  1.],
       [ 0.,  0.,  0.,  2.,  2.,  2.],
       [ 0.,  0.,  0.,  3.,  3.,  3.]])
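
- A minimal sketch of the bias trick itself, assuming the bias is represented as a constant column of ones appended to each row (the same np.concatenate call as above):

In [ ]:
ones = np.ones((ndone.shape[0], 1))       # one bias entry per row
np.concatenate((ndone, ones), axis=1)     # bias column appended on the right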

In [21]:
y = np.append(z, 1)  # with no axis argument, np.append flattens z before appending
print(y, z)


[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.] [[ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]]
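
- The flattening/reshaping timings promised above; a minimal sketch (ravel and reshape return views when the memory layout allows, while flatten always copies):

In [ ]:
big = np.zeros((1000, 1000))
%timeit big.ravel()        # view when possible, so effectively constant time
%timeit big.flatten()      # always copies, linear in the number of elements
%timeit big.reshape(-1)    # view when possible, like ravel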

- Also test the results of finnegan's backprop against the matrix form from Dolhansky.


In [42]:
a = np.array([2, 3, 2])
b = np.array([6, 6, 6])

In [47]:
def test1():
    # looped output-layer delta: a_i * (1 - a_i) * (b_i - a_i) per neuron
    return [a[i] * (1 - a[i]) * (b[i] - a[i]) for i in range(len(a))]
%timeit test1()


The slowest run took 4.35 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 3.76 µs per loop

In [48]:
def test2():
    temp_matrix = np.multiply(a, 1 - a)            # a * (1 - a), element-wise
    error_matrix2 = np.multiply(temp_matrix, b - a)
    return error_matrix2
%timeit test2()
error_matrix2 = test2()
error_matrix2


The slowest run took 9.15 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 2.19 µs per loop
Out[48]:
array([ -8, -18,  -8])
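
- A quick sanity check that the looped and vectorized deltas agree:

In [ ]:
np.allclose(test1(), error_matrix2)   # expected: True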

In [63]:
neurons = np.array([[6, 7], [6, 9], [10, 3]])
temps = [[1, 2, 3], [3, 4, 5]]
la_neurons = [np.array(x) for x in temps]    # next-layer weight vectors
la_error_matrix = error_matrix2[0:2]         # next-layer error terms
def bptest():
    error_matrix3 = []
    for i, neuron in enumerate(neurons):
        temp_err = 0
        # accumulate each next-layer neuron's error weighted by its connection to neuron i
        for j, la_neuron in enumerate(la_neurons):
            temp_err += la_neuron[i] * la_error_matrix[j]
        error_matrix3.append(a[i] * (1 - a[i]) * temp_err)
    return error_matrix3
%timeit bptest()
error_matrix3 = bptest()
print(error_matrix3)


100000 loops, best of 3: 6.8 µs per loop
[124, 528, 228]

In [69]:
lans = np.column_stack(la_neurons)
lans


Out[69]:
array([[1, 3],
       [2, 4],
       [3, 5]])
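
- np.column_stack treats each 1-D array as a column, so this is just the transpose of stacking them as rows:

In [ ]:
np.array_equal(np.column_stack(la_neurons), np.array(la_neurons).T)   # expected: True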

In [76]:
la_error_matrix = np.array(error_matrix2[0:2])   # same error terms as the loop version
def bptest2():
    lans = np.column_stack(la_neurons)           # weight vectors as columns
    temp = np.dot(lans, la_error_matrix)         # weighted sum of next-layer errors
    error_matrix4 = np.multiply(temp, np.multiply(a, 1 - a))
    return error_matrix4
%timeit bptest2()
print(bptest2())


The slowest run took 6.85 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 6.51 µs per loop
[124 528 228]
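
- The matrix form should reproduce the looped version exactly; a quick check:

In [ ]:
np.allclose(bptest(), bptest2())   # expected: True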

In [4]:
b = np.array([18., 32., 3., 61., -8., -1000., 2., 100.])
c = normalize(b.reshape(1, -1))   # sklearn's normalize expects a 2-D array of samples
c


Out[4]:
array([[ 0.01786522,  0.03176038,  0.00297754,  0.06054323, -0.0079401 ,
        -0.99251195,  0.00198502,  0.0992512 ]])
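
- normalize defaults to the l2 norm, so the same result (up to shape) is one line of plain NumPy:

In [ ]:
b / np.linalg.norm(b)   # divide by the Euclidean norm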