Pure Python is pretty slow at matrix math (the interpreter pays overhead on every single element), so NumPy was invented to fix that. Matrix and vector math is the backbone of neural nets and much else, so learning NumPy is important.
You can learn the actual linear algebra from these links; this is a quick guide on how to use NumPy to do the maths for you.
The convention is to import numpy as np:
In [50]:
import numpy as np
# the below line outputs all variables, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
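To see the speed difference the intro mentions, here is a quick, unscientific timing sketch (the exact ratio depends on your machine and array size):

```python
import timeit
import numpy as np

xs = list(range(100_000))
arr = np.arange(100_000)

# doubling every element with a pure-Python list comprehension
py_time = timeit.timeit(lambda: [x * 2 for x in xs], number=100)
# the same operation as a single vectorized numpy expression
np_time = timeit.timeit(lambda: arr * 2, number=100)
print(py_time / np_time)  # typically a 10-100x speedup
```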
Mostly we deal with vectors and matrices.
In [7]:
v = np.array([1,2,3])
v.shape
Out[7]:
(3,)
Accessing elements works much like it does for Python lists:
In [14]:
v[1:]
Out[14]:
array([2, 3])
In [8]:
m = np.array([[1,2,3], [4,5,6], [7,8,9]])
m.shape
Out[8]:
(3, 3)
In [12]:
m[2][2]
Out[12]:
9
In [11]:
# the comma form is the idiomatic numpy way, and it's required for slicing
m[2, 2]
Out[11]:
9
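The comma syntax also generalizes to slicing out whole rows and columns, which plain nested indexing can't do. A quick sketch:

```python
import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(m[1])       # second row: [4 5 6]
print(m[:, 1])    # second column: [2 5 8]
print(m[:2, :2])  # top-left 2x2 block
```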
Now arrays can have many dimensions, which is where the math gets hard to visualize.
In [16]:
t = np.array([[[[1],[2]],[[3],[4]],[[5],[6]]],[[[7],[8]],\
[[9],[10]],[[11],[12]]],[[[13],[14]],[[15],[16]],[[17],[18]]]])
t.shape
Out[16]:
(3, 3, 2, 1)
In [17]:
t[2][1][1][0]
Out[17]:
16
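You rarely type nested lists like that by hand; arange plus reshape builds the same shape directly. A sketch of the equivalent construction:

```python
import numpy as np

# the values 1..18 reshaped into the same (3, 3, 2, 1) shape as above
t = np.arange(1, 19).reshape(3, 3, 2, 1)
print(t.shape)        # (3, 3, 2, 1)
print(t[2, 1, 1, 0])  # 16 — the same element as t[2][1][1][0]
```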
In [20]:
print(v)
print(v*2)
print(v+10)
[1 2 3]
[2 4 6]
[11 12 13]
In [22]:
v*3 - v
Out[22]:
array([2, 4, 6])
Multiplication with * is elementwise. Matrix multiplication, or the dot product, means taking the rows of the first matrix and the columns of the second, multiplying the ith elements of each row/column pair together, and adding them up.
In [32]:
a = np.array([[1,2]])
b = np.array([[1,2,3], [1,2,3]])
In [33]:
a.dot(b)
Out[33]:
array([[3, 6, 9]])
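To check that result against the definition above, here is the same product written as explicit loops (a sketch for understanding, not how you'd do it in practice):

```python
import numpy as np

a = np.array([[1, 2]])
b = np.array([[1, 2, 3], [1, 2, 3]])

# row i of a against column j of b, multiplied elementwise and summed —
# exactly the dot product definition
out = np.zeros((a.shape[0], b.shape[1]))
for i in range(a.shape[0]):
    for j in range(b.shape[1]):
        out[i, j] = sum(a[i, k] * b[k, j] for k in range(a.shape[1]))
print(out)  # [[3. 6. 9.]], matching a.dot(b)
```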
In [35]:
m = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
m
Out[35]:
array([[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12]])
In [36]:
m.T
Out[36]:
array([[ 1,  5,  9],
       [ 2,  6, 10],
       [ 3,  7, 11],
       [ 4,  8, 12]])
In [42]:
inputs = np.array([[-0.27, 0.45, 0.64, 0.31]])
inputs.shape
inputs
Out[42]:
(1, 4)
Out[42]:
array([[-0.27,  0.45,  0.64,  0.31]])
In [58]:
weights = np.array([[0.02, 0.001, -0.03, 0.036], \
[0.04, -0.003, 0.025, 0.009], [0.012, -0.045, 0.28, -0.067]])
weights.shape
weights
weights.T.shape
Out[58]:
(3, 4)
Out[58]:
array([[ 0.02 ,  0.001, -0.03 ,  0.036],
       [ 0.04 , -0.003,  0.025,  0.009],
       [ 0.012, -0.045,  0.28 , -0.067]])
Out[58]:
(4, 3)
In [44]:
# this raises a ValueError — the shapes (1, 4) and (3, 4) don't align
np.matmul(inputs, weights)
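That cell fails because matmul needs the inner dimensions to match: inputs is (1, 4) but weights is (3, 4), so the 4 columns of the first have no 4 rows in the second to pair with. A small sketch of catching the error:

```python
import numpy as np

inputs = np.array([[-0.27, 0.45, 0.64, 0.31]])
weights = np.array([[0.02, 0.001, -0.03, 0.036],
                    [0.04, -0.003, 0.025, 0.009],
                    [0.012, -0.045, 0.28, -0.067]])
try:
    np.matmul(inputs, weights)  # (1, 4) x (3, 4): inner dims 4 and 3 disagree
    aligned = True
except ValueError as err:
    aligned = False
    print("shapes don't align:", err)
```

Transposing one operand makes the inner dimensions match, which is exactly what the next cell does.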
In [49]:
np.matmul(inputs, weights.T)
np.matmul(weights, inputs.T)
Out[49]:
array([[-0.01299,  0.00664,  0.13494]])
Out[49]:
array([[-0.01299],
       [ 0.00664],
       [ 0.13494]])
In [52]:
inputs.min()
Out[52]:
-0.27
In [54]:
inputs.shape[1]
Out[54]:
4
In [56]:
inputs.mean()
Out[56]:
0.2825
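min and mean (along with sum, max, and friends) also take an axis argument to aggregate down columns or across rows instead of over the whole array. A quick sketch:

```python
import numpy as np

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
print(m.mean())        # 6.5 — mean of every element
print(m.mean(axis=0))  # down the columns: [5. 6. 7. 8.]
print(m.mean(axis=1))  # across the rows:  [ 2.5  6.5 10.5]
```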