Exploratory notebook on linear algebra theory and applications.
Examples from Computational Linear Algebra for Coders
Linear Algebra: "concerning vectors, their properties, mappings and operations"
A tensor is the generalization of an array to N axes (dimensions): a 0-D tensor is a scalar, a 1-D tensor is a vector, and a 2-D tensor is a matrix.
In [1]:
import numpy as np
In [2]:
scalar = np.random.randint(10)
#print(scalar.shape) # scalar is an int object, not a numpy array
scalar
Out[2]:
In [3]:
vector = np.random.randint(10, size=2)
print(vector.shape)
vector
Out[3]:
In [4]:
matrix = np.random.randint(10, size=(2, 2))
print(matrix.shape)
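Extending the tensor definition above one step further, a 3-D tensor can be pictured as a stack of matrices. A quick sketch reusing the numpy import above (the shape is chosen arbitrarily for illustration):
# 3-D tensor: a stack of two 2x2 matrices, shape (2, 2, 2)
tensor3d = np.random.randint(10, size=(2, 2, 2))
print(tensor3d.shape)
tensor3d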
In [5]:
def f(x):
    if x <= 1/2:
        return x*2
    if x > 1/2:
        return x*2 - 1
In [6]:
x = 1/10
for i in range(80):
    print(x)
    x = f(x)
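The printed values quickly drift away from the exact cycle 0.1, 0.2, 0.4, 0.8, 0.6, 0.2, ... because 1/10 has no exact binary floating-point representation and the rounding error roughly doubles at every iteration. A minimal sketch of the same iteration with exact rational arithmetic (using Python's fractions module) stays on the cycle forever:
from fractions import Fraction

x = Fraction(1, 10)
for i in range(10):
    print(x)  # cycles exactly: 1/10, 1/5, 2/5, 4/5, 3/5, 1/5, ...
    x = x * 2 if x <= Fraction(1, 2) else x * 2 - 1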
Example from fastai
In [7]:
X = np.array([0.85, 0.1, 0.05, 0.]).reshape(1, 4)
In [8]:
Y = np.array([[0.9, 0.07, 0.02, 0.01],
              [0, 0.93, 0.05, 0.02],
              [0, 0, 0.85, 0.15],
              [0, 0, 0, 1.]])
Using the multiplication sign on NumPy arrays is not equivalent to matrix multiplication: it performs element-wise multiplication, relying on broadcasting to match the shapes. Proper matrix multiplication is obtained with np.dot() (or the @ operator, available since Python 3.5).
In [9]:
np.dot(X, Y)
Out[9]:
In [10]:
X @ Y
Out[10]:
In [11]:
X * Y
Out[11]:
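To make the relationship between the two operations explicit, matrix multiplication can be recovered from broadcasting by combining the element-wise product with a sum reduction; a small sketch reusing X and Y from above:
# (X @ Y)[j] = sum_i X[i] * Y[i, j]: broadcast X down the rows of Y, then sum each column
manual = (X.reshape(4, 1) * Y).sum(axis=0)
print(manual)
print(np.allclose(manual, X @ Y))  # True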
The norm is the size of a vector. More generally, the p-norm (or $L^p$ norm) of $x$ is defined as
$$ \left\|\mathbf{x}\right\|_{p} = \bigg(\sum_{i=1}^{n}\left|x_{i}\right|^{p}\bigg)^{1/p} $$
1-norm = Manhattan norm, simplifies to the sum of absolute values
2-norm = Euclidean norm, the length or magnitude (distance from the origin to the point identified by $x$)
In [12]:
a = np.array([2,2])
In [13]:
# vector 1-norm
np.linalg.norm(a, ord=1)
Out[13]:
In [14]:
# vector 2-norm
np.linalg.norm(a, ord=2)
Out[14]:
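As a cross-check against the definition above, both norms can also be computed directly from the formula (a minimal sketch, reusing the vector a):
p = 1
print(np.sum(np.abs(a)**p)**(1/p))  # 4.0, matches ord=1
p = 2
print(np.sum(np.abs(a)**p)**(1/p))  # 2.828..., matches ord=2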
In [15]:
m = np.array([[5,3],[0,4]])
In [16]:
# matrix 1-norm
np.linalg.norm(m, ord=1)
Out[16]:
In [17]:
# matrix 2-norm
np.linalg.norm(m, ord=2)
Out[17]:
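Note that for a matrix np.linalg.norm interprets ord differently than for a vector: ord=1 is the maximum absolute column sum and ord=2 is the largest singular value (the spectral norm). A quick check for the matrix m above:
print(np.abs(m).sum(axis=0).max())            # 7, maximum absolute column sum
print(np.linalg.svd(m, compute_uv=False)[0])  # ~6.32, largest singular value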
A normalized vector (also known as a unit vector) has unit length
$$ \left\|\mathbf{x}\right\| = 1 $$
To normalize a vector, simply divide it by its norm.
In [18]:
a_len = np.linalg.norm(a, ord=2)
a_len
Out[18]:
In [19]:
a / a_len
Out[19]:
In [20]:
np.linalg.norm(a / a_len, ord=2)
Out[20]:
In [21]:
a = np.array([1,2,3,4])
b = np.array([1,5])
In [22]:
a - 3
Out[22]:
In [23]:
a.reshape((4,1)) - 3
Out[23]:
In [24]:
a.reshape((2,2)) - 3
Out[24]:
In [25]:
# a - b  # cannot broadcast: shapes (4,) and (2,) are incompatible
a.reshape((4, 1)) - b.reshape((1, 2))  # both a and b are broadcast to 4x2
Out[25]:
In [26]:
a.reshape((2, 2)) - b.reshape((1, 2))
Out[26]:
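These results follow NumPy's broadcasting rule: shapes are aligned from the trailing axis backwards, and two sizes are compatible when they are equal or one of them is 1. The resulting shape can be checked without materializing the arrays (a small sketch reusing a and b):
print(np.broadcast(a.reshape(4, 1), b.reshape(1, 2)).shape)  # (4, 2)
print(np.broadcast(a.reshape(2, 2), b.reshape(1, 2)).shape)  # (2, 2)
# a (shape (4,)) and b (shape (2,)) cannot broadcast: trailing sizes 4 and 2 differ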
In [27]:
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
#%matplotlib notebook
%matplotlib inline
In [28]:
def draw_vector(draw, origin, vector, text=None, fill='black'):
    # draw a line from the origin to the vector tip and label it
    draw.line([tuple(origin), tuple(vector)], fill=fill)
    draw.text(tuple(vector), text, fill='black')
In [29]:
# setup PIL image and draw object
img_size = 300
img = Image.new('RGB', (img_size, img_size), (240, 240, 240))
draw = ImageDraw.Draw(img)
# draw elements on canvas
origin = np.array([0, 0])
p = np.array([150, 130])
q = np.array([45, 120])
# draw p
draw.point(p, fill='red')
draw_vector(draw, origin, p, 'p')
# draw q
draw.point(q, fill='red')
draw_vector(draw, origin, q, 'q')
# draw diff
diff_v = p - q
draw_vector(draw, origin, diff_v, 'p - q')
# draw sum
sum_v = p + q
draw_vector(draw, origin, sum_v, 'p + q')
# draw mean
draw_vector(draw, origin, np.mean([p, q], axis=0), 'mean \n(axis=0)')
draw_vector(draw, origin, np.mean([p, q], axis=1), 'mean \n(axis=1)')
In [30]:
# Plot
fig, ax = plt.subplots(dpi=300, figsize=(2, 2))
canvas = ax.imshow(img)
plt.axis('off')
plt.show()
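As an alternative to drawing on a PIL canvas, the same vectors could be plotted directly with matplotlib's quiver; a rough sketch (not the approach used above, axis limits picked by hand), reusing p and q:
fig, ax = plt.subplots(figsize=(4, 4))
for v, label in [(p, 'p'), (q, 'q'), (p - q, 'p - q'), (p + q, 'p + q')]:
    # draw each vector as an arrow from the origin, in data coordinates
    ax.quiver(0, 0, v[0], v[1], angles='xy', scale_units='xy', scale=1)
    ax.annotate(label, xy=(v[0], v[1]))
ax.set_xlim(-50, 250)
ax.set_ylim(-50, 300)
plt.show()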