Crash Course In Linear Algebra For Data Scientists

Preamble

This notebook was made to quickly introduce the theoretical groundwork of Linear Algebra for Data Scientists. To that end, we will primarily let the computer do the hard work of the tedious calculations for us. Specifically, this notebook uses numpy as the backend for the computations.

This notebook will just ensure that your environment is loaded and all basic operations are supported.

Contents

  1. Basic Linear Algebra Operations
  2. Vector Spaces
  3. Inverse Matrix Theorem
  4. Regression and PCA
  5. $L_1$ and $L_2$ norms from Linear Algebra
  6. Graph Representations

Aim of this notebook

This notebook simply aims to make sure you have the basic environment set up to work with the rest of the notebooks.


In [6]:
import numpy as np

Creating and shaping numpy arrays.


In [7]:
# Create numpy arrays like this.

print("A 1 row by 4 column numpy matrix\n{}".format(np.array(range(4))))


A 1-D numpy array of 4 elements
[0 1 2 3]
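
Strictly speaking, np.array(range(4)) has shape (4,), which numpy treats as a 1-D array rather than a matrix. To get a true 1 row by 4 column matrix, reshape it explicitly; a quick sketch:


In [ ]:
# A genuine 1 row by 4 column matrix has shape (1, 4).
row = np.array(range(4)).reshape(1, 4)
print(row)        # [[0 1 2 3]]  <- note the double brackets
print(row.shape)  # (1, 4)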

In [8]:
# Create a 2 by 2 matrix like this.

A = np.array(range(4)).reshape(2,2) # Reshape into an array of 2 rows and 2 columns.
print("A 2 row by 2 column numpy matrix\n{}".format(A))


A 2 row by 2 column numpy matrix
[[0 1]
 [2 3]]

In [9]:
# Create a 3 by 2 matrix.

A = np.array(range(6)).reshape(3,2)
print("A 3 row by 2 column numpy matrix\n{}".format(A))

# Note the order in which reshape fills values: it fills row by row (numpy's default row-major, 'C', order).


A 3 row by 2 column numpy matrix
[[0 1]
 [2 3]
 [4 5]]
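
reshape also accepts an order argument if you want column-major (Fortran-style) filling instead of the row-major default; a quick illustration:


In [ ]:
# order='C' (the default) fills row by row; order='F' fills column by column.
print(np.array(range(6)).reshape(3, 2, order='F'))
# Expected:
# [[0 3]
#  [1 4]
#  [2 5]]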

In [10]:
# Be careful to match the number of elements to the shape you give to reshape!
np.array(range(6)).reshape(2,4) # Size of the inputs must equal the product of the dimensions.


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-f131031bdce3> in <module>()
      1 # Be careful to match the number of elements to the shape you give to reshape!
----> 2 np.array(range(6)).reshape(2,4) # Size of the inputs must equal the product of the dimensions.

ValueError: cannot reshape array of size 6 into shape (2,4)
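
As an aside, you can pass -1 for one dimension and numpy will infer it from the array's size; a small sketch:


In [ ]:
# -1 tells reshape to infer that dimension: 6 elements / 2 rows = 3 columns.
print(np.array(range(6)).reshape(2, -1))
# Expected:
# [[0 1 2]
#  [3 4 5]]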

For our purposes, a matrix will be a numpy array of shape (m, n), where m and n are positive integers. The matrix may consist of fractions, floats, or complex numbers.

A vector will be a matrix of shape (m, 1).

Algebraically speaking, matrices over the integers form a module, which is a generalization of a vector space. We will not speak of such things again. If a matrix of integers is provided, presume we meant floats, fractions, or complex numbers, as deduced from context.
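
In numpy you can inspect an array's dtype and cast integers to floats explicitly; a minimal sketch (the exact integer dtype is platform-dependent):


In [ ]:
A = np.array(range(4)).reshape(2, 2)
print(A.dtype)          # an integer dtype, e.g. int64 (platform-dependent)
print(A.astype(float))  # explicit cast to floats
# Expected:
# [[0. 1.]
#  [2. 3.]]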


In [11]:
# Create a (3,1) vector
np.array(range(3)).reshape(3,1)


Out[11]:
array([[0],
       [1],
       [2]])

In [12]:
# You can use this to create larger tensors, but we will not discuss tensors further in this tutorial.

print("A tensor!\n{}".format(np.array(range(24)).reshape(2,3,4)))

# Golly gee whiz!  Looks like two 3 by 4 matrices!


A tensor!
[[[ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]]

 [[12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]]]

Numpy supports broadcasting of operations.


In [13]:
A = np.array([2,3,4,5]).reshape(2,2)
2 * A


Out[13]:
array([[ 4,  6],
       [ 8, 10]])

This is analogous to scalar multiplication with matrices.

2 * [[2,3],[4,5]] = [[4,6],[8,10]]


In [14]:
# Numpy dynamically casts to floats, etc.
0.5 * A


Out[14]:
array([[ 1. ,  1.5],
       [ 2. ,  2.5]])
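
Broadcasting goes beyond scalars: numpy stretches compatible shapes to match each other. For instance, adding a (2, 1) column to a (2, 2) matrix repeats the column across both columns (a demonstration only; see numpy's broadcasting rules for the general case):


In [ ]:
A = np.array([2,3,4,5]).reshape(2,2)
col = np.array([10, 20]).reshape(2, 1)
print(A + col)  # the column [10, 20]^T is broadcast across both columns of A
# Expected:
# [[12 13]
#  [24 25]]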

In [15]:
# Be careful to multiply matrices, vectors, etc. with the np.matmul operation; the * operator is element-wise.
A = np.array(range(4)).reshape(2,2)
b = np.array([2,5]).reshape(2,1)

element_wise_multiplication = A * b
matrix_multiplication = np.matmul(A, b)

print("Element-wise multiplication:\nA .* b = \n{}\n".format(element_wise_multiplication))
print("Matrix-Multiplication:\nA * b = \n{}".format(matrix_multiplication))


Element-wise multiplication:
A .* b = 
[[ 0  2]
 [10 15]]

Matrix-Multiplication:
A * b = 
[[ 5]
 [19]]
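
Since Python 3.5, the @ operator is shorthand for np.matmul, and you will often see it used instead:


In [ ]:
# A @ b is equivalent to np.matmul(A, b) for 2-D arrays.
print(A @ b)
# Expected:
# [[ 5]
#  [19]]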

Transpositions of Matrices.


In [16]:
# Take a matrix and use its .transpose method.
print(A.transpose())


[[0 2]
 [1 3]]
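
One useful identity: transposition reverses the order of matrix multiplication, $(AB)^T = B^T A^T$. A quick numerical check:


In [ ]:
B = np.array([4, 7, 1, 3]).reshape(2, 2)  # an arbitrary second matrix
lhs = np.matmul(A, B).transpose()
rhs = np.matmul(B.transpose(), A.transpose())
print(np.array_equal(lhs, rhs))  # True: (AB)^T == B^T A^T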

We will frequently write vectors as $[1,2]^T$ so we can write them in line.


In [18]:
a = np.array([1,2]).reshape(2,1) # Or declare them like this in numpy.
print(a)


[[1]
 [2]]

Basic Axioms of Linear Algebra

Taken from Wikipedia

Vector spaces

The main structures of linear algebra are vector spaces. A vector space over a field F (often the field of the real numbers) is a set V equipped with two binary operations satisfying the following axioms. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The operations of addition and multiplication in a vector space must satisfy the following axioms. In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.

  • Associativity of addition: u + (v + w) = (u + v) + w
  • Commutativity of addition: u + v = v + u
  • Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
  • Inverse elements of addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
  • Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
  • Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
  • Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
  • Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.

The first four axioms are those of V being an abelian group under vector addition. Elements of a vector space may be of various kinds; for example, they can be sequences, functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.

Let's see how these look in numpy.


In [ ]:
# Fix vectors u, v, w
# Fix scalars a, b

u = np.array([2,3,4]).reshape(3,1)
v = np.array([-2,3,4.5]).reshape(3,1)
w = np.array([-3.2, 5, 12]).reshape(3,1)
zero = np.array([0,0,0]).reshape(3,1)

a = 3.2
b = 4

print("Check associativity:\n {} \n== \n{}\n True!".format(u + (v + w),(u + v) + w))
print("Check commutativity:\n {}\n ==\n {}".format(u + v, v + u))
print("Check addition has an identity element:\n {}\n+\n{}\n==\n{}".format(v, -v, zero))
print("You're getting the picture...")

Restating the Axioms in plain English...

We are dealing with a situation analogous to one where we can stretch things along a number of dimensions.

Basically, the axioms state that, given vectors from a vector space, we are allowed to...

  • Stretch (or shrink) a vector by some amount across all of its dimensions.
  • Cancel a vector by adding a vector of the same magnitude pointing in the opposite direction.

And they guarantee that...

  • The basic rules of algebra for stretching and shrinking hold.
  • The basic algebraic rules for adding and subtracting components hold.
  • There is a unique stretch (the scalar 1) which does nothing when we stretch a vector by it.
  • There is a unique vector (the zero vector) which does nothing when added to any other vector.

Some interesting examples include:

  1. The space of polynomial functions of degree less than n (made concrete in the sketch after this list).
  2. The space of matrices of shape (n, m) over the Reals is a vector space.
  3. The real numbers are a vector space (as are the Complex numbers!)
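
To make example 1 concrete: a polynomial of degree less than n is determined by its n coefficients, and polynomial addition and scalar multiplication are then exactly numpy vector operations. A minimal sketch, assuming we store $a_0 + a_1 x + a_2 x^2$ as the coefficient vector $[a_0, a_1, a_2]^T$:


In [ ]:
# p(x) = 1 + 2x + 3x^2 and q(x) = 4 - x, stored as coefficient vectors.
p = np.array([1, 2, 3]).reshape(3, 1)
q = np.array([4, -1, 0]).reshape(3, 1)

print(p + q)    # coefficients of (p + q)(x) = 5 + x + 3x^2
print(2.5 * p)  # coefficients of (2.5 p)(x) = 2.5 + 5x + 7.5x^2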

And that's all for the intro, folks!


In [ ]: