Learning SciPy for Numerical and Scientific Computing

Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).

NOTE: This IPython notebook should be read alongside the corresponding chapter in the book, where each piece of code is fully explained.

Chapter 3. SciPy for Linear Algebra

Summary

This chapter explores the treatment of matrices (dense or sparse) with the linear algebra modules scipy.linalg and scipy.sparse.linalg, which expand and improve upon the NumPy module of the same name.

Vector creation


In [1]:
import numpy 
vectorA = numpy.array([1,2,3,4,5,6,7])  
vectorA


Out[1]:
array([1, 2, 3, 4, 5, 6, 7])

In [2]:
vectorB = vectorA[::-1].copy()
vectorB


Out[2]:
array([7, 6, 5, 4, 3, 2, 1])

In [3]:
vectorB[0]=123 
vectorB


Out[3]:
array([123,   6,   5,   4,   3,   2,   1])

In [4]:
vectorA


Out[4]:
array([1, 2, 3, 4, 5, 6, 7])

In [5]:
vectorB = vectorA[::-1].copy() 
vectorB


Out[5]:
array([7, 6, 5, 4, 3, 2, 1])
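
The call to .copy() above is what makes vectorB independent of vectorA: a reversed slice without it is only a view that shares memory with vectorA. A minimal sketch of the difference (the variable names here are illustrative, not from the book):

vectorView = vectorA[::-1]         # a view: shares memory with vectorA
vectorView[0] = 999                # this also changes vectorA[-1]
print(vectorA)                     # array([  1,   2,   3,   4,   5,   6, 999])
vectorA[-1] = 7                    # restore the original value
vectorCopy = vectorA[::-1].copy()  # an independent copy
vectorCopy[0] = 999                # vectorA is left untouched
print(vectorA)                     # array([1, 2, 3, 4, 5, 6, 7])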

Vector operations

Addition/subtraction


In [6]:
vectorC = vectorA + vectorB
vectorC


Out[6]:
array([8, 8, 8, 8, 8, 8, 8])

In [7]:
vectorD = vectorB - vectorA 
vectorD


Out[7]:
array([ 6,  4,  2,  0, -2, -4, -6])

Scalar/Dot product


In [8]:
dotProduct1 = numpy.dot(vectorA,vectorB)
dotProduct1


Out[8]:
84

In [9]:
dotProduct2 = (vectorA*vectorB).sum()
dotProduct2


Out[9]:
84

Cross/vector product (for three-dimensional vectors)


In [10]:
vectorA = numpy.array([5, 6, 7])
vectorA


Out[10]:
array([5, 6, 7])

In [11]:
vectorB = numpy.array([7, 6, 5])
vectorB


Out[11]:
array([7, 6, 5])

In [12]:
crossProduct = numpy.cross(vectorA,vectorB) 
crossProduct


Out[12]:
array([-12,  24, -12])

In [13]:
crossProduct = numpy.cross(vectorB,vectorA)
crossProduct


Out[13]:
array([ 12, -24,  12])
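
As the two previous cells suggest, the cross product is anticommutative, and the result is perpendicular to both input vectors. A quick check (not in the book), reusing vectorA and vectorB from above:

numpy.dot(vectorA, numpy.cross(vectorA, vectorB))                              # 0: perpendicular to vectorA
numpy.dot(vectorB, numpy.cross(vectorA, vectorB))                              # 0: perpendicular to vectorB
numpy.allclose(numpy.cross(vectorA, vectorB), -numpy.cross(vectorB, vectorA))  # True: anticommutativity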

Matrix creation


In [14]:
import numpy

In [15]:
A=numpy.matrix("1,2,3;4,5,6")
print(A)


[[1 2 3]
 [4 5 6]]

In [16]:
A=numpy.matrix([[1,2,3],[4,5,6]])
print(A)


[[1 2 3]
 [4 5 6]]

Please refer to the corresponding section of the book for the meaning of the following matrix

$$ \boxed{ \begin{pmatrix} 0 & 10 & 0 & 0 & 0 \\ 0 & 0 & 20 & 0 & 0 \\ 0 & 0 & 0 & 30 & 0 \\ 0 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} }$$


In [17]:
A=numpy.matrix([ [0,10,0,0,0], [0,0,20,0,0], [0,0,0,30,0],
                     [0,0,0,0,40], [0,0,0,0,0] ]) 
A


Out[17]:
matrix([[ 0, 10,  0,  0,  0],
        [ 0,  0, 20,  0,  0],
        [ 0,  0,  0, 30,  0],
        [ 0,  0,  0,  0, 40],
        [ 0,  0,  0,  0,  0]])

In [18]:
A[0,1],A[1,2],A[2,3],A[3,4]


Out[18]:
(10, 20, 30, 40)

In [19]:
rows=numpy.array([0,1,2,3])
cols=numpy.array([1,2,3,4])
vals=numpy.array([10,20,30,40])

In [20]:
import scipy.sparse

In [21]:
A=scipy.sparse.coo_matrix( (vals,(rows,cols)) )
print(A)


  (0, 1)	10
  (1, 2)	20
  (2, 3)	30
  (3, 4)	40

In [22]:
print(A.todense())


[[ 0 10  0  0  0]
 [ 0  0 20  0  0]
 [ 0  0  0 30  0]
 [ 0  0  0  0 40]]

In [23]:
scipy.sparse.isspmatrix_coo(A)


Out[23]:
True
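
Note that the dense form printed above has only four rows: without an explicit shape, coo_matrix infers the dimensions from the largest row and column indices present, giving (4, 5). A short sketch (not in the book; the variable names Afull and Acsr are illustrative) showing how to recover the full 5x5 matrix and convert it to the CSR format usually preferred for arithmetic:

Afull = scipy.sparse.coo_matrix( (vals,(rows,cols)), shape=(5,5) )
print(Afull.todense())             # now a 5x5 matrix, matching the book's example
Acsr = Afull.tocsr()               # compressed sparse row format
scipy.sparse.isspmatrix_csr(Acsr)  # True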

In [24]:
B=numpy.mat(numpy.ones((3,3)))
W=numpy.mat(numpy.zeros((3,3)))
print(numpy.bmat('B,W;W,B'))


[[ 1.  1.  1.  0.  0.  0.]
 [ 1.  1.  1.  0.  0.  0.]
 [ 1.  1.  1.  0.  0.  0.]
 [ 0.  0.  0.  1.  1.  1.]
 [ 0.  0.  0.  1.  1.  1.]
 [ 0.  0.  0.  1.  1.  1.]]

In [25]:
a=numpy.array([[1,2],[3,4]])
a


Out[25]:
array([[1, 2],
       [3, 4]])

In [26]:
a*a


Out[26]:
array([[ 1,  4],
       [ 9, 16]])

Please refer to the corresponding section of the book for the meaning of the following matrix product

$$ \boxed{ \begin{align} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} & \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} \end{align} }$$


In [27]:
A=numpy.mat(a)
A


Out[27]:
matrix([[1, 2],
        [3, 4]])

In [28]:
A*A


Out[28]:
matrix([[ 7, 10],
        [15, 22]])

In [29]:
numpy.dot(A,A)


Out[29]:
matrix([[ 7, 10],
        [15, 22]])

In [30]:
b=numpy.array([[1,2,3],[3,4,5]]) 
numpy.dot(a,b)


Out[30]:
array([[ 7, 10, 13],
       [15, 22, 29]])

In [31]:
numpy.multiply(A,A)


Out[31]:
matrix([[ 1,  4],
        [ 9, 16]])

In [32]:
a=numpy.arange(5); A=numpy.mat(a)
a.shape, A.shape, a.transpose().shape, A.transpose().shape


Out[32]:
((5,), (1, 5), (5,), (5, 1))
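
The output above shows that transposing a one-dimensional array is a no-op, while the matrix wrapper turns a into a 1x5 row whose transpose is a 5x1 column. With plain arrays the same effect can be obtained by adding an axis (a small sketch, not from the book):

a[:, numpy.newaxis].shape   # (5, 1): an explicit column vector
a.reshape(1, -1).shape      # (1, 5): an explicit row vector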

In [33]:
import scipy.linalg

In [34]:
A=scipy.linalg.hadamard(8)
zero_sum_rows = (numpy.sum(A,0)==0)
B=A[zero_sum_rows,:]
print(B[0:3,:])


[[ 1 -1  1 -1  1 -1  1 -1]
 [ 1  1 -1 -1  1  1 -1 -1]
 [ 1 -1 -1  1  1 -1 -1  1]]

Matrix methods


In [35]:
import numpy
A = numpy.matrix("1+1j, 2-1j; 3-1j, 4+1j")
print (A)


[[ 1.+1.j  2.-1.j]
 [ 3.-1.j  4.+1.j]]

In [36]:
print (A.T)


[[ 1.+1.j  3.-1.j]
 [ 2.-1.j  4.+1.j]]

In [37]:
print (A.H)


[[ 1.-1.j  3.+1.j]
 [ 2.+1.j  4.-1.j]]
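
The .H attribute is the conjugate (Hermitian) transpose. Plain arrays lack .H, but the same result follows from combining conjugation and transposition; a quick check (not in the book):

numpy.allclose(A.H, numpy.conjugate(A).T)   # True
numpy.allclose(A.H, A.conj().T)             # True, an equivalent spelling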

Operations between matrices

Please refer to the corresponding section of the book for the meaning of the following basis vectors

$$ \boxed{ \begin{align} v_{1} & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1,0,1 \end{pmatrix} \\ v_{2} & = \begin{pmatrix} 0,1,0 \end{pmatrix} \\ v_{3} & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1,0,-1 \end{pmatrix} \end{align} }$$


In [38]:
mu=1/numpy.sqrt(2)
A=numpy.matrix([[mu,0,mu],[0,1,0],[mu,0,-mu]])
B=scipy.linalg.kron(A,A)

In [39]:
print (B[:,0:-1:2])


[[ 0.5  0.5  0.   0.5]
 [ 0.   0.   0.   0. ]
 [ 0.5 -0.5  0.   0.5]
 [ 0.   0.   0.   0. ]
 [ 0.   0.   1.   0. ]
 [ 0.  -0.   0.   0. ]
 [ 0.5  0.5  0.  -0.5]
 [ 0.   0.   0.  -0. ]
 [ 0.5 -0.5  0.  -0.5]]
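
Since the matrix A built from these basis vectors is orthogonal, its Kronecker product with itself is a 9x9 orthogonal matrix. A quick verification (not in the book):

B.shape                                           # (9, 9)
numpy.allclose(numpy.dot(B.T, B), numpy.eye(9))   # True: the columns of B are orthonormal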

Functions on matrices


In [40]:
A=numpy.matrix("1,1j;21,3")
print (A**2);


[[  1.+21.j   0. +4.j]
 [ 84. +0.j   9.+21.j]]

In [41]:
print (numpy.asarray(A)**2)


[[   1.+0.j   -1.+0.j]
 [ 441.+0.j    9.+0.j]]

In [42]:
a=numpy.arange(0,2*numpy.pi,1.6)
A = scipy.linalg.toeplitz(a)
print (A)


[[ 0.   1.6  3.2  4.8]
 [ 1.6  0.   1.6  3.2]
 [ 3.2  1.6  0.   1.6]
 [ 4.8  3.2  1.6  0. ]]

In [43]:
print (numpy.exp(A))


[[   1.            4.95303242   24.5325302   121.51041752]
 [   4.95303242    1.            4.95303242   24.5325302 ]
 [  24.5325302     4.95303242    1.            4.95303242]
 [ 121.51041752   24.5325302     4.95303242    1.        ]]

In [44]:
print (scipy.linalg.expm(A))


[[ 1271.76972856   916.49316549   916.63015271  1271.70874469]
 [  916.49316549   660.86560972   660.5306514    916.63015271]
 [  916.63015271   660.5306514    660.86560972   916.49316549]
 [ 1271.70874469   916.63015271   916.49316549  1271.76972856]]
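
Unlike numpy.exp, which exponentiates each entry, scipy.linalg.expm computes the true matrix exponential. Because this Toeplitz matrix is symmetric, expm can be cross-checked against its eigendecomposition (a sketch, not in the book):

w, V = scipy.linalg.eigh(A)                                   # eigenvalues and orthonormal eigenvectors
E = numpy.dot(V, numpy.dot(numpy.diag(numpy.exp(w)), V.T))    # V exp(D) V^T
numpy.allclose(E, scipy.linalg.expm(A))                       # True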

In [45]:
x=10**100; y=9; v=numpy.matrix([x,y])
scipy.linalg.norm(v,2)    # the right method


Out[45]:
1e+100

As mentioned in the book, the following command generates an error from the Python computational engine.


In [46]:
numpy.sqrt(x*x+y*y)       # the wrong method


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-46-eb52376620ae> in <module>()
----> 1 numpy.sqrt(x*x+y*y)       # the wrong method

AttributeError: sqrt
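
The failure happens because x*x = 10**200 does not fit in a 64-bit integer, so NumPy falls back to an object array on which the sqrt ufunc is undefined. Besides scipy.linalg.norm, the standard library offers an overflow-safe alternative (a sketch, not in the book):

import math
math.hypot(x, y)   # 1e+100, computed without explicitly forming x*x + y*y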

Eigenvalue problems and matrix decompositions

This section refers the reader to the SciPy documentation on eigenvalue problems and matrix decompositions:

http://docs.scipy.org/doc/scipy/reference/linalg.html

http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvals.html

Image compression via the singular value decomposition

Please refer to the corresponding section of the book for the meaning of the following two equations

$$ \boxed{ \begin{align} A = U \cdot S \cdot V^{*}, & \quad U = \begin{pmatrix} u_{1} \\ \vdots \\ u_{n} \end{pmatrix}, & S = \begin{pmatrix} s_{1} & & \\ & \ddots & \\ & & s_{n} \end{pmatrix}, & \quad V^{*} = \begin{pmatrix} v_{1} \quad \cdots \quad v_{n} \end{pmatrix} \end{align} }$$

$$ \begin{equation} \boxed{ \sum_{j=1}^{k} s_{j}(u_{j} \cdot v_{j}) } \end{equation} $$


In [47]:
%matplotlib inline

In [48]:
import numpy
import scipy.misc
from scipy.linalg import svd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12.0, 8.0)
img=scipy.misc.lena()
U,s,Vh=svd(img)      # Singular Value Decomposition
A = numpy.dot( U[:,0:32],  # use only 32 singular values
          numpy.dot( numpy.diag(s[0:32]),
                     Vh[0:32,:]))
plt.subplot(121,aspect='equal'); 
plt.gray()
plt.imshow(img)
plt.subplot(122,aspect='equal'); 
plt.imshow(A)
plt.show()
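
A natural follow-up is to quantify how good the rank-32 approximation is; one common measure is the relative error in the Frobenius norm, together with the fraction of numbers that need to be stored (a sketch, not in the book, reusing img, A, U, s and Vh from the previous cell):

relative_error = scipy.linalg.norm(img - A, 'fro') / scipy.linalg.norm(img, 'fro')
storage_fraction = 32.0*(U.shape[0] + Vh.shape[1] + 1) / img.size   # 32 columns of U, 32 rows of Vh, 32 singular values
print(relative_error, storage_fraction)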


Solvers

Please refer to the corresponding section of the book for the meaning of the following equation

$$ \boxed{ \begin{align} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} & \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 \\ 2\\ 3 \end{pmatrix} \end{align} }$$


In [49]:
A=numpy.mat(numpy.eye(3,k=1))
print(A)


[[ 0.  1.  0.]
 [ 0.  0.  1.]
 [ 0.  0.  0.]]

In [50]:
b=numpy.mat(numpy.arange(3) + 1).T
print(b)


[[1]
 [2]
 [3]]

In [51]:
xinfo=scipy.linalg.lstsq(A,b)
print (xinfo[0].T)      # output the solution


[[ 0.  1.  2.]]
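
Since the coefficient matrix is singular (its last row is all zeros), no exact solution exists and lstsq returns the least-squares one. A quick check of the residual (not in the book; xsol is an illustrative name):

xsol = xinfo[0]
print((A*xsol).T)                     # [[ 1.  2.  0.]]: the third equation cannot be satisfied
print(scipy.linalg.norm(A*xsol - b))  # 3.0: the norm of the residual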

This is the end of the working code shown and thoroughly discussed in Chapter 3 of the book Learning SciPy for Numerical and Scientific Computing.

Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).