Forked from Lecture 2 of Scientific Python Lectures by J.R. Johansson
In [1]:
%matplotlib inline
import traceback
import matplotlib.pyplot as plt
import numpy as np
In [2]:
%%time
total = 0
for i in range(100000):
total += i
In [3]:
%%time
total = np.arange(100000).sum()
In [4]:
%%time
l = list(range(0, 1000000))
ltimes5 = [x * 5 for x in l]
In [5]:
%%time
l = np.arange(1000000)
ltimes5 = l * 5
The numpy
package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
To use numpy
you need to import the module, using for example:
In [6]:
import numpy as np
In the numpy
package the terminology used for vectors, matrices and higher-dimensional data sets is array.
There are a number of ways to initialize new numpy arrays, for example from
arange
, linspace
, etc. For example, to create new vector and matrix arrays from Python lists we can use the numpy.array
function.
In [7]:
# a vector: the argument to the array function is a Python list
v = np.array([1,2,3,4])
v
Out[7]:
In [8]:
# a matrix: the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
M
Out[8]:
The v
and M
objects are both of the type ndarray
that the numpy
module provides.
In [9]:
type(v), type(M)
Out[9]:
The difference between the v
and M
arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape
property.
In [10]:
v.shape
Out[10]:
In [11]:
M.shape
Out[11]:
The number of elements in the array is available through the ndarray.size
property:
In [12]:
M.size
Out[12]:
Equivalently, we could use the function numpy.shape
and numpy.size
In [13]:
np.shape(M)
Out[13]:
In [14]:
np.size(M)
Out[14]:
So far the numpy.ndarray
looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
There are several reasons: Python lists are very general and dynamically typed, while numpy
arrays are statically typed and homogeneous, so they can be stored compactly and operated on efficiently in a compiled language (C and Fortran are used).
Using the dtype
(data type) property of an ndarray
, we can see what type the data of an array has:
In [15]:
M.dtype
Out[15]:
We get an error if we try to assign a value of the wrong type to an element in a numpy array:
In [16]:
try:
M[0,0] = "hello"
except ValueError as e:
print(traceback.format_exc())
If we want, we can explicitly define the type of the array data when we create it, using the dtype
keyword argument:
In [17]:
M = np.array([[1, 2], [3, 4]], dtype=complex)
M
Out[17]:
Common data types that can be used with dtype
are: int
, float
, complex
, bool
, object
, etc.
We can also explicitly define the bit size of the data types, for example: int64
, int16
, float128
, complex128
.
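A quick sketch of what the explicit bit sizes mean in practice (these dtype names are standard NumPy aliases; float128 in particular is only available on some platforms):

```python
import numpy as np

# explicitly sized data types
a = np.array([1, 2, 3], dtype=np.int16)        # 2 bytes per element
b = np.array([1, 2, 3], dtype=np.int64)        # 8 bytes per element
c = np.array([1.0, 2.0], dtype=np.complex128)  # 16 bytes per element

print(a.itemsize, b.itemsize, c.itemsize)  # 2 8 16
```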
For larger arrays it is impractical to initialize the data manually, using explicit Python lists. Instead we can use one of the many functions in numpy
that generate arrays of different forms. Some of the more common are:
In [18]:
# create a range
x = np.arange(0, 10, 1) # arguments: start, stop, step
x
Out[18]:
In [19]:
x = np.arange(-1, 1, 0.1)
x
Out[19]:
In [20]:
# using linspace, both end points ARE included
np.linspace(0, 10, 25)
Out[20]:
In [21]:
np.logspace(0, 10, 10, base=np.e)
Out[21]:
In [22]:
x, y = np.mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
In [23]:
x
Out[23]:
In [24]:
y
Out[24]:
In [25]:
# uniform random numbers in [0,1]
np.random.rand(5,5)
Out[25]:
In [26]:
# standard normal distributed random numbers
np.random.randn(5,5)
Out[26]:
In [27]:
# a diagonal matrix
np.diag([1,2,3])
Out[27]:
In [28]:
# diagonal with offset from the main diagonal
np.diag([1,2,3], k=1)
Out[28]:
In [29]:
np.zeros((3,3))
Out[29]:
In [30]:
np.ones((3,3))
Out[30]:
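Two more common generating functions in the same family, shown here as a brief sketch (np.eye and np.full are standard NumPy functions):

```python
import numpy as np

# identity matrix
I = np.eye(3)

# array filled with a constant value
sevens = np.full((2, 3), 7)

print(I.sum())       # 3.0
print(sevens.shape)  # (2, 3)
```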
A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the numpy.genfromtxt
function. For example,
In [31]:
!head ../data/stockholm_td_adj.dat
In [32]:
data = np.genfromtxt('../data/stockholm_td_adj.dat')
In [33]:
data.shape
Out[33]:
In [34]:
fig, ax = plt.subplots(figsize=(14,4))
ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])
ax.axis('tight')
ax.set_title('temperatures in Stockholm')
ax.set_xlabel('year')
ax.set_ylabel('temperature (C)');
Using numpy.savetxt
we can store a Numpy array to a file in CSV format:
In [35]:
M = np.random.rand(3,3)
M
Out[35]:
In [36]:
np.savetxt("../data/random-matrix.csv", M)
In [37]:
!cat ../data/random-matrix.csv
In [38]:
np.savetxt("../data/random-matrix.csv", M, fmt='%.5f') # fmt specifies the format
!cat ../data/random-matrix.csv
Numpy's own binary file format is useful when storing and reading back numpy array data. Use the functions numpy.save
and numpy.load
:
In [39]:
np.save("../data/random-matrix.npy", M)
!file ../data/random-matrix.npy
In [40]:
np.load("../data/random-matrix.npy")
Out[40]:
In [41]:
M.itemsize # bytes per element
Out[41]:
In [42]:
M.nbytes # number of bytes
Out[42]:
In [43]:
M.ndim # number of dimensions
Out[43]:
We can index elements in an array using square brackets and indices:
In [44]:
# v is a vector, and has only one dimension, taking one index
v[0]
Out[44]:
In [45]:
# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]
Out[45]:
If we omit an index of a multidimensional array it returns the whole row (or, in general, an (N-1)-dimensional array):
In [46]:
M
Out[46]:
In [47]:
M[1]
Out[47]:
The same thing can be achieved by using :
instead of an index:
In [48]:
M[1,:] # row 1
Out[48]:
In [49]:
M[:,1] # column 1
Out[49]:
We can assign new values to elements in an array using indexing:
In [50]:
M[0,0] = 1
In [51]:
M
Out[51]:
In [52]:
# also works for rows and columns
M[1,:] = 0
M[:,2] = -1
In [53]:
M
Out[53]:
Index slicing is the technical name for the syntax M[lower:upper:step]
to extract part of an array:
In [54]:
A = np.array([1,2,3,4,5])
A
Out[54]:
In [55]:
A[1:3]
Out[55]:
Array slices are views into the original array: if they are assigned a new value, the original array from which the slice was extracted is modified:
In [56]:
A[1:3] = [-2,-3]
A
Out[56]:
We can omit any of the three parameters in M[lower:upper:step]
:
In [57]:
A[::] # lower, upper, step all take the default values
Out[57]:
In [58]:
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
Out[58]:
In [59]:
A[:3] # first three elements
Out[59]:
In [60]:
A[3:] # elements from index 3
Out[60]:
Negative indices count from the end of the array (positive indices count from the beginning):
In [61]:
A = np.array([1,2,3,4,5])
In [62]:
A[-1] # the last element in the array
Out[62]:
In [63]:
A[-3:] # the last three elements
Out[63]:
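A negative step combines with the defaults to reverse an array, a common idiom worth knowing (a brief sketch):

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])

print(A[::-1])    # the whole array reversed: [5 4 3 2 1]
print(A[-2::-1])  # reversed, starting from the second-to-last element: [4 3 2 1]
```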
Index slicing works exactly the same way for multidimensional arrays:
In [64]:
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A
Out[64]:
In [65]:
# a block from the original array
A[1:4, 1:4]
Out[65]:
In [66]:
# strides
A[::2, ::2]
Out[66]:
Fancy indexing is the name for when an array or list is used in place of an index:
In [67]:
row_indices = [1, 2, 3]
A[row_indices]
Out[67]:
In [68]:
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
Out[68]:
We can also use index masks: if the index mask is a Numpy array of data type bool
, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:
In [69]:
B = np.array([n for n in range(5)])
B
Out[69]:
In [70]:
row_mask = np.array([True, False, True, False, False])
B[row_mask]
Out[70]:
In [71]:
# same thing
row_mask = np.array([1,0,1,0,0], dtype=bool)
B[row_mask]
Out[71]:
This feature is very useful to conditionally select elements from an array, using for example comparison operators:
In [72]:
x = np.arange(0, 10, 0.5)
x
Out[72]:
In [73]:
mask = (5 < x) * (x < 7.5)
mask
Out[73]:
In [74]:
x[mask]
Out[74]:
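The mask above was combined with *, which multiplies the boolean arrays element-wise; the bitwise operators & and | do the same job and read more clearly. A sketch (the parentheses are required, since & binds more tightly than the comparisons):

```python
import numpy as np

x = np.arange(0, 10, 0.5)
mask = (5 < x) & (x < 7.5)  # element-wise logical AND

print(x[mask])  # [5.5 6.  6.5 7. ]
```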
The index mask can be converted to position index using the where
function
In [75]:
indices = np.where(mask)
indices
Out[75]:
In [76]:
x[indices] # this indexing is equivalent to the fancy indexing x[mask]
Out[76]:
With the diag function we can also extract the diagonal and subdiagonals of an array:
In [77]:
np.diag(A)
Out[77]:
In [78]:
np.diag(A, -1)
Out[78]:
The take
function is similar to the fancy indexing described above:
In [79]:
v2 = np.arange(-3,3)
v2
Out[79]:
In [80]:
row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing
Out[80]:
In [81]:
v2.take(row_indices)
Out[81]:
But take
also works on lists and other objects:
In [82]:
np.take([-3, -2, -1, 0, 1, 2], row_indices)
Out[82]:
The choose function constructs an array by picking elements from several arrays:
In [83]:
which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
np.choose(which, choices)
Out[83]:
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
In [84]:
v1 = np.arange(0, 5)
In [85]:
v1 * 2
Out[85]:
In [86]:
v1 + 2
Out[86]:
In [87]:
A * 2, A + 2
Out[87]:
When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations:
In [88]:
A * A # element-wise multiplication
Out[88]:
In [89]:
v1 * v1
Out[89]:
If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
In [90]:
A.shape, v1.shape
Out[90]:
In [91]:
A * v1
Out[91]:
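What happens here is NumPy's broadcasting: the shape-(5,) vector is stretched to match each row of the shape-(5,5) matrix. A minimal sketch of the same rule with smaller arrays:

```python
import numpy as np

A = np.array([[0, 0, 0],
              [10, 10, 10]])  # shape (2, 3)
v = np.array([1, 2, 3])       # shape (3,)

# v is broadcast across each row of A
print(A + v)
# [[ 1  2  3]
#  [11 12 13]]
```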
What about matrix multiplication? There are two ways. We can either use the dot
function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
In [92]:
np.dot(A, A)
Out[92]:
Python 3.5 introduced the @ operator for infix matrix multiplication.
In [93]:
A @ A
Out[93]:
In [94]:
np.dot(A, v1)
Out[94]:
In [95]:
np.dot(v1, v1)
Out[95]:
Alternatively, we can cast the array objects to the type matrix
. This changes the behavior of the standard arithmetic operators +, -, *
to use matrix algebra. (Note that np.matrix is deprecated in recent NumPy versions; plain arrays with the @ operator are the recommended alternative.)
In [96]:
M = np.matrix(A)
v = np.matrix(v1).T # make it a column vector
In [97]:
v
Out[97]:
In [98]:
M * M
Out[98]:
In [99]:
M * v
Out[99]:
In [100]:
# inner product
v.T * v
Out[100]:
In [101]:
# with matrix objects, standard matrix algebra applies
v + M*v
Out[101]:
If we try to add, subtract or multiply objects with incompatible shapes we get an error:
In [102]:
v = np.matrix([1,2,3,4,5,6]).T
In [103]:
M.shape, v.shape
Out[103]:
In [104]:
import traceback
try:
M * v
except ValueError as e:
print(traceback.format_exc())
See also the related functions: inner
, outer
, cross
, kron
, tensordot
. Try for example help(np.kron)
.
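As a taste of one of these, kron computes the Kronecker product; a minimal sketch:

```python
import numpy as np

a = np.eye(2, dtype=int)        # 2x2 identity matrix
b = np.array([[1, 2], [3, 4]])

# 4x4 block matrix: b in the diagonal blocks, zeros elsewhere
K = np.kron(a, b)
print(K)
```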
Above we used the .T
attribute to transpose the matrix object v
. We could also have used the transpose
function to accomplish the same thing.
Other mathematical functions that transform matrix objects are:
In [105]:
C = np.matrix([[1j, 2j], [3j, 4j]])
C
Out[105]:
In [106]:
np.conjugate(C)
Out[106]:
Hermitian conjugate: transpose + conjugate
In [107]:
C.H
Out[107]:
We can extract the real and imaginary parts of complex-valued arrays using real
and imag
:
In [108]:
np.real(C) # same as: C.real
Out[108]:
In [109]:
np.imag(C) # same as: C.imag
Out[109]:
Or the complex argument and absolute value:
In [110]:
np.angle(C+1) # heads up MATLAB Users, angle is used instead of arg
Out[110]:
In [111]:
abs(C)
Out[111]:
In [112]:
np.linalg.inv(C) # equivalent to C.I
Out[112]:
In [113]:
C.I * C
Out[113]:
In [114]:
np.linalg.det(C)
Out[114]:
In [115]:
np.linalg.det(C.I)
Out[115]:
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
For example, let's calculate some properties from the Stockholm temperature dataset used above.
In [116]:
# reminder, the temperature dataset is stored in the data variable:
np.shape(data)
Out[116]:
In [117]:
# the temperature data is in column 3
np.mean(data[:,3])
Out[117]:
The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.
In [118]:
np.std(data[:,3]), np.var(data[:,3])
Out[118]:
In [119]:
# lowest daily average temperature
data[:,3].min()
Out[119]:
In [120]:
# highest daily average temperature
data[:,3].max()
Out[120]:
In [121]:
d = np.arange(0, 10)
d
Out[121]:
In [122]:
# sum up all elements
np.sum(d)
Out[122]:
In [123]:
# product of all elements
np.prod(d+1)
Out[123]:
In [124]:
# cummulative sum
np.cumsum(d)
Out[124]:
In [125]:
# cummulative product
np.cumprod(d+1)
Out[125]:
In [126]:
# same as: diag(A).sum()
np.trace(A)
Out[126]:
We can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above).
For example, let's go back to the temperature dataset:
In [127]:
!head -n 3 ../data/stockholm_td_adj.dat
The data format is: year, month, day, daily average temperature, low, high, location.
If we are interested in the average temperature only in a particular month, say February, then we can create an index mask and use it to select only the data for that month:
In [128]:
np.unique(data[:,1]) # the month column takes values from 1 to 12
Out[128]:
In [129]:
mask_feb = data[:,1] == 2
In [130]:
# the temperature data is in column 3
np.mean(data[mask_feb,3])
Out[130]:
With these tools we have very powerful data processing capabilities at our disposal. For example, extracting the monthly average temperature for each month of the year takes only a few lines of code:
In [131]:
months = np.arange(1,13)
monthly_mean = [np.mean(data[data[:,1] == month, 3]) for month in months]
fig, ax = plt.subplots()
ax.bar(months, monthly_mean)
ax.set_xlabel("Month")
ax.set_ylabel("Monthly avg. temp.");
When functions such as min
, max
, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis
argument we can specify how these functions should behave:
In [132]:
m = np.random.rand(3,3)
m
Out[132]:
In [133]:
# global max
m.max()
Out[133]:
In [134]:
# max in each column
m.max(axis=0)
Out[134]:
In [135]:
# max in each row
m.max(axis=1)
Out[135]:
Many other functions and methods in the array
and matrix
classes accept the same (optional) axis
keyword argument.
The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
In [136]:
A
Out[136]:
In [137]:
n, m = A.shape
In [138]:
B = A.reshape((1,n*m))
B
Out[138]:
In [139]:
B[0,0:5] = 5 # modify the array
B
Out[139]:
In [140]:
A # and the original variable is also changed. B is only a different view of the same data
Out[140]:
We can also use the function flatten
to make a higher-dimensional array into a vector. But this function creates a copy of the data.
In [141]:
B = A.flatten()
B
Out[141]:
In [142]:
B[0:5] = 10
B
Out[142]:
In [143]:
A # now A has not changed, because B's data is a copy of A's, not referring to the same data
Out[143]:
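A handy way to check whether two arrays share data (and hence whether modifying one will affect the other) is np.shares_memory; a sketch contrasting reshape (a view) with flatten (a copy):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
view = A.reshape(1, 6)  # a view: shares data with A
copy = A.flatten()      # a copy: independent data

print(np.shares_memory(A, view))  # True
print(np.shares_memory(A, copy))  # False
```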
With newaxis
, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:
In [144]:
v = np.array([1,2,3])
In [145]:
v.shape
Out[145]:
In [146]:
# make a column matrix of the vector v
v[:, np.newaxis]
Out[146]:
In [147]:
# column matrix
v[:, np.newaxis].shape
Out[147]:
In [148]:
# row matrix
v[np.newaxis, :].shape
Out[148]:
Using the functions repeat
, tile
, vstack
, hstack
, and concatenate
we can create larger vectors and matrices from smaller ones:
In [149]:
a = np.array([[1, 2], [3, 4]])
In [150]:
# repeat each element 3 times
np.repeat(a, 3)
Out[150]:
In [151]:
# tile the matrix 3 times
np.tile(a, 3)
Out[151]:
In [152]:
b = np.array([[5, 6]])
In [153]:
np.concatenate((a, b), axis=0)
Out[153]:
In [154]:
np.concatenate((a, b.T), axis=1)
Out[154]:
In [155]:
np.vstack((a,b))
Out[155]:
In [156]:
np.hstack((a,b.T))
Out[156]:
To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
In [157]:
A = np.array([[1, 2], [3, 4]])
A
Out[157]:
In [158]:
# now B is referring to the same array data as A
B = A
In [159]:
# changing B affects A
B[0,0] = 10
B
Out[159]:
In [160]:
A
Out[160]:
If we want to avoid this behavior, so that we get a new completely independent object B
copied from A
, then we need to do a so-called "deep copy" using the function copy
:
In [161]:
B = np.copy(A)
In [162]:
# now, if we modify B, A is not affected
B[0,0] = -5
B
Out[162]:
In [163]:
A
Out[163]:
Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB/R), iterations are really slow compared to vectorized operations.
However, sometimes iterations are unavoidable. For such cases, the Python for
loop is the most convenient way to iterate over an array:
In [164]:
v = np.array([1,2,3,4])
for element in v:
print(element)
In [165]:
M = np.array([[1,2], [3,4]])
for row in M:
print("row", row)
for element in row:
print(element)
When we need to iterate over the elements of an array and modify them, it is convenient to use the enumerate
function to obtain both the element and its index in the for
loop:
In [166]:
for row_idx, row in enumerate(M):
print("row_idx", row_idx, "row", row)
for col_idx, element in enumerate(row):
print("col_idx", col_idx, "element", element)
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
In [167]:
# each element in M is now squared
M
Out[167]:
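NumPy also provides ndenumerate, which yields the full index tuple together with each element and works for any number of dimensions; the same squaring loop as a sketch:

```python
import numpy as np

M = np.array([[1, 2], [3, 4]])
for (i, j), element in np.ndenumerate(M):
    M[i, j] = element ** 2

print(M)
# [[ 1  4]
#  [ 9 16]]
```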
As mentioned several times by now, to get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.
In [168]:
def theta(x):
"""
Scalar implementation of the Heaviside step function.
"""
if x >= 0:
return 1
else:
return 0
In [169]:
try:
theta(np.array([-3,-2,-1,0,1,2,3]))
except Exception as e:
print(traceback.format_exc())
OK, that didn't work because we didn't write the theta
function so that it can handle a vector input...
To get a vectorized version of theta we can use the Numpy function vectorize
. In many cases it can automatically vectorize a function:
In [170]:
theta_vec = np.vectorize(theta)
In [171]:
%%time
theta_vec(np.array([-3,-2,-1,0,1,2,3]))
Out[171]:
We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance):
In [172]:
def theta(x):
"""
Vector-aware implementation of the Heaviside step function.
"""
return 1 * (x >= 0)
In [173]:
%%time
theta(np.array([-3,-2,-1,0,1,2,3]))
Out[173]:
In [174]:
# still works for scalars as well
theta(-1.2), theta(2.6)
Out[174]:
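Yet another vector-aware formulation uses np.where, which selects between two values element-wise; a sketch (theta_where is just an illustrative name):

```python
import numpy as np

def theta_where(x):
    """Heaviside step function implemented with np.where."""
    return np.where(np.asarray(x) >= 0, 1, 0)

print(theta_where(np.array([-3, -1, 0, 2])))  # [0 0 1 1]
```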
When using arrays in conditions, for example in if
statements and other boolean expressions, one needs to use any
or all
, which check whether any or all elements in the array evaluate to True
:
In [175]:
M
Out[175]:
In [176]:
if (M > 5).any():
print("at least one element in M is larger than 5")
else:
print("no element in M is larger than 5")
In [177]:
if (M > 5).all():
print("all elements in M are larger than 5")
else:
print("all elements in M are not larger than 5")
Since Numpy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype
functions (see also the similar asarray
function). This always creates a new array of the new type:
In [178]:
M.dtype
Out[178]:
In [179]:
M2 = M.astype(float)
M2
Out[179]:
In [180]:
M2.dtype
Out[180]:
In [181]:
M3 = M.astype(bool)
M3
Out[181]:
In [182]:
%reload_ext version_information
%version_information numpy
Out[182]: