NumPy provides an efficient representation of multidimensional datasets like vectors and matrices, and tools for linear algebra and general matrix manipulations, which are essential building blocks of virtually all technical computing.
Typically NumPy is imported as np:
In [ ]:
import numpy as np
NumPy, at its core, provides a powerful array object. Let's start by exploring how the NumPy array differs from a Python list.
We start by creating a simple Python list and a NumPy array with identical contents:
In [ ]:
lst = [10, 20, 30, 40]
arr = np.array([10, 20, 30, 40])
print(lst)
print(arr)
In [ ]:
print(lst[0], arr[0])
In [ ]:
print(lst[-1], arr[-1])
In [ ]:
print(lst[2:], arr[2:])
The first difference to note between lists and arrays is that arrays are homogeneous; i.e. all elements of an array must be of the same type. In contrast, lists can contain elements of arbitrary type. For example, we can change the last element in our list above to be a string:
In [ ]:
lst[-1] = 'a string inside a list'
lst
But the same cannot be done with an array; instead, we get an error message:
In [ ]:
arr[-1] = 'a string inside an array'
Caveat: it technically can be done (e.g. with an object dtype), but really don't do it; lists are generally better at non-homogeneous collections.
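For the curious, heterogeneous data can live in an array via the object dtype, though such arrays lose NumPy's compact storage and fast element-wise operations. A minimal sketch, as a demonstration rather than a recommendation:

```python
import numpy as np

# An object array stores references to arbitrary Python objects,
# sacrificing the compact memory layout and fast vectorized operations.
obj_arr = np.array([10, 'a string inside an array', 3.5], dtype=object)
print(obj_arr.dtype)  # object
print(obj_arr[1])
```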
The following provide basic information about the size, shape and data in the array:
In [ ]:
print('Data type :', arr.dtype)
print('Total number of elements :', arr.size)
print('Number of dimensions :', arr.ndim)
print('Shape (dimensionality) :', arr.shape)
print('Memory used (in bytes) :', arr.nbytes)
Arrays also have many useful statistical/mathematical methods:
In [ ]:
print('Minimum and maximum :', arr.min(), arr.max())
print('Sum and product of all elements :', arr.sum(), arr.prod())
print('Mean and standard deviation :', arr.mean(), arr.std())
In [ ]:
arr.dtype
Once an array has been created, its dtype is fixed (in this case to an 8-byte/64-bit signed integer) and it can only store elements of the same type.
For this example, where the dtype is integer, if we try storing a floating point number in the array it will be silently truncated to an integer:
In [ ]:
arr[-1] = 1.234
arr
NumPy comes with most of the common data types (and some uncommon ones too).
The most used (and portable) dtypes are:
bool
int8, int16, int32, int64
uint8, uint16, uint32, uint64
float32, float64
complex64, complex128
Full details can be found at http://docs.scipy.org/doc/numpy/user/basics.types.html.
What are the limits of the common NumPy integer types?
In [ ]:
np.array(256, dtype=np.uint8)  # 256 is out of range for uint8 (max 255); recent NumPy versions raise an OverflowError here
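The integer limits can also be queried programmatically with np.iinfo, the integer counterpart of np.finfo used below; a quick sketch:

```python
import numpy as np

# np.iinfo reports the representable range of an integer dtype.
for dt in (np.int8, np.uint8, np.int32, np.int64):
    info = np.iinfo(dt)
    print('{}: min={}, max={}'.format(info.dtype, info.min, info.max))
```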
In [ ]:
float_info = ('{finfo.dtype}: max={finfo.max:<18}, '
'approx decimal precision={finfo.precision};')
print(float_info.format(finfo=np.finfo(np.float32)))
print(float_info.format(finfo=np.finfo(np.float64)))
Floating point precision is covered in detail at http://en.wikipedia.org/wiki/Floating_point.
However, we can convert an array from one type to another with the astype method:
In [ ]:
np.array(1, dtype=np.uint8).astype(np.float32)
Above we created an array from an existing list. Now let's look into other ways in which we can create arrays.
A common need is to have an array initialized with a constant value. Very often this value is 0 or 1.
zeros creates arrays of all zeros, with any desired dtype:
In [ ]:
np.zeros(5, dtype=float)
In [ ]:
np.zeros(3, dtype=int)
and similarly for ones:
In [ ]:
print('5 ones:', np.ones(5, dtype=int))
If we want an array initialized with an arbitrary value, we can create an empty array and then use the fill method to put the value we want into the array:
In [ ]:
a = np.empty(4, dtype=float)
a.fill(5.5)
a
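NumPy (since version 1.8) also provides np.full, which allocates and fills a constant-valued array in a single call:

```python
import numpy as np

# np.full combines allocation and filling in one step.
a = np.full(4, 5.5)
print(a)  # [5.5 5.5 5.5 5.5]
```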
Alternatives such as:
np.ones(4) * 5.5
np.zeros(4) + 5.5
are generally less efficient, but are also reasonable.
The arange function is an array-valued version of Python's built-in range, and it also accepts floating-point arguments:
In [ ]:
np.arange(10, dtype=np.float64)
In [ ]:
np.arange(5, 7, 0.1)
The linspace and logspace functions create linearly and logarithmically spaced grids respectively, with a fixed number of points that includes both ends of the specified interval:
In [ ]:
print("A linear grid between 0 and 1:")
print(np.linspace(0, 1, 5))
In [ ]:
print("A logarithmic grid between 10**2 and 10**4:")
print(np.logspace(2, 4, 3))
Finally, it is often useful to create arrays with random numbers that follow a specific distribution.
The np.random module contains a number of functions that can be used to this effect.
For more details see http://docs.scipy.org/doc/numpy/reference/routines.random.html.
In [ ]:
import numpy as np
import numpy.random
To produce an array of 5 random samples taken from a standard normal distribution (0 mean and variance 1):
In [ ]:
print(np.random.randn(5))
For an array of 5 samples from the normal distribution with a mean of 10 and a standard deviation of 3:
In [ ]:
norm10 = np.random.normal(10, 3, 5)
print(norm10)
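Random results differ from run to run; for reproducibility you can seed the generator. Recent NumPy versions recommend the Generator interface obtained from np.random.default_rng, sketched here:

```python
import numpy as np

# Seeding makes the pseudo-random sequence repeatable across runs.
rng = np.random.default_rng(42)
samples = rng.normal(10, 3, 5)  # mean 10, standard deviation 3
print(samples)
```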
Above we saw how to index NumPy arrays with single numbers and slices, just like Python lists.
Arrays also allow for a more sophisticated kind of indexing that is very powerful: you can index an array with another array, and in particular with an array of boolean values.
This is particularly useful to extract information from an array that matches a certain condition.
Consider for example that in the array norm10 we want to replace all values above 9 with the value 0. We can do so by first finding the mask that indicates where this condition is true or false:
In [ ]:
mask = norm10 > 9
mask
Now that we have this mask, we can use it to either read those values or to reset them to 0:
In [ ]:
print('Values above 9:', norm10[mask])
In [ ]:
print('Resetting all values above 9 to 0...')
norm10[mask] = 0
print(norm10)
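A related tool is np.where, which selects element-wise between two values based on a condition, so the same "zero out values above 9" operation can be written without modifying the array in place. A small sketch with made-up data:

```python
import numpy as np

data = np.array([8.5, 12.0, 9.3, 7.1])
# np.where(condition, value_if_true, value_if_false), evaluated element-wise.
capped = np.where(data > 9, 0, data)
print(capped)  # [8.5 0.  0.  7.1]
```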
Whilst beyond the scope of this course, it is also worth knowing that a specific masked array object exists in NumPy. Further details are available at http://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html
Up until now all our examples have used one-dimensional arrays. NumPy can also create arrays of arbitrary dimensions, and all the methods illustrated in the previous section work on arrays with more than one dimension.
A list of lists can be used to initialize a two dimensional array:
In [ ]:
lst2 = [[1, 2, 3], [4, 5, 6]]
arr2 = np.array([[1, 2, 3], [4, 5, 6]])
print(arr2)
print(arr2.shape)
With two-dimensional arrays we start seeing the power of NumPy: while nested lists can be indexed by repeatedly using the [ ] operator, multidimensional arrays support a much more natural indexing syntax using a single [ ] and a set of indices separated by commas:
In [ ]:
print(lst2[0][1])
print(arr2[0, 1])
Question: Why does the following example produce different results?
In [ ]:
print(lst2[0:2][1])
print(arr2[0:2, 1])
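One way to see the difference: slicing a list of lists returns another list of rows, so the trailing [1] still selects a whole row, whereas the array's comma syntax indexes both dimensions at once:

```python
import numpy as np

lst2 = [[1, 2, 3], [4, 5, 6]]
arr2 = np.array(lst2)

# lst2[0:2] is a list containing both rows; [1] then picks the second row.
print(lst2[0:2][1])   # [4, 5, 6]

# arr2[0:2, 1] slices rows 0-1 and selects column 1 from each row.
print(arr2[0:2, 1])   # [2 5]
```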
The array creation functions listed previously can also be used to create arrays with more than one dimension.
For example:
In [ ]:
np.zeros((2, 3))
In [ ]:
np.random.normal(10, 3, size=(2, 4))
In fact, the shape of an array can be changed at any time, as long as the total number of elements is unchanged.
For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is:
In [ ]:
arr = np.arange(8).reshape(2, 4)
print(arr)
With multidimensional arrays you can also index using slices, and you can mix and match slices and single indices in the different dimensions:
In [ ]:
arr = np.arange(2, 18, 2).reshape(2, 4)
print(arr)
In [ ]:
print('Second element from dimension 0, last 2 elements from dimension one:')
print(arr[1, 2:])
If you only provide one index to slice a multidimensional array, then the slice will be expanded to ":" for all of the remaining dimensions:
In [ ]:
print('First row: ', arr[0], 'is equivalent to', arr[0, :])
print('Second row: ', arr[1], 'is equivalent to', arr[1, :])
A related concept is the "Ellipsis", which can be specified explicitly with "...". It automatically expands to ":" for each of the unspecified dimensions in the array, and can even be used at the beginning of the slice:
In [ ]:
arr1 = np.empty((4, 6, 3))
print('Orig shape: ', arr1.shape)
print(arr1[...].shape)
print(arr1[..., 0:2].shape)
print(arr1[2:4, ..., ::2].shape)
print(arr1[2:4, :, ..., ::-1].shape)
Arrays support all regular arithmetic operators, and the NumPy library also contains a complete collection of basic mathematical functions that operate on arrays.
It is important to remember that in general, all operations with arrays are applied element-wise, i.e., are applied to all the elements of the array at the same time. For example:
In [ ]:
arr1 = np.arange(4)
arr2 = np.arange(10, 14)
print(arr1, '+', arr2, '=', arr1 + arr2)
Importantly, even the multiplication operator is by default applied element-wise: It is not the matrix multiplication from linear algebra:
In [ ]:
print(arr1, '*', arr2, '=', arr1 * arr2)
We may also multiply an array by a scalar:
In [ ]:
1.5 * arr1
This is an example of broadcasting.
The fact that NumPy operates on an element-wise basis means that in principle arrays must always match one another's shape. However, NumPy will also helpfully "broadcast" dimensions when possible.
Here is an example of broadcasting a scalar to a 1D array:
In [ ]:
print(np.arange(3))
print(np.arange(3) + 5)
We can also broadcast a 1D array to a 2D array, in this case adding a vector to all rows of a matrix:
In [ ]:
np.ones((3, 3)) + np.arange(3)
We can also broadcast in two directions at a time:
In [ ]:
a = np.arange(3).reshape((3, 1))
b = np.arange(3)
print(a, '+', b, '=\n', a + b)
Broadcasting follows these three rules:
If the two arrays differ in their number of dimensions, the shape of the array with fewer dimensions is padded with ones on its leading (left) side.
If the shape of the two arrays does not match in any dimension, either array with shape equal to 1 in a given dimension is stretched to match the other shape.
If in any dimension the sizes disagree and neither has shape equal to 1, an error is raised.
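The rules can be verified directly by combining arrays of different shapes; a small sketch:

```python
import numpy as np

a = np.ones((3, 1))   # shape (3, 1)
b = np.ones(4)        # shape (4,), padded on the left to (1, 4) by rule 1
# Rule 2 stretches the length-1 dimensions of both operands, giving (3, 4).
c = a + b
print(c.shape)  # (3, 4)
```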
Note that all of this happens without ever actually creating the expanded arrays in memory. This broadcasting behavior is enormously powerful in practice: when NumPy broadcasts to create new dimensions or to 'stretch' existing ones, it does not actually duplicate the data. In the scalar example above, the operation is carried out as if 1.5 were a 1D array with 1.5 in all of its entries, but no such array is ever created. This can save a lot of memory when the arrays involved are large, and can therefore have significant performance benefits.
So when we do...
np.arange(3) + 5
The scalar 5 is:
1. promoted to a one-dimensional array of length 1 (shape (1,)), then
2. stretched to length 3 to match the other operand.
After these two operations are complete, the addition can proceed as now both operands are one-dimensional arrays of length 3.
When we do
np.ones((3, 3)) + np.arange(3)
The second array is padded with a leading dimension to shape (1, 3), then stretched along that dimension to shape (3, 3) to match the first.
When we do
np.arange(3).reshape((3, 1)) + np.arange(3)
The second array is padded with a leading dimension to shape (1, 3); then each array is stretched along its length-1 dimension, and the operation proceeds as if on two 3 $\times$ 3 arrays.
The general rule is: when operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions and works its way forward, creating dimensions of length 1 as needed. Two dimensions are considered compatible when they are equal, or when one of them is 1.
If these conditions are not met, a ValueError: operands could not be broadcast together exception is thrown, indicating that the arrays have incompatible shapes.
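A sketch of what an incompatible pair looks like, catching the exception rather than letting it propagate:

```python
import numpy as np

try:
    # Trailing dimensions are 3 and 2: unequal and neither is 1.
    np.ones((1, 3)) + np.ones((1, 2))
except ValueError as err:
    print('Broadcasting failed:', err)
```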
In [ ]:
arr1 = np.ones((2, 3))
arr2 = np.ones((2, 1))
# arr1 + arr2
In [ ]:
arr1 = np.ones((2, 3))
arr2 = np.ones(3)
# arr1 + arr2
In [ ]:
arr1 = np.ones((1, 3))
arr2 = np.ones((2, 1))
# arr1 + arr2
In [ ]:
arr1 = np.ones((1, 3))
arr2 = np.ones((1, 2))
# arr1 + arr2
In [ ]:
arr1 = np.ones((1, 3))
arr3 = arr2[:, :, np.newaxis]
# arr1 + arr3
1. Use np.arange and reshape to create the array
A = [[1 2 3 4]
     [5 6 7 8]]
In [ ]:
2. Use np.array to create the array
B = [1 2]
In [ ]:
3. Use broadcasting to add B to A to create the final array
A + B = [[2 3 4 5]
         [7 8 9 10]]
Hint: what shape does B have to be changed to?
In [ ]:
For multidimensional arrays it is possible to carry out computations along a single dimension by passing the axis parameter:
In [ ]:
print('For the following array:\n', arr)
print('The sum of elements along the rows is :', arr.sum(axis=1))
print('The sum of elements along the columns is :', arr.sum(axis=0))
As you can see in this example, the value of the axis parameter is the dimension that will be consumed once the operation has been carried out. This is why to sum along the columns we use axis=0.
This can be easily illustrated with an example that has more dimensions: we create an array with 4 dimensions and shape (3, 4, 5, 6) and sum along axis number 2 (i.e. the third axis, since in Python all counts are 0-based). That consumes the dimension whose length was 5, leaving us with a new array that has shape (3, 4, 6):
In [ ]:
np.zeros((3, 4, 5, 6)).sum(axis=2).shape
Another widely used property of arrays is the .T attribute, which allows you to access the transpose of the array:
In [ ]:
print('Array:\n', arr)
print('Transpose:\n', arr.T)
A common task is to generate a pair of arrays that represent the coordinates of our data.
When orthogonal 1D coordinate arrays already exist, NumPy's meshgrid function is very useful:
In [ ]:
x = np.linspace(0, 9, 3)
y = np.linspace(-8, 4, 3)
x2d, y2d = np.meshgrid(x, y)
print(x2d)
print(y2d)
Reshaping arrays is a common task in order to make the best of NumPy's powerful broadcasting.
A useful tip with the reshape method is that it is possible to provide a -1 length for at most one of the dimensions. This indicates that NumPy should automatically calculate the length of this dimension:
In [ ]:
np.arange(6).reshape((1, -1))
In [ ]:
np.arange(6).reshape((2, -1))
Another way to increase the dimensionality of an array is to use the newaxis keyword:
In [ ]:
arr = np.arange(6)
print(arr[np.newaxis, :, np.newaxis].shape)
Note that reshaping (like most NumPy operations), wherever possible, provides a view of the same memory:
In [ ]:
arr = np.arange(8)
arr_view = arr.reshape(2, 4)
What this means is that if one array is modified, the other will also be updated:
In [ ]:
# Print the "view" array from reshape.
print('Before\n', arr_view)
# Update the first element of the original array.
arr[0] = 1000
# Print the "view" array from reshape again,
# noticing the first value has changed.
print('After\n', arr_view)
This lack of copying allows for very efficient vectorized operations, but this power should be used carefully: misuse can lead to bugs that are hard to track down.
If in doubt, you can always copy the data to a different block of memory with the copy() method.
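A quick sketch of copy() severing the link between an array and what would otherwise be a view:

```python
import numpy as np

arr = np.arange(8)
arr_copy = arr.reshape(2, 4).copy()  # an independent block of memory
arr[0] = 1000
# The copy is unaffected by changes to the original array.
print(arr_copy[0, 0])  # 0
```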
NumPy ships with a full complement of mathematical functions that work on entire arrays, including logarithms, exponentials, trigonometric and hyperbolic trigonometric functions, etc.
For example, sampling the sine function at 100 points between $0$ and $2\pi$ is as simple as:
In [ ]:
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
Or to sample the exponential function between $-5$ and $5$ at intervals of $0.5$:
In [ ]:
x = np.arange(-5, 5.5, 0.5)
y = np.exp(x)
NumPy ships with a basic linear algebra library, and all arrays have a dot method whose behavior is that of the scalar dot product when its arguments are vectors (one-dimensional arrays), and of traditional matrix multiplication when one or both of its arguments are two-dimensional arrays:
In [ ]:
v1 = np.array([2, 3, 4])
v2 = np.array([1, 0, 1])
print(v1, '.', v2, '=', np.dot(v1, v2))
For matrix-matrix multiplication, the regular $matrix \times matrix$ rules must be satisfied. For example $A \times A^T$:
In [ ]:
A = np.arange(6).reshape(2, 3)
print(A, '\n')
print(np.dot(A, A.T))
results in a (2, 2) array, yet $A^T \times A$ results in a (3, 3) array.
Why is this?
In [ ]:
print(np.dot(A.T, A))
NumPy makes no distinction between row and column vectors and simply verifies that the dimensions match the required rules of matrix multiplication.
Below is an example of matrix-vector multiplication, and in this case we have a $2 \times 3$ matrix multiplied by a 3-vector, which produces a 2-vector:
In [ ]:
print(A, 'x', v1, '=', np.dot(A, v1))
Note: To help with the interpretation of this last result, notice that $0 \times 2 + 1 \times 3 + 2 \times 4 = 11$ and $3 \times 2 + 4 \times 3 + 5 \times 4 = 38$
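Since Python 3.5, the @ operator performs the same matrix multiplication as dot, and is often more readable:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
v1 = np.array([2, 3, 4])
# A @ v1 is equivalent to np.dot(A, v1).
print(A @ v1)  # [11 38]
```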
In this exercise, you are tasked with implementing the simple trapezoid rule formula for numerical integration. If we want to compute the definite integral
$$ \int_{a}^{b}f(x)dx $$we can partition the integration interval $[a,b]$ into smaller subintervals. We then approximate the area under the curve for each subinterval by calculating the area of the trapezoid created by linearly interpolating between the two function values at each end of the subinterval:
For a pre-computed $y$ array (where $y = f(x)$ at discrete samples) the trapezoidal rule equation is:
$$ \int_{a}^{b}f(x)dx\approx\frac{1}{2}\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)\left(y_{i}+y_{i-1}\right). $$In pure python, this can be written as:
def trapz_slow(x, y):
    area = 0.
    for i in range(1, len(x)):
        area += (x[i] - x[i-1]) * (y[i] + y[i-1])
    return area / 2
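For reference, the loop above can be expressed with array slicing, pairing each interval width with the sum of the function values at its endpoints. This is a sketch of the vectorized form, not the exercise solution itself:

```python
import numpy as np

def trapz_fast(x, y):
    # (x[1:] - x[:-1]) are the interval widths; (y[1:] + y[:-1]) the endpoint sums.
    return 0.5 * np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]))

# Example: integrate x**2 from 0 to 3 (exact answer: 9).
x = np.linspace(0, 3, 1000)
print(trapz_fast(x, x**2))
```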
In [ ]:
In [ ]:
In [ ]:
In [ ]:
In [ ]:
Write a function trapzf(f, a, b, npts=100) that accepts a function f, the endpoints a and b, and the number of samples to take npts. Sample the function uniformly at npts points between a and b, and return the value of the integral.
Use the trapzf function to identify the minimum number of sampling points needed to approximate the integral $\int_0^3 x^2\,dx$ with an absolute error of $\leq 0.0001$. (A loop is necessary here.)
In [ ]: