In [ ]:
from __future__ import print_function
this notebook is based on the SciPy NumPy tutorial
In [ ]:
import numpy as np
NumPy provides a multidimensional array class as well as a large number of functions that operate on arrays.
NumPy arrays allow you to write fast (optimized) code that works on arrays of data. To do this, there are some restrictions on arrays: every element has the same datatype, and the size of the array is fixed once it is created.
The nice part is that arithmetic operations work on entire arrays, which means that you can avoid writing loops in python (which tend to be slow). Instead the "looping" is done in the underlying compiled code.
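For a quick taste (a small aside, not part of the original flow): multiplying an array by a scalar operates on every element at once, with no explicit python loop.
In [ ]:
np.array([1, 2, 3, 4]) * 2   # element-by-element scaling, no loop needed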
There are a lot of ways to create arrays. Let's look at a few
Here we create an array using arange
and then change its shape to be 3 rows and 5 columns. Note the row-major ordering—you'll see that the rows are together in the inner []
(more on this in a bit)
In [ ]:
a = np.arange(15)
In [ ]:
a
In [ ]:
a = np.arange(15).reshape(3,5)
print(a)
an array is an object of the ndarray class, and it has methods that know how to work on the array data. Here, .reshape() is one of the many methods.
A NumPy array has a lot of meta-data associated with it describing its shape, datatype, etc.
In [ ]:
print(a.ndim)
print(a.shape)
print(a.size)
print(a.dtype)
print(a.itemsize)
print(type(a))
In [ ]:
help(a)
we can also create an array from a list
In [ ]:
b = np.array( [1.0, 2.0, 3.0, 4.0] )
print(b)
print(b.dtype)
print(type(b))
we can easily create a multi-dimensional array of a specified size, initialized to all zeros, using the zeros() function.
In [ ]:
b = np.zeros((10,8))
b
There are also analogous ones() and empty() routines, and an eye() function that creates an identity matrix. Note that we can explicitly set the datatype for the array in these functions if we wish.
Unlike lists in python, all of the elements of a numpy array are of the same datatype.
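A small illustration of the single-datatype rule (an aside): if you build an array from a list that mixes ints and floats, NumPy upcasts everything to a single datatype.
In [ ]:
np.array([1, 2.5, 3]).dtype   # the ints are upcast, so the array is float64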
In [ ]:
c = np.eye(10, dtype=np.float64)
c
linspace() creates an array with evenly spaced numbers. The optional endpoint argument specifies whether the upper value of the range is included in the array or not.
In [ ]:
d = np.linspace(0, 1, 10, endpoint=False)
print(d)
Analogous to linspace(), there is a logspace() function that creates an array with elements equally spaced in log. Use help(np.logspace) to see the arguments, and create an array with 10 elements from $10^{-6}$ to $10^3$.
In [ ]:
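One possible solution (a sketch): logspace() takes the exponents of the start and stop values.
In [ ]:
np.logspace(-6, 3, 10)   # 10 elements, equally spaced in log, from 1.e-6 to 1.e3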
we can also initialize an array based on a function
In [ ]:
f = np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int)
f
most operations (+, -, *, /) will work on an entire array at once, element-by-element.
Note that the multiplication operator is not a matrix multiply (there is a new operator in python 3.5+, @, to do matrix multiplication).
Let's create a simple array to start with
In [ ]:
a = np.arange(12).reshape(3,4)
print(a)
Multiplication by a scalar multiplies every element
In [ ]:
a*2
adding two arrays adds element-by-element
In [ ]:
a + a
multiplying two arrays multiplies element-by-element
In [ ]:
a*a
We can think of our 2-d array as a 3 x 4 matrix (3 rows, 4 columns). We can take the transpose to get a 4 x 3 matrix, and then we can do a matrix multiplication
What do you think 1./a
will do? Try it and see
In [ ]:
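If you try it (an aside, not in the original notebook): 1./a computes the reciprocal element-by-element, and since a[0,0] is 0, NumPy issues a divide-by-zero warning and puts inf in that slot.
In [ ]:
1./a   # element-by-element reciprocal; the 0 entry becomes inf (with a RuntimeWarning)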
In [ ]:
b = a.transpose()
b
In [ ]:
a @ b
We can sum the elements of an array:
In [ ]:
a.sum()
sum() takes an optional argument, axis=N, where N is the axis to sum over. Sum the elements of a across rows to create an array with just the sum along each column of a.
In [ ]:
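One possible solution: summing over axis=0 adds down the rows, leaving one value per column.
In [ ]:
a.sum(axis=0)   # column sums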
We can also easily get the extrema
In [ ]:
print(a.min(), a.max())
universal functions work element-by-element. Let's create a new array scaled by pi
In [ ]:
b = a*np.pi/12.0
print(b)
In [ ]:
c = np.cos(b)
print(c)
In [ ]:
d = b + c
In [ ]:
print(d)
We will often want to write our own function that operates on an array and returns a new array. We can do this just like we did with functions previously; the key is to use the functions from the np module to do any operations, since they work on, and return, arrays.
Write a simple function that returns $\sin(2\pi x)$ for an input array x. Then test it by passing in an array x that you create via linspace()
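One possible solution (a sketch; the function and variable names are just placeholders):
In [ ]:
def sine_wave(x):
    """return sin(2*pi*x), element-by-element, for an input array x"""
    return np.sin(2.0*np.pi*x)

x = np.linspace(0.0, 1.0, 10)
sine_wave(x)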
slicing works very similarly to how we saw with strings. Remember, python uses 0-based indexing
Let's create a simple one-dimensional array:
In [ ]:
a = np.arange(9)
a
Now look at accessing a single element vs. a range (using slicing)
Giving a single (0-based) index just references a single value
In [ ]:
a[2]
Giving a range (a slice) returns the values from the start index up to, but not including, the stop index
In [ ]:
print(a[2:3])
In [ ]:
a[2:4]
The :
can be used to specify all of the elements in that dimension
In [ ]:
a[:]
Multidimensional arrays are stored in a contiguous space in memory -- this means that the columns / rows need to be unraveled (flattened) so that the array can be thought of as a single one-dimensional array. Different programming languages do this via different conventions.
Storage order:
Fortran uses column-major ordering: the first index varies fastest as you move through memory.
C (and NumPy, by default) uses row-major ordering: the last index varies fastest.
The ordering matters when looping over arrays -- you want to access elements that are next to one-another in memory
e.g., in Fortran:
double precision :: A(M,N)
do j = 1, N
do i = 1, M
A(i,j) = …
enddo
enddo
in C
double A[M][N];
for (i = 0; i < M; i++) {
for (j = 0; j < N; j++) {
A[i][j] = …
}
}
In python, using NumPy, we'll try to avoid explicit loops over elements as much as possible
Let's look at multidimensional arrays:
In [ ]:
a = np.arange(15).reshape(3,5)
a
Notice that the output of a
shows the row-major storage. The rows are grouped together in the inner [...]
Giving a single index (0-based) for each dimension just references a single value in the array
In [ ]:
a[1,1]
Doing slices will access a range of elements. Think of the start and stop in the slice as referencing the left-edge of the slots in the array.
In [ ]:
a[0:2,0:2]
Access a specific column
In [ ]:
a[:,1]
Sometimes we want a one-dimensional view into the array -- here we see the memory layout (row-major) more explicitly
In [ ]:
a.flatten()
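For contrast (an aside, not part of the original notebook), flatten() also accepts an order argument; order="F" unravels the array in column-major (Fortran) order instead.
In [ ]:
a.flatten(order="F")   # column-major unraveling, for comparison with the default row-major order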
we can also iterate -- this is done over the first axis (rows)
In [ ]:
for row in a:
print(row)
or element by element
In [ ]:
for e in a.flat:
print(e)
Generally speaking, we want to avoid looping over the elements of an array in python—this is slow. Instead we want to write and use functions that operate on the entire array at once.
Consider the array defined as:
In [ ]:
q = np.array([[1, 2, 3, 2, 1],
[2, 4, 4, 4, 2],
[3, 4, 4, 4, 3],
[2, 4, 4, 4, 2],
[1, 2, 3, 2, 1]])
Quick exercise: use slicing to cut out a new array containing just the block of 4's in q (this will be called a view, as we'll see shortly). Set the elements of your new array to 0. Does q change?
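One possible solution (a sketch; the name fours is just a placeholder): the block of 4's is q[1:4, 1:4], and since a slice is a view, zeroing it also changes q.
In [ ]:
fours = q[1:4, 1:4]   # a view into the inner 3x3 block of 4's
fours[:, :] = 0       # zero the view...
q                     # ...and q changes too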
Simply using "=" does not make a copy; much like with lists, you will just have multiple names pointing to the same ndarray object.
Therefore, we need to understand whether two arrays, A and B, point to the same object, share the same underlying data, or are completely independent. All of these are possible:
B = A
this is assignment. No copy is made. A and B point to the same data in memory and share the same shape, etc. They are just two different labels for the same object in memory.
B = A[:]
this is a view, or shallow copy. The shape info for A and B is stored independently, but both point to the same memory location for data.
B = A.copy()
this is a deep copy. A completely separate object will be created, at a completely separate location in memory.
Let's look at examples
In [ ]:
a = np.arange(10)
print(a)
Here is assignment. We can use the is operator to test whether two names refer to the same object.
In [ ]:
b = a
b is a
Since b
and a
are the same, changes to the shape of one are reflected in the other—no copy is made.
In [ ]:
b.shape = (2, 5)
print(b)
a.shape
In [ ]:
b is a
In [ ]:
print(a)
a shallow copy creates a new view into the array—the data is the same, but the array properties can be different
In [ ]:
a = np.arange(12)
c = a[:]
a.shape = (3,4)
print(a)
print(c)
since the underlying data is the same memory, changing an element of one is reflected in the other
In [ ]:
c[1] = -1
print(a)
Even slices into an array are just views, still pointing to the same memory
In [ ]:
d = c[3:8]
print(d)
In [ ]:
d[:] = 0
In [ ]:
print(a)
print(c)
print(d)
There are lots of ways to inquire whether two arrays are the same object, whether one is a view of another, whether an array owns its own data, etc.
In [ ]:
print(c is a)
print(c.base is a)
print(c.flags.owndata)
print(a.flags.owndata)
to make a copy of the data of the array that you can deal with independently of the original, you need a deep copy
In [ ]:
d = a.copy()
d[:,:] = 0.0
print(a)
print(d)
There are lots of fun ways to index arrays to access only those elements that meet a certain condition
In [ ]:
a = np.arange(12).reshape(3,4)
a
Here we set all the elements in the array that are > 4 to zero
In [ ]:
a[a > 4] = 0
a
and now, all the zeros to -1
In [ ]:
a[a == 0] = -1
a
In [ ]:
a == -1
if we have 2 tests, we need to use logical_and()
or logical_or()
In [ ]:
a = np.arange(12).reshape(3,4)
a[np.logical_and(a > 3, a <= 9)] = 0.0
a
The test we index the array with returns a boolean array of the same shape:
In [ ]:
a > 4
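We can also use this boolean array directly as an index to pull out just the matching elements (a small addition, not in the original notebook):
In [ ]:
a[a > 4]   # a 1-d array of only the elements where the condition is True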
In general, you want to avoid loops over the elements of an array.
Here, let's create 1-d x and y coordinates and then try to fill some larger array
In [ ]:
M = 32
N = 64
xmin = ymin = 0.0
xmax = ymax = 1.0
x = np.linspace(xmin, xmax, M, endpoint=False)
y = np.linspace(ymin, ymax, N, endpoint=False)
print(x.shape)
print(y.shape)
we'll time our code
In [ ]:
import time
In [ ]:
t0 = time.time()
g = np.zeros((M, N))
for i in range(M):
for j in range(N):
g[i,j] = np.sin(2.0*np.pi*x[i]*y[j])
t1 = time.time()
print("time elapsed: {} s".format(t1-t0))
Now let's instead do this using all array syntax. First we'll extend our 1-d coordinate arrays to be 2-d. NumPy has a function for this (meshgrid()).
In [ ]:
x2d, y2d = np.meshgrid(x, y, indexing="ij")
print(x2d[:,0])
print(x2d[0,:])
print(y2d[:,0])
print(y2d[0,:])
In [ ]:
t0 = time.time()
g2 = np.sin(2.0*np.pi*x2d*y2d)
t1 = time.time()
print("time elapsed: {} s".format(t1-t0))
In [ ]:
print(np.max(np.abs(g2-g)))
Now we want to construct a derivative, $$ \frac{d f}{dx} $$
In [ ]:
x = np.linspace(0, 2*np.pi, 25)
f = np.sin(x)
We want to do this without loops, using views into arrays offset from one another. Recall from calculus that a derivative is approximately: $$ \frac{df}{dx} \approx \frac{f(x+h) - f(x)}{h} $$ Here, we'll take $h$ to be the spacing between adjacent elements.
In [ ]:
dx = x[1] - x[0]
dfdx = (f[1:] - f[:-1])/dx
In [ ]:
dfdx
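As a quick sanity check (an addition, not part of the original notebook; the name xc is just a placeholder): this finite difference is naturally centered between grid points, so we can compare it to the analytic derivative $\cos(x)$ evaluated at the midpoints.
In [ ]:
xc = 0.5*(x[1:] + x[:-1])          # midpoints between adjacent grid points
np.max(np.abs(dfdx - np.cos(xc)))  # should be small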
In [ ]: