This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com) for UW's [Astro 599](http://www.astro.washington.edu/users/vanderplas/Astr599_2014/) course. Source and license info is on [GitHub](https://github.com/jakevdp/2014_fall_ASTR599/).
In [1]:
%run talktools.py
In [2]:
from IPython.display import YouTubeVideo
YouTubeVideo("y2R3FvS4xr4")
Out[2]:
That video is the origin of the name of the now-defunct Unladen Swallow project, which had the goal of making Python's C implementation (CPython) faster.
Yes, Python is (unfortunately) a rather slow language.
Here is an example:
In [3]:
# A silly function implemented in Python
def func_python(N):
    d = 0.0
    for i in range(N):
        d += (i % 3 - 1) * i
    return d
In [4]:
# Use IPython timeit magic to time the execution
%timeit func_python(10000)
To compare to a compiled language, let's write the same function in Fortran and use the f2py tool (included in NumPy) to compile it.
In [5]:
%%file func_fortran.f
      subroutine func_fort(n, d)
      integer, intent(in) :: n
      double precision, intent(out) :: d
      integer :: i
      d = 0
      do i = 0, n - 1
         d = d + (mod(i, 3) - 1) * i
      end do
      end subroutine func_fort
In [6]:
# use f2py rather than f2py3 for Python 2
!f2py3 -c func_fortran.f -m func_fortran > /dev/null
In [7]:
from func_fortran import func_fort
%timeit func_fort(10000)
Fortran is about 100 times faster for this task!
We alluded to this yesterday, but languages tend to have a compromise between convenience and performance.
C, Fortran, etc.: static typing and compiled code lead to fast execution
Python, R, Matlab, IDL, etc.: dynamic typing and interpreted execution lead to fast development
We like Python because our development time is generally more valuable than execution time. But sometimes speed can be an issue.
Strategies for making Python faster:
Use Numpy ufuncs to your advantage
Use Numpy aggregates to your advantage
Use Numpy broadcasting to your advantage
Use Numpy slicing and masking to your advantage
Use a tool like SWIG, cython or f2py to interface to compiled code.
Here we'll cover the first four, and leave the fifth strategy for a later session.
In [8]:
import numpy as np
x = np.random.random(4)
print(x)
print(x + 1) # add 1 to each element of x
In [9]:
x * 2 # multiply each element of x by 2
Out[9]:
In [10]:
x * x # multiply each element of x by itself
Out[10]:
In [11]:
x[1:] - x[:-1]
Out[11]:
These are binary ufuncs: they take two arguments.
There are also many unary ufuncs:
In [12]:
-x
Out[12]:
In [13]:
np.sin(x)
Out[13]:
In [14]:
x = np.random.random(10000)
In [15]:
%%timeit
# compute element-wise x + 1 via a ufunc
y = np.zeros_like(x)
y = x + 1
In [16]:
%%timeit
# compute element-wise x + 1 via a loop
y = np.zeros_like(x)
for i in range(len(x)):
    y[i] = x[i] + 1
There are many ufuncs available:
Trigonometric functions (np.sin, np.cos, etc.)
Special functions (scipy.special.j0, scipy.special.gammaln, etc.)
Element-wise minimum and maximum (np.minimum, np.maximum)
In [17]:
x = np.random.random(5)
print(x)
print(np.minimum(x, 0.5))
print(np.maximum(x, 0.5))
In [18]:
# contrast this behavior with that of min() and max()
print(np.min(x))
print(np.max(x))
In [19]:
%matplotlib inline
# On older IPython versions, use %pylab inline
import matplotlib.pyplot as plt
In [20]:
x = np.linspace(0, 10, 1000)
plt.plot(x, np.sin(x));
In [21]:
from scipy.special import gammaln
plt.plot(x, gammaln(x));
In [22]:
x = np.arange(5)
y = np.arange(1, 6)
np.add(x, y)
Out[22]:
In [23]:
np.add.accumulate(x)
Out[23]:
In [24]:
np.multiply.accumulate(x)
Out[24]:
In [25]:
np.multiply.accumulate(y)
Out[25]:
In [26]:
np.add.identity
Out[26]:
In [27]:
np.multiply.identity
Out[27]:
In [28]:
np.add.outer(x, y)
Out[28]:
In [30]:
# make a times-table
x = np.arange(1, 13)
np.multiply.outer(x, x)
Out[30]:
Each of the following functions takes an array as input and returns an array as output. They are implemented using Python loops, which is not very efficient.
For each function, implement a fast version which uses ufuncs to calculate the result more efficiently. Double-check that you get the same result for several different arrays.
Use the %timeit magic to time the execution of the two implementations for a large array (say, 1000 elements).
In [32]:
# 1. computing the element-wise sine + cosine
from math import sin, cos
def slow_sincos(x):
    """x is a 1-dimensional array"""
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = sin(x[i]) + cos(x[i])
    return y
x = np.random.random(5)
print(slow_sincos(x))
In [33]:
# write a fast_sincos function
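One possible solution (a sketch): np.sin and np.cos are ufuncs, so no explicit loop is needed.
def fast_sincos(x):
    # element-wise ufuncs operate on the whole array at once
    return np.sin(x) + np.cos(x)

print(fast_sincos(x))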
In [35]:
# 2. computing the difference between adjacent squares
def slow_sqdiff(x):
    """x is a 1-dimensional array"""
    y = np.zeros(len(x) - 1)
    for i in range(len(y)):
        y[i] = x[i + 1] ** 2 - x[i] ** 2
    return y
x = np.random.random(5)
print(slow_sqdiff(x))
In [36]:
# write a fast_sqdiff function
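One possible solution (a sketch), using slicing to pair adjacent elements:
def fast_sqdiff(x):
    # squaring and subtraction are ufuncs applied to the shifted slices
    return x[1:] ** 2 - x[:-1] ** 2

print(fast_sqdiff(x))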
In [38]:
# 3. computing the outer-product of each consecutive pair
def slow_pairprod(x):
    """x is a 1-dimensional array"""
    if len(x) % 2 != 0:
        raise ValueError("length of x must be even")
    N = len(x) // 2
    y = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            y[i, j] = x[2 * i] * x[2 * j + 1]
    return y
x = np.arange(1, 9)
print(slow_pairprod(x))
In [35]:
# write a fast_pairprod function
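One possible solution (a sketch), using the outer method of a ufunc:
def fast_pairprod(x):
    # even-indexed elements against odd-indexed elements
    return np.multiply.outer(x[::2], x[1::2])

print(fast_pairprod(x))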
In [39]:
# 10 x 10 array drawn from a standard normal
x = np.random.randn(10, 10)
In [40]:
x.mean()
Out[40]:
In [41]:
np.mean(x)
Out[41]:
In [42]:
x.std()
Out[42]:
In [43]:
x.var()
Out[43]:
In [44]:
x.sum()
Out[44]:
In [45]:
x.prod()
Out[45]:
In [46]:
np.median(x)
Out[46]:
In [47]:
np.percentile(x, 50)
Out[47]:
In [48]:
np.percentile(x, (25, 75))
Out[48]:
In [49]:
x = np.random.rand(3, 5)
x
Out[49]:
In [50]:
x.sum(0)  # sum over rows: one result per column
Out[50]:
In [51]:
x.sum(1)  # sum over columns: one result per row
Out[51]:
In [52]:
np.median(x, 1)
Out[52]:
In [53]:
np.mean(x, 1)
Out[53]:
In [54]:
np.sum(x, 1)
Out[54]:
In [55]:
np.add.reduce(x, 1)
Out[55]:
In [56]:
np.prod(x, 1)
Out[56]:
In [57]:
np.multiply.reduce(x, 1)
Out[57]:
In [58]:
np.divide.reduce(x, 1)
Out[58]:
A caution: for reduce methods, the default axis is 0:
In [59]:
np.add.reduce(x)
Out[59]:
In [60]:
np.sum(x)
Out[60]:
In [61]:
x = np.random.random(10000)
%timeit np.sum(x)
%timeit sum(x)
Dynamic type-checking is slow. Make sure to use Numpy's sum, min, and max.
In [62]:
def slow_cubesum(x):
    """x is a 1D array"""
    result = 0
    for i in range(len(x)):
        result += x[i] ** 3
    return result
x = np.random.random(100)
slow_cubesum(x)
Out[62]:
In [63]:
# implement fast_cubesum
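One possible implementation (a sketch), combining a ufunc with the sum aggregate:
def fast_cubesum(x):
    # x ** 3 is computed element-wise, then reduced in compiled code
    return np.sum(x ** 3)

fast_cubesum(x)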
In [64]:
def slow_rms(x):
    """x is a 1D array"""
    m = np.mean(x)
    rms = 0
    for i in range(len(x)):
        rms += (x[i] - m) ** 2
    rms /= len(x)
    return np.sqrt(rms)
x = np.random.random(100)
slow_rms(x)
Out[64]:
In [65]:
# implement fast_rms
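One possible implementation (a sketch): the quantity being computed is just the standard deviation, so a single aggregate suffices.
def fast_rms(x):
    # np.std computes sqrt(mean((x - mean(x)) ** 2))
    return np.std(x)

fast_rms(x)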
Now we return to our silly function from the beginning of this section. Can you implement a fast version using ufuncs and aggregates?
In [66]:
def slow_sillyfunc(N):
    """N is an integer"""
    d = 0.0
    for i in range(N):
        d += (i % 3 - 1) * i
    return d
slow_sillyfunc(100)
Out[66]:
In [67]:
# Implement fast_sillyfunc using ufuncs & aggregates
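One possible implementation (a sketch): build the indices with np.arange, apply the same expression as ufuncs, and reduce with sum.
def fast_sillyfunc(N):
    i = np.arange(N)
    return np.sum((i % 3 - 1) * i)

fast_sillyfunc(100)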
We've taken a look at broadcasting previously. But it's important enough that we'll review it quickly here:
Broadcasting rules:
If the two arrays differ in their number of dimensions, the shape of the array with fewer dimensions is padded with ones on its leading (left) side.
If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
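For example (a minimal sketch): adding a shape-(3, 1) array and a shape-(4,) array pads the second to (1, 4), then stretches both to (3, 4).
a = np.ones((3, 1))
b = np.arange(4)       # shape (4,) is padded to (1, 4)
print((a + b).shape)   # both stretched to (3, 4)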
In [68]:
x = np.arange(10)
x ** 2
Out[68]:
In [69]:
Y = x * x[:, np.newaxis]
Y
Out[69]:
In [71]:
Y + 10 * x
Out[71]:
In [72]:
Y + 10 * x[:, np.newaxis]
Out[72]:
In [73]:
Y = np.random.random((2, 3, 4))
x = 10 * np.arange(3)
Y + x[:, np.newaxis]
Out[73]:
Now, assume you have $N$ points in $D$ dimensions, represented by an array of shape [N, D].
1. Compute the mean of the points using the np.mean aggregate (that is, find the $D$-dimensional point which is the mean of the rest of the points).
2. Compute the mean using the np.add ufunc.
3. Compute the standard deviation across the points using the np.std aggregate.
4. Compute the standard deviation using the np.add ufunc.
5. Compute $M$, the centered and normalized version of the $X$ array: $$ M_{ij} = (X_{ij} - \mu_j) / \sigma_j $$ This is one version of whitening the array.
In [74]:
X = np.random.random((1000, 5)) # 1000 points in 5 dimensions
In [71]:
# 1. Compute the mean of the 1000 points in X
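One possible answer (a sketch, using the axis keyword):
X.mean(0)   # shape (5,): the D-dimensional mean point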
In [73]:
# 2. Compute the standard deviation across the 1000 points
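Similarly, a sketch using the std aggregate:
X.std(0)    # shape (5,): per-dimension standard deviation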
In [75]:
# 5. Compute the whitened version of the array
# (Each row centered, then divided by standard deviation)
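A sketch using broadcasting: the shape-(5,) mean and std are stretched across all 1000 rows.
M = (X - X.mean(0)) / X.std(0)
print(M.mean(0))   # should be ~0 in each dimension
print(M.std(0))    # should be ~1 in each dimension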
In [75]:
x = np.array([1, 2, 3, -999, 2, 4, -999])
How might you clean this array, setting all negative values to, say, zero?
In [76]:
for i in range(len(x)):
    if x[i] < 0:
        x[i] = 0
x
Out[76]:
A faster way is to construct a boolean mask:
In [77]:
x = np.array([1, 2, 3, -999, 2, 4, -999])
mask = (x < 0)
mask
Out[77]:
And the mask can be used directly to set the value you desire:
In [78]:
x[mask] = 0
x
Out[78]:
Typically this is done directly:
In [79]:
x = np.array([1, 2, 3, -999, 2, 4, -999])
x[x < 0] = 0
x
Out[79]:
In [80]:
x = np.random.random(5)
x
Out[80]:
In [81]:
x[x > 0.5] = np.nan
x
Out[81]:
In [82]:
x[np.isnan(x)] = np.inf
x
Out[82]:
In [84]:
np.nan == np.nan
Out[84]:
In [83]:
x[np.isinf(x)] = 0
x
Out[83]:
In [84]:
x = np.array([1, 0, -np.inf, np.inf, np.nan])
print("input ", x)
print("x < 0 ", (x < 0))
print("x > 0 ", (x > 0))
print("isinf ", np.isinf(x))
print("isnan ", np.isnan(x))
print("isposinf", np.isposinf(x))
print("isneginf", np.isneginf(x))
In [85]:
x = np.arange(16).reshape((4, 4))
x
Out[85]:
In [86]:
x < 5
Out[86]:
In [87]:
~(x < 5)
Out[87]:
In [88]:
(x < 10) & (x % 2 == 0)
Out[88]:
In [89]:
(x > 3) & (x < 8)
Out[89]:
In [90]:
x = np.random.random(100)
print("array is length", len(x), "and has")
print((x > 0.5).sum(), "elements are greater than 0.5")
In [91]:
# clip is a useful function:
x = np.clip(x, 0.3, 0.6)
print(np.sum(x < 0.3))
print(np.sum(x > 0.6))
In [92]:
# works for 2D arrays as well
X = np.random.random((10, 10))
(X < 0.1).sum()
Out[92]:
In [93]:
x = np.random.random((3, 3))
x
Out[93]:
In [94]:
np.where(x < 0.3)
Out[94]:
In [95]:
x[x < 0.3]
Out[95]:
In [96]:
x[np.where(x < 0.3)]
Out[96]:
When you index with the result of a where function, you are using what is called fancy indexing: indexing with tuples of index arrays.
In [97]:
X = np.arange(16).reshape((4, 4))
X
Out[97]:
In [98]:
X[(0, 1), (1, 0)]
Out[98]:
In [99]:
X[range(4), range(4)]
Out[99]:
In [100]:
X.diagonal()
Out[100]:
In [101]:
# this raises an error: you cannot assign to the result of a function call
X.diagonal() = 100
In [102]:
X[range(4), range(4)] = 100  # but fancy indexing allows assignment
In [103]:
X
Out[103]:
In [104]:
X = np.arange(24).reshape((6, 4))
X
Out[104]:
In [105]:
i = np.arange(6)
np.random.shuffle(i)
i
Out[105]:
In [106]:
X[i] # X[i, :] is identical
Out[106]:
Fancy indexing also works for multi-dimensional index arrays:
In [107]:
i2 = i.reshape(3, 2)
X[i2]
Out[107]:
In [108]:
X[i2].shape
Out[108]:
It's all about moving loops into compiled code:
Use Numpy ufuncs to your advantage (eliminate loops!)
Use Numpy aggregates to your advantage (eliminate loops!)
Use Numpy broadcasting to your advantage (eliminate loops!)
Use Numpy slicing and masking to your advantage (eliminate loops!)
Use a tool like SWIG, cython or f2py to interface to compiled code.
In the GitHub repository, there is a file containing measurements of 5000 asteroid orbits, at notebooks/data/asteroids5000.csv. These are compiled from a query at http://ssd.jpl.nasa.gov/sbdb_query.cgi.
Use np.genfromtxt to load the data from the file. This is like loadtxt, but can handle missing data.
Remember that the file is comma-separated, so use the delimiter keyword.
genfromtxt sets all missing values to np.nan.
Use the operations we discussed here to answer these questions:
Create a new array containing only the rows with no missing values.
Compute the maximum, minimum, mean, and standard deviation of the values in each column.
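A sketch of the loading and cleaning steps, under the assumptions above (depending on the file's header line you may also need genfromtxt's skip_header keyword):
data = np.genfromtxt('notebooks/data/asteroids5000.csv', delimiter=',')
clean = data[~np.isnan(data).any(axis=1)]   # rows with no missing values
print(clean.max(0))
print(clean.min(0))
print(clean.mean(0))
print(clean.std(0))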
In [ ]:
1. Use the bash head command to display the first line of the data file: this lists the names of the columns in the dataset. (Remember that bash commands in the notebook are indicated by !, and that head -n displays the first n lines of a file.)
2. Invoke the matplotlib inline magic to make figures appear inline in the notebook.
3. Use plt.scatter to plot the semi-major axis versus the sine of the inclination angle (note that the inclination angle is listed in degrees -- you'll have to convert it to radians to compute the sine). What do you notice about the distribution? What do you think could explain this?
4. Use plt.scatter to plot a color-magnitude diagram of the asteroids (H vs B-V). You should see two distinct "families" of asteroids indicated in this plot. Over-plot a line that divides these.
5. Repeat the orbital parameter plot from above, but plot the two "families" from #4 in different colors. Note that this magnitude is undefined for many of the asteroids. Do you see any correlation between color and orbit?
6. Compare what you found to plots in Parker et al. 2008. Note that we're not using the same data here, but we're looking at a similar collection of objects. A sketch of step 3 follows below.
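For instance, a sketch of step 3, assuming the semi-major axis and inclination columns have been pulled out into hypothetical variables a and inc (inclination in degrees):
plt.scatter(a, np.sin(np.radians(inc)), s=2, alpha=0.3)   # a, inc are hypothetical names
plt.xlabel('semi-major axis (AU)')
plt.ylabel('sin(inclination)')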
In [ ]: