Requirements:
Jupyter creates an easy-to-read document that you can view in your web browser, combining code (which runs and creates plots inside the document on the fly!) and text (including math). The name "Jupyter" is a combination of Julia, Python, and R; however, it has support for over 40 programming languages. Jupyter is based on IPython notebooks and, in fact, you can still launch it by typing ipython notebook in your terminal.
The concept is similar to Mathematica, and it works in much the same way (to run a code cell, press Shift+Enter).
You can launch a Jupyter notebook by just typing jupyter notebook in your terminal; this will open a new tab or window in your default browser. You can also select a different browser by setting the environment variable $BROWSER to the path of the browser that you want to use before launching, or by using the --browser option on the command line. On Windows, type jupyter notebook under "Search programs and files" in the Start menu and select "Jupyter notebook."
A Jupyter notebook is internally a JSON document but appears as a collection of "cells"; each segment of the document is called a cell. There are several types of cells, but we are mainly interested in two:
2.1. Markdown cells: Used for explanatory text (like this), written in GitHub-flavored markdown. A markdown cell is usually displayed in output (rendered) format, but a double click switches it to input mode. Try that now on this cell; use Shift+Return to switch it back to output format. Markdown cells can contain LaTeX math, for example: $$ \frac{d\log G(z)}{d\log a} \simeq \left[ \frac{\Omega_m (1 + z)^3}{\Omega_m (1 + z)^3 + \Omega_\Lambda} \right]^{0.55} $$
2.2. Code cells: Contain executable source code in the language of the document's associated kernel (usually Python). Use Shift+Return to execute the code in a cell and see its output directly below. Try that now for the code cell below. Note that the output is not editable and that each code cell has an associated label, e.g. In [3], where the number records the order in which the cells were executed (which is arbitrary, since it depends on you). Re-run the code cell below and note that its number increases each time.
In [1]:
import datetime
print(datetime.datetime.now())
More info on notebooks and cells is here.
We will now focus on Python. When starting a notebook, it is good practice to import all the packages and define the styles that we want to use in our "boilerplate". A good starting point is:
import numpy as np
import matplotlib.pyplot as plt
With these commands we set up our notebook to use the numpy and matplotlib packages. If we use them as is, the plots will pop up in a new window instead of being shown in the notebook. To see them in the notebook we should use a "magic function".
There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they take the rest of the line as their argument, with arguments passed without parentheses or quotes. Cell magics are prefixed with a double %% and take not only the rest of the line but also the lines below it as a separate argument. A useful example is:
In [2]:
%pylab inline
The magic %pylab sets up the interactive namespace from numpy and matplotlib, and the inline argument makes the plots appear inside the notebook. These plots are rendered in PNG format by default.
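As a quick sketch of the two kinds of magic (numpy is already imported above, and the timings will differ on your machine), %timeit works as a line magic:

%timeit np.arange(1000)   #times only this statement

or as a cell magic that times the whole cell:

%%timeit
x = np.arange(1000)
y = x**2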
More useful magic commands:
%time and %timeit measure execution time.
%run runs a Python script and loads all its data into the interactive namespace.
%config InlineBackend.figure_formats = {'png', 'retina'} enables high-resolution PNG rendering; changing 'png' to 'svg' or any other supported format changes the format of the plots rendered within the notebook.
%load is really useful since it loads any other Python script into a cell, so that we can modify its code inside the notebook; its -s option loads only selected symbols (e.g. a single function). We use %load below to reveal the solutions to some of the exercises.
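For instance, a small sketch of %load (ex1.py and ex2.py are the exercise-solution files used later in this notebook):

%load ex1.py          #replaces the cell contents with the whole script
%load -s ex2 ex2.py   #loads only the object named ex2 from the script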
Command line magic: You can run any system shell command using ! before it. Example:
In [3]:
!ls
Advanced magic commands:
%load_ext Cython
%cython or %%cython

Numpy is a Python package that implements N-dimensional array objects and is designed for scientific computing. It also implements a multitude of mathematical functions that operate efficiently on these arrays. Using numpy arrays can significantly boost the performance of your Python script, making it comparable to compiled C code. Some useful examples and tutorials can be found here and here.
In [4]:
#Example of how to compute the sum of two lists
def add(x,y):
    add=0
    for element_x in x:
        add=add+element_x
    for element_y in y:
        add=add+element_y
    return add
In [5]:
my_list = list(range(0,100))
In [6]:
print(my_list)
In [7]:
%timeit -n10 sum_1=add(my_list,my_list) #I compute 10 iterations
In [8]:
#Example using numpy arrays
my_array = np.arange(0,100,1)
print(my_array)
In [9]:
%timeit -n10 np.sum(my_array+my_array) #I compute 10 iterations
The improvement is especially significant when you work with vectors or matrices.
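For example, here is a quick sketch (illustrative only) of a vectorized matrix product that would otherwise require nested Python loops:

A = np.random.random((200, 200))
B = np.random.random((200, 200))
%timeit -n10 A.dot(B)   #the matrix product runs in optimized compiled code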
Exercise 1: Compute the element-by-element product of a 2x2 list. Compare with numpy.
In [10]:
#%load ex1.py
#my_list = [[1,2],[3,4]]
#my_array=np.arange(1,5,1)
#my_array=my_array.reshape(2,2)
A very useful feature of numpy is masks and masked arrays. You can easily select all the values of a vector or an array that fulfill a certain condition using a mask.
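As a minimal sketch (with made-up values):

v = np.array([0.1, 0.5, 0.9])
mask = v > 0.3    #boolean array: array([False, True, True])
print(v[mask])    #prints only the values that fulfill the condition: [0.5 0.9]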
In [11]:
#In this example we will split a random array into three different categories taking advantage of the numpy masks
#We generate an array with 1000 random elements in the interval [0,1)
my_array = np.random.random(1000)
In [12]:
%time mask = [np.logical_and(my_array>i/3.,my_array<(i+1)/3.) for i in range(0,3)]
In [13]:
print(len(my_array[mask[0]]), len(my_array[mask[1]]), len(my_array[mask[2]]))
Let's compare to a traditional brute-force approach
In [14]:
%%time
#This is a very simple implementation.
#Maybe sorting the list first, or using arrays instead of lists, would be faster
arr1=[]
arr2=[]
arr3=[]
for element in my_array:
    if(element>0 and element<1./3.):
        arr1.append(element)
    elif(element>1./3. and element<2./3.):
        arr2.append(element)
    else:
        arr3.append(element)
Matplotlib is the most widespread package for plotting in Python. There are tons of examples on the web, and it is very well integrated with Jupyter.
Example: Let's plot a sine wave using matplotlib. (It is already imported since we used the magic %pylab inline.)
In [15]:
#First we are going to set up the plots to be SVGs instead of the default PNGs
### Uncomment this cell to use SVG
#%config InlineBackend.figure_formats = {'svg',}
In [16]:
#We will sample the function in 100 points from 0 to pi
x = np.linspace(0,np.pi,100)
#We compute the sine of the numpy array x
y = np.sin(x)
In [17]:
#We make the plot (it automatically generates the figure)
plt.plot(x,y,'-',color='green',label=r'$\sin(x)$')
#We add the label to the X and Y axes
plt.xlabel('$x$')
plt.ylabel(r'$\sin(x)$')
#We generate the legend
plt.legend()
#We change the limits of the X and Y axes
plt.xlim(-0.05,np.pi+0.05)
plt.ylim(-0.05,1.05)
Out[17]:
Exercise: Generate and plot a 2D histogram using matplotlib
In [ ]:
In [ ]:
In [18]:
#plt.hist2d
In [19]:
# %load ex2.py
def ex2():
    rs = np.random.RandomState(112)
    x=np.linspace(0,10,11)
    y=np.linspace(0,10,11)
    X,Y = np.meshgrid(x,y)
    X=X.flatten()
    Y=Y.flatten()
    weights=np.random.random(len(X))
    plt.hist2d(X,Y,weights=weights); #The semicolon prevents Jupyter from displaying the returned arrays
ex2()
Seaborn is a Python package based on matplotlib that includes convenient plotting functions for statistical analysis. (Some people also prefer its default style.)
In [20]:
#First let's import seaborn (a warning will appear because it conflicts with %pylab inline)
import seaborn as sns
In [21]:
#Compare with matplotlib style (you can still use the same commands but they will render in seaborn style)
#We make the plot (it automatically generates the figure)
plt.plot(x,y,'-',color='green',label=r'$\sin(x)$')
#We add the label to the X and Y axes
plt.xlabel('$x$')
plt.ylabel(r'$\sin(x)$')
#We generate the legend
plt.legend()
#We change the limits of the X and Y axes
plt.xlim(-0.05,np.pi+0.05)
plt.ylim(-0.05,1.05)
Out[21]:
Exercise: Plot again your 2D histogram using seaborn jointplot (https://web.stanford.edu/~mwaskom/software/seaborn/examples/hexbin_marginals.html)
In [22]:
#sns.jointplot()
In [23]:
# %load ex3.py
def ex3():
    rs = np.random.RandomState(112)
    x=np.linspace(0,10,11)
    y=np.linspace(0,10,11)
    X,Y = np.meshgrid(x,y)
    X=X.flatten()
    Y=Y.flatten()
    weights=np.random.random(len(X))
    sns.jointplot(X,Y,kind='hex',joint_kws={'C':weights}); #The semicolon prevents Jupyter from displaying the returned object
ex3()
Jupyter also makes it easier to use new packages by providing interactive documentation. The command help(name_of_the_package) lists the available documentation for a package, ?name shows information about an object, and Shift+Tab inside a function call shows its arguments.
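For instance (a quick illustrative sketch using numpy, which is already imported):

help(np.linspace)    #prints the full docstring
np.linspace?         #opens the help pane of the notebook
#pressing Shift+Tab with the cursor inside np.linspace( ) pops up the call signature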
Some of us have struggled a little when creating a FITS file using, for example, cfitsio (you have to initialize status variables and things like that). The syntax is also somewhat obscure, and you have to be sure of the format of the variables you are reading. Reading images or FITS tables using Python and Jupyter is much easier and more intuitive (and not much slower).
There are basically two ways of reading a FITS file using astropy:
astropy.io.fits: The astropy.io.fits module (originally PyFITS) is a "pure Python" FITS reader, in that all the code for parsing the FITS file format is in Python, though Numpy is used to provide access to the FITS data. astropy.io.fits currently also uses CFITSIO to support the FITS Tile Compression convention, but this feature is optional and CFITSIO is not used outside of reading compressed images.
astropy.table: It uses astropy.io.fits internally and is very convenient for reading a BinTableHDU in a FITS file.
There exist other ways to read FITS files in Python. For example, you can use the fitsio package (to install it, do pip install fitsio). This package is faster and works better for large files than astropy, making it the natural choice when performance is a strong requirement. However, it doesn't work under Windows and it needs a C compiler to be installed. The fitsio interface is pretty similar to astropy.table, but not identical (some of the things learned here can be applied directly and some cannot).
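As a rough sketch of what the fitsio interface looks like (assuming fitsio is installed; the file name and extension numbers here are just placeholders):

import fitsio
data = fitsio.read('some_file.fits', ext=1)           #read extension 1 as a numpy record array
header = fitsio.read_header('some_file.fits', ext=0)  #read the primary header
print(data.dtype.names)                               #column names of the table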
First we are going to download a small image from the WeakLensingDeblending package, which simulates one CCD chip in LSST at full depth (http://weaklensingdeblending.readthedocs.io/en/latest/products.html). The data can be downloaded from ftp://ftp.slac.stanford.edu/groups/desc/WL/LSST_i_trimmed.fits.gz or from this repository.
First we are going to use astropy.io.fits to read the FITS file as an HDUList (which includes an image HDU and a BinTableHDU).
In [25]:
#We import the package needed to read the file
import astropy.io.fits as fits
In [26]:
path = './downloaded_data/LSST_i_trimmed.fits.gz'
#We open the file and it gives us an hdulist
hdulist = fits.open(path)
In [27]:
#We can check what this hdulist has using print
print(hdulist)
In [28]:
#We are going to see what is in the image: we use imshow and select a gray colormap
#we also set a minimum of 0 (vmin) and a maximum of 250 (vmax) in the colorbar
plt.imshow(hdulist[0].data,vmin=0,vmax=250,cmap='gray')
#Show the colorbar
plt.colorbar()
Out[28]:
Now we are going to use astropy.table to read the BinTableHDU. We could also read it using hdulist[1].data, but let's make use of this nice package.
In [29]:
#Importing astropy.table
import astropy.table
In [30]:
#reading the table. In a multi-hdu file we can specify the hdu with read(path,hdu=num_hdu)
table = astropy.table.Table.read(path)
In [31]:
#we show the contents of the table
table
Out[31]:
We can also select any column by simply using table['NAME_OF_THE_COLUMN']
In [32]:
#We print the purity column of the table
print(table['purity'])
Exercise: Make a histogram of the signal-to-noise snr_iso for different purity cuts (Hint: look up the documentation for plt.hist and make use of numpy masks)
In [33]:
plt.hist
Out[33]:
In [34]:
# %load ex4.py
def ex4():
    masks = [np.logical_and(table['purity']>i/4.,table['purity']<(i+1)/4.) for i in range(0,4)]
    for i in range(0,4):
        label = str(i/4.)+' < purity < '+str((i+1)/4.)
        plt.hist(table['snr_iso'][masks[i]],range=(0,20),bins=40, label=label, alpha=0.5, normed=True)
    plt.legend()
    plt.figure()
    for i in range(0,4):
        label = str(i/4.)+' < purity < '+str((i+1)/4.)
        plt.hist(table['snr_grpf'][masks[i]],range=(0,20),bins=40, label=label, alpha=0.5, normed=True)
    plt.legend()
ex4()
Exercise: Repeat that with snr_grpf
In [35]:
#We are going to use some columns of the table above to produce a useful pairplot
#We make use of numpy masks!
selection = np.empty(len(table['snr_grpf']),dtype='a20')
mask_03 = table['purity']<=0.3
mask_06 = np.logical_and(table['purity']>0.3,table['purity']<=0.6)
mask_09 = np.logical_and(table['purity']>0.6,table['purity']<=0.9)
mask_1 = table['purity']>0.9
selection[mask_03]="purity<=0.3"
selection[mask_06]="0.3<purity<=0.6"
selection[mask_09]="0.6<purity<=0.9"
selection[mask_1]="purity>0.9"
#We require the values dg1 and dg2 to be finite so that seaborn can automatically create the histograms
masked_array = np.logical_not(np.logical_or(np.isinf(table['dg1_grp']),np.isinf(table['dg2_grp'])))
#We are going to plot just the first nobj points
nobj=500
#We will use certain columns of the table
cols = [selection[masked_array][0:nobj],table['dg1_grp'][masked_array][0:nobj], \
table['dg2_grp'][masked_array][0:nobj],table['e1'][masked_array][0:nobj], \
table['e2'][masked_array][0:nobj]]
new_table = astropy.table.Table(cols,names=('selection','dg1_grp','dg2_grp','e1','e2'))
#Seaborn pairplot requires a pandas data frame
df = new_table.to_pandas()
In [36]:
sns.pairplot(df, hue='selection')
Out[36]:
In [37]:
#We are going to check the correlations using heatmap
corr = df.corr()
sns.heatmap(corr)
Out[37]:
Sometimes it is difficult to keep track of which units you are using when you write very long programs. This is simplified when you use astropy.units (http://docs.astropy.org/en/stable/units/). The package also handles equivalences and makes unit conversion easy, and it raises an error if you operate with incompatible units.
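For instance, a quick illustrative sketch of an equivalency and of what happens with incompatible units:

import astropy.units as u
print((500 * u.nm).to(u.GHz, equivalencies=u.spectral()))  #wavelength converted to frequency
#(10 * u.km) + (5 * u.s)   #uncommenting this would raise a UnitConversionError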
In [38]:
import astropy.units as u
In [39]:
x = 10*u.km
In [40]:
x.to(u.imperial.mile) + 10*u.Mpc
Out[40]:
Let's see an example where some units are assumed
In [41]:
#We read a quasar-catalog data table
quasar_table = astropy.table.Table.read('./downloaded_data/quasar_table.fits')
In [44]:
#We import speclite to compute magnitudes
import speclite
import speclite.filters
sdss = speclite.filters.load_filters('sdss2010-*')
#Spectrum of quasar #40
wave = np.load('./downloaded_data/wave.npy') #No units included but units are Angstroms
flux = np.load('./downloaded_data/flux.npy') #It comes without units but they're 1e-17 erg/cm**2/s/AA
#We use get_ab_magnitudes to compute the magnitudes. If no units are included, it assumes flux in erg/cm**2/s/AA and wavelength in AA
mags = sdss.get_ab_magnitudes(flux*1e-17*u.erg/u.cm**2/u.s/u.AA,wave*u.AA)
#If we don't use the correct units...
mags_wrong = sdss.get_ab_magnitudes(flux,wave)
mags_boss = np.hstack([quasar_table['PSFMAG_%d' % f][40] for f in range(0,5)])
print(mags)
print(mags_boss)
print(mags_wrong)
In [45]:
#Now we are going to prepare a Boosted decision tree photo-z estimator
from sklearn.ensemble import GradientBoostingRegressor
#Prepare the training array
mags = np.vstack([quasar_table['PSFMAG_%d' % f] for f in range(0,5)]).T
z = quasar_table['Z_VI']
print(len(z))
#train on 20% of the points
mag_train = mags[::5]
z_train = z[::5]
print(len(z_train))
#test on 5% of the points
mag_test = mags[::18]
z_test = z[::18]
#Set up the tree
clf = GradientBoostingRegressor(n_estimators=500, learning_rate=0.1,max_depth=3, random_state=0)
#Train the tree
clf.fit(mag_train, z_train)
#Test it!
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
#Compute the rms error in the training set and test set
rms_train = np.sqrt(np.mean((z_fit_train - z_train) ** 2))
rms_test = np.sqrt(np.mean((z_fit - z_test) ** 2))
In [46]:
plt.scatter(z_test,z_fit, color='k', s=0.1)
plt.plot([-0.1, 6], [-0.1, 6], ':k')
plt.text(0.04, 5, "rms = %.3f" % (rms_test))
plt.xlabel('$z_{true}$')
plt.ylabel('$z_{fit}$')
Out[46]:
Exercise: Train and evaluate the performance of the tree using colors instead of the magnitudes themselves
In [47]:
# %load ex6.py
def ex6():
    colors = np.vstack([quasar_table['PSFMAG_%d' % f]-quasar_table['PSFMAG_%d' % (f+1)] for f in range(0,4)]).T
    color_train = colors[::5]
    color_test = colors[::18]
    clf.fit(color_train, z_train)
    #Test it!
    z_fit_train = clf.predict(color_train)
    z_fit = clf.predict(color_test)
    #Compute the rms error in the training set and test set
    rms_train = np.sqrt(np.mean((z_fit_train - z_train) ** 2))
    rms_test = np.sqrt(np.mean((z_fit - z_test) ** 2))
    plt.scatter(z_test,z_fit, color='k', s=0.1)
    plt.plot([-0.1, 6], [-0.1, 6], ':k')
    plt.text(0.04, 5, "rms = %.3f" % (rms_test))
    plt.xlabel('$z_{true}$')
    plt.ylabel('$z_{fit}$')
In [48]:
ex6()
Optional exercise: Create a nearest-neighbors estimator (KNN) using from sklearn.neighbors import KNeighborsRegressor
In [ ]:
# %load opt_ex1.py
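One possible sketch (this is not the contents of opt_ex1.py; it reuses mag_train, z_train, mag_test and z_test defined above):

from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor(n_neighbors=10)   #average the redshift of the 10 nearest neighbors in magnitude space
knn.fit(mag_train, z_train)
z_fit_knn = knn.predict(mag_test)
print(np.sqrt(np.mean((z_fit_knn - z_test) ** 2)))   #rms error on the test set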
I am going to use a recurrent neural network. It may not be the optimal choice, but it illustrates how to set up the network. More on recurrent neural networks here: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Optional exercise: Create your own Neural Network photo-z estimator
In [ ]:
# %load opt_nn.py
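A rough sketch of how such a network could be set up with Keras (this is not the contents of opt_nn.py; it assumes Keras is installed and, somewhat arbitrarily, feeds the five magnitudes as a length-5 sequence):

from keras.models import Sequential
from keras.layers import LSTM, Dense
model = Sequential()
model.add(LSTM(32, input_shape=(5, 1)))   #one recurrent layer reading the 5 magnitudes one at a time
model.add(Dense(1))                       #single output: the estimated redshift
model.compile(loss='mse', optimizer='adam')
model.fit(mag_train.reshape(-1, 5, 1), z_train, epochs=20, batch_size=128)
z_fit_nn = model.predict(mag_test.reshape(-1, 5, 1)).ravel()
print(np.sqrt(np.mean((z_fit_nn - z_test) ** 2)))   #rms error on the test set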
We will use randomfield to create a Gaussian random field.
In [ ]:
import randomfield
In [ ]:
%time generator = randomfield.Generator(8, 128, 1024, grid_spacing_Mpc_h=1.0, verbose=True)
In [ ]:
delta = generator.generate_delta_field(smoothing_length_Mpc_h=2.0, seed=123, show_plot=True)
We will try to calculate the correlation function in the direction of the line-of-sight:
$$\xi_{\parallel}(r)=\langle \delta(r') \delta(r+r')\rangle$$
In [ ]:
%%time
#Let's compute a simple version of the correlation function along the line of sight
corr = np.zeros(delta.shape[2])
for i in range(1,delta.shape[2]-1):
    corr[i]=np.sum(delta[:,:,i:]*delta[:,:,:-i])/(delta.shape[0]*delta.shape[1]*(delta.shape[2]-1))
In [ ]:
r = np.linspace(0,delta.shape[2],delta.shape[2]+1)
plt.plot(r[1:-1],r[1:-1]**2*corr[1:])
plt.xlim(0,200)
plt.xlabel(r'$r_{\parallel}$ [Mpc h$^{-1}$]')
plt.ylabel(r'$r_{\parallel}^{2}*\xi_{\parallel}(r_{\parallel})$ [Mpc$^{2}$ h$^{-2}$]')
plt.ylim(-4500,300);
Healpy (http://healpy.readthedocs.io/en/latest/) includes tools for visualizing sky maps, but what if we want to use different projections? Or what if we cannot use healpy? See here and here for more info.
In [ ]:
def plot_sky(ra, dec, data=None, nside=4, label='', projection='eck4', cmap=plt.get_cmap('jet'), norm=None,
             hide_galactic_plane=False, healpy=False):
    from mpl_toolkits.basemap import Basemap
    from matplotlib.collections import PolyCollection
    from astropy.coordinates import SkyCoord
    ra=ra.to(u.deg).value
    dec=dec.to(u.deg).value
    if(healpy):
        import healpy as hp
        # get pixel area in degrees
        pixel_area = hp.pixelfunc.nside2pixarea(nside, degrees=True)
        # find healpixels associated with input vectors
        pixels = hp.ang2pix(nside, 0.5*np.pi-np.radians(dec), np.radians(ra))
        # find unique pixels
        unique_pixels = np.unique(pixels)
        # count number of points in each pixel
        bincounts = np.bincount(pixels)
        # if no data provided, show counts per sq degree
        # otherwise, show mean per pixel
        if data is None:
            values = bincounts[unique_pixels]/pixel_area
        else:
            weighted_counts = np.bincount(pixels, weights=data)
            values = weighted_counts[unique_pixels]/bincounts[unique_pixels]
        # find pixel boundaries
        corners = hp.boundaries(nside, unique_pixels, step=1)
        corner_theta, corner_phi = hp.vec2ang(corners.transpose(0,2,1))
        corner_ra, corner_dec = np.degrees(corner_phi), np.degrees(np.pi/2-corner_theta)
        # set up basemap
        m = Basemap(projection=projection, lon_0=-90, resolution='c', celestial=True)
        m.drawmeridians(np.arange(0, 360, 30), labels=[0,0,1,0], labelstyle='+/-')
        m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-')
        m.drawmapboundary()
        # convert sky coords to map coords
        x,y = m(corner_ra, corner_dec)
        # regroup into pixel corners
        verts = np.array([x.reshape(-1,4), y.reshape(-1,4)]).transpose(1,2,0)
        # Make the collection and add it to the plot.
        coll = PolyCollection(verts, array=values, cmap=cmap, norm=norm, edgecolors='none')
        plt.gca().add_collection(coll)
        plt.gca().autoscale_view()
        if not hide_galactic_plane:
            # generate vector in galactic coordinates and convert to equatorial coordinates
            galactic_l = np.linspace(0, 2*np.pi, 1000)
            galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5
            # project to map coordinates
            galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree)
            m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k')
        # Add a colorbar for the PolyCollection
        plt.colorbar(coll, orientation='horizontal', pad=0.01, aspect=40, label=label)
    else:
        nx, ny = nside, nside
        ra_bins = np.linspace(-180, 180, nx+1)
        cth_bins = np.linspace(-1., 1., ny+1)
        ra[ra>180]=ra[ra>180]-360
        density, _, _ = np.histogram2d(ra, np.sin(dec*np.pi/180.), [ra_bins, cth_bins])
        ra_bins_2d, cth_bins_2d = np.meshgrid(ra_bins, cth_bins)
        m = Basemap(projection=projection, lon_0=0, resolution='l', celestial=True)
        m.drawmeridians(np.arange(0, 360, 60), labels=[0,0,1,0], labelstyle='+/-')
        m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-')
        m.drawmapboundary()
        xs, ys = m(ra_bins_2d, np.arcsin(cth_bins_2d)*180/np.pi)
        pcm = plt.pcolormesh(xs, ys, density)
        plt.colorbar(pcm,orientation='horizontal', pad=0.04, label=label)
        if not hide_galactic_plane:
            # generate vector in galactic coordinates and convert to equatorial coordinates
            galactic_l = np.linspace(0, 2*np.pi, 1000)
            galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5
            # project to map coordinates
            galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree)
            m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k')
In [ ]:
ra = 360*np.random.random(10000)*u.deg
dec = np.arcsin(-1+2*np.random.random(10000))*180/np.pi*u.deg
plot_sky(ra,dec,healpy=False, nside=16, projection='eck4', label='Galaxies per pixel')
Exercise: Plot the positions of the quasars
In [ ]:
# %load ex7.py
def ex7():
    plot_sky(quasar_table['RA']*u.deg,quasar_table['DEC']*u.deg,nside=128, healpy=False)
In [ ]:
ex7()
astropy.cosmology is a subpackage that contains several implemented cosmologies (LCDM, wCDM, etc.) and computes useful quantities for them, such as the comoving distance, $H(z)$, or the transverse separation corresponding to a given angular separation at redshift $z$.
Example: Using $\Lambda CDM$ with Planck 2015 cosmological parameters
In [ ]:
from astropy.cosmology import Planck15
In [ ]:
print(Planck15.__doc__)
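The transverse separation mentioned above can be obtained with kpc_proper_per_arcmin; as a quick sketch (the redshift and angle are arbitrary, and Planck15 and u are imported above):

print(Planck15.comoving_distance(0.5))                                  #comoving distance to z = 0.5
print((1 * u.arcmin * Planck15.kpc_proper_per_arcmin(0.5)).to(u.Mpc))   #proper transverse size subtended by 1 arcmin at z = 0.5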
In [ ]:
z=np.logspace(-4,4,30)
om=Planck15.Om(z)
ob=Planck15.Ob(z)
plt.plot(z,om,label=r'$\Omega_{m}(z)$')
plt.plot(z,ob,label=r'$\Omega_{b}(z)$')
plt.legend(loc=2)
plt.xscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$\Omega(z)$')
In [ ]:
h=Planck15.H(z)
plt.plot(z,h,label=r'$H(z)$')
plt.legend(loc=2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$H(z)$ %s' % h.unit)
In [ ]:
from astropy.cosmology import z_at_value
z_at_value(Planck15.comoving_distance, 1200 *u.Mpc)
In [ ]:
from astropy.cosmology import w0waCDM
cosmo = w0waCDM(H0=75*u.km/u.s/u.Mpc,Om0=0.3,Ode0=0.7,w0=-1.2,wa=-3,Neff=4,Ob0=0.044,m_nu=1e-5*u.eV)
In [ ]:
h_cosmo = cosmo.H(z)
plt.plot(z,h_cosmo, label='Random cosmology')
plt.plot(z,h, label='Planck15')
plt.legend(loc=2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$H(z)$ %s' % h.unit)
In [ ]:
plt.plot(z,h_cosmo/h-1)
plt.legend(loc=2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$H_{cosmo}(z)/H_{Planck15}(z)-1$')
In [ ]: