The Copernicus Algorithm

This is a collection of scripts written to fulfil the requirements of my Ph.D. The Copernicus algorithm reconstructs the background geometry of the Universe without invoking the Copernican principle (CP). As a result it can be used to test the CP. At the time of writing, current data were not sufficient to confirm or refute the validity of the CP (see [Bester, 2015](http://inspirehep.net/search?p=find+eprint+1506.01591)). The code is therefore made available, albeit with no guarantee that it is able to perform its advertised duty.

Compiling the Fortran modules

After downloading the Copernicus directory we first need to compile a few files using f2py. The Fortran modules do not depend on any external libraries, so there shouldn't be any dependency issues (beyond having working versions of numpy and scipy installed). Open a terminal in the folder called fortran_mods and run the command

$ f2py -c -m CIVP solver.f90

You should see some output. If it ends with

Removing build directory ...

we can be sure the code compiled without any errors (ignore the warnings). You should now have a file called CIVP.so or CIVP.pyd, depending on whether you are running Linux or Windows respectively. This file is automatically imported by the various scripts; it is the main module performing the integrations. If you renamed the CIVP module at compile time, be sure to change the name in the corresponding import list of the file you are trying to run. The usage documentation of the CIVP module can be inspected by running the following command inside a Python session

>>> CIVP.subroutine_name.__doc__

where subroutine_name is the name of the subroutine you need documentation for. This works for any module compiled with f2py. Alternatively, if you are running an IDE with an object inspector (such as Spyder2) you can simply import the module and type CIVP.subroutine_name into the object inspector's text box.
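For example (solve below is a placeholder name, not necessarily a real subroutine; the module docstring lists the subroutines that were actually wrapped):

>>> import CIVP
>>> print(CIVP.__doc__)        # f2py lists the wrapped subroutines here
>>> print(CIVP.solve.__doc__)  # substitute an actual subroutine name for solve

To make sure everything is working as expected we will have a quick look at the convergence diagnostics.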

Convergence diagnostic

The convergence test is rather uncreatively named Test_Convergence.py. First let's set up the environment to render figures inline and ignore irritating warning messages with


In [4]:
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')

Next we run the program using IPython's run command


In [6]:
run Test_Convergence.py


0
1
2
3
4
5
6
7
8
9
2.00164228526 0.000429824554643
2.00088680001 0.000292557620511
2.03012814483 0.0160196723476
2.00959876234 0.0257234073866
2.04622885524 0.0329766647402
2.01275230486 0.194532089938
32.5866903466 1.86903586316
2.8798384867 0.788774312782

The first bit of output is just the progress report; the numbers indicate the sample number. Each sample is integrated on three grids, each time successively halving the spatial grid spacing. The solution on these three grids is then used to perform a three-level self-convergence test. The output following the progress report gives the average convergence factors as well as their standard deviations. Since most of these sit close to 2, the code is clearly at least second order.
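For orientation, here is a minimal sketch of how such a three-level test can be computed. This is my paraphrase, not the actual code in Test_Convergence.py; on this reading the printed factors are rates that should approach 2 for a second-order scheme, which matches the output above.

In [ ]:
import numpy as np

def self_convergence_rate(u_h, u_h2, u_h4):
    # u_h, u_h2, u_h4: solutions computed with grid spacing h, h/2 and h/4,
    # restricted to the common coarse grid (e.g. u_h2[::2] and u_h4[::4])
    num = np.linalg.norm(u_h - u_h2)
    den = np.linalg.norm(u_h2 - u_h4)
    return np.log2(num / den)  # approaches p for an O(h^p) scheme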

Example usage (simple)

The directories in the Copernicus folder have been structured to allow for easy customisation (at least I hope so). Running a simulation is as easy as running the sampler.py script. This script uses the classes contained in the Master.py file to implement an MCMC that infers the joint distribution of $H_{\parallel}(z)$, $\rho(z)$ and $\Lambda$ compatible with the data. Before getting into the details of how to modify the input data, let's run the script to see it in action


In [ ]:
run sampler.py

The outputs of this script are, firstly, another progress report indicating how many samples have been drawn, secondly, the acceptance rate of the MCMC and, finally, a file called samps.npz in the ProcessedData directory which contains the quantities of interest. We will go into the details of how to plot these shortly.
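In the meantime you can peek inside the archive with numpy (the stored key names depend on the configuration, so inspect them rather than assuming them):

In [ ]:
import numpy as np

samps = np.load('ProcessedData/samps.npz')
print(samps.files)  # lists the arrays stored in the archive

First let's see how to change the config inside sampler.py.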

To run an actual simulation you will need to modify a couple of files. Firstly, the RawData directory contains the raw data files as .txt files that are to be used in the simulation. This includes the data used to set the priors and the data used to perform inference. You can use any combination of data as long as you modify the Master.py file to load it. The main class implementing the algorithm is the SSU class (for Spherically Symmetric Universe) in the Master.py file. This class has a method called loadDat() that should be modified to load the relevant data. There are at least two compulsory data sets, viz. the longitudinal expansion rate $H_{\parallel}(z)$ and the matter density $\rho(z)$. The data for these two functions will be used to set the priors, but they may also be used for inference. If you want to incorporate a specific data set to perform inference with, you should modify the get_Chi2() method to compute the $\chi^2$ for that particular data set and add it to the others, as sketched below. If you are using an observable that I have not yet computed, you will have to add a method that computes that observable using the output from the CIVP. But more on this later.
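As an illustration only (the attribute and method names below are placeholders, not the actual SSU interface), an extra $\chi^2$ contribution for a hypothetical data set $D(z)$ might look like this:

In [ ]:
import numpy as np

# sketch of an extended get_Chi2() inside the SSU class; zD, D and sD stand
# for hypothetical redshifts, observed values and errors that loadDat()
# would have read from RawData, and get_Dz() for a method computing the
# observable from the CIVP output
def get_Chi2(self):
    chi2 = self.chi2_of_existing_data_sets()  # placeholder for the existing terms
    Dz = self.get_Dz(self.zD)                 # model prediction at the data redshifts
    chi2 += np.sum((self.D - Dz)**2 / self.sD**2)
    return chi2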

