In [1]:
# from https://randomastronomy.wordpress.com/2014/07/14/a-python-with-a-long-memory-fitting-1f-noise-with-pymc/

In [ ]:
import datetime

import pymc
import numpy as np
import spacepy.plot as spp # for the style
import matplotlib.pyplot as plt
import spacepy.toolbox as tb
%matplotlib inline

datetime.datetime.now()

A python with a long memory: fitting 1/f noise with PyMC

Posted on July 14, 2014

Introduction

Since ancient times, astronomy has been observing long-memory processes (i.e., low-frequency noise) all over the place. The problem has been a tough one for many people working on time series, especially those involved in exoplanets: their lightcurves (which show an apparent decrease in the observed flux of a star as a planet eclipses it) usually show signs of red noise that cannot be neglected, as it affects the physical properties of the planet and the star derived from these measurements (Pont, Zucker & Queloz, 2006).

A special case of these kinds of long-memory processes, flicker (or 1/f, or pink) noise, has attracted the attention of the community for more than forty years (see, e.g., Press, 1978) but, until recent years, little work had been done to account for that kind of noise, despite the fact that approximate solutions to the problem have been around for almost twenty years (Wornell, 1995). A few years ago, however, Carter & Winn (2009) published a very interesting paper in which they derive and implement a methodology following the approach of Wornell (1995), in which a wavelet transform is applied to the time series. The trick is that in wavelet space, 1/f processes have nearly diagonal covariance matrices, so the problem is extremely simplified by wavelet-transforming the time series. The implementation assumes a model of the form:

f(t,\theta) + \epsilon_{\textnormal{1/f}}(t) + \epsilon_{\textnormal{wn}}(t),

where the first term is a deterministic model with parameters \theta; the second term is a zero-mean 1/f stochastic process, i.e., one whose power spectral density is proportional to 1/f, defined by a single parameter, \sigma_r, which sets its amplitude but is not the square root of its variance; and the third term is a zero-mean white noise process, here assumed to be Gaussian (and hence defined by a single parameter, \sigma_w, the square root of its variance). Furthermore, the model assumes the time sampling is uniform.
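A back-of-the-envelope aside (mine, not in the original post) on why \sigma_r is not a standard deviation: for a power spectral density S(f) \propto \sigma_r^2/f, the variance accumulated over a frequency band is

\int_{f_{\textnormal{min}}}^{f_{\textnormal{max}}} S(f)\,df \propto \sigma_r^2 \ln(f_{\textnormal{max}}/f_{\textnormal{min}}),

which grows with the observed bandwidth, so the process has no single bandwidth-independent variance; \sigma_r only sets the overall amplitude of the spectrum.

The near-diagonality of 1/f processes in wavelet space, mentioned above, can also be checked empirically. The following sketch (my own; it uses the PyWavelets package, pywt, in place of the author's WaveletCode) simulates many 1/f realizations, wavelet-transforms each one, and measures how much of the covariance of the wavelet coefficients lies off the diagonal:

    import numpy as np
    import pywt  # PyWavelets, standing in for the author's WaveletCode

    rng = np.random.RandomState(42)
    n, nsim = 256, 2000
    coeffs = []
    for _ in range(nsim):
        # 1/f noise via frequency-domain shaping: amplitude ~ f**(-1/2)
        freqs = np.fft.rfftfreq(n)
        freqs[0] = freqs[1]  # avoid dividing by zero at DC
        spec = (rng.randn(freqs.size) + 1j*rng.randn(freqs.size)) / np.sqrt(freqs)
        x = np.fft.irfft(spec, n)
        coeffs.append(np.concatenate(pywt.wavedec(x, 'haar')))
    coeffs = np.array(coeffs)

    cov = np.cov(coeffs.T)
    off = cov - np.diag(np.diag(cov))
    # a small fraction here means the covariance is nearly diagonal
    print('off-diagonal power fraction:', (off**2).sum() / (cov**2).sum())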

The work of Carter & Winn (2009) was implemented in a code called the Transit Analysis Package (TAP; Gazak et al., 2012), which was coded in Interactive Data Language (IDL), a very nice coding platform/language, but also a very expensive one for a poor graduate student like myself¹. So I decided to make my own Python implementation of the work of Carter & Winn (2009), using PyMC, a really amazing Python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo. In this blog post I want to give a little tutorial on how to use this implementation; you can download all the code for your research from my personal webpage [here]. If you use it, please cite the work of Carter & Winn (2009) and acknowledge the hard work I have put into this implementation.

Installing the package

To install this package, download the code from the package repository and unpack it into a folder. Inside you will find an install.py file that you have to run; this will create a file called FWT.so inside the WaveletCode folder. This is a Discrete Wavelet Transform and Inverse Wavelet Transform implementation in C that I wrote, based on the algorithm in Numerical Recipes. The C implementation was done mainly for speed, and it can be called from Python, as you can check in the Wavelets.py code in that same folder.
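After running install.py, a quick way to check that the build worked is to try importing the wrapper (this check is my own suggestion, not part of the package):

    import sys
    sys.path.append("WaveletCode")
    import Wavelets  # succeeds only if FWT.so was built correctly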

Once the above is done, you are all set! From now on, I will assume you are working in the same directory as the FlickerLikelihood.py file that we will use (which is in the same folder as the install.py file). Let's do a little experiment now!

A simple problem: fitting a line with 1/f + white Gaussian noise

Let's solve the simple problem of fitting a line of the form f(t)=at+b to data that have both additive white and 1/f noise. For the following plot, I created 1/f noise following the method of Paul Bourke, with \beta=1 (in red), and added white Gaussian noise to make the problem even more challenging (black dots):

[Figure: simulated 1/f noise (red) with added white Gaussian noise (black dots)]
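If you want to generate similar noise yourself, here is a minimal sketch of frequency-domain 1/f^\beta noise generation in the spirit of Bourke's method (the exact recipe behind the figure may differ, and the function name pink_noise is my own):

    import numpy as np

    def pink_noise(n, beta=1.0, seed=None):
        """Generate n samples whose power spectrum falls off as 1/f**beta,
        by shaping white noise in the frequency domain."""
        rng = np.random.RandomState(seed)
        freqs = np.fft.rfftfreq(n)
        freqs[0] = freqs[1]  # avoid dividing by zero at DC
        spec = rng.randn(freqs.size) + 1j*rng.randn(freqs.size)
        spec *= freqs**(-beta/2.0)  # power ~ |amplitude|**2 ~ 1/f**beta
        x = np.fft.irfft(spec, n)
        return x / x.std()  # normalize to unit variance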

The next step is to add a model to that noise. I added a model of the form f(t)=at+b, with a=0.5 and b=3, which is plotted in the following figure (you can download the dataset from here to run the code shown in this post):

[Figure: the dataset, i.e., the line f(t)=0.5t+3 plus the 1/f and white noise]
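For completeness, here is a sketch of how such a dataset could be assembled and saved under the file name the model code below expects (it reuses the pink_noise function from the previous sketch; the noise amplitudes are illustrative, since the post does not state the values actually used):

    import numpy as np

    t = np.arange(1024, dtype=float)
    signal = 0.5*t + 3.0  # f(t) = a*t + b with a = 0.5, b = 3
    rng = np.random.RandomState(7)
    data = signal + 5.0*pink_noise(t.size, beta=1.0, seed=7) \
                  + rng.normal(0.0, 2.0, t.size)
    np.savetxt('flicker_dataset.dat', np.column_stack((t, data)))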

And now comes the code. The FlickerLikelihood.py file contained in the package calculates the likelihood derived by Carter & Winn (2009) with the function get_likelihood, which takes as inputs (1) the residuals, (2) the standard deviation of the white noise, \sigma_w, and (3) the parameter that defines the amplitude of the flicker-noise model, \sigma_r. With it, one can define the likelihood of our model with PyMC, in a file I called LineFlickerModel.py:

    # LineFlickerModel.py
    import sys
    sys.path.append("WaveletCode")
    import FlickerLikelihood
    import numpy as np
    from pymc import *
    import Wavelets

    # Functions
    def model(t, a, b):
        return a*t + b

    # Get the data
    t, data = np.loadtxt('flicker_dataset.dat', unpack=True)

    # Priors
    b = Uniform('b', -10, 10)
    a = Uniform('a', -1, 1)
    sigma_w = Uniform('sigma_w', 0, 100)
    sigma_r = Uniform('sigma_r', 0, 100)

    # Likelihood
    @observed
    def likelihood(value=data, t=t, a=a, b=b, sigma_w=sigma_w, sigma_r=sigma_r):
        residuals = data - model(t, a, b)
        return FlickerLikelihood.get_likelihood(residuals, sigma_w, sigma_r)

With this code, I can now run some MCMC chains using PyMC; the following script imports the model above and draws samples from it:

    import numpy as np
    import matplotlib.pyplot as plt
    import pymc
    import LineFlickerModel

    # Set the number of iterations and burn-ins:
    niterations = 100000
    nburn = 10000

    # Set the MAP estimates as the starting point of the MCMC.
    # First, calculate the MAP estimates:
    M = pymc.MAP(LineFlickerModel)
    M.fit()

    # Now set this as the starting point for the MCMC:
    mc = pymc.MCMC(M.variables)

    # And start the sampler!
    mc.sample(iter=niterations+nburn, burn=nburn)

    # Plot the final (posterior) samples:
    pymc.Matplot.plot(mc)
    plt.show()

The results you will get are the posterior samples for each parameter. In particular, for the fitted parameters I get:

a=0.501 \pm 0.008

b=3.04 \pm 2.8
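These summary values are just the means and standard deviations of the posterior samples, and can be pulled directly from the PyMC (2.x) trace once sampling has finished; a minimal sketch, assuming the mc object from the script above:

    for name in ('a', 'b', 'sigma_w', 'sigma_r'):
        samples = mc.trace(name)[:]  # posterior samples for this parameter
        print('%s = %.3f +/- %.3f' % (name, samples.mean(), samples.std()))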

(Here I cited the standard deviations of the posterior samples as errors.) Note the huge error bar on b: this is good! It is good because, due to the correlated noise, we are genuinely uncertain about the value (recall that 1/f noise + white Gaussian noise is a stochastic process and, as such, we only observe one realization of it; we can take this into account, but doing so won't make our measurements more precise, only more accurate). What if we try fitting a white Gaussian noise model (as most people would do!), not taking into account that we have correlated noise? I will leave that problem to the reader, but the MCMC samples of both the Gaussian white noise run (black points) and the 1/f + Gaussian white noise run (red points) are plotted in the following figure:

[Figure: posterior samples from the white-noise-only run (black points) and the 1/f + white noise run (red points)]

The results I get for the parameters are:

a_{\textnormal{g}}=0.493 \pm 0.002

b_{\textnormal{g}}=4.56 \pm 0.63

Note the very small error bar on the estimated parameter b_{\textnormal{g}}! This is BAD! What this shows (and you can generate more datasets to test this claim, which is actually something Carter & Winn (2009) already did) is that, in general, the error bars on the parameters will be more realistic if you take into account the proper noise model: on average you will get the same results, but for a particular dataset, the error bars obtained with the 1/f noise model will be more realistic than the ones you would get by assuming a Gaussian (or any) white noise model. For the statisticians in the crowd: the estimator taking into account the 1/f noise has a lower variance than the estimator assuming white Gaussian noise.

The challenge in real life is, of course, to know a priori what kind of noise you actually have. There are several tools for this that are beyond the scope of this blog post but, if you are really interested, you can check some of them in our work on this subject in Jordán, Espinoza, et al. (2013); the paper is a little technical in general, but the section relevant to this problem is Section 4.2.
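As a quick first-pass diagnostic (my own suggestion, not one from that paper), you can estimate the power-law slope of the periodogram of the residuals in log-log space: white noise gives a slope near 0, while flicker noise gives a slope near -1:

    import numpy as np
    from scipy import signal

    # 'residuals' = data minus the best-fit deterministic model
    freqs, psd = signal.periodogram(residuals)
    mask = freqs > 0  # drop the zero-frequency (DC) term
    slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    print('PSD power-law slope: %.2f' % slope)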

Conclusions

I want to keep the conclusions super short (because this was intended only as a tutorial on how to use my code), but in summary what I want to say is:

Know your noise (or at least try to understand it). This is the best advice I can give to anyone: small error bars don't imply high accuracy.

That's it! If you have any questions, remarks, thanks, or find any bugs, please use the comment section! All the code used in this example is in the GitHub repository of this package, here: https://github.com/nespinoza/flicker-noise/tree/master/example.

¹ To be fair, the real reasons why I wanted to make a Python implementation were that (1) I stopped coding in IDL as soon as I learned Python existed, (2) most of my colleagues work in Python, and (3) I really wanted to use PyMC for this kind of analysis.



In [ ]: