In [2]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Out[2]:
In [4]:
from IPython.display import HTML
HTML('../style/code_toggle.html')
Out[4]:
4.1.1.A Characteristics of a Periodic Function
4.1.1.B Complex Periodic Functions
4.1.2.A Fourier Series
4.1.2.B Fourier Transforms
4.1.2.B.a Boundary Problems
4.1.2.B.b Fourier transform of a sine function
4.1.2.B.c Fourier transform of a Dirac delta
4.1.2.B.d Fourier transform of a Gaussian
4.1.2.C Power Spectra
In this section, we will talk briefly about waves, then introduce Fourier analysis. Our aim is to make Fourier analysis as obvious as possible, while still going into sufficient depth to allow students to develop a good physical intuition.
A short note on nomenclature: in the literature, Fourier space can be referred to as phase space, Fourier space, $k$-space, visibility space, uv-space and more besides depending on the discipline. In this book, these terms will be used interchangeably: they all mean the same thing.
A periodic function is an endlessly repeating function. The simplest periodic function is the humble sine function:
In [16]:
x = (np.arange(800) - 400)/200.*np.pi   # 800 samples spanning [-2*pi, 2*pi)
plt.title("Sine function")
plt.plot(x,np.sin(x))
Out[16]:
Two quantities define a periodic function: amplitude and frequency. Amplitude defines the size of the wave; frequency defines how often the wave repeats. In the example above, both are set to 1: the function therefore repeats every $2\pi$ (since the sine function is the $y$-projection of a particle moving in a circular orbit), and has an amplitude of 1 (since the aforementioned orbit is of unit radius).
Two constants can also be added to a periodic function: an amplitude offset and a phase offset. A more general expression for our basic periodic function would therefore be:
$$ y(x) = A\sin(f x + \phi) + C $$
where $A$ is the amplitude, $f$ the frequency, $\phi$ the phase offset and $C$ the amplitude offset.
In [26]:
amplitude = 1
frequency = 1
phase_offset = np.pi/2.
amplitude_offset = 0
plt.title("General sine function")
plt.plot(x, np.sin(x))   # basic sine, for reference
plt.plot(x, amplitude*np.sin(frequency*x + phase_offset) + amplitude_offset)   # general form
Out[26]:
I strongly encourage you to play with the graph above: for example, what happens when you set the amplitude to -1? What happens if you set the amplitude to -1 and the phase offset to $\pi$?
N.B. Note that setting the phase offset of a sine function to $\frac{\pi}{2}$ gives you a different periodic function: a cosine function.
In many physics textbooks, you will encounter the angular frequency $\omega$ rather than the frequency $f$. The two are linked by the following relation:
$$ \omega = 2\pi f $$
There exist many periodic functions, all built from the humble sine function. One of particular interest to physicists - for it helps us describe electromagnetic waves - is the exponential form of the complex periodic function, as defined by Euler's formula:
$$ e^{ix} = \cos(x) + i\sin(x) $$
For the remainder of this section, we will use both the sine and exponential forms of periodic functions.
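As a quick numerical sanity check (an illustrative sketch, not part of the original text), we can verify Euler's formula with numpy: the real and imaginary parts of $e^{ix}$ are simply $\cos(x)$ and $\sin(x)$.
In [ ]:
# Verify Euler's formula numerically: exp(ix) = cos(x) + i*sin(x)
theta = (np.arange(800) - 400)/200.*np.pi
z = np.exp(1j*theta)
print(np.allclose(z.real, np.cos(theta)))   # True: the real part is cos(theta)
print(np.allclose(z.imag, np.sin(theta)))   # True: the imaginary part is sin(theta)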
A key discovery of the French mathematician Joseph Fourier was that, within a bounded interval, any integrable function - no matter its shape - can be replicated using a linear combination of sines and cosines. The bounded interval is significant: since we are replicating the original function with periodic functions, the predefined interval essentially corresponds to the maximum period of our sines and cosines.
Note that Fourier series are covered in greater mathematical detail in 2.3 Fourier Series. Here, we will try to give a more physical understanding of the operation.
Consider some arbitrary function (here, $f(x) = (x-0.5)^3+2$):
In [112]:
x = (np.arange(1200) - 600.)/200   # 1200 samples spanning [-3, 3)
y=(x-0.5)**3+2
plt.plot(x,y)
plt.title("Arbitrary function")
plt.ylabel("f(x)")
plt.xlabel("x")
Out[112]:
Let us now consider that we are only interested in this function over the range $x=[-2,2]$. Our interval is thus 4: the period over which we will replicate our initial function $f(x)$ will be 4. Our function is certainly integrable over the period of interest; we can thus compute its Fourier series and try to replicate the function over our interval of interest. The formula to do so is as follows:
$$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n\cos\left(\frac{2\pi n x}{T}\right) + b_n\sin\left(\frac{2\pi n x}{T}\right) \right] $$
with
$$ a_n = \frac{2}{T}\int_{x_0}^{x_0+T} f(x)\cos\left(\frac{2\pi n x}{T}\right)dx \qquad b_n = \frac{2}{T}\int_{x_0}^{x_0+T} f(x)\sin\left(\frac{2\pi n x}{T}\right)dx $$
where $T$ is the length of the interval. Note the factor of 0.5 in front of $a_0$.
In practice, of course, the upper limit of $n\rightarrow\infty$ is not achievable. Let us try to replicate the arbitrary function above using different values of $n_{max}$.
In [120]:
nmax = 10
x = (np.arange(1201) - 800.)/200      # 1201 samples spanning [-4, 2]
y = (x - 0.5)**3 + 2

def FourierSeriesApprox(xvals, yvals, nmax):
    # Approximate yvals by a Fourier series truncated at nmax harmonics,
    # using the sampled interval as the replication period.
    T = xvals[-1] - xvals[0]            # period of the approximation
    w = 2*np.pi/T                       # fundamental angular frequency
    dt = xvals[1] - xvals[0]            # sampling step
    approx = np.zeros_like(yvals) + 1./T*np.sum(yvals)*dt   # constant term a0/2
    for n in np.arange(nmax) + 1:
        # Fourier coefficients of the n-th harmonic
        an = 2./T*np.sum(np.cos(w*n*xvals)*yvals)*dt
        bn = 2./T*np.sum(np.sin(w*n*xvals)*yvals)*dt
        approx = approx + an*np.cos(w*n*xvals) + bn*np.sin(w*n*xvals)
    return approx

yApprox = FourierSeriesApprox(x, y, nmax)
plt.plot(x, yApprox, label="Fourier series approx. with nmax=%i" % nmax)
plt.plot(x, y, label="Initial function")
plt.legend(loc="upper left")
Out[120]:
Note that the fit becomes worse at the edges, since the function we are fitting does not itself repeat with our sampling period. Our sines and cosines must repeat at some point - hence the divergence at the edges of our sampling range.
Each $n$ corresponds to a given spatial frequency of our function over the sampling range: there are certain frequencies for which our periodic functions prove better "fits" for our function. This is the key point of Fourier analysis: we seek to find what scales contain most of our function's information. As nmax increases, we pick up finer and finer detail; as it tends to infinity, we get closer to being able to deal with discontinuities.
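To make this concrete, we can re-run the approximation for a few increasing values of nmax (an illustrative sketch re-using FourierSeriesApprox, x and y from the cell above): the higher nmax is, the finer the detail captured by the series.
In [ ]:
# Fourier series approximations of increasing order
# (re-uses FourierSeriesApprox, x and y from the cell above)
for nmax_i in [1, 3, 10, 50]:
    plt.plot(x, FourierSeriesApprox(x, y, nmax_i), label="nmax=%i" % nmax_i)
plt.plot(x, y, 'k--', label="Initial function")
plt.legend(loc="upper left")
plt.title("Fourier series approximations of increasing order")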
Physicists tend to discuss this in terms of energy. Each characteristic scale has a certain amount of information - a certain amount of "energy" from the function's total energy budget. If the function is constant, for example, we know that the characteristic scale will be in the constant term in the Fourier series (i.e. $a_0$): equivalently, we can say that all the function's power is concentrated in $a_0$. The reason we talk about "power" is explained in 4.1.2.C Power Spectra .
Note that the Fourier series can also be written in terms of a complex exponential as follows:
$$ f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i n x / T} \qquad \textrm{with} \qquad c_n = \frac{1}{T}\int_{x_0}^{x_0+T} f(x)\, e^{-2\pi i n x / T}\, dx $$
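As a small numerical sketch of this form (not part of the original text), we can compute the complex coefficients $c_n$ for the same arbitrary function and check that summing $c_n e^{2\pi i n x / T}$ over positive and negative $n$ reproduces the sine/cosine approximation above.
In [ ]:
# Complex-exponential form of the Fourier series (illustrative sketch)
# (re-uses x and y, i.e. (x-0.5)**3 + 2, from the cells above)
T = x[-1] - x[0]
dt = x[1] - x[0]
nmax_c = 10
approx_c = np.zeros(len(x), dtype=complex)
for n in range(-nmax_c, nmax_c + 1):
    cn = 1./T*np.sum(y*np.exp(-2j*np.pi*n*x/T))*dt    # complex coefficient c_n
    approx_c += cn*np.exp(2j*np.pi*n*x/T)
plt.plot(x, approx_c.real, label="Complex-exponential series, nmax=%i" % nmax_c)   # imaginary part is ~0
plt.plot(x, y, label="Initial function")
plt.legend(loc="upper left")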
In the example above, we have taken a limited interval and found the weights associated with a set of discrete spatial frequencies. However, why should we restrict ourselves to discrete frequencies? As the interval $T$ grows, the spacing between successive frequencies ($2\pi/T$) shrinks; in the limit of an infinitely large interval, we approach a continuous sampling of our function's Fourier coefficients.
By considering our function over its full range (i.e. $[x_0,x_1]\rightarrow[-\infty,\infty]$) and sampling the frequencies infinitesimally finely, we can thus take the complex exponential form of the Fourier series to define the Fourier transform $\tilde{f}$ of a function $f$ (covered in greater mathematical detail in 2.4 The Fourier Transform):
$$ \tilde{f}(\nu) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \nu t}\, dt $$
In other words, we create a continuous description, for every frequency, of the relative "weight" of the scale associated with that frequency in our function $f(t)$. For the purposes of this section, that is as far as we will take our analysis; everything that's been said about Fourier series holds for Fourier transforms, which are a more general case of the former.
Note that this works for both time frequencies and spatial frequencies. This latter quantity is what interests us as astronomers - by taking the (2-dimensional) Fourier transform of a (2-dimensional) image of the sky, we can find where the light of our sources lies. Most importantly, the reverse is true: we can recreate an image if we know its Fourier space well enough!
One key advantage Fourier transforms hold over Fourier series is that we are no longer beholden to poorly-set boundaries. Consider a simple sine wave of the following form:
$$ f(x) = \sin(4x) $$
If we try to find the Fourier coefficient for this function over a range other than a multiple of its period, then we may run into problems. Observe:
In [169]:
nmax = 50

# Poorly-set boundary: the range [0, 3] is not a multiple of the period of sin(4x)
x = np.arange(1201)/400.
y = np.sin(4*x)
yApprox = FourierSeriesApprox(x, y, nmax)     # re-using FourierSeriesApprox defined above

# Appropriate boundary: the range [0, pi] contains exactly two periods of sin(4x)
x1 = np.arange(301)*np.pi/300
y1 = np.sin(4*x1)
y1Approx = FourierSeriesApprox(x1, y1, nmax)

plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(x, yApprox, label="Fourier series approx. with nmax=%i" % nmax)
plt.plot(x, y, label="Initial function")
plt.legend(loc="upper left")
plt.title("Poorly-set boundary")
plt.subplot(1,2,2)
plt.plot(x1, y1Approx, label="Fourier series approx. with nmax=%i" % nmax)
plt.plot(x1, y1, label="Initial function")
plt.legend(loc="upper left")
plt.title("Appropriate boundary")
plt.show()
Here, we have only changed our function's boundaries, yet in the second case the result is already exact at $n_{max}=2$. The point is that the characteristic Fourier scales of our function are unchanged; boundary errors, however, can cause issues when trying to retrieve these scales.
Fourier transforms do not have this problem: they integrate over all space, and are therefore never restricted to an interval shorter than the period of any given characteristic scale of our original function.
Having recovered our function with $n_{max}=2$, and knowing that the boundaries used to do so were $x=[0,\pi]$: how many non-zero Fourier coefficients do we have, and what are their values? What do we get with the right boundaries and $n_{max}=1$, and what does that tell us?
We know that the function averages to zero over the interval (it covers two full periods), so the constant term is zero: $a_0 = 0$. Similarly, because the function is a pure sine, all the cosine coefficients $a_n$ will be zero: our function has no cosine-like characteristic scales. $b_1$ will also be zero, because $\sin(\omega x)$ (with $\omega = 2\pi/T = 2$) averages out against our function over this interval. $b_2$, however, will pick up the exact function: it has the correct angular frequency, $2\omega = 4$!
Using a larger $n_{max}$ will change the poorly-bounded Fourier series, but not the appropriately-bounded one; you can try this for yourself. This means that, in the appropriately-bounded case, all $b_{n \ne 2} = 0$! Knowing that Fourier transforms never suffer from boundary problems: what is the 1-D Fourier transform of this function? What would it be for $f(x) = \cos(2x)$?
As it happens, a sine (or cosine) in the image plane is equivalent to a point in the uv-plane: the Fourier transform of a sine or cosine is a pair of Dirac deltas placed at plus and minus the function's frequency. This is what radio astronomers refer to when they talk about interference fringes: they are talking about the image-plane function associated with a uv-plane point.
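We can see this numerically (a rough sketch using numpy's discrete FFT as a stand-in for the continuous transform): the spectrum of a finely-sampled sine wave is essentially zero everywhere except at plus and minus its frequency.
In [ ]:
# The Fourier transform of a sine: a pair of peaks at +/- its frequency
N = 1024
xs = np.arange(N)/float(N)              # one unit interval, N samples
f0 = 16                                 # frequency of the sine, in cycles per unit interval
signal = np.sin(2*np.pi*f0*xs)
spectrum = np.fft.fftshift(np.fft.fft(signal))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=xs[1]-xs[0]))
plt.plot(freqs, np.abs(spectrum))
plt.xlim(-40, 40)
plt.xlabel("Frequency")
plt.ylabel("|FT|")
plt.title("FT of a sine: peaks at +/- its frequency")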
Consider the extreme case of a point source sitting in the middle of your image: $f(x) = \delta(x)$. At what scale does our function's energy lie? If we consider its Fourier transform (thus freeing us from questions of scale), we get the following:
$$ \tilde{f}(k) = \int_{-\infty}^{\infty} \delta(x)\, e^{-2\pi i k x}\, dx $$
By the definition of the Dirac delta, this immediately gives us:
$$ \tilde{f}(k) = e^{-2\pi i k \cdot 0} = 1 \quad \textrm{for all } k $$
The Fourier transform of a point source (or, in terms of an astronomical image, of a single-pixel source - an unresolved source) is therefore a constant across all Fourier space. What does this mean in terms of fringes, or spatial frequencies? Quite simply that every fringe picks up the point source; it is equally present at all spatial scales. What happens if the Dirac delta is shifted?
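As a quick numerical sketch (using the discrete FFT of a single-pixel "spike" as a stand-in for the Dirac delta): the spectrum of a spike at the origin is constant, while shifting the spike leaves the magnitude flat but introduces an oscillating phase - the fringes discussed below.
In [ ]:
# FT of a Dirac delta (single-pixel spike), at the origin and shifted
N = 64
spike_centre = np.zeros(N)
spike_centre[0] = 1.0                       # spike at the origin (phase centre)
spike_shift = np.zeros(N)
spike_shift[5] = 1.0                        # spike shifted away from the origin
ft_centre = np.fft.fft(spike_centre)
ft_shift = np.fft.fft(spike_shift)
print(np.allclose(ft_centre, 1.0))          # True: constant across all frequencies
print(np.allclose(np.abs(ft_shift), 1.0))   # True: the magnitude is still flat...
plt.plot(ft_shift.real, label="Real part of FT of shifted spike")
plt.plot(ft_shift.imag, label="Imaginary part")
plt.legend()                                # ...but the shift shows up as oscillations (fringes)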
Let us consider a point source in a 2-dimensional image. If the source is located at coordinates $(x_0,y_0)$, then the function $f(x,y)$ describing the brightness distribution in our image would be:
$$ f(x,y) = \delta(x-x_0)\,\delta(y-y_0) $$
And the corresponding Fourier space:
$$ \tilde{f}(k_x,k_y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\, e^{-2\pi i (k_x x + k_y y)}\, dx\, dy $$
Note that here, we define $k_x$ and $k_y$ to be the spatial frequencies in our image, in the $x$ and $y$ directions respectively. Again, by plugging $f(x,y)$ into this equation and using the definition of the Dirac delta, we immediately find:
$$ \tilde{f}(k_x,k_y) = e^{-2\pi i (k_x x_0 + k_y y_0)} $$
If our source is at the centre of the image ($x_0=y_0=0$), then we recover our first result: the Fourier plane corresponding to our source is constant. If, however, our source is not at the centre of the image (in radio interferometric parlance, if it is not at phase centre - the reason for this name will be made explicit in later chapters), then our Fourier space is a complex exponential, whose real and imaginary parts are sinusoidal waves. In other words, a Dirac delta not located at the centre of the image plane creates fringes in the uv-plane, the period of which is directly related to its position.
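The same behaviour can be sketched in two dimensions using np.fft.fft2 (again as a stand-in for the continuous transform): the uv-plane of an image containing a single off-centre pixel shows fringes, with a period set by the pixel's offset.
In [ ]:
# uv-plane of a single off-centre point source: fringes
npix = 128
image = np.zeros((npix, npix))
image[npix//2 + 10, npix//2 + 4] = 1.0      # point source offset from the image centre
uvplane = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.imshow(image, cmap="gray")
plt.title("Image plane: off-centre point source")
plt.subplot(1,2,2)
plt.imshow(uvplane.real, cmap="gray")
plt.title("uv-plane (real part): fringes")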
In practice, few sources in the sky are simple point sources. A more "physical" source is a two-dimensional Gaussian:
$$ f(x,y) = \exp\left( -\frac{(x-\mu_x)^2}{2\sigma_x^2} - \frac{(y-\mu_y)^2}{2\sigma_y^2} \right) $$
With different values of $\mu_x,\mu_y,\sigma_x,\sigma_y$, all sorts of "blob-like" sources can be described. More complex sources can be described as a combination of multiple Gaussians. What does the uv-plane of a simple Gaussian source look like? In our case, since $f(x,y)$ separates into $f(x)f(y)$, we can split the two-dimensional Fourier integral into the product of two one-dimensional Fourier integrals and use the result from Section 2.4.2:
$$ \tilde{f}(k_x,k_y) \propto e^{-2\pi^2(\sigma_x^2 k_x^2 + \sigma_y^2 k_y^2)}\, e^{-2\pi i (k_x \mu_x + k_y \mu_y)} $$
If our Gaussian is located at phase centre (i.e. if $\mu_x=\mu_y=0$), then we are left with another Gaussian:
$$ \tilde{f}(k_x,k_y) \propto e^{-2\pi^2(\sigma_x^2 k_x^2 + \sigma_y^2 k_y^2)} $$
In other words, the width of our uv-plane Gaussian directly tells us the width of the original Gaussian (the two widths are inversely related). If our original Gaussian is located somewhere other than phase centre, we instead get a Gaussian multiplied by a fringe pattern in $k$-space (i.e. in the uv-plane). The period of these fringes depends only on $(\mu_x,\mu_y)$, and therefore tells us where the Gaussian is located.
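We can check this numerically as well (an illustrative sketch): the FFT of a broad, centred Gaussian image is a narrow Gaussian in the uv-plane; the broader the source, the narrower its uv-plane counterpart.
In [ ]:
# uv-plane of a centred 2-D Gaussian: another Gaussian
npix = 128
xg, yg = np.meshgrid(np.arange(npix) - npix/2., np.arange(npix) - npix/2.)
sigma = 6.
gauss = np.exp(-(xg**2 + yg**2)/(2*sigma**2))
uvplane = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(gauss))))
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.imshow(gauss, cmap="gray")
plt.title("Image plane: centred Gaussian")
plt.subplot(1,2,2)
plt.imshow(uvplane, cmap="gray")
plt.title("uv-plane: |FT| is also a Gaussian")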
An important - and often unexplained! - concept in physics is the power spectrum of a function. The name comes from signal analysis. If you receive a given signal - described by some arbitrary function $d(x)$ - then the power distribution of this signal - i.e. literally at which frequencies the signal's power is distributed - is given by the signal's power spectrum:
$$ P(k) = \left| \tilde{d}(k) \right|^2 $$
Note that a function's power spectrum is not just its Fourier transform, but rather the squared magnitude of its Fourier transform. This means that talking of the "power" placed in a given Fourier coefficient of a function is, strictly speaking, incorrect; it is nevertheless a common and conceptually useful way to talk about the relative values of your Fourier coefficients.
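As a small sketch of the idea (using the discrete FFT), the power spectrum of a signal built from two sines shows at a glance which frequencies carry the signal's power, and how much.
In [ ]:
# Power spectrum of a signal containing two frequencies
N = 1024
t = np.arange(N)/float(N)
signal = 2.0*np.sin(2*np.pi*8*t) + 0.5*np.sin(2*np.pi*48*t)
power = np.abs(np.fft.rfft(signal))**2      # power spectrum: |FT|^2
freqs = np.fft.rfftfreq(N, d=t[1]-t[0])
plt.plot(freqs, power)
plt.xlim(0, 80)
plt.xlabel("Frequency")
plt.ylabel("Power")
plt.title("Power spectrum: peaks at the two input frequencies")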
Power spectra are used in a very wide range of applications. Among astrophysicists, they are of particular importance to those studying the Epoch of Reionisation.
So far, we have restricted our analyses to images with a single source. Of course, we know that the sky has a lot more than a single source - what then of the Fourier plane of images with more than a single source?
Let us consider an image with $n$ sources of arbitrary shapes, each defined by a function $f_i(x,y)$. The value of this image at different positions would then be given by:
$$ B(x,y) = \sum_{i=1}^{n} f_i(x,y) $$
The corresponding Fourier plane, then, would be the Fourier transform of $B(x,y)$; since the Fourier transform is linear, this is simply:
$$ \tilde{B}(k_x,k_y) = \sum_{i=1}^{n} \tilde{f}_i(k_x,k_y) $$
In other words, the Fourier plane of our image consists of the linear sum of the Fourier components of each individual source. This means that, in practice, "reading" the Fourier plane of a given image is all but impossible! Some mighty wizards can and do practice such high magic, but they are few and far between.
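The statement above is simply the linearity of the Fourier transform; here is a quick numerical check (an illustrative sketch with two arbitrary sources):
In [ ]:
# Linearity: the uv-plane of a sum of sources is the sum of their uv-planes
npix = 128
source1 = np.zeros((npix, npix))
source1[40, 60] = 1.0                                    # a point source
xg, yg = np.meshgrid(np.arange(npix) - 90., np.arange(npix) - 30.)
source2 = np.exp(-(xg**2 + yg**2)/(2*4.**2))             # a Gaussian source
sky = source1 + source2
print(np.allclose(np.fft.fft2(sky), np.fft.fft2(source1) + np.fft.fft2(source2)))   # True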
Why is this relevant? Because interferometers do not sample the image plane: they directly sample the Fourier plane. The remainder of this chapter will be dedicated to outlining how this is done in practice, and the constraints this choice imposes (and avoids!).