In [ ]:
#Now, let's import some useful libraries
import numpy as np
from matplotlib import pyplot as plt
from adapt import *
from phot_tools import *
from glob import glob
import os
from astropy.io import fits
from astropy.coordinates import SkyCoord
from astropy.table import vstack, Table
%matplotlib inline
#What do all of these libraries do? If you aren't familiar with any of them, please ask!
In [ ]:
#First, point this notebook to the directory containing your raw data (datadir), and to the
#directories where you want the reduced master calibrations (caldir) and reduced science
#images (reddir) to go. You'll probably want to make those directories first. The trailing
#slash in each name is important (the code breaks if you don't include it...)
datadir = '/path/to/the/data/'
caldir = '/path/to/caldir/'
reddir = '/path/to/reddir/'
In [ ]:
#Use glob to assemble lists of biases...
biaslist = glob(datadir+'string that glob can use to find biases. use * as a wildcard!')
print(biaslist) #this should be all biases...
In [ ]:
#Now create the master bias. You'll have to decide some parameters. Play with
#overscan_fit_degree and see how it affects the output bias (which will now be
#in caldir/master_bias.fits). Do you notice any trend that affects each column of
#the bias? Try messing with overscan_fit_degree until it goes away. overwrite determines
#what happens if the master bias already exists.
master_bias(biaslist=biaslist,overscan_fit_degree=?,caldir=caldir,overwrite=?)
Load up the bias and take a look at it. Does it look like there are any systematic trends in the bias -- e.g., the top of the image has more bias than the bottom? Try messing with overscan_fit_degree. You can look at the image in ds9 or QFitsView, or just try loading it up in here!
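If you want to look at it in here, a minimal sketch (master_bias.fits is the filename mentioned above; the percentile-based stretch is just a suggestion):
In [ ]:
#Load the master bias and display it; a percentile-based stretch keeps a few
#outlier pixels from washing out the image
bias = fits.getdata(caldir+'master_bias.fits')
vmin, vmax = np.percentile(bias, [5, 95])
plt.imshow(bias, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)
plt.colorbar(label='counts')
plt.title('master bias')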
In [ ]:
#Because darks depend on exposure time, we'll have to make one dark for each exposure
#time. For now, just make a 30s dark.
darklist = glob(datadir+'string that glob can use to find 30s darks.')
print(darklist) #did you only pick out the 30s darks?
In [ ]:
#Now create the master 30s dark. The free parameters are the exposure time, the
#overscan_fit_degree (this should be the same as when you created the bias!), and
#whether or not you want to overwrite the output.
master_dark(darklist=darklist,exptime=?,overscan_fit_degree=?,caldir=caldir,overwrite=?)
Now load up the 30s dark. What is the typical value of the dark current? Is it higher or lower than you expected? Is there any structure in the image (i.e., does one part of the detector have more dark current than another)? If so, do you see the same structure in the bias?
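If you want some quick numbers, here's a sketch; the master dark filename is a guess, so check what master_dark actually writes to caldir:
In [ ]:
#Quick statistics on the master dark; the filename below is an assumption!
dark = fits.getdata(caldir+'master_dark_30.fits')
print('median dark level: {:.2f} counts'.format(np.median(dark)))
print('standard deviation: {:.2f} counts'.format(np.std(dark)))
#Compare halves of the detector to hunt for structure
ny = dark.shape[0]
print('median (bottom half): {:.2f}'.format(np.median(dark[:ny//2, :])))
print('median (top half): {:.2f}'.format(np.median(dark[ny//2:, :])))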
In [ ]:
#Flat-fielding in astronomy can be quite contentious, so let's take a careful look at one
#of the flat images before we do anything else. What is the exposure time listed in the image
#header? Does it match the exposure time of the dark we made? If not, there's a nice
#little function in adapt.py that will just scale the longest master dark we made (which has
#the highest signal) to the exposure time of the flat images. This only works if the dark
#current scales linearly with time, which we hope it does...
#Next, do you see any weird structure in the flat fields? It turns out the H-alpha filter was
#placed into the instrument kind of wonky. That square you see on the image IS the filter!
#This means that, for the H-alpha images, anything outside of that square doesn't have the
#filter in front of it, so it should be ignored.
In [ ]:
#Just like the darks, we'll have to select a subset of the flat fields in datadir:
flatlist_ha = glob(datadir+'string that glob can use to find the H-alpha flats')
print(flatlist_ha) #Did you pick out just the H-alpha flats?
In [ ]:
#Now let's make the master flat! overscan_fit_degree and overwrite do the same thing here.
#filt is a string that is mostly just to help name the file that gets made.
master_flat(flatlist=flatlist_ha,filt=?,overscan_fit_degree=?,caldir=caldir,overwrite=?)
What does the master flat look like? What are the typical values of the pixels? Can you see the residual image of the filter?
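A sketch for poking at it here (the filename is a guess based on the filt string, so check caldir for the actual name):
In [ ]:
#Inspect the master flat; the filename below is an assumption!
flat = fits.getdata(caldir+'master_flat_ha.fits')
print('median pixel value: {:.3f}'.format(np.median(flat)))
vmin, vmax = np.percentile(flat, [5, 95])
plt.imshow(flat, origin='lower', cmap='gray', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.title('master H-alpha flat')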
In [ ]:
#Let's construct a list of H-alpha images to feed into reduce_science.
sciencelist = glob(datadir+'string to just get the images we want')
print(sciencelist)
In [ ]:
#Now reduce the science! reduce_science uses a couple helper functions to access the correct
#dark and flat images, so all you need to worry about are the overscan fit degree, the
#overwriting behavior, and out_pref, which is a string that gets prepended to the filename
#to distinguish it from the raw image. The default is 'red_'
reduce_science(sciencelist=sciencelist,overscan_fit_degree=?,caldir=caldir,
reddir=reddir,out_pref=?,overwrite=?)
Load up one of the reduced images... what do you see?! You might need to mess with the scale parameters to see the entirety of the nebula. Does the rest of the image look 'flat'? I.e., if you ignore the stars, the sky should be uniformly bright.
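To see both the bright core and the faint outskirts of the nebula at once, a logarithmic stretch can help; here's a sketch (it grabs the first reduced file, assuming the default 'red_' prefix):
In [ ]:
#Display a reduced image with a log stretch; clip at a small positive floor,
#since log scaling can't handle zeros or negative sky residuals
from matplotlib.colors import LogNorm
img = fits.getdata(glob(reddir+'red_*.fits')[0])
clipped = np.clip(img, 1., None)
plt.imshow(clipped, origin='lower', cmap='gray',
           norm=LogNorm(vmin=np.percentile(clipped, 50), vmax=clipped.max()))
plt.colorbar(label='counts')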
In [ ]:
#You should be familiar with all of the free parameters at this point...
run_pipeline_run(datadir=datadir,caldir=caldir,reddir=reddir,overscan_fit_degree=?,
out_pref=?,overwrite=?)
We want to perform aperture photometry on our images to measure the brightness of the objects in them. It's fairly straightforward, and the code in phot_tools.py does a lot of this for you, but you should know what it does. The basic steps are: find the centroid of the object, sum the counts in a circular aperture around it, estimate the background per pixel in a surrounding annulus and subtract it from the aperture sum, and divide by the exposure time to get a net count rate.
Now we calculate the instrumental magnitude ($m_{inst}$), which is defined to be $$m_{inst} = -2.5\log_{10}({\rm net\ count\ rate})$$ This is a ridiculous formula, and I'm really sorry on behalf of all astronomy. Magnitudes are silly. Like, actually: something with a smaller magnitude is brighter. How does that make sense?! The only good thing about magnitudes is that they are logarithmic, so the difference of two magnitudes is really the ratio of the two count rates: $m_1 - m_2 = -2.5\log_{10}({\rm rate}_1/{\rm rate}_2)$. We use this fact in the next step:
Some of our observations weren't of WR124. They were of a star called HIP 107864, also known as BD+28 4211. This object is a standard star, or a star whose brightness is a known quantity. This means we can transform from $m_{inst}$ (which depends on the telescope setup, the weather, manufacturing imperfections in the filters, what you had for breakfast, etc.) to calibrated magnitudes ($m_{cal}$). We call the difference $Z = m_{cal}-m_{inst}$ the photometric zero point. (In reality, we also need to correct for the fact that $m_{inst}$ depends on how high the object is in the sky, but because our observations only cover about an hour, that factor doesn't change significantly; we'll still need to keep it in mind.) Because we know both $m_{cal}$ and $m_{inst}$ for our standard star, we can derive $Z$, which we can then add to our measurements of $m_{inst}$ for WR124 to get $m_{cal}$. Unfortunately, $Z$ depends on wavelength, so we'll need to calculate $Z$ for each filter we want to do science with (in this case, only three filters). Let's do that!
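Just to make the bookkeeping concrete, here's the arithmetic with completely made-up numbers:
In [ ]:
#Toy example of deriving and applying a zero point; none of these numbers are real!
m_cal_standard = 10.5    #known calibrated magnitude of the standard (made up)
m_inst_standard = -5.2   #instrumental magnitude we measured for it (made up)
Z = m_cal_standard - m_inst_standard

m_inst_target = -3.8     #instrumental magnitude of the science target (made up)
m_cal_target = m_inst_target + Z
print('Z = {:.2f}, calibrated target magnitude = {:.2f}'.format(Z, m_cal_target))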
In [ ]:
#First up: let's look up the true (calibrated) magnitude of BD+28 4211. Go ahead and search
#through standards.txt to find the row with BD+28 4211 in it. standards.txt has a list of
#standards with their magnitude in the r filter, and a bunch of colors (i.e., the difference
#between the magnitudes of an object in two different filters). We want to know how bright
#the star is in g, r, and i. Go ahead and calculate and then record those values in variables:
g_stan = ?
r_stan = ?
i_stan = ?
In [ ]:
#Now open up one of the reduced images of the standard star in ds9. It should be the brightest
#star towards the center of the image. Zoom in close, and put your mouse over what appears to
#be the center of the star. Record the Right Ascension (ra, or alpha) and Declination (dec,
#or delta) in the following line of code, following the example format:
stan_coords = SkyCoord(ra='1h2m3s', dec='+4d5m6s')
#This is a SkyCoord object, which has some pretty useful features. phot_tools.py uses them
#to create apertures to do photometry!
To extract the photometry of an object at some position in some image, we'll use extract_photometry, which you can call like this:
extract_photometry(filename,approx_location,centering_width=?,ap_rad=?,in_rad=?,out_rad=?)
filename is a string with the name of the file (pick one of the standard observations in the g filter), approx_location is a SkyCoord object. Because we'll want to be really precise with our apertures, extract_photometry uses the function generate_regions() to search within a small number of pixels (centering_width) for the centroid of the object. It then makes a src aperture with radius ap_rad (measured in arcseconds), and a bkg aperture with inner radius in_rad and outer radius out_rad. It returns an astropy Table object with a whole bunch of information; take a look at the output and see what you get!
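For reference, a call might look something like this; the filename is a placeholder and the numbers are starting guesses to tune, not recommendations:
In [ ]:
#Example form of an extract_photometry call (placeholder filename and values!)
phot = extract_photometry(reddir+'red_standard_g_001.fits', stan_coords,
                          centering_width=20, ap_rad=5., in_rad=8., out_rad=12.)
print(phot)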
To test that you chose the right size parameters, open up the same image in ds9, and create a new region with the center and radius that extract_photometry calculates. Does it capture the entire star? Does it look huge? Is it more-or-less centered? You want the aperture to be just large enough to capture all of the flux, so adjust the region until it looks OK. Do the same with an annular region for the background: it should be big enough to get a decent chunk of sky, but not contain any sources.
Run extract_photometry again with the modified parameters, and then open the image and double check that the apertures look good.
In [ ]:
# extract photometry command goes here:
extract_photometry(filename=?,approx_location=stan_coords,centering_width=?,ap_rad=?,in_rad=?,out_rad=?)
In [ ]:
#OK, now we're ready to do all of our g images.
g_images = glob(reddir+'string that glob can use to find all of the g images')
print(g_images)
In [ ]:
#We have three observations in g. Write some code that loops over those observations, runs
#extract_photometry on each, and saves the measured instrumental magnitude from each to an
#array. Also save the error on each measured instrumental magnitude.
#code goes here.
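#One possible shape for that loop, as a sketch. The Table column names below
#('mag_inst', 'mag_inst_err') are assumptions -- print one output table from
#extract_photometry and use whatever columns it actually has!
g_mags = []
g_mag_errs = []
for fname in g_images:
    #use whatever centering_width/ap_rad/in_rad/out_rad you settled on above
    tab = extract_photometry(fname, stan_coords,
                             centering_width=20, ap_rad=5., in_rad=8., out_rad=12.)
    g_mags.append(tab['mag_inst'][0])
    g_mag_errs.append(tab['mag_inst_err'][0])
g_mags = np.array(g_mags)
g_mag_errs = np.array(g_mag_errs)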
In [ ]:
#Now take the average of all three measurements, and save it in a variable, along with the
#error of the average.
g_inst = ?
g_inst_err = ?
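#(Hint, as a sketch: for an unweighted mean, g_inst = np.mean(g_mags), and the
#error of the mean is np.sqrt(np.sum(g_mag_errs**2))/len(g_mag_errs).)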
In [ ]:
#Finally calculate the photometric zero point for our g observations, and the error in that
#measurement
Z_g = ?
Z_g_err = ?
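#(Sketch: Z_g = g_stan - g_inst. If we neglect the uncertainty on the standard's
#catalog magnitude, Z_g_err = g_inst_err.)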
In [ ]:
#initial extract_photometry to test parameter values for r
In [ ]:
#Use glob to make a list of r images
In [ ]:
#Loop over images, extract instrumental mags and errors
In [ ]:
#Take the average and the error
In [ ]:
#Calculate zero point for r
In [ ]:
#Repeat for i
Now that you've gotten some experience with extract_photometry, we can extract a lightcurve of WR124. These data are slightly different, because they were taken with the diffuser: the diffuser spreads the light from each star out, which is ordinarily bad, but in this case it makes the size of each star very consistent from observation to observation. That means the default values for centering_width, ap_rad, in_rad, and out_rad should work just fine.
In [ ]:
# Step 1: open up one of the WR124 images. Our star is the bright one in the bottom-right
#quadrant. Estimate its coordinates and make a SkyCoord object just like you did for the
#standard star.
In [ ]:
# Step 2: Use glob to make a list of WR124 images that are all in the same band. Note that a
#few images were taken without the diffuser (they have _phot or _guide in their names), so
#try to exclude them.
In [ ]:
# Step 3: Loop over the images. For each one, run extract_photometry and record the time at
#the middle of the observation, the instrumental magnitude, and the error.
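#A sketch of that loop. The header keywords (DATE-OBS, EXPTIME), the list name
#wr_images, the SkyCoord wr_coords, and the Table column names are all assumptions --
#substitute whatever your Step 1 and Step 2 cells actually produced!
from astropy.time import Time, TimeDelta
times, mags, mag_errs = [], [], []
for fname in wr_images:
    hdr = fits.getheader(fname)
    #mid-exposure time = start time + half the exposure
    t_mid = Time(hdr['DATE-OBS'], format='isot') + TimeDelta(hdr['EXPTIME']/2., format='sec')
    tab = extract_photometry(fname, wr_coords)  #default apertures, per the note above
    times.append(t_mid.mjd)
    mags.append(tab['mag_inst'][0])
    mag_errs.append(tab['mag_inst_err'][0])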
In [ ]:
# Step 4: Add the corresponding zero point to each point, and make sure to propagate the
#associated error.
In [ ]:
# Step 5: Save the array of times, magnitudes, errors to a file. Move on to the next band!
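#A sketch of one way to save it, using the astropy Table we imported up top
#(the column names and output filename are just suggestions):
lc = Table([times, mags, mag_errs], names=('time_mjd', 'mag', 'mag_err'))
lc.write(reddir+'wr124_lightcurve_g.txt', format='ascii', overwrite=True)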
In [ ]: