The Grizli pipeline allows you to fully reduce a given set of HST grism observations in essentially two steps:

1. Run an archive query with mastquery (formerly hsaquery).
2. Process the associations found with the query with grizli.auto_script.go.
Here, "association" usually simply means "any Hubble exposures that overlap" and doesn't require that all observations were taken with the same observing program, instrument, grism position angle, epoch, filter, etc. The code does all of the exposure-level book-keeping and the products are drizzled image mosaics, extracted 1D and 2D grism spectra and fits to the spectra.
NB: The pipeline works fine with just imaging and no grism exposures!
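For orientation, here is a compact sketch of those two steps, assembled from the same calls used in the cells below; the query parameters and root name are simply copied from this demo and are illustrative, not definitive.

import os
from mastquery import query, overlaps
from grizli.pipeline import auto_script

# Step 1: query the archive for the grism exposures and anything overlapping them
parent = query.run_query(box=None, proposal_id=[11359],
                         instruments=['WFC3/IR'], filters=['G102', 'G141'])
tabs = overlaps.find_overlaps(parent, buffer_arcmin=0.01)

# Step 2: process the resulting association end-to-end
auto_script.go(root='j0332m2743', HOME_PATH=os.getcwd())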
In [1]:
%matplotlib inline
In [2]:
import glob
import time
import os
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
import astropy.io.fits as pyfits
import drizzlepac
import grizli
from grizli.pipeline import auto_script
from grizli import utils, fitting, multifit
utils.set_warnings()
print('\n Grizli version: ', grizli.__version__)
In [3]:
#os.chdir('/Users/brammer/3DHST/Spectra/Work/Grizli/Demo-18.05.22/')
HOME_PATH = os.getcwd()
print('HOME_PATH = ', HOME_PATH)
The mastquery
module (which replaced hsaquery after the ESA archive interface changed in 2018) can be used to programmatically query the HST archive and find exposures from different programs (and instruments) that overlap on the sky. The example below is tailored to a single pointing from a single program, but the query parameters can be expanded to search much more broadly for archival data.
In [4]:
### Generate a query for the WFC3/ERS grism data
## !! new query tools since ESA database changed in summer 2018
# https://github.com/gbrammer/mastquery
from mastquery import query, overlaps
# "parent" query is grism exposures in GO-11359. Can also query the archive on position with
# box=[ra, dec, radius_in_arcmin]
parent = query.run_query(box=None, proposal_id=[11359], instruments=['WFC3/IR', 'ACS/WFC'],
filters=['G102','G141'])
# ### "overlap" query finds anything that overlaps with the exposures
# ### in the parent query
# extra = query.DEFAULT_EXTRA # ignore calibrations, etc.
# ## To match *just* the grism visits, add, e.g., the following:
# extra += ["TARGET.TARGET_NAME LIKE 'WFC3-ERSII-G01'"]
tabs = overlaps.find_overlaps(parent, buffer_arcmin=0.01,
filters=['F098M', 'F140W', 'F160W','G102', 'G141'],
proposal_id=[11359], instruments=['WFC3/IR','WFC3/UVIS','ACS/WFC'],
extra={'target_name':'WFC3-ERSII-G01'}, close=False)
In [5]:
# Summary of the tables you just generated
foot_files = glob.glob('j[02]*footprint.fits')
print('Footprint files: ', foot_files)
print('\n# id ra dec e(b-v) filters')
for tab in tabs:
print('{0} {1:.5f} {2:.5f} {3:.4f} {4}'.format(tab.meta['NAME'], tab.meta['RA'],
tab.meta['DEC'], tab.meta['MW_EBV'],
','.join(np.unique(tab['filter']))))
In [6]:
#os.chdir('/Users/brammer/3DHST/Spectra/Work/Grizli/Demo-18.05.22/')
HOME_PATH = os.getcwd()
print('HOME_PATH = ', HOME_PATH)
In principle, all of the steps outlined below can be executed with a single call to auto_script.go
, from fetching the data to extracting spectra and performing the redshift / line fits. The processing steps have been broken out individually here to show what happens at each stage.
The same pipeline can be used to process imaging-only fields. Simply run the queries as above to find the imaging exposures you want to process and run everything the same way. The pipeline steps related to the grism exposures will simply be skipped.
In [7]:
# Do everything for the query from fetching the data to generating the contamination model
HOME_PATH = os.getcwd()
print('HOME_PATH = ', HOME_PATH)
root = 'j0332m2743'
IS_PARALLEL = False # Set to True for parallel programs like WISPS
if False:
# This line would do everything below
auto_script.go(root=root, maglim=[19,21], HOME_PATH=HOME_PATH, reprocess_parallel=True,
s3_sync='cp', gaia_by_date=True, is_parallel_field=IS_PARALLEL,
run_fit=False, only_preprocess=True, run_extractions=False)
Grizli
can automatically fetch HST data from the ESA Hubble Science archive (and, optionally, the Amazon S3 bucket). The fetch_files
script fetches the exposures listed in the archive query above. It also fetches associated WFC3/IR persistence products from the persistence database.
The first time you run the script, a lot more information will be printed to the screen as the exposures are retrieved and the script runs the reprocessing code to flatten the IR backgrounds. Below, the "skip" messages simply indicate that the files have already been downloaded.
In [8]:
### Fetch data, reprocess WFC3/IR for backgrounds, fetch WFC3/IR persistence products
# If s3_sync, then sync from the Hubble Amazon S3 bucket with awscli,
# otherwise get from the ESA archive.
os.chdir(HOME_PATH)
# Is awscli available and connected?
s3_status = os.system('aws s3 ls s3://stpubdata --request-payer requester')
if s3_status == 0:
s3_sync='cp' # As of late October 2018, 's3 sync' not working with 'stpubdata'
else:
s3_sync=False # Fetch from ESA archive
auto_script.fetch_files(field_root=root, HOME_PATH=HOME_PATH, remove_bad=True,
reprocess_parallel=True, s3_sync=s3_sync)
Grizli
builds its own associations based on anything it finds in the RAW
directory. Visits are usually defined in the exposure filenames. For example, for the single exposure, ib6o03ntq_flt.fits
, the characters b6o
identify the observing program and the visit identifier is 03
. You can also build visits combining all exposures in a given filter taken at the same position angle, which can be useful for some programs executed in parallel where exposures taken at a similar time could have different visit IDs in the filename.
NB: Generally one should process "visits" as groups of exposures in a given filter that were taken with a single guide star acquisition.
The parsing script also associates grism exposures with corresponding direct images, based on the visit, exposure order and exposure footprints on the sky.
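For illustration, a tiny helper (not part of grizli) that pulls the program and visit identifiers out of an exposure name following the convention described above:

import os

def program_visit(flt_file):
    """Return the program and visit characters from an IPPPSSOOT-style exposure name."""
    base = os.path.basename(flt_file)
    return base[1:4], base[4:6]

print(program_visit('ib6o03ntq_flt.fits'))  # -> ('b6o', '03')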
In [9]:
# Demo combining by PA / filter.
# Here it actually gets a bit confused because multiple F098M exposures
# were taken at the same PA but shouldn't be associated with the grism exposures.
os.chdir(os.path.join(HOME_PATH, root, 'Prep'))
visits, all_groups, info = auto_script.parse_visits(field_root=root,
HOME_PATH=HOME_PATH, use_visit=True,
combine_same_pa=True)
print('\n ====== \n')
for visit in visits:
print('{0:30} {1:>2d}'.format(visit['product'], len(visit['files'])))
In [10]:
######################
### Parse visit associations for most normal programs
os.chdir(os.path.join(HOME_PATH, root, 'Prep'))
visits, all_groups, info = auto_script.parse_visits(field_root=root,
HOME_PATH=HOME_PATH, use_visit=True,
combine_same_pa=IS_PARALLEL)
print('\n ====== \n')
for visit in visits:
print('{0:30} {1:>2d}'.format(visit['product'], len(visit['files'])))
grizli.prep.process_direct_grism_visit
The process_direct_grism_visit
script in prep.py provides one-stop shopping for all of the preprocessing steps required, including file handling (e.g., copying exposures from ./RAW to ./Prep/), astrometric registration, and grism sky background subtraction. The products of the script for a given direct/grism pair are aligned, background-subtracted exposures along with drizzled mosaics of the direct and grism images.
The script also runs on imaging-only visits, performing the background subtraction and astrometric alignment but skipping anything related to grism processing.
The auto_script.preprocess
command below runs the processing script for the two direct/grism pairs of the ERS observations and for the overlapping imaging visits identified in the initial query. It prints a bunch of information to the terminal, primarily from various runs of AstroDrizzle, and takes a few minutes to run per visit. It only needs to be run once.
NB If you restart the pipeline after a previous run, it will skip preprocessing any visit where the file {visit-product}_dr?_sci.fits
is found (i.e., the "Skip" messages below). If you want to force reprocessing of a visit, delete that file.
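For example, to force one visit to be reprocessed on the next run you can remove its drizzled product so the "Skip" check fails; the visit name below is just one of the products from this field, shown for illustration.

import glob, os
for sci_file in glob.glob('wfc3-ersii-g01-b6o-21-119.0-f098m_dr?_sci.fits'):
    os.remove(sci_file)  # the next call to auto_script.preprocess will redo this visit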
In [11]:
#####################
### Alignment & mosaics
os.chdir(os.path.join(HOME_PATH, root, 'Prep'))
# Alignment reference catalogs, searched in this order
catalogs = ['NSC', 'PS1','SDSS','GAIA','WISE']
# As of v0.8.0-8, can use the NOAO source catalog (NSC) here, which
# is defined over much of the sky and appears to be well aligned to GAIA.
# However, sometimes it's not clear how to apply the best quality control
# to the NSC sources. Here, for example, there seem to be a number of spurious
# NSC sources that make the initial alignment RMS fairly high.
# This script will do all the preprocessing of the grism *and* imaging visits
# found in your archive query.
auto_script.preprocess(field_root=root, HOME_PATH=HOME_PATH,
make_combined=False, catalogs=catalogs, use_visit=True)
In [12]:
!ls wfc3*sci.fits # individual drizzled visits
In [13]:
# Results of the intra-visit alignment.
# Should be small as these are just FGS drift on a single guide star
!ls *shifts.log
print('')
!cat *shifts.log
In [14]:
# Show the alignment w.r.t the external NOAO Source Catalog (NSC)
Image(filename = "./wfc3-ersii-g01-b6o-21-119.0-f098m_wcs.png")
Out[14]:
In [43]:
# Show the alignment of one HST visit to another, note tight
# plot range compared to previous
Image(filename = "./wfc3-ersii-g01-b6o-23-119.0-f140w_wcs.png")
Out[43]:
In [15]:
# Check wcs.log files that a few objects were found, no large rotations
# and rms (second to last column) isn't too large
# Here, the F098M/G102 visit was the first processed and was aligned
# to the NSC, with RMS~0.8 WFC3/IR pix. Subsequent visits are aligned to
# previously processed HST visits so that at least the relative HST astrometry
# is as good as possible. Here, the F140W/G141 visit was run second and was
# therefore aligned to F098M, resulting in much better precision than with the
# external catalog (RMS < 0.1 pix).
# Cat wcs.log files in order they were generated
!grep " 0 " `ls -ltr *wcs.log | awk '{print $9}'` | sed "s/ */ /g"
# columns:
# "visit" 0 xshift yshift rot scale rms N
The main failure mode of the auto_script.preprocess
script is failure to compute a reliable alignment to the external reference. This can happen, e.g., if there are not enough alignment sources (i.e., zero) within the field of view or if the original astrometry of the exposures obtained from the archive is offset from the reference by more than ~10 pixels. This can almost always be remedied by running grizli.pipeline.auto_script.manual_alignment
after the files have been fetched, which prompts the user to interactively mark sources in the image and reference catalog using DS9.
In [16]:
if False: # Don't run
catalogs = ['PS1','SDSS','GAIA','WISE']
auto_script.manual_alignment(field_root=root, HOME_PATH=HOME_PATH, skip=True,
catalogs=catalogs, radius=15, visit_list=None)
The grism sky backgrounds are subtracted using the "Master sky" images from Brammer, Ryan, & Pirzkal 2015 (available here).
Grizli
ignores any wavelength dependence of the flat-field and applies a grey correction using the F140W (F105W) flat-field for the G141 (G102) grisms.
Residuals after subtracting the master sky images are typically of order 0.005 e-/s, just 0.5-1% of the overall background level. They are removed by subtracting a column average of the sky pixels in the grism exposures, and the processing script produces a diagnostic figure like the one shown below. The source of the residuals is still unclear (e.g., perhaps spectra of objects near/below the detection limit). Though they are usually well removed by the column average, they do make extracting continuum spectra of faint sources challenging.
In [17]:
from IPython.display import Image
Image(filename = "./wfc3-ersii-g01-b6o-23-119.0-g141_column.png", width=600, height=600)
Out[17]:
In [18]:
# Fine alignment of the visits relative to each other and absolute to GAIA DR2
if len(glob.glob('{0}*fine.png'.format(root))) == 0:
fine_catalogs = ['GAIA','PS1','SDSS','WISE']
out = auto_script.fine_alignment(field_root=root, HOME_PATH=HOME_PATH,
min_overlap=0.2, stopme=False, ref_err=0.08,
catalogs=fine_catalogs, NITER=1, maglim=[17,23],
shift_only=True, method='Powell', redrizzle=False,
radius=10, program_str=None, match_str=[],
gaia_by_date=True)
# Update headers with the result from the fine alignment
# Original FLTs are archived to FineBkup
auto_script.update_wcs_headers_with_fine(root)
visits, res = np.load('{0}_fine.npy'.format(root))
shifts = res.x.reshape((-1,2))/10.
for i, visit in enumerate(visits):
print('{0:35} {1:6.2f} {2:6.2f}'.format(visit['product'], shifts[i,0], shifts[i,1]))
In [19]:
# Show the results of fine alignment.
# Top panels are alignment between the visits. + in the bottom panels are
# residuals of the external reference, here GAIA DR2.
#
# Small drift between individual visits removed.
# Fairly large GAIA offsets probably due to ~6 years between
# WFC3/ERS and GAIA epoch 2015.5.
Image(filename='{0}_fine.png'.format(root))
Out[19]:
In [20]:
# Drizzle mosaics in each filter and combine all IR filters
combine_all_filters=True
if len(glob.glob('{0}-ir_dr?_sci.fits'.format(root))) == 0:
## Mosaic WCS
wcs_ref_file = '{0}_wcs-ref.fits'.format(root)
if not os.path.exists(wcs_ref_file):
auto_script.make_reference_wcs(info, output=wcs_ref_file,
filters=['G800L', 'G102', 'G141'],
pad_reference=90, pixel_scale=None,
get_hdu=True)
# All combined
IR_filters = ['F105W', 'F110W', 'F125W', 'F140W', 'F160W',
'F098M', 'F139M', 'F127M', 'F153M']
optical_filters = ['F814W', 'F606W', 'F435W', 'F850LP', 'F702W', 'F555W', 'F438W', 'F475W', 'F625W', 'F775W', 'F225W', 'F275W', 'F300W', 'F390W']
if combine_all_filters:
auto_script.drizzle_overlaps(root,
filters=IR_filters+optical_filters,
min_nexp=1,
make_combined=True,
ref_image=wcs_ref_file,
drizzle_filters=False)
## IR filters
auto_script.drizzle_overlaps(root, filters=IR_filters,
min_nexp=1,
make_combined=(not combine_all_filters),
ref_image=wcs_ref_file)
# Fill IR filter mosaics with scaled combined data so they can be used
# as grism reference
auto_script.fill_filter_mosaics(root)
## Optical filters
mosaics = glob.glob('{0}-ir_dr?_sci.fits'.format(root))
auto_script.drizzle_overlaps(root, filters=optical_filters,
make_combined=(len(mosaics) == 0), ref_image=wcs_ref_file,
min_nexp=2)
In [21]:
!ls -1 j03*_dr?_sci.fits
Run source detection on the combined mosaic {root}-ir_dr[cz]_sci.fits
and generate a catalog and segmentation image.
Then perform simple matched-aperture photometry on the different available filter mosaics (in this case F098M and F140W from the direct imaging). In principle the template fitting code shown below can incorporate this photometric information, though that's not currently done by default.
In [22]:
## Run SEP (~SExtractor clone) catalog on the "ir" combined image
## and generate a photometric catalog with aperture photometry in all available bands
if not os.path.exists('{0}_phot.fits'.format(root)):
get_background=False # SExtractor background subtraction
tab = auto_script.multiband_catalog(field_root=root, threshold=1.8,
detection_background=get_background,
photometry_background=get_background)
files = glob.glob('{0}-ir*'.format(root)) + glob.glob('*phot*fits')
for file in files:
print(file)
phot = utils.GTable.gread('{0}_phot.fits'.format(root))
print('{0}Metadata{0}'.format('\n'+'='*20+'\n'))
for k in phot.meta:
print('{0}:\t{1}'.format(k, phot.meta[k]))
In [23]:
phot[:2].show_in_notebook()
Out[23]:
multifit.GroupFLT
With the preprocessing done, we can now start on the analysis of the spectra. Grizli
is built around low-level tools for modeling and analyzing each grism exposure individually. Once multiple exposures are available (e.g., exposures within a visit, or separate visits with different grisms and/or orientations), the information from each can be combined to analyze the spectrum of a given object. A benefit of the exposure-level processing is that all of the model-to-data comparisons (i.e., chi-squared) are done in the space of the original detector pixels, with their well-calibrated and well-understood noise properties.
The GroupFLT
class provides a container for processing multiple FLT exposures simultaneously. Its main inputs are:
- grism_files = list of grism exposure filenames
- direct_files = (optional) list of direct exposure filenames
- ref_file = (optional) reference direct image mosaic (one or the other of ref_file or direct_files should be specified)
- seg_file, catalog = segmentation image and catalog, usually generated with SExtractor
- cpu_count = set to > 0 for parallel processing
- pad parameter (default=200 pixels). If set, add padding around the FLTs to enable modeling of objects that would fall off of the direct image but that still disperse spectra onto the grism exposure (assuming they fall within the ref_file and seg_file mosaics).

The grism_files list can contain pretty much anything, with the result limited by memory / CPU power. For example, you can provide a list of all 112 of the 3D-HST G141 exposures in the COSMOS field (4 exposures x 28 pointings), along with the field mosaic and segmentation images. This example is actually fairly easy to process, as individual objects typically fall in 4 exposures, or perhaps 8 in some overlap areas. Another example is a list of exposures from multiple instruments / grisms / orients covering a single field, though the subsequent fits can be slow if an object has spectra in many individual exposures.
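As a hedged sketch (not run here), initializing GroupFLT directly from a reference mosaic and segmentation image looks like the following; the grism file list is a placeholder, and the mosaic/segmentation/catalog names are assumed to match the products generated earlier in this notebook, so check them against what is actually on disk.

# Placeholder: restrict this list to the grism exposures only
grism_flts = glob.glob('*_flt.fits')

grp_demo = multifit.GroupFLT(grism_files=grism_flts,
                             direct_files=[],
                             ref_file='{0}-ir_drz_sci.fits'.format(root),
                             seg_file='{0}-ir_seg.fits'.format(root),
                             catalog='{0}-ir.cat.fits'.format(root),
                             cpu_count=-1, pad=200)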
Reference images are blotted to the distorted exposure frame with AstroDrizzle.ablot
. When you load the GroupFLT object, messages about "cutouts" (as below) go by because the script tries to make smaller cutouts of large reference images to speed up the blot processing.
NB: There often seem to be memory-leak problems if seg_file isn't significantly larger than the footprint of a given FLT file: the drizzle blot segfaults, but the script just hangs because the multiprocessing threads don't get the message.
Once the GroupFLT
object is initialized, compute a first-pass model of the full detector field of view assuming simple linear continua for all objects in the field (brighter than mag_limit
). By default this assumes a somewhat blue continuum suitable for the Rayleigh-Jeans tail of low-redshift objects. It's far from perfect but the initial model does provide a good way of at least identifying which pixels of a given object could be contaminated by neighbors, even if the quantitative model is not precise.
After computing the simple continuum model, refine the model spectra for brighter objects using higher-order polynomials fit directly to the spectra themselves. The refine_list
method marches through objects starting with the brightest and fits a polynomial of order poly_order
to the observed spectrum after subtracting off the model for contaminants. Note that if the list of grism exposures contains multiple orientations covering a single field, this fit can be well constrained even in the presence of contamination.
The grism_prep
script iterates on the refined polynomial model refine_niter
times.
You can optionally save the processed data (i.e., grp.save_full_data()) for fast restarts that avoid recomputing the contamination models, for example in a new Python session. This can be done at any time after you've made changes to the GroupFLT data that you'd like to store for later. The grism_prep
script does this automatically.
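The cell below calls the high-level auto_script.grism_prep wrapper. For reference, here is a hedged sketch of the lower-level calls it wraps, assuming grp is an initialized GroupFLT object like the one created below; the parameter values are illustrative.

# First-pass model: simple continua for everything brighter than mag_limit
grp.compute_full_model(mag_limit=25)

# Iterative polynomial refinement of the brighter objects (refine_niter passes)
for iteration in range(3):
    grp.refine_list(poly_order=3, mag_limits=[16, 24], verbose=False)

# Save *GrismFLT.fits products for fast restarts
grp.save_full_data()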
In [24]:
files = glob.glob('*GrismFLT.fits')
if len(files) == 0:
### Grism contamination model
os.chdir(os.path.join(HOME_PATH, root, 'Prep'))
# Which filter to use as direct image? Will try in order of the list until a match is found.
gris_ref = {'G141': ['F140W', 'F160W'],
'G102': ['F105W', 'F098M', 'F110W']}
x = auto_script.grism_prep(field_root=root, refine_niter=3,
gris_ref_filters=gris_ref)
grp = multifit.GroupFLT(grism_files=glob.glob('*GrismFLT.fits'),
catalog='{0}-ir.cat.fits'.format(root),
cpu_count=-1, sci_extn=1, pad=256)
else:
grp = multifit.GroupFLT(grism_files=glob.glob('*GrismFLT.fits'),
catalog='{0}-ir.cat.fits'.format(root),
cpu_count=-1, sci_extn=1, pad=256)
In [25]:
# Show the results of the contamination model
### Show FLT residuals
cmap = 'viridis_r'
vm = [-0.01, 0.1]
fig = plt.figure(figsize=[8,8])
ax = fig.add_subplot(221)
flt = grp.FLTs[0] # G102
ax.imshow(flt.grism['SCI'], vmin=vm[0], vmax=vm[1], cmap=cmap,
interpolation='Nearest', origin='lower')
ax.set_title('{0}, {1}'.format(flt.grism.filter, flt.grism.parent_file))
ax.set_xticklabels([])
ax.set_ylabel('Data')
ax = fig.add_subplot(223)
ax.imshow(flt.grism['SCI'] - flt.model, vmin=vm[0], vmax=vm[1], cmap=cmap,
interpolation='Nearest', origin='lower')
ax.set_ylabel('Data - contam model')
ax = fig.add_subplot(222)
flt = grp.FLTs[4] # G141
ax.imshow(flt.grism['SCI'], vmin=vm[0], vmax=vm[1], cmap=cmap,
interpolation='Nearest', origin='lower')
ax.set_title('{0}, {1}'.format(flt.grism.filter, flt.grism.parent_file))
ax.set_xticklabels([]); ax.set_yticklabels([])
ax = fig.add_subplot(224)
ax.imshow(flt.grism['SCI'] - flt.model, vmin=vm[0], vmax=vm[1], cmap=cmap,
interpolation='Nearest', origin='lower')
ax.set_yticklabels([])
for ax in fig.axes:
ax.set_xlim(510,790); ax.set_ylim(610,890)
fig.tight_layout(pad=1)
In [26]:
#### Store fit parameters to `fit_args.npy` for batch-mode processing
# Drizzle parameters for line maps
pline = auto_script.DITHERED_PLINE
print(pline)
# Generate the parameter dictionary
args = auto_script.generate_fit_params(field_root=root, prior=None,
MW_EBV=tabs[0].meta['MW_EBV'],
pline=pline, fit_only_beams=True, run_fit=True, poly_order=7,
fsps=True, sys_err = 0.03, fcontam=0.2, zr=[0.05, 3.4],
save_file='fit_args.npy')
Make an average effective PSF for each available IR filter by evaluating the field-dependent PSF across the final mosaic and drizzling to a common output. Also make an extension with a PSF on the pixel grid of the drizzled line map parameters generated above (pline
). Each PSF is generated on the native pixel grid and with 2x/4x oversampling for use with, e.g., GALFIT.
NB There is currently no ePSF for F098M, so F105W is used (http://www.stsci.edu/~jayander/STDPSFs/WFC3IR/).
In [27]:
# Make PSF file
if not os.path.exists('{0}-f140w_psf.fits'.format(root)):
auto_script.field_psf(root=root, HOME_PATH=HOME_PATH)
In [28]:
# Show the PSFs
print(glob.glob('*psf.fits'),'\n')
im = pyfits.open('{0}-f140w_psf.fits'.format(root))
im.info()
fig = plt.figure(figsize=[6,6])
for i, ext in enumerate([1,2,4,5]):
ax = fig.add_subplot(2,2,i+1)
ax.imshow(np.log10(im[ext].data))
h = im[ext].header
label = '{0}-{1} {2}"/pix \npixf={3} kernel={4} \nfilter={5}'.format(h['EXTNAME'],
h['EXTVER'], h['PSCALE'], h['PIXFRAC'], h['KERNEL'], h['FILTER'])
ax.text(0.08, 0.95, label, ha='left', va='top',
transform=ax.transAxes, size=8, color='k',
bbox={'fc':'w'})
ax.set_xticklabels([]); ax.set_yticklabels([])
sh = im[ext].data.shape
ax.set_xlim(sh[1]/2-40, sh[1]/2+40)
ax.set_ylim(sh[0]/2-40, sh[0]/2+40)
fig.tight_layout(pad=1)
In [29]:
os.chdir('../Extractions')
In [30]:
### Find IDs of specific objects to extract
import astropy.units as u
tab = utils.GTable()
tab['ra'] = [53.0657456, 53.0624459]
tab['dec'] = [-27.720518, -27.707018]
idx, dr = grp.catalog.match_to_catalog_sky(tab)
source_ids = grp.catalog['NUMBER'][idx]
tab['id'] = source_ids
tab['dr'] = dr.to(u.mas)
tab['dr'].format='.1f'
tab.show_in_notebook()
Out[30]:
The GroupFLT
object contains the entire exposure information, from which we can make cutouts of spectra for individual objects with the get_beams
method. These cutouts are more manageable and portable than the entire exposures, though currently the processing works within the paradigm of a static contamination model for a given object.
In pipeline mode, the function below is called with ids=[], maglim=[mag_min, mag_max]
and all objects in the reference catalog with mag_min < MAG_AUTO < mag_max
are extracted. The redshift fits are performed if run_fit=True
.
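The cell below uses the auto_script.extract wrapper. A hedged sketch of the lower-level path it wraps, using the GroupFLT object already in memory (parameter values are illustrative):

# Cut 2D "beam" cutouts for one object out of every grism exposure
beams = grp.get_beams(source_ids[0], size=80)

# Bundle them into a MultiBeam object for fitting and saving
mb = multifit.MultiBeam(beams, fcontam=0.2, group_name=root)
mb.write_master_fits()  # writes the {root}_{id:05d}.beams.fits file described below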
In [31]:
id=source_ids[0]
auto_script.extract(field_root=root, ids=[id], MW_EBV=tabs[0].meta['MW_EBV'],
pline=pline, run_fit=False, grp=grp, diff=True)
Out[31]:
The spectral extraction produces two versions of the extracted 2D spectra:

- {root}_{id:05d}.beams.fits: Multi-extension FITS file with sets of extensions for 2D cutouts from each individual grism exposure. Fitting in this space is most robust, as the grism dispersion is defined in the "FLT" coordinates and the model comparison is done directly on un-resampled image pixels with relatively well-understood noise properties.
- {root}_{id:05d}.stack.fits: Multi-extension FITS file with extensions combining all exposures in a given grism & position angle. The fitting tools can be used with these products as well, and the fits are much faster because 2D models at each trial redshift are produced for only N_PA x N_grism combinations, where often N_PA x N_grism << N_exposure. The fits are evaluated in the resampled drizzled pixel space, and they are often less robust than fits to the full "beams" spectra, particularly at low S/N.
The {root}_{id:05d}.stack.png
files, shown below, are often useful for visual inspection of the 2D products. Note that the bottom panel of the stack.png
files is the drizzled combination of all PAs for a given grism, with a polynomial continuum model subtracted if diff=True
in the extraction script above.
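As a hedged aside, the beams.fits product can be reloaded directly into a MultiBeam object later (e.g., in a new session) without re-opening the full GrismFLT exposures:

mb = multifit.MultiBeam('{0}_{1:05d}.beams.fits'.format(root, id),
                        fcontam=0.2, group_name=root)
print(mb.N, 'exposure cutouts loaded for object', mb.id)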
In [32]:
Image(filename='{0}_{1:05d}.stack.png'.format(root, id))
Out[32]:
In [33]:
# 1D spectrum with polynomial model
Image(filename='{0}_{1:05d}.1D.png'.format(root, id))
Out[33]:
The redshift fit is performed in the following steps:

1. On a coarse redshift grid (dz/(1+z) ~ 0.005), fit continuum templates along with line complex templates for a) [OII]+[NeIII], b) [OIII]+Hbeta, and c) Halpha+[SII]+weaker red lines. The line complexes have fixed line ratios but are useful for breaking redshift degeneracies, since these lines do usually come in groups. Leaving all line strengths free would allow a perfect degeneracy between, e.g., Halpha and [OII] (assuming no significant continuum features).
2. Find peaks (minima) in the chi-squared on the coarse grid and zoom in on them, now allowing for more freedom in the individual line strengths and fitting on a redshift grid fine enough to resolve the best redshift.
NB Continuum templates are needed in the directory ${GRIZLI}/templates
. The template names are currently hard-coded in multifit.py and the easiest way to make them available is to symlink them from the data/templates
directory that accompanies the grizli
code distribution:
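A minimal sketch of that symlink, assuming the templates ship under grizli/data/templates in the installed package and that the GRIZLI environment variable points at your grizli working directory; adjust the paths for your installation.

import os
import grizli

# Assumed locations: installed package data and the ${GRIZLI} working directory
src = os.path.join(os.path.dirname(grizli.__file__), 'data', 'templates')
dst = os.path.join(os.environ['GRIZLI'], 'templates')

if not os.path.exists(dst):
    os.symlink(src, dst)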
Once we've computed the full continuum + line model, we can create 2D drizzled maps at any desired output wavelength, for example to make emission line maps. This makes use of the WCS information in the individual grism FLT exposures and the outputs can have any desired WCS (e.g., pixel scale & dimensions) and can be used to compare directly to imaging data.
The emission line maps are generated by subtracting the best-fit continuum model, assuming that the direct image is representative of the continuum morphology. This should be a reasonable assumption for objects other than, perhaps, those with extreme line equivalent widths.
In [34]:
# Fit it. The "run_all_parallel" function defaults to all of the parameters set in 'fit_args.npy'
fitting.run_all_parallel(id)
Out[34]:
A number of files are produced that contain the results of the redshift fit. The NewSpectrumFits.ipynb
notebook describes how to interact with these products in greater detail.
In [35]:
files = glob.glob('*{0:05d}*'.format(id))
for file in files:
print(file)
In [36]:
for file in files:
if not file.endswith('.fits'):
continue
im = pyfits.open(file)
print('\n\n{1}\n{0}\n{1}\n\n'.format(file, '='*len(file)))
im.info()
The object below is dominated by a strong Balmer break and absorption lines (see van Dokkum & Brammer 2010). The redshift fit and spectral constraints are precise even without any supporting photometric data.
In [37]:
# Continuum source
id=source_ids[1]
auto_script.extract(field_root=root, ids=[id], MW_EBV=tabs[0].meta['MW_EBV'],
pline=pline, run_fit=False, grp=grp, diff=True)
Out[37]:
In [38]:
# Stacked 2D spectrum
Image(filename='{0}_{1:05d}.stack.png'.format(root, id))
Out[38]:
In [39]:
# 1D spectrum with polynomial model
Image(filename='{0}_{1:05d}.1D.png'.format(root, id))
Out[39]:
In [40]:
## Run the fit
fitting.run_all_parallel(id)
Out[40]:
Another option is fitting the grism spectra along with ancillary photometry, described here: Fit-with-Photometry.ipynb.