In [1]:
%pylab inline
import os
import pickle
import warnings
import pandas as pd
from astropy.wcs import WCS
import katalogss.fhd_pype as fp
import katalogss.katalogss as kg
warnings.filterwarnings('ignore')
The first part of this tutorial shows you how to get the data you need from the FHD output directories. This relies mostly on functions in the fhd_pype module.
The function fp.fhd_base() returns the full path to the directory where the FHD output is located. It defaults to the common Aug23 output folder on the MIT cluster, but this can be reassigned to another string using fp.set_fhd_base(<new_path>).
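Under the hood, this kind of get/set pair is typically just an accessor around a module-level variable. A minimal sketch of the pattern (the variable name and default path below are hypothetical, not the actual katalogss internals):

```python
# Sketch of a module-level path accessor in the spirit of
# fp.fhd_base() / fp.set_fhd_base(). The _fhd_base name and the
# default value are hypothetical illustrations.
_fhd_base = '/path/to/Aug23/'  # hypothetical default

def fhd_base():
    """Return the current FHD output base path."""
    return _fhd_base

def set_fhd_base(new_path):
    """Point subsequent calls at a different FHD output tree."""
    global _fhd_base
    _fhd_base = new_path
```

Calling set_fhd_base once then makes every later fhd_base() call (and hence every path the module builds) resolve against the new tree.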
In [2]:
print 'The default path: '+fp.fhd_base()
# str.replace is used here rather than str.strip, which removes a *set of
# characters* from both ends rather than a suffix
fp.set_fhd_base(os.getcwd().replace('scripts', 'katalogss/data/'))
print 'Our path: '+fp.fhd_base()
We also need to define the version string that names the run. This is equivalent to the case string in eor_firstpass_versions.pro, or the string following "fhd_" in the output directory name.
The katalogss data directory includes a mock FHD output run and the relevant files for a single obsid.
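Since the run version is simply everything after the leading "fhd_" in the output directory name, it can be recovered from a directory listing. A quick sketch (the directory names below are made-up examples):

```python
# Recover run version strings from FHD output directory names:
# the version is everything after the leading 'fhd_' prefix.
dirnames = ['fhd_mock_run', 'fhd_my_run_v2']  # example directory names
versions = [d[len('fhd_'):] for d in dirnames if d.startswith('fhd_')]
print(versions)
```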
In [3]:
fhd_run = 'mock_run'
s = '%sfhd_%s'%(fp.fhd_base(),fhd_run)
!ls -R $s
Next we define the list of obsids to process. You can do this manually, or easily grab all obsids with deconvolution output for your run using:
In [4]:
obsids = fp.get_obslist(fhd_run)
obsids
Out[4]:
Next we'll grab the source components and metadata for each obsid in the list. Note that if you don't supply the list of obsids, it will automatically run for all obsids.
In [5]:
comps = fp.fetch_comps(fhd_run, obsids=obsids)
meta = fp.fetch_meta(fhd_run,obsids=obsids)
This returns a dictionary of dictionaries. Since it takes some work to generate, let's cache it in a new katalogss output directory.
In [8]:
kgs_out = '%sfhd_%s/katalogss/'%(fp.fhd_base(),fhd_run)
if not os.path.exists(kgs_out): os.mkdir(kgs_out)
print 'saving %scomponents.p'%kgs_out
pickle.dump([comps,meta], open(kgs_out+'components.p','w'))
comps
Out[8]:
If you need to come back to this later, restore it with
comps, meta = pickle.load(open(kgs_out+'components.p'))
In [9]:
#comps, meta = pickle.load(open(kgs_out+'components.p'))
Now we want to get the beam and residual maps. These are stored in FITS files, and we read them into HDU objects with data and header attributes.
The get_maps function works for any of the 2D images stored as FITS in the 'output_data' directory.
In [10]:
beamxx, beamyy, residual = [fp.get_maps(fhd_run, obsids=obsids, imtype=imtype)
for imtype in ('Beam_XX','Beam_YY','uniform_Residual_I')]
To convert the residual maps from Jy/pixel to Jy/beam, we need the map of pixel areas in units of beam.
In [11]:
pix2beam = fp.pixarea_maps(fhd_run, obsids=obsids, map_dir=kgs_out+'area_maps/')
for o in obsids: residual[o].data*=pix2beam[o]
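The conversion itself is just an elementwise product of the residual map with the pixel-area map, exactly as in the in-place multiply above. A toy numpy illustration (all array values are made up):

```python
import numpy as np

# Toy stand-in for one obsid's Jy/pixel -> Jy/beam conversion:
# multiply the residual map elementwise by the pixel-area map.
residual_jy_per_pix = np.array([[2.0, 4.0], [6.0, 8.0]])  # made-up map
pix2beam_map = np.full((2, 2), 0.5)                       # made-up scale map
residual_jy_per_beam = residual_jy_per_pix * pix2beam_map
print(residual_jy_per_beam)
```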
Now we're ready to start source finding using the katalogss module.
In [14]:
#clustering parameters
eps_factor = 0.25
min_samples = 1
catalog = {}
for obsid in obsids:
    cmps = pd.DataFrame(comps[obsid])
    cmps = kg.clip_comps(cmps)
    beam = beamxx[obsid].copy()
    beam.data = np.sqrt(np.mean([beamxx[obsid].data**2, beamyy[obsid].data**2], axis=0))
    eps = eps_factor * meta[obsid]['beam_width']
    cmps = kg.cluster_sources(cmps, eps, min_samples)
    catalog[obsid] = kg.catalog_sources(cmps, meta[obsid], residual[obsid], beam)
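The eps and min_samples parameters are DBSCAN-style density-clustering knobs; with min_samples = 1, every component joins some cluster, which behaves like a friends-of-friends grouping at linking length eps. As an illustration of that idea (an assumption about kg.cluster_sources internals, not its actual code), here is a minimal friends-of-friends grouping of 2D component positions:

```python
import numpy as np

def cluster_positions(xy, eps):
    """Group 2D points by linking every pair closer than eps
    (friends-of-friends). Returns an integer cluster label per point."""
    n = len(xy)
    labels = np.full(n, -1)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # flood fill over the eps-linked neighbor graph
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            d = np.hypot(xy[:, 0] - xy[j, 0], xy[:, 1] - xy[j, 1])
            for k in np.where((d < eps) & (labels == -1))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

# two tight groups of components, well separated
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(cluster_positions(pts, eps=0.5))
```

Scaling eps with the per-obsid beam_width, as the loop above does, ties the linking length to the instrument resolution rather than to a fixed angular size.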
In [15]:
cat = catalog[obsid]
cat.head(10)
Out[15]:
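Each per-obsid catalog is a pandas DataFrame; the plotting cell below uses its ra, dec, and flux columns. A toy example of the kind of quick inspection this enables (the values, and any columns beyond those three, are made up):

```python
import pandas as pd

# Toy stand-in for one obsid's catalog; values are made up.
cat = pd.DataFrame({
    'ra':   [0.5, 10.2, 350.1],
    'dec':  [-26.7, -27.0, -26.5],
    'flux': [1.2, 0.4, 3.1],
})
# select the brighter sources, brightest first
bright = cat[cat.flux > 1.0].sort_values('flux', ascending=False)
print(bright[['ra', 'dec', 'flux']])
```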
In [16]:
wcs = WCS(residual[obsid].header)
plt.figure(figsize=(10,8))
ax = plt.subplot(111,projection=wcs)
x,y = wcs.wcs_world2pix(cat.ra, cat.dec,1)
ax.scatter(x,y,s=cat.flux,edgecolor='none')
lon,lat = ax.coords
lon.set_axislabel('RA',fontsize=16)
lat.set_axislabel('Dec',fontsize=16)
ax.set_title(obsid,fontsize=20)
Out[16]:
In [17]:
pickle.dump(catalog, open(kgs_out+'catalogs.p','w'))
In [ ]: