The purpose of this demo is to showcase how KBMOD can be used to search through images for moving objects. The images used here are from the Subaru telescope and were processed using the LSST Science Pipelines.
This version of KBMOD requires an NVIDIA GPU in order to run due to our use of the CUDA API. Additional requirements and setup information are documented in our README.
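Before running the search it can be useful to confirm that a CUDA-capable GPU is actually visible. The cell below is a minimal check added for convenience (it is not part of the original demo); it simply asks the NVIDIA driver to list its devices:
In [ ]:
import subprocess
# List the GPUs the driver can see; an error here means the CUDA-based
# search below will not be able to run.
try:
    print(subprocess.check_output(['nvidia-smi', '-L']).decode())
except (OSError, subprocess.CalledProcessError):
    print('No NVIDIA GPU visible -- KBMOD requires one to run.')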
In [1]:
import numpy as np
import matplotlib.pyplot as plt
import subprocess
# searchImage provides the mask, Psi/Phi, and search methods used below; it is
# assumed to sit alongside the createImage and analyzeImage modules in the code directory.
from searchImage import searchImage
si = searchImage()
%matplotlib inline
%load_ext autoreload
%autoreload 2
In [6]:
results_name = "demo_1"
gpu_code_path = "../code/gpu/"
real_image_path = "../../fraser/chip_0/"
psi_image_path = gpu_code_path+"output-images/psi"
phi_image_path = gpu_code_path+"output-images/phi"
In [7]:
with open('../code/gpu/debug/parameters.config', 'w') as paramsFile:
    paramsFile.write(
"""Debug ................ : 1
PSF Sigma ............ : 1.0
Mask Threshold ....... : 0.75
Mask Penalty ......... : -0.05
Angles to Search ..... : 120
Minimum Angle ........ : 0.0
Maximum Angle ........ : 6.283
Velocities to Search . : 90
Minimum Velocity ..... : 24.
Maximum Velocity ..... : 600.
Psi/Phi to file ...... : 1
Source Images Path ... : ../../{source}/
Psi Images Path ...... : ../../{psi}/
Phi Images Path ...... : ../../{phi}/
Results Path ......... : ../../../data/results/{name}.txt
""".format(source=real_image_path, psi=psi_image_path, phi=phi_image_path, name=results_name))
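The dotted padding in the file is purely visual alignment; each line is a key/value pair separated by the first colon. As a quick sanity check (an addition, not part of the original demo), the file can be read back and parsed like this:
In [ ]:
# Parse each "Key .... : value" line, treating the dots as padding.
params = {}
with open('../code/gpu/debug/parameters.config') as f:
    for line in f:
        key, _, value = line.partition(':')
        params[key.strip(' .')] = value.strip()
print(params['Mask Threshold'])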
In [8]:
!cd ~/cuda-workspace/kbmod/code/gpu/debug/; ./clearImages.sh
In [9]:
!cd ~/cuda-workspace/kbmod/code/gpu/debug/; ./CudaTracker
First we create a mask to apply to the images. Since some moving objects are bright enough to be flagged by the per-image masks, a pixel is only added to the master mask if it is masked in at least a set fraction of the images. Below that threshold is set to 75% of the images.
In [3]:
mask = si.createMask('data_repo/chip_0/', 0.75)
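For intuition, here is a toy sketch of this kind of master-mask construction, assuming the per-image bad-pixel masks are stacked in a 3-D boolean array (this is an illustration, not the actual createMask implementation):
In [ ]:
# per_image_masks: shape (n_images, ny, nx), True where a pixel is masked
# in an individual image. A pixel enters the master mask only when it is
# masked in at least `threshold` of the images.
def build_master_mask(per_image_masks, threshold=0.75):
    masked_fraction = per_image_masks.mean(axis=0)
    return masked_fraction >= threshold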
In [4]:
fig = plt.figure(figsize=(12,12))
plt.imshow(mask, origin='lower', cmap=plt.cm.Greys_r)
plt.xlabel('X Pixels')
plt.ylabel('Y Pixels')
Out[4]: [Figure: the master mask over the X/Y pixel grid]
Next we use the mask and original images to build the Psi and Phi likelihood images.
In [5]:
psi_array, phi_array = si.calcPsiPhi('data_repo/chip_0', mask)
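Roughly speaking, Psi is a PSF-matched filter of the flux-over-variance image and Phi is the corresponding normalization built from the inverse variance, so that psi/phi estimates flux and psi/sqrt(phi) gives a signal-to-noise ratio. A simplified single-image sketch of that idea (an assumption about the formulation, not the actual calcPsiPhi code):
In [ ]:
from scipy.signal import fftconvolve

def calc_psi_phi_single(science, variance, psf):
    # Psi: matched-filter response of the flux / variance image
    psi = fftconvolve(science / variance, psf[::-1, ::-1], mode='same')
    # Phi: PSF^2-smoothed inverse variance (the likelihood normalization)
    phi = fftconvolve(1.0 / variance, (psf ** 2)[::-1, ::-1], mode='same')
    return psi, phi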
Here we calculate the image times from the headers and also load the MJD values of each image in order to calculate orbit details later.
In [6]:
image_times, image_mjd = si.loadImageTimes('data_repo/chip_0')
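A sketch of how such times can be pulled from FITS headers with astropy; the MJD header keyword and the file layout here are assumptions:
In [ ]:
import glob
from astropy.io import fits

mjd_list = []
for fname in sorted(glob.glob('data_repo/chip_0/*.fits')):
    with fits.open(fname) as hdul:
        mjd_list.append(hdul[0].header['MJD'])  # keyword name is an assumption
mjd_array = np.array(mjd_list)
hours_since_first = (mjd_array - mjd_array[0]) * 24.0  # elapsed time in hours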
Finally, we load the images themselves, with the mask applied, in order to build the postage stamps we will use to inspect the potential objects we discover.
In [7]:
im_array = si.loadMaskedImages('data_repo/chip_0', mask)
We also need the WCS for each image.
In [8]:
wcs_list = si.loadWCSList('data_repo/chip_0')
Our algorithm uses the PSF kernel from the LSST data management pipeline, but below we show that it is comparable to a 2-D Gaussian with a sigma of 1 pixel.
In [9]:
psf_array = si.loadPSF('data_repo/chip_0')
In [10]:
from createImage import createImage as ci
# Create a 2-D Gaussian with a sigma of 1 pixel in the x and y directions,
# centered on a 41 x 41 pixel grid, with total flux equal to 1.0
gauss2d = ci().createGaussianSource([20., 20.], [1., 1.], [41., 41.], 1.)
In [11]:
fig = plt.figure(figsize=(12,12))
fig.add_subplot(1,2,1)
plt.imshow(psf_array[0], origin='lower', interpolation='None')
plt.title('LSST DM Kernel')
plt.colorbar()
fig.add_subplot(1,2,2)
plt.imshow(gauss2d, origin='lower', interpolation='None')
plt.title('2-D Gaussian')
plt.colorbar()
Out[11]: [Figure: the LSST DM kernel (left) and the 2-D Gaussian (right), each with a colorbar]
Here we set up a grid of angles from the ecliptic between -15 and 15 degrees and velocities ranging from 0.4 to 4.5 arcsec per hour, in order to line up with the grid used in Fraser and Kavelaars (2009).
In [12]:
angles = [-15., -7.5, 0, 7.5, 15]
rates = np.arange(0.4, 4.5, .2277)
para_steps = []
perp_steps = []
for rate in rates:
    for angle in angles:
        para_steps.append(rate*np.cos(np.radians(angle)))
        perp_steps.append(rate*np.sin(np.radians(angle)))
para_steps = -1.*np.array(para_steps)
perp_steps = np.array(perp_steps)
We currently have trouble with slower trajectories, so for the demo we set a lower bound on the velocity.
In [13]:
para_fast = para_steps[45:]
perp_fast = perp_steps[45:]
In [14]:
vel_grid = np.array([para_fast, perp_fast]).T
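A quick look at the resulting search grid (an added visualization, not part of the original demo):
In [ ]:
# Each row of vel_grid is one (parallel, perpendicular) rate to search.
fig = plt.figure(figsize=(6, 6))
plt.scatter(vel_grid[:, 0], vel_grid[:, 1], s=10)
plt.xlabel('Parallel rate (arcsec/hr)')
plt.ylabel('Perpendicular rate (arcsec/hr)')
plt.title('Velocity search grid')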
We now have everything we need to begin searching the images in likelihood space. We use the method findObjectsEcliptic. Below we are set up to search the section of the images between pixels 1024 and 2048 on both the x and y axes.
In order to filter out results that start in the same place and travel along similar trajectories, we cluster the results in a 4-dimensional space: x starting position, y starting position, total velocity, and slope.
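As an illustration of that idea (a sketch only; the actual clustering is done inside the KBMOD code), one could scale those four features and group nearby trajectories, for example with scikit-learn's DBSCAN:
In [ ]:
from sklearn.cluster import DBSCAN

# `res` stands in for a structured results array like the one loaded below,
# with fields t0_x, t0_y, v_x, and v_y.
def cluster_trajectories(res, eps=0.1):
    speed = np.hypot(res['v_x'], res['v_y'])
    slope = np.arctan2(res['v_y'], res['v_x'])
    feats = np.column_stack([res['t0_x'], res['t0_y'], speed, slope])
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # scale each feature
    return DBSCAN(eps=eps, min_samples=1).fit_predict(feats)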
In [15]:
x_quad_size = 1024
y_quad_size = 1024
x_offset = 1024
y_offset = 1024
quad_results = {}
for quadrant_x in range(0, 1):
    for quadrant_y in range(0, 1):
        x_range = [x_offset+x_quad_size*quadrant_x, x_offset+x_quad_size*(quadrant_x+1)]
        y_range = [y_offset+y_quad_size*quadrant_y, y_offset+y_quad_size*(quadrant_y+1)]
        topResults = si.findObjectsEcliptic(psi_array,   # The psi images
                                            phi_array,   # The phi images
                                            vel_grid,    # The velocity search grid
                                            2.0,         # The likelihood threshold
                                            image_times, # The time after the first image at which each image was taken
                                            [wcs_list[0]]*55,  # The wcs values
                                            xRange=x_range,    # The x pixel coordinate range
                                            yRange=y_range,    # The y pixel coordinate range
                                            out_file='results/chip_0/quad_%i_%i_test_test.txt' % (quadrant_x, quadrant_y))
Now we will load the results and try to find the best possible objects. For this we use the methods in analyzeImage.
In [17]:
results = np.genfromtxt('results/chip_0/quad_0_0_test_test.txt', names=True)
In [2]:
from analyzeImage import analyzeImage
ai = analyzeImage()
We sort these results by the ratio of the maximum brightness inside an aperture centered on a coadded postage stamp along the trajectory to the maximum brightness outside of that aperture. A stationary unmasked object that lands in the results will leave a streak of fairly uniform brightness along the trajectory's slope in the coadded stamp, while a moving object will have a bright center, since the single images that make up the coadd move along with the object.
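A toy version of that ranking statistic, assuming a 2-D coadded stamp centered on the trajectory (an illustration, not the actual sortCluster code):
In [ ]:
def center_to_edge_ratio(stamp, aperture_radius=3.0):
    # Ratio of the brightest pixel inside a central aperture to the
    # brightest pixel outside it: moving objects score high, streaks low.
    yy, xx = np.indices(stamp.shape)
    cy, cx = (np.array(stamp.shape) - 1) / 2.0
    inside = np.hypot(yy - cy, xx - cx) <= aperture_radius
    return stamp[inside].max() / stamp[~inside].max()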
In [1]:
best_targets = ai.sortCluster(results, im_array, image_times)
print(best_targets)
Notice that the best_targets array that comes out of the sorting algorithm actually rearranged the results. Here we plot the most likely objects and see that the sorting algorithm ranked an actual asteroid as the most likely object, while the highest-likelihood result was moved down the rankings (the best_targets output above shows it is third in the images below).
In [32]:
fig = plt.figure(figsize=(18,12))
for imNum in range(5):
    fig.add_subplot(1, 5, imNum+1)
    try:
        plt.imshow(ai.createPostageStamp(im_array,
                                         list(results[['t0_x', 't0_y']][best_targets][imNum]),
                                         list(results[['v_x', 'v_y']][best_targets][imNum]),
                                         image_times, [25., 25.])[0],
                   origin='lower', cmap=plt.cm.Greys_r, interpolation='None')
        plt.title(' [x,y] = ' + str(list(results[['t0_x', 't0_y']][best_targets][imNum])))
    except Exception:
        continue
plt.tight_layout()
We have also written methods to plot information related to a trajectory. Below we plot its path through one of the images and its light curve.
In [33]:
return_result = best_targets[0]
In [34]:
fig = plt.figure(figsize=(12,12))
ax = ai.plotTrajectory(results[return_result],
image_times, im_array[0], im_plot_args={'vmin':-111, 'vmax':111})
plt.xlabel('Pixel X')
plt.ylabel('Pixel Y')
Out[34]: [Figure: the recovered trajectory plotted over the first image, X/Y pixel axes]
In [35]:
fig = plt.figure()
ax = ai.plotLightCurves(im_array, results[return_result], image_times)
Finally, we can output the coordinates of the object throughout its trajectory and use this as input to orbit fitting software such as that of Bernstein and Khushalani (2000).
In [36]:
test_coords = ai.return_ra_dec(np.array(list(results[['t0_x', 't0_y']][return_result])),
                               np.array(list(results[['v_x', 'v_y']][return_result])),
                               image_times, image_mjd, wcs_list[0],
                               np.ones(len(image_times))*0.1,  # per-observation positional uncertainty
                               568)                            # MPC observatory code (568 = Mauna Kea)
In [37]:
test_coords
Out[37]: