This notebook shows examples of using an interactive Ginga viewer running in an HTML5 canvas with an IPython Notebook. You do not need a special widget set to run this, just an HTML5 compliant browser.
In [1]:
# Requirements:
from ginga.version import version
version
# Get ginga from github (https://github.com/ejeschke/ginga) or
# pypi (https://pypi.python.org/pypi/ginga)
# Ginga documentation at: http://ginga.readthedocs.io/en/latest/
Out[1]:
In [2]:
# setup
from ginga.web.pgw import ipg
# Set this to True if you have non-buggy Python OpenCV bindings installed--they greatly speed up some operations
use_opencv = False
server = ipg.make_server(host='localhost', port=9914, use_opencv=use_opencv)
In [3]:
# Start viewer server
# IMPORTANT: if running in an IPython/Jupyter notebook, use the no_ioloop=True option
server.start(no_ioloop=True)
In [4]:
# Get a viewer
# This will get a handle to the viewer
v1 = server.get_viewer('v1')
In [5]:
# Where is my viewer?
v1.url
Out[5]:
In [9]:
# open the viewer in a new window
v1.open()
NOTE: if you don't have the webbrowser module, open the link that was printed in the cell above in a new window to get the viewer.
You can open as many of these viewers as you want--just keep a handle to each one and give each a unique name.
Keyboard/mouse bindings in the viewer window: http://ginga.readthedocs.io/en/latest/quickref.html
You will want to check the box that says "I'm using a trackpad" if you are--it makes zooming much smoother.
In [10]:
# Load an image into the viewer
# (change the path to where you downloaded the sample images, or use your own)
v1.load('camera.fits')
In [10]:
# Example of embedding a viewer
v1.embed(height=650)
Out[10]:
In [8]:
# capture the screen
v1.show()
Out[8]:
In [9]:
# Let's get the current pan position
dx, dy = v1.get_pan()
dx, dy
Out[9]:
In [10]:
# Getting values from the FITS header is also easy
img = v1.get_image()
hdr = img.get_header()
hdr['OBJECT']
Out[10]:
In [11]:
# What are the coordinates of the pan position?
# This uses astropy.wcs under the hood if you have it installed
img.pixtoradec(dx, dy)
Out[11]:
In [12]:
# Set cut level algorithm to use
v1.set_autocut_params('zscale', contrast=0.25)
# Auto cut levels on the image
v1.auto_levels()
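To build some intuition for what a cut-level algorithm does, here is a minimal percentile-based sketch. This is not Ginga's zscale implementation (zscale fits the sorted pixel distribution rather than taking fixed percentiles), and `percentile_cuts` is a made-up helper--it just illustrates why automatic cuts ignore extreme outlier pixels.

```python
import numpy as np

def percentile_cuts(data, lo=2.0, hi=98.0):
    """Estimate display cut levels by taking low/high percentiles.

    A simplified stand-in for algorithms like zscale: by ignoring the
    extreme tails, a few very bright pixels don't wash out the display.
    """
    return np.percentile(data, lo), np.percentile(data, hi)

# Demo on synthetic data: background noise plus one very bright pixel
rng = np.random.default_rng(42)
data = rng.normal(100.0, 5.0, size=(64, 64))
data[10, 10] = 10000.0   # a "star" that would ruin a simple min/max stretch
locut, hicut = percentile_cuts(data)
```

A plain min/max stretch on this array would map almost everything to black; the percentile cuts stay close to the background distribution.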
In [13]:
# Let's do an example of the two-way interactivity
# First, let's add a drawing canvas
canvas = v1.add_canvas()
In [14]:
# delete all objects on the canvas
canvas.delete_all_objects()
# set the drawing parameters
canvas.set_drawtype('point', color='black')
Now, in the Ginga window, draw a point using the right mouse button (if you have only one mouse button, e.g. on a Mac, press and release the spacebar, then click and drag).
In [15]:
# get the pixel coordinates of the point we just drew
p = canvas.objects[0]
p.x, p.y
Out[15]:
In [16]:
# Get the RA/DEC in degrees of the point
img.pixtoradec(p.x, p.y)
Out[16]:
In [17]:
# Get RA/DEC as sexagesimal values (H M S / sign D M S)
img.pixtoradec(p.x, p.y, format='hms')
Out[17]:
In [18]:
# Get RA/DEC in classical string notation
img.pixtoradec(p.x, p.y, format='str')
Out[18]:
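The arithmetic behind the 'hms' format is straightforward: 24 hours of RA span 360 degrees, so one hour is 15 degrees. Here is a sketch of the conversion; `deg_to_hms` and `deg_to_dms` are illustrative helpers, not Ginga API.

```python
def deg_to_hms(ra_deg):
    """Convert right ascension in degrees to (hours, minutes, seconds)."""
    hours_total = (ra_deg % 360.0) / 15.0   # 1 hour of RA = 15 degrees
    h = int(hours_total)
    m = int((hours_total - h) * 60)
    s = (hours_total - h - m / 60.0) * 3600.0
    return h, m, s

def deg_to_dms(dec_deg):
    """Convert declination in degrees to (sign, degrees, minutes, seconds)."""
    sign = '-' if dec_deg < 0 else '+'
    a = abs(dec_deg)
    d = int(a)
    m = int((a - d) * 60)
    s = (a - d - m / 60.0) * 3600.0
    return sign, d, m, s

hms = deg_to_hms(180.0)    # 180 deg of RA is 12h 00m 00s
dms = deg_to_dms(-30.5)    # -30.5 deg is -30d 30m 00s
```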
In [19]:
# Verify we have a valid coordinate system defined
img.wcs.coordsys
Out[19]:
In [20]:
# Get viewer model holding data
image = v1.get_image()
image.get_minmax()
Out[20]:
In [21]:
# get viewer data
data_np = image.get_data()
import numpy as np
np.mean(data_np)
Out[21]:
In [22]:
# Set viewer cut levels
v1.cut_levels(170, 2000)
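Conceptually, cut levels define a linear mapping from pixel values to display intensity: everything at or below the low cut renders black, everything at or above the high cut renders white. A minimal sketch of that mapping (`apply_cuts` is a hypothetical helper, not the viewer's actual renderer):

```python
import numpy as np

def apply_cuts(data, locut, hicut):
    """Map pixel values to [0, 1] for display: clip at the cut levels,
    then scale linearly between them."""
    clipped = np.clip(data, locut, hicut)
    return (clipped - locut) / float(hicut - locut)

img = np.array([[0.0, 170.0],
                [1085.0, 5000.0]])
norm = apply_cuts(img, 170, 2000)   # 1085 sits exactly halfway between cuts
```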
In [23]:
# set a color map on the viewer
v1.set_color_map('smooth')
In [24]:
# Image will appear in this output
v1.show()
Out[24]:
In [25]:
# Set color distribution algorithm
# choices: linear, log, power, sqrt, squared, asinh, sinh, histeq
v1.set_color_algorithm('linear')
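To get a feel for how these distribution algorithms differ, here is a sketch of a few of the stretches applied to data already normalized to [0, 1]. This is a simplified stand-in--Ginga's real implementations differ in details and parameters--and `stretch` is a made-up helper.

```python
import numpy as np

def stretch(x, algorithm='linear'):
    """Apply a color-distribution stretch to values in [0, 1].

    Each curve maps 0 -> 0 and 1 -> 1 but redistributes the values in
    between: sqrt/log/asinh brighten faint features, squared darkens them.
    """
    if algorithm == 'linear':
        return x
    if algorithm == 'sqrt':
        return np.sqrt(x)
    if algorithm == 'squared':
        return x ** 2
    if algorithm == 'log':
        a = 1000.0
        return np.log(a * x + 1) / np.log(a + 1)
    if algorithm == 'asinh':
        a = 0.1
        return np.arcsinh(x / a) / np.arcsinh(1.0 / a)
    raise ValueError(algorithm)

x = np.linspace(0, 1, 5)
y = stretch(x, 'sqrt')
```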
In [26]:
# Example of setting another draw type.
canvas.delete_all_objects()
canvas.set_drawtype('rectangle')
Now right-drag to draw a small rectangle in the Ginga image. Remember: On a single button pointing device, press and release spacebar, then click and drag.
Try to include some objects.
In [27]:
# Find approximate bright peaks in a sub-area
from ginga.util import iqcalc
iq = iqcalc.IQCalc()
img = v1.get_image()
r = canvas.objects[0]
data = img.cutout_shape(r)
peaks = iq.find_bright_peaks(data)
peaks[:20]
Out[27]:
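To see what peak finding involves, here is a toy local-maximum detector run over a synthetic cutout. `find_peaks_simple` is illustrative only--the real iqcalc routine is more sophisticated (for instance, it can estimate a detection threshold from the background rather than taking a fixed one).

```python
import numpy as np

def find_peaks_simple(data, threshold):
    """Find pixels above `threshold` that are the unique maximum of
    their 3x3 neighborhood, returning (x, y) tuples."""
    peaks = []
    for y in range(1, data.shape[0] - 1):
        for x in range(1, data.shape[1] - 1):
            v = data[y, x]
            if v <= threshold:
                continue
            nbhd = data[y-1:y+2, x-1:x+2]
            if v == nbhd.max() and np.count_nonzero(nbhd == v) == 1:
                peaks.append((x, y))
    return peaks

# Synthetic cutout: flat background with two bright spots
data = np.full((16, 16), 10.0)
data[4, 5] = 100.0
data[12, 9] = 80.0
peaks = find_peaks_simple(data, threshold=50.0)
```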
In [28]:
# evaluate peaks to get FWHM, center of each peak, etc.
objs = iq.evaluate_peaks(peaks, data)
# how many did we find with standard thresholding, etc.
# see params for find_bright_peaks() and evaluate_peaks() for details
len(objs)
Out[28]:
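One of the quantities reported for each peak is its FWHM (full width at half maximum). For a Gaussian profile, FWHM = 2*sqrt(2*ln 2)*sigma, or about 2.355*sigma. The sketch below measures FWHM numerically from a sampled profile; `fwhm_of_profile` is a made-up helper, not iqcalc API.

```python
import numpy as np

def fwhm_of_profile(x, y):
    """Measure the full width at half maximum of a sampled 1-D profile
    by linearly interpolating where it crosses half its peak value."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i, j):
        # x position where the segment from sample i to sample j hits `half`
        return x[i] + (half - y[i]) * (x[j] - x[i]) / (y[j] - y[i])

    left = cross(i0 - 1, i0)
    right = cross(i1 + 1, i1)
    return right - left

sigma = 2.0
x = np.linspace(-10, 10, 2001)
y = np.exp(-x**2 / (2 * sigma**2))
fwhm = fwhm_of_profile(x, y)
expected = 2 * np.sqrt(2 * np.log(2)) * sigma   # analytic FWHM of a Gaussian
```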
In [29]:
# example of what is returned
o1 = objs[0]
o1
Out[29]:
In [30]:
# pixel coords are for cutout, so add back in origin of cutout
# to get full data coords RA, DEC of first object
x1, y1, x2, y2 = r.get_llur()
img.pixtoradec(x1+o1.objx, y1+o1.objy)
Out[30]:
In [31]:
# Draw circles around all objects
Circle = canvas.get_draw_class('circle')
for obj in objs:
    x, y = x1+obj.objx, y1+obj.objy
    if r.contains(x, y):
        canvas.add(Circle(x, y, radius=10, color='yellow'))
# set pan and zoom to center
v1.set_pan((x1+x2)/2, (y1+y2)/2)
v1.scale_to(0.75, 0.75)
In [32]:
v1.show()
Out[32]:
How about some plots...?
In [33]:
# Load an image from a spectrograph at least 1000x1000 (e.g. spectra.fits)
v1.load('spectra.fits')
In [34]:
# swap XY, flip Y, change colormap back to 'gray'
v1.set_color_map('gray')
v1.transform(False, True, True)
v1.auto_levels()
In [35]:
# Programmatically add a line along the figure at designated coordinates
canvas.delete_all_objects()
Line = canvas.get_draw_class('line')
l1 = Line(0, 512, 250, 512)
tag = canvas.add(l1)
In [36]:
# Set the pan position and zoom to 1:1. Show what we did.
v1.set_pan(125, 512)
v1.scale_to(1.0, 1.0)
In [37]:
v1.show()
Out[37]:
In [38]:
# Get the pixel values along this line
img = v1.get_image()
values = img.get_pixels_on_line(l1.x1, l1.y1, l1.x2, l1.y2)
values[:10]
Out[38]:
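One simple way to sample pixel values along a line is to take one sample per pixel of the longer axis and round each coordinate to the nearest pixel. This is only a sketch of that idea--`pixels_on_line` is a hypothetical helper, not the actual image method.

```python
import numpy as np

def pixels_on_line(data, x1, y1, x2, y2):
    """Sample pixel values along a line, one sample per pixel of the
    longer axis, rounding to the nearest pixel center."""
    n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1
    xs = np.rint(np.linspace(x1, x2, n)).astype(int)
    ys = np.rint(np.linspace(y1, y2, n)).astype(int)
    return data[ys, xs]

# A tiny image whose value encodes the column index, so a horizontal
# cut should read back 0, 1, 2, ...
data = np.tile(np.arange(8.0), (8, 1))
vals = pixels_on_line(data, 0, 3, 7, 3)
```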
In [39]:
# Plot the 'cuts'
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.cla()
plt.plot(values)
plt.ylabel('Pixel value')
plt.show()
In [40]:
# Plot the cuts that we will draw interactively
canvas.delete_all_objects()
canvas.set_drawtype('line')
Now draw a line through the image (remember to use the right mouse button, or else press the spacebar first).
In [41]:
# show our line we drew
v1.show()
Out[41]:
In [42]:
def getplot(v1):
    l1 = canvas.objects[0]
    img = v1.get_image()
    values = img.get_pixels_on_line(l1.x1, l1.y1, l1.x2, l1.y2)
    plt.cla()
    plt.plot(values)
    plt.ylabel('Pixel value')
    plt.show()
In [43]:
getplot(v1)
In [44]:
# make some random data in a numpy array
import numpy as np
data_np = np.random.rand(512, 512)
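If you want synthetic data that looks more like an astronomical image than pure noise, you can add a 2-D Gaussian "star" to a noisy background. All the parameter values here (`cx`, `cy`, `sigma`, `amp`, the background level) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy background plus one 2-D Gaussian "star" centered at (cx, cy)
yy, xx = np.mgrid[0:512, 0:512]
cx, cy, sigma, amp = 300.0, 200.0, 4.0, 200.0
star = amp * np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * sigma**2))
data_np = rng.normal(10.0, 1.0, size=(512, 512)) + star

# The brightest pixel should land at (or next to) the star's center
peak_y, peak_x = np.unravel_index(np.argmax(data_np), data_np.shape)
```

Loading this array with `v1.load_data(data_np)` gives you something for the peak-finding examples above to chew on.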
In [45]:
# example of loading numpy data directly to the viewer
v1.load_data(data_np)
v1.show()
Out[45]:
In [46]:
# example of loading astropy.io.fits HDUs
from astropy.io import fits
fits_f = fits.open('camera.fits', 'readonly')
hdu = fits_f[0]
v1.load_hdu(hdu)
In [21]:
# the default setting for HTML5 canvas rendering is 'jpeg'
settings = v1.get_settings()
settings.get('html5_canvas_format')
Out[21]:
In [20]:
# Using PNG results in slightly greater clarity (especially for text overlays), but introduces a
# slight performance hit, because more data must be transferred between server and client on each redraw
settings.set(html5_canvas_format='png')
In [19]:
# If you want to resize the viewer, use this method
v1.resize(1000, 700)
Th-th-th-that's all folks!
Needed packages for this notebook: ginga, jupyter/ipython with the notebook feature, numpy, scipy, astropy, PIL or the aggdraw module (PIL is included in Anaconda, so is usually all you need), the webbrowser module, and OpenCv (optional).
The latest Ginga documentation, including detailed installation instructions, can be found at http://ginga.readthedocs.io/en/latest/