Images are simply n-dimensional regular arrays of pixels where n >= 2. Each pixel has k-channels of information. All pixels and all channels are of the same data type, but there is no restriction in general on what that data type is.
There are two main subclasses of Image - BooleanImage and MaskedImage. The vast majority of the functionality is provided by Image and hence available on all three Image types - the subclasses are simple specializations which will be explained in this notebook.
Note that for efficiency reasons, we store images with the channels in the first axis! Thus, we use the syntax ([k], i, j, ..., l) to declare an image shape. The i is the size of the first image dimension, j the second. This image is of n dimensions - the final spatial dimension being of size l. The [] symbolizes that this is the channel axis - this image has k channels. A few examples for clarity:
([1], 320, 240)
([3], 1700, 1650)
([1], 1024, 1024, 1024)
Commonly the channels have an explicit meaning - these are symbolized by a <>. For example:
(<I>, 320, 240)
(<R, G, B>, 1700, 1650)
(<Z>, 1024, 768)
We now use this notation to explain all of the image classes. As a final note - some classes are fixed to have only one channel. The constructors for these images don't expect you to pass in a numpy array with a redundant channel axis every time. To signify this, the channel signature includes an exclamation mark to show it is implicitly generated for you, for example
(!<Z>!, 1024, 768)
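Before moving on, here is a rough sketch of how this notation maps onto an actual array (this assumes the Image constructor accepts a channels-first numpy array, as the convention above implies - check the menpo documentation for the exact signature):
In [ ]:
import numpy as np
from menpo.image import Image
# a ([3], 120, 90) image: channel axis first, then the two spatial axes
pixels = np.zeros((3, 120, 90))
img = Image(pixels)
print(img.n_channels, img.shape)  # expected: 3 (120, 90)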
To aid with the explanations, let's import the good old Takeo and Lenna images. import_builtin_asset(asset_name) or import_builtin_asset.asset_name() allows us to quickly grab a few builtin images.
In [1]:
%matplotlib inline
import numpy as np
import menpo.io as mio
lenna = mio.import_builtin_asset('lenna.png')
takeo_rgb = mio.import_builtin_asset.takeo_ppm() # equivalent to: mio.import_builtin_asset('takeo.ppm')
# Takeo is RGB with repeated channels - convert to greyscale
takeo = takeo_rgb.as_greyscale(mode='average')
In [2]:
print('Lenna is a {}'.format(type(lenna)))
print('Takeo is a {}'.format(type(takeo)))
All images are Image instances, and a large bulk of the functionality can be explored in this one class.
In [3]:
from menpo.image import Image
print("Lenna 'isa' Image: {}".format(isinstance(lenna, Image)))
print("Takeo 'isa' Image: {}".format(isinstance(takeo, Image)))
pixels
The image data is stored on the self.pixels property. The 0'th axis of self.pixels is referred to as the channel axis - it is always present on an instantiated subclass of Image (even if, for instance, we know the number of channels to always be 1).
In [4]:
print('Takeo shape: {}'.format(takeo.pixels.shape))
print('The number of channels in Takeo is {}'.format(takeo.pixels.shape[0]))
print("But the right way to find out is with the 'n_channels' property: {}".format(takeo.n_channels))
print('n_channels for Lenna is {}'.format(lenna.n_channels))
shape
The self.shape of the image is the spatial dimensions of the array - that's (i, j, ..., l).
In [5]:
print('Takeo has a shape of {}'.format(takeo.shape))
print('Lenna has a shape of {}'.format(lenna.shape))
width and height
The 0'th spatial axis is the 'height' or 'y' axis - it starts at the top of the image and runs down. The 1'st spatial axis is the 'width' or 'x' axis - it starts from the left of the image and runs across. Most of the time, worrying about this will lead you into hot water - it's a lot better to not get bogged down in the terminology and just consider the image as an array, just like all our other data. As a result, all our algorithms, such as gradient, will be ordered by axis 0, 1, ..., n, not x, y, z (as this would be axis 1, 0, 2). The self.shape we printed above was the shape of the underlying array, and so was semantically (height, width). You can use the self.width and self.height properties to check this for yourself if you ever get confused.
In [6]:
print('Takeo\'s arrangement in memory (for maths) is {}'.format(takeo.shape))
print('Semantically, Takeo has W:{} H:{}'.format(takeo.width, takeo.height))
print(takeo) # shows the common semantic labels
centre
The centre() method returns the geometric centre of the image.
In [7]:
# note that this is (axis0, axis1), which is (height, width) or (Y, X)!
print('The centre of Takeo is {}'.format(takeo.centre()))
counts
n_pixels is channel independent - to find the total size of the array (including channels) use n_elements.
In [8]:
print('Lenna n_dims : {}'.format(lenna.n_dims))
print('Lenna n_channels : {}'.format(lenna.n_channels))
print('Lenna n_pixels : {}'.format(lenna.n_pixels))
print('Lenna n_elements: {}'.format(lenna.n_elements))
view
Images can be visualized with the view() method.
In [9]:
takeo.view();
The view() method has several options, such as render_axes, axes_font_weight, alpha, figure_size and others.
In [10]:
# render lenna with axes and transparency!
lenna.view(render_axes=True, axes_font_weight='bold', alpha=0.8, figure_size=(7, 5));
Use channels=x to inspect a single channel of the image...
In [11]:
# viewing Lenna's red channel...
lenna.view(channels=0);
...or more than one channel, e.g. channels=[1, 2]
In [12]:
# viewing Lenna's green and blue channels...
lenna.view(channels=[1, 2]);
crop
There are two cropping methods: crop_inplace(), which is inplace, and crop(), which returns the cropped image without damaging the instance it is called on. Both execute identical code paths. To crop we provide the minimum values per dimension where we want the crop to start, and the maximum values where we want the crop to end. For example, to crop Takeo from the centre down to the bottom corner, we could do:
In [13]:
takeo_cropped = takeo.crop(takeo.centre(), np.array(takeo.shape))
takeo_cropped.view();
rescale
Images can be rescaled by a scale factor with the rescale() method.
In [14]:
lenna_double = lenna.rescale(2.0)
print(lenna_double)
landmark support
Images can have landmarks attached to them - imported images automatically pick up any landmark files stored alongside them, and these can be visualized with view_landmarks().
In [15]:
breakingbad = mio.import_builtin_asset('breakingbad.jpg')
breakingbad.view_landmarks(group='PTS');
In [16]:
print(breakingbad.landmarks)
There are two convenient methods for cropping an image around its landmarks - one takes a boundary argument...
In [17]:
bb = breakingbad.crop_to_landmarks(boundary=20)
# note that this method is smart enough to not stray outside the boundary of the image
bb.view_landmarks(group='PTS', marker_size=10, marker_edge_colour='k', marker_face_colour='w', marker_edge_width=2);
...and the other a proportion value.
In [18]:
bb = breakingbad.crop_to_landmarks_proportion(0.3)
# note that this method is smart enough to not stray outside the boundary of the image
bb.view_landmarks(group='PTS', render_numbering=True, numbers_font_size=14, numbers_font_weight='bold');
Note that view_landmarks() has lots of rendering options regarding the axes, numbers, lines, markers and legend.
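For example, here is a sketch exercising a few more of those options (keyword names such as render_legend and line_colour are assumptions, following the pattern of the rendering kwargs shown above):
In [ ]:
# a sketch of further view_landmarks rendering options (keyword names assumed)
breakingbad.view_landmarks(group='PTS', render_legend=True,
                           render_lines=True, line_colour='r',
                           marker_style='s', marker_size=8);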
The first concrete Image subclass we will look at is BooleanImage. This is an n-dimensional image with a single channel per pixel. The datatype of this image is np.bool. First, remember that BooleanImage is a subclass of Image and so all of the above features apply again.
In [19]:
from menpo.image import BooleanImage
random_seed = np.random.random(lenna.shape) # shape doesn't include channel - and that's what we want
random_mask = BooleanImage(random_seed > 0.5)
print("the mask's shape is as expected: {}".format(random_mask.shape))
print("the channel has been added to the mask's pixel's shape for us: {}".format(random_mask.pixels.shape))
random_mask.view();
Note that the constructor for BooleanImage doesn't require you to pass in the redundant channel axis - it's created for you.
init_blank()
A blank image of a given shape can be created with the init_blank() classmethod. You can rely on this existing on every concrete Image class.
In [20]:
all_true_mask = BooleanImage.init_blank((120, 240))
all_false_mask = BooleanImage.init_blank((120, 240), fill=False)
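The same classmethod exists on the other concrete image classes too - a quick sketch (assuming init_blank on Image also accepts n_channels and fill keyword arguments):
In [ ]:
from menpo.image import Image
# a blank 3-channel Image (keyword names assumed)
blank_image = Image.init_blank((120, 240), n_channels=3, fill=0.5)
print(blank_image)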
metrics
BooleanImage provides simple counts and proportions of its True and False values.
In [21]:
print('n_pixels on random_mask: {}'.format(random_mask.n_pixels))
print('n_true pixels on random_mask: {}'.format(random_mask.n_true()))
print('n_false pixels on random_mask: {}'.format(random_mask.n_false()))
print('proportion_true on random_mask: {:.3}'.format(random_mask.proportion_true()))
print('proportion_false on random_mask: {:.3}'.format(random_mask.proportion_false()))
true_indices/false_indices
BooleanImage has functionality that aids in the use of the class as a mask to another image. The indices properties give you access to the coordinates of the True and False values as if the mask had been flattened.
In [22]:
from copy import deepcopy
small_amount_true = deepcopy(all_false_mask)
small_amount_true.pixels[0, 4, 8] = True
small_amount_true.pixels[0, 15, 56] = True
small_amount_true.pixels[0, 0, 4] = True
print(small_amount_true.true_indices()) # note the ordering is incremental C ordered
print('The shape of true indices: {}'.format(small_amount_true.true_indices().shape))
print('The shape of false indices: {}'.format(small_amount_true.false_indices().shape))
indices
The indices() method returns the coordinates of all pixels, regardless of their value.
In [23]:
print('The shape of indices: {}'.format(small_amount_true.indices().shape))
# note that indices = true_indices + false_indices
mask
The raw boolean array (without the channel axis) is exposed at the mask property. This is used heavily in MaskedImage.
In [24]:
lenna_masked_pixels_flattened = lenna.pixels[0, random_mask.mask]
print(lenna_masked_pixels_flattened.shape)
# note we can only do this as random_mask is the same shape as lenna
print('Are Lenna and the random mask the same shape? {}'.format(lenna.shape == random_mask.shape))
In [25]:
print(random_mask)
print(lenna)
print(takeo)
The last Image subclass is MaskedImage. Note that all images imported through menpo.io are instances of Image, and you should manually convert them to MaskedImage instances if you wish. Just as you would expect, MaskedImages have a mask attached to them which augments their usual behavior.
mask
MaskedImages have a BooleanImage of appropriate size attached to them at the mask property. On construction, a mask can be specified via the mask kwarg (either a boolean ndarray or a BooleanImage instance). If nothing is provided, the mask is set to all True. A MaskedImage with an all-True mask behaves exactly as an Image - albeit with a performance penalty. An Image instance can be converted to a MaskedImage using the as_masked() method.
In [26]:
takeo_masked = takeo.as_masked()
print(takeo_masked.mask)
takeo_masked.mask.view(figure_size=(6, 4), render_axes=True);
breakingbad_masked = breakingbad.as_masked()
print(breakingbad_masked.mask)
breakingbad_masked.mask.view(new_figure=True, figure_size=(6, 4), render_axes=True);
constrain_mask_to_landmarks
The mask can be constrained to the convex hull of the image's landmarks (explained in more detail below).
In [27]:
bb_masked_constrained = breakingbad_masked.constrain_mask_to_landmarks()
bb_masked_constrained.mask.view();
bb_masked_constrained.view_landmarks(new_figure=True);
view behavior
By default, viewing a MaskedImage only renders the pixels inside the mask.
In [28]:
bb_masked_constrained.view();
Pass masked=False to see everything.
In [29]:
bb_masked_constrained.view(masked=False);
as_vector() / from_vector() behavior
The as_vector() and from_vector() methods on MaskedImages only operate on the True mask values, flattened.
In [30]:
print('bb_masked_constrained has {} pixels, but only {} are '
'masked.'.format(bb_masked_constrained.n_pixels, bb_masked_constrained.n_true_pixels()))
print('bb_masked_constrained has {} elements (3 x n_pixels)'.format(bb_masked_constrained.n_elements))
vectorized_bad = bb_masked_constrained.as_vector()
print('vector of bb_masked_constrained is of shape {}'.format(vectorized_bad.shape))
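Going the other way, from_vector() rebuilds an image from such a vector, only filling in the True region of the mask. A minimal sketch of the round trip (assuming from_vector accepts a vector of the same length that as_vector returned):
In [ ]:
# rebuild the image from its masked vector - a sketch of the round trip
rebuilt_bad = bb_masked_constrained.from_vector(vectorized_bad)
print(rebuilt_bad)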
constrain_mask_to_landmarks
Allows us to update the mask to equal the convex hull around some landmarks on the image. You can choose a particular group of landmarks (e.g. PTS) and then a specific label (e.g. perimeter). By default, if neither is provided (and if there is only one landmark group), all the landmarks are used to form a convex hull.
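Here is a sketch of passing an explicit landmark group (the group keyword argument is an assumption about the method's signature):
In [ ]:
# constrain the mask to the convex hull of a named landmark group
# (the 'group' keyword is assumed here)
bb_pts_constrained = breakingbad_masked.constrain_mask_to_landmarks(group='PTS')
bb_pts_constrained.mask.view();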
Finally, it is worth noting that most of the previous image information and visualization options (and others, like image saving) can be conveniently accessed from a specifically designed IPython Notebook widget by simply calling the view_widget() method on any Menpo Image object. Note that the widget functionality is provided by the menpowidgets project and should be installed separately using conda (conda install -c menpo menpowidgets).
The widget has several options related to channels, landmarks and the renderer itself.
In [31]:
breakingbad.view_widget()
Similarly, a list of images can be viewed using the visualize_images() widget. The widget allows you to animate through the images.
In [32]:
from menpowidgets import visualize_images
visualize_images([takeo, breakingbad_masked, lenna])