In [2]:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline

Extract Images

Included in these workshop materials is a compressed file ("data.tar.gz") containing the images that we'll be classifying today. Once you extract this file, you should have a directory called "data" containing the following subdirectories:

Directory   Contents
I           Contains rectangle tag images
O           Contains circle tag images
Q           Contains blank tag images
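
If you'd rather do the extraction from Python than from your shell or file manager, the standard library's tarfile module works. Here's a minimal sketch, assuming "data.tar.gz" is in your current working directory:


In [ ]:
import tarfile

# Extract the archive into the current directory, producing the "data" folder.
with tarfile.open('data.tar.gz', 'r:gz') as archive:
    archive.extractall()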

Feel free to have a look through these directories, and we'll show you how to load these images into Python using OpenCV next.

Reading Images

We'll now use OpenCV's "imread" function to load one image of each tag type into Python, and then use Matplotlib to plot the images:


In [6]:
rect_image = cv2.imread('data/I/27.png', cv2.IMREAD_GRAYSCALE)
circle_image = cv2.imread('data/O/11527.png', cv2.IMREAD_GRAYSCALE)
queen_image = cv2.imread('data/Q/18027.png', cv2.IMREAD_GRAYSCALE)

plt.figure(figsize = (10, 7))
plt.title('Rectangle Tag')
plt.axis('off')
plt.imshow(rect_image,  cmap = cm.Greys_r)

plt.figure(figsize = (10, 7))
plt.title('Circle Tag')
plt.axis('off')
plt.imshow(circle_image,  cmap = cm.Greys_r)

plt.figure(figsize = (10, 7))
plt.title('Queen Tag')
plt.axis('off')
plt.imshow(queen_image,  cmap = cm.Greys_r)


Out[6]:
<matplotlib.image.AxesImage at 0x11702df98>

Image Properties

One of the really useful things about using OpenCV to manipulate images in Python is that all images are treated as NumPy arrays. This means we can use NumPy's functions to manipulate and understand the data we're working with. To demonstrate this, we'll use NumPy's "shape" and "dtype" attributes to take a closer look at the rectangular tag image we just read in:


In [8]:
print(rect_image.shape)
print(rect_image.dtype)


(24, 24)
uint8

This tells us that the image is 24x24 pixels in size, and that it stores its values as unsigned 8-bit integers (whole numbers from 0 to 255). The details of this datatype aren't especially relevant to the lesson; the main point is that it's extremely important to double-check the size and structure of your data. Let's do the same thing for the circular tag image too:


In [9]:
print(circle_image.shape)
print(circle_image.dtype)


(24, 24)
uint8

This image has the same shape and datatype, which is good. When you're working with your own datasets in the future, it's well worth writing a small routine to check the values and structure of your data so that subtle bugs don't creep into your analysis.
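
As an illustration, here's a minimal sketch of such a check; the function name and the expected shape and dtype are our own choices for this dataset rather than part of the workshop materials:


In [ ]:
def check_image(image, expected_shape = (24, 24), expected_dtype = np.uint8):
    # cv2.imread returns None when a file can't be read, so catch that first.
    assert image is not None, 'image failed to load - check the file path'
    assert image.shape == expected_shape, 'unexpected shape: %s' % (image.shape,)
    assert image.dtype == expected_dtype, 'unexpected dtype: %s' % image.dtype

for img in (rect_image, circle_image, queen_image):
    check_image(img)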

Feature Engineering

When people think of machine learning, the first thing that comes to mind tends to be the fancy algorithms that will train the computer to solve your problem. These are important, of course, but in practice the way you process the data before feeding it into a machine learning algorithm is often where you'll spend most of your time, and it frequently has the biggest effect on the accuracy of your results.

Now, when most people think of the features in this kind of data, they assume the feature is the image itself:


In [10]:
plt.figure(figsize = (10, 7))
plt.title('Rectangle Tag')
plt.axis('off')
plt.imshow(rect_image,  cmap = cm.Greys_r)


Out[10]:
<matplotlib.image.AxesImage at 0x1179c2668>

In fact, that's not actually the case. For this dataset, the features are the individual pixel values that make up each image - those are the values we'll be training the machine learning algorithm on:


In [13]:
print(rect_image)


[[ 42  40  42  46  53  56  50  46  70  79  84  84  97 104 108 115 100  88
   87  94  73  63  53  38]
 [ 43  36  37  49  59  71  63  75  86  95 121 137 168 193 196 184 162 128
  100 105  85  75  66  61]
 [ 37  29  35  51  65  87  88 118 127 132 174 197 225 238 240 231 219 195
  162 165 123  94  73  59]
 [ 26  35  46  46  71 100 117 162 187 202 228 232 236 238 242 243 241 236
  229 212 170 116  77  58]
 [ 37  39  58  56  89 130 156 207 233 240 245 242 244 244 241 241 244 241
  244 238 227 175 138 110]
 [ 47  65  87  93 125 169 215 240 243 245 244 243 244 244 243 233 214 220
  240 234 236 227 193 166]
 [ 36  79 111 109 159 209 240 246 246 245 245 244 240 239 239 225 173 188
  234 236 238 239 227 200]
 [ 38  81 111 143 207 240 247 245 242 242 244 238 228 194 178 160 132 142
  222 244 243 244 242 212]
 [ 45 102 136 164 223 242 244 244 242 241 239 217 187 151 118 109 100 121
  193 235 241 244 244 224]
 [ 68 118 146 215 243 242 244 245 243 234 189 149 128 115 111  97  90 103
  149 221 241 243 243 229]
 [ 90 132 162 239 248 244 245 238 222 194 144 105  82  89 112  97  87  94
  132 196 237 243 243 238]
 [ 89 148 223 245 239 216 191 169 143 129 106  67  58  85 111  99  84  87
  123 172 214 236 244 239]
 [115 173 222 237 212 175 145 128  97  93  96  72  65  81  94  88  82  94
  117 150 197 233 240 233]
 [122 192 237 234 199 169 132 115  85  79  82  77  68  66  73  90 116 123
  132 179 221 235 228 199]
 [135 203 244 241 232 200 142 124  96  84  72  73  71  79 113 134 166 181
  208 223 233 234 220 186]
 [143 205 240 240 243 239 193 140 109  89  89 113 113 108 125 178 227 237
  242 244 242 236 215 162]
 [131 192 226 228 238 233 188 135 116 114 104 138 141 134 160 219 238 244
  244 245 241 224 192 139]
 [ 97 148 189 217 236 229 191 130 130 122  98 147 168 195 232 243 242 240
  241 243 242 214 164 104]
 [ 76 128 192 203 234 242 235 164 108 102 137 200 237 241 236 245 241 238
  245 246 244 207 151 102]
 [ 64 100 170 188 234 247 240 159 111 134 212 234 242 243 241 242 237 237
  235 235 221 168 116  84]
 [ 60  77 103 131 177 226 220 174 166 196 239 242 241 242 241 236 233 214
  180 175 156 111  72  54]
 [ 65  85  92 100 123 167 200 188 188 193 206 236 248 249 244 222 185 151
  136 118 116  85  65  54]
 [ 59  64  66  73 107 143 157 143 142 149 175 198 214 221 207 167 138 127
  104 102  93  71  57  44]
 [ 54  54  51  63  78  87  84  85  81  92 132 157 160 160 155 118 116 107
   73  75  65  56  46  18]]
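
Since each 24x24 image is just a grid of numbers, we can unroll it into a single row of 576 values - one feature per pixel. We'll do exactly this later before training; here's a quick sketch using the image above:


In [ ]:
# Unroll the 24x24 pixel grid into a 1D feature vector of length 576.
flat_features = rect_image.flatten()
print(flat_features.shape)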

So what can we do to manipulate the features in our dataset to improve our results? We'll explore three methods to achieve this:

  1. Image smoothing
  2. Modifying brightness
  3. Modifying contrast

Techniques like image smoothing can be useful for improving the features you train the machine learning algorithm on, as they can eliminate some of the noise in the image that might otherwise confuse the classifier.

Smoothing

Image smoothing is another name for blurring the image. It involves passing a small rectangular window (called a kernel) over the image and replacing each pixel with a new value computed from its surrounding pixels.

As part of this exercise, we'll explore 3 different smoothing techniques:

Smoothing Method   Explanation
Mean               Replaces each pixel with the mean value of the surrounding pixels
Median             Replaces each pixel with the median value of the surrounding pixels
Gaussian           Replaces each pixel with a weighted average of the surrounding pixels, with weights given by a Gaussian distribution

In [ ]:
# Mean filter with a 5x5 kernel
mean_smoothed = cv2.blur(rect_image, (5, 5))
# Median filter; the aperture size is a single odd integer
median_smoothed = cv2.medianBlur(rect_image, 5)
# Gaussian filter; a sigma of 0 tells OpenCV to derive it from the kernel size
gaussian_smoothed = cv2.GaussianBlur(rect_image, (5, 5), 0)
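
To make the idea of a kernel concrete: the mean filter above is equivalent to convolving the image with a 5x5 kernel in which every weight is 1/25. This equivalence check is our own addition using OpenCV's filter2D, not part of the original materials:


In [ ]:
# A 5x5 averaging kernel: every neighbouring pixel contributes equally.
kernel = np.ones((5, 5), np.float32) / 25
manual_mean = cv2.filter2D(rect_image, -1, kernel)

# Compare with the built-in mean filter; any difference should be at most rounding.
print(np.abs(manual_mean.astype(int) - mean_smoothed.astype(int)).max())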

Feel free to play with the different parameters of these smoothing operations. We'll now write some code to place the original image next to its smoothed counterparts so we can compare them:


In [17]:
mean_compare = np.hstack((rect_image, mean_smoothed))
median_compare = np.hstack((rect_image, median_smoothed))
gaussian_compare = np.hstack((rect_image, gaussian_smoothed))

plt.figure(figsize = (15, 12))
plt.title('Mean')
plt.axis('off')
plt.imshow(mean_compare, cmap = cm.Greys_r) 

plt.figure(figsize = (15, 12))
plt.title('Median')
plt.axis('off')
plt.imshow(median_compare, cmap = cm.Greys_r)

plt.figure(figsize = (15, 12))
plt.title('Gaussian')
plt.axis('off')
plt.imshow(gaussian_compare, cmap = cm.Greys_r)


Out[17]:
<matplotlib.image.AxesImage at 0x11b1c4080>

Brightness and Contrast

Modifying the brightness and contrast of our images is a surprisingly simple task. To increase or decrease the brightness, you add to or subtract from every pixel value in the image. To increase the contrast, you multiply the pixel values by a number larger than one; to decrease it, you multiply them by a number less than one. One caveat: since our pixels are unsigned 8-bit integers, the results need to be clipped to the 0-255 range rather than allowed to wrap around. Let's see how this works:


In [ ]:
# Work in a wider integer type so the arithmetic can't wrap around,
# then clip back to the valid 0-255 range and convert to uint8.
as_int = rect_image.astype(np.int32)
increase_contrast = np.clip(as_int * 3, 0, 255).astype(np.uint8)
decrease_contrast = np.clip(as_int * 0.5, 0, 255).astype(np.uint8)
increase_brightness = np.clip(as_int + 30, 0, 255).astype(np.uint8)
decrease_brightness = np.clip(as_int - 30, 0, 255).astype(np.uint8)

plt.figure(figsize = (10, 7))
plt.title('Decreased Contrast')
plt.axis('off')
plt.imshow(decrease_contrast, cmap = cm.Greys_r)
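
Why the clipping? Unsigned 8-bit values silently wrap around when they overflow, which would turn bright pixels dark (and dark pixels bright) instead of saturating at the ends of the range. A quick illustration of the failure mode we're avoiding:


In [ ]:
# Naive uint8 arithmetic wraps: 250 + 30 = 280, which becomes 280 - 256 = 24.
pixel = np.array([250], dtype=np.uint8)
print(pixel + 30)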

Visualising Data

Before training a classifier, it's important to visualise your data to get a feel for how separable the classes actually are. Two useful tools here are PCA (Principal Component Analysis), an unsupervised technique that projects the data onto the directions of greatest variance, and LDA (Linear Discriminant Analysis), a supervised technique that projects the data onto the directions that best separate the known classes. Unsupervised clustering methods such as k-means are another option.

Splitting Training/Test Data

To evaluate a classifier honestly, we split the data into a training set that the algorithm learns from and a test set that it never sees until evaluation time; otherwise we'd have no way of knowing whether it generalises or has simply memorised the training images.

SVM Classification

Finally, we'll train a Support Vector Machine (SVM), a classifier that looks for the boundary separating the classes with the largest possible margin.

The code below, from the notebook talk, walks through all of these steps:


In [ ]:
import cv2
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import glob
%matplotlib inline

def read_gray_img(img_loc):
    # Load the image at img_loc from disk in grayscale mode.
    image = cv2.imread(img_loc, cv2.IMREAD_GRAYSCALE)
    return image

def flatten_image(roi):
    # Unroll a 2D image into a 1D feature vector (one value per pixel).
    flat_roi = roi.flatten()
    return flat_roi

In [ ]:
tags = []
classify = []

# Collect the file paths for each tag type and assign a numeric class label:
# 1 = rectangle (I), 2 = circle (O), 3 = blank/queen (Q).
tags.extend(glob.glob('data/I/*.png'))
classify.extend(len(glob.glob('data/I/*.png')) * [1])
tags.extend(glob.glob('data/O/*.png'))
classify.extend(len(glob.glob('data/O/*.png')) * [2])
tags.extend(glob.glob('data/Q/*.png'))
classify.extend(len(glob.glob('data/Q/*.png')) * [3])

# Read every image in grayscale so read_tags lines up with the labels in classify.
read_tags = list(map(read_gray_img, tags))

plt.figure()
plt.imshow(read_tags[0], cmap = cm.Greys_r)
plt.figure()
plt.imshow(read_tags[108], cmap = cm.Greys_r)
plt.figure()
plt.imshow(read_tags[211], cmap = cm.Greys_r)

In [ ]:
tag_flat = list(map(flatten_image, read_tags))
X = np.array(tag_flat)
print(X.shape)
y = np.array(classify)
print(y.shape)

In [ ]:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=4)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)

In [ ]:
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
plt.figure(figsize = (35, 20))
plt.scatter(X_r[:, 0], X_r[:, 1], c=y, s=200)

lda = LDA(n_components=2)
lda = lda.fit(X_train, y_train)
X_lda = lda.transform(X_train)
Z = lda.transform(X_test)
plt.figure(figsize = (35, 20))
plt.scatter(X_lda[:, 0], X_lda[:, 1], c=y_train, s=200)

In [ ]:
from sklearn import svm
from sklearn import metrics

clf = svm.SVC(gamma=0.001, C=10)
clf.fit(X_lda, y_train)
y_pred = clf.predict(Z)

print(metrics.accuracy_score(y_test, y_pred))
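
A single accuracy number can hide which tag types are being confused with one another. As an optional extra (not part of the original talk code), a confusion matrix breaks the errors down per class:


In [ ]:
# Rows are the true classes (1 = rectangle, 2 = circle, 3 = blank/queen),
# columns are the predicted classes.
print(metrics.confusion_matrix(y_test, y_pred))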