In [1]:
import graphlab

Load a common image analysis dataset


In [2]:
image_train = graphlab.SFrame('image_train_data/')


[INFO] This non-commercial license of GraphLab Create is assigned to eroicaleo@yahoo.com and will expire on September 28, 2016. For commercial licensing options, visit https://dato.com/buy/.

[INFO] Start server at: ipc:///tmp/graphlab_server-820 - Server binary: /Users/yang/anaconda/lib/python2.7/site-packages/graphlab/unity_server - Server log: /tmp/graphlab_server_1447033135.log
[INFO] GraphLab Server Version: 1.6.1

In [3]:
image_test = graphlab.SFrame('image_test_data/')
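
Before exploring, it can help to confirm that the train and test SFrames were loaded with the same schema. The cell below is an added sketch (it was not part of the original session), assuming the standard SFrame API:

In [ ]:
# Added sanity check (not in the original run): confirm both splits
# were loaded with the same columns.
print image_train.column_names()
print image_test.column_names()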

Exploring the image data


In [4]:
graphlab.canvas.set_target('ipynb')

In [5]:
image_train['image'].show()
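
Besides viewing the images themselves, it is often useful to check how the four categories are distributed in the training data. A possible added cell, assuming the usual SArray sketch_summary API (not part of the original session):

In [ ]:
# Added exploration cell (not in the original run): summarize how many
# training examples each of the four labels has.
image_train['label'].sketch_summary()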


Train a classifier on the raw image pixels


In [6]:
raw_pixel_model = graphlab.logistic_classifier.create(image_train, target='label',
                                                     features=['image_array'])


PROGRESS: Creating a validation set from 5 percent of training data. This may take a while.
          You can set ``validation_set=None`` to disable validation tracking.

PROGRESS: Logistic regression:
PROGRESS: --------------------------------------------------------
PROGRESS: Number of examples          : 1911
PROGRESS: Number of classes           : 4
PROGRESS: Number of feature columns   : 1
PROGRESS: Number of unpacked features : 3072
PROGRESS: Number of coefficients      : 9219
PROGRESS: Starting L-BFGS
PROGRESS: --------------------------------------------------------
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | Iteration | Passes   | Step size | Elapsed Time | Training-accuracy | Validation-accuracy |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | 1         | 6        | 0.000013  | 3.336818     | 0.327577          | 0.308511            |
PROGRESS: | 2         | 8        | 1.000000  | 4.262746     | 0.362114          | 0.372340            |
PROGRESS: | 3         | 9        | 1.000000  | 4.789429     | 0.421769          | 0.393617            |
PROGRESS: | 4         | 10       | 1.000000  | 5.307778     | 0.445317          | 0.382979            |
PROGRESS: | 5         | 11       | 1.000000  | 5.841142     | 0.450026          | 0.382979            |
PROGRESS: | 6         | 12       | 1.000000  | 6.403698     | 0.443223          | 0.404255            |
PROGRESS: | 10        | 16       | 1.000000  | 8.548032     | 0.522763          | 0.478723            |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
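
The dimensions reported above are consistent with 32x32 RGB images: the single 'image_array' column unpacks into 32 * 32 * 3 = 3072 features, and with 4 classes and one reference class the model fits (3072 + 1) * 3 = 9219 coefficients. A small added check, assuming that standard multinomial parameterization (this cell is not part of the original notebook):

In [ ]:
# Added sanity check: the dimensions reported for the raw pixel model.
print 32 * 32 * 3            # 3072 unpacked features: 32x32 pixels x 3 color channels
print (32 * 32 * 3 + 1) * 3  # 9219 coefficients: (features + intercept) x (4 classes - 1)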

Make a prediction with the simple model based on raw pixels


In [7]:
image_test[0:3]['image'].show()



In [9]:
image_test[0:3]['label']


Out[9]:
dtype: str
Rows: 3
['cat', 'automobile', 'cat']

In [10]:
raw_pixel_model.predict(image_test[0:3])


Out[10]:
dtype: str
Rows: 3
['bird', 'cat', 'bird']
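
The raw pixel model misclassifies all three of these images (compare with the true labels in Out[9]). One way to see that at a glance is to put labels and predictions in a small SFrame; this comparison cell is an addition and was not part of the original session:

In [ ]:
# Added comparison cell: true labels next to the raw pixel predictions.
graphlab.SFrame({'label': image_test[0:3]['label'],
                 'raw_pixel_prediction': raw_pixel_model.predict(image_test[0:3])})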

Evaluating the raw pixel model on test data


In [12]:
raw_pixel_model.evaluate(image_test)


Out[12]:
{'accuracy': 0.47925, 'confusion_matrix': Columns:
 	target_label	str
 	predicted_label	str
 	count	int
 
 Rows: 16
 
 Data:
 +--------------+-----------------+-------+
 | target_label | predicted_label | count |
 +--------------+-----------------+-------+
 |     cat      |       cat       |  351  |
 |     bird     |       cat       |  170  |
 |     dog      |       dog       |  410  |
 |     cat      |       dog       |  281  |
 |     dog      |    automobile   |  109  |
 |  automobile  |       bird      |  100  |
 |     bird     |       bird      |  518  |
 |     bird     |       dog       |  166  |
 |     cat      |    automobile   |  170  |
 |     dog      |       bird      |  224  |
 +--------------+-----------------+-------+
 [16 rows x 3 columns]
 Note: Only the head of the SFrame is printed.
 You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.}
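
The evaluation only prints the head of the confusion matrix. As the note above suggests, print_rows can display all 16 rows; this follow-up cell is an addition (the variable name is just illustrative):

In [ ]:
# Added follow-up cell: keep the evaluation results and print the full
# 16-row confusion matrix instead of just its head.
raw_pixel_results = raw_pixel_model.evaluate(image_test)
raw_pixel_results['confusion_matrix'].print_rows(num_rows=16)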

Can we improve the model with deep features?


In [13]:
len(image_train)


Out[13]:
2005

In [15]:
# The deep features below were extracted with a pretrained ImageNet model and are
# already stored in this dataset, so the extraction step is left commented out.
# deep_learning_model = graphlab.load_model('imagenet_model')
# image_train['deep_features'] = deep_learning_model.extract_features(image_train)
image_train.head()


Out[15]:
+-----+----------------------+------------+-------------------------------------------------------+--------------------------------------------------------+
| id  | image                | label      | deep_features                                         | image_array                                            |
+-----+----------------------+------------+-------------------------------------------------------+--------------------------------------------------------+
| 24  | Height: 32 Width: 32 | bird       | [0.242871761322, 1.09545373917, 0.0, ...              | [73.0, 77.0, 58.0, 71.0, 68.0, 50.0, 77.0, 69.0, ...   |
| 33  | Height: 32 Width: 32 | cat        | [0.525087952614, 0.0, 0.0, 0.0, 0.0, 0.0, ...         | [7.0, 5.0, 8.0, 7.0, 5.0, 8.0, 5.0, 4.0, 6.0, 7.0, ... |
| 36  | Height: 32 Width: 32 | cat        | [0.566015958786, 0.0, 0.0, 0.0, 0.0, 0.0, ...         | [169.0, 122.0, 65.0, 131.0, 108.0, 75.0, ...           |
| 70  | Height: 32 Width: 32 | dog        | [1.12979578972, 0.0, 0.0, 0.778194487095, 0.0, ...    | [154.0, 179.0, 152.0, 159.0, 183.0, 157.0, ...         |
| 90  | Height: 32 Width: 32 | bird       | [1.71786928177, 0.0, 0.0, 0.0, 0.0, 0.0, ...          | [216.0, 195.0, 180.0, 201.0, 178.0, 160.0, ...         |
| 97  | Height: 32 Width: 32 | automobile | [1.57818555832, 0.0, 0.0, 0.0, 0.0, 0.0, ...          | [33.0, 44.0, 27.0, 29.0, 44.0, 31.0, 32.0, 45.0, ...   |
| 107 | Height: 32 Width: 32 | dog        | [0.0, 0.0, 0.220677852631, 0.0, ...                   | [97.0, 51.0, 31.0, 104.0, 58.0, 38.0, 107.0, 61.0, ... |
| 121 | Height: 32 Width: 32 | bird       | [0.0, 0.23753464222, 0.0, 0.0, 0.0, 0.0, ...          | [93.0, 96.0, 88.0, 102.0, 106.0, 97.0, 117.0, ...      |
| 136 | Height: 32 Width: 32 | automobile | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 7.5737862587, 0.0, ... | [35.0, 59.0, 53.0, 36.0, 56.0, 56.0, 42.0, 62.0, ...   |
| 138 | Height: 32 Width: 32 | bird       | [0.658935725689, 0.0, 0.0, 0.0, 0.0, 0.0, ...         | [205.0, 193.0, 195.0, 200.0, 187.0, 193.0, ...         |
+-----+----------------------+------------+-------------------------------------------------------+--------------------------------------------------------+
[10 rows x 5 columns]
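
Each entry of the 'deep_features' column is a long vector with many zero entries. A small added check of its length (not in the original run); it should come out to 4096, matching the unpacked feature count reported by the classifier in the next cell:

In [ ]:
# Added check: each deep feature vector should have 4096 entries.
len(image_train[0]['deep_features'])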

Given the deep features, let's train a classifier


In [17]:
deep_feature_model = graphlab.logistic_classifier.create(image_train, target='label', features=['deep_features'])


PROGRESS: Creating a validation set from 5 percent of training data. This may take a while.
          You can set ``validation_set=None`` to disable validation tracking.

PROGRESS: WARNING: Detected extremely low variance for feature(s) 'deep_features' because all entries are nearly the same.
Proceeding with model training using all features. If the model does not provide results of adequate quality, exclude the above mentioned feature(s) from the input dataset.
PROGRESS: Logistic regression:
PROGRESS: --------------------------------------------------------
PROGRESS: Number of examples          : 1908
PROGRESS: Number of classes           : 4
PROGRESS: Number of feature columns   : 1
PROGRESS: Number of unpacked features : 4096
PROGRESS: Number of coefficients      : 12291
PROGRESS: Starting L-BFGS
PROGRESS: --------------------------------------------------------
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | Iteration | Passes   | Step size | Elapsed Time | Training-accuracy | Validation-accuracy |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
PROGRESS: | 1         | 5        | 0.000131  | 2.522195     | 0.763103          | 0.711340            |
PROGRESS: | 2         | 9        | 0.250000  | 4.840162     | 0.772537          | 0.649485            |
PROGRESS: | 3         | 10       | 0.250000  | 5.584430     | 0.776205          | 0.659794            |
PROGRESS: | 4         | 11       | 0.250000  | 6.412836     | 0.784067          | 0.680412            |
PROGRESS: | 5         | 12       | 0.250000  | 7.154815     | 0.793501          | 0.670103            |
PROGRESS: | 6         | 13       | 0.250000  | 7.907898     | 0.797694          | 0.659794            |
PROGRESS: | 7         | 14       | 0.250000  | 8.761979     | 0.815514          | 0.690722            |
PROGRESS: | 8         | 15       | 0.250000  | 9.537302     | 0.840147          | 0.680412            |
PROGRESS: | 9         | 16       | 0.250000  | 10.423513    | 0.867925          | 0.701031            |
PROGRESS: | 10        | 17       | 0.250000  | 11.193860    | 0.885220          | 0.690722            |
PROGRESS: +-----------+----------+-----------+--------------+-------------------+---------------------+
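
As with the raw pixel model, 5 percent of the training data was held out for validation tracking. If you prefer to train on all 2005 images, the progress message notes that validation tracking can be disabled; a possible variant cell (not run in the original session, variable name illustrative):

In [ ]:
# Added variant (not executed here): train on the full training set by
# disabling the automatic validation split.
deep_feature_model_full = graphlab.logistic_classifier.create(
    image_train, target='label', features=['deep_features'],
    validation_set=None)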

Compute test data accuracy of the deep features model


In [18]:
deep_feature_model.evaluate(image_test)


Out[18]:
{'accuracy': 0.788, 'confusion_matrix': Columns:
 	target_label	str
 	predicted_label	str
 	count	int
 
 Rows: 16
 
 Data:
 +--------------+-----------------+-------+
 | target_label | predicted_label | count |
 +--------------+-----------------+-------+
 |  automobile  |       cat       |   10  |
 |     bird     |       dog       |   61  |
 |     cat      |       bird      |   80  |
 |  automobile  |       dog       |   7   |
 |     cat      |    automobile   |   36  |
 |     dog      |       bird      |   55  |
 |     bird     |       cat       |  103  |
 |     dog      |    automobile   |   18  |
 |     dog      |       dog       |  721  |
 |     cat      |       dog       |  226  |
 +--------------+-----------------+-------+
 [16 rows x 3 columns]
 Note: Only the head of the SFrame is printed.
 You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.}
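
Test accuracy jumps from roughly 0.479 with raw pixels to 0.788 with deep features. A short added cell (not part of the original notebook) to put the two numbers side by side:

In [ ]:
# Added comparison cell: test accuracy of the two models.
print 'raw pixels   :', raw_pixel_model.evaluate(image_test)['accuracy']
print 'deep features:', deep_feature_model.evaluate(image_test)['accuracy']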

Apply the deep features model to the first few images of the test data


In [19]:
deep_feature_model.predict(image_test[0:3])


Out[19]:
dtype: str
Rows: 3
['cat', 'automobile', 'cat']
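
These predictions match the true labels from Out[9] exactly, whereas the raw pixel model got all three wrong. A final added check (not in the original run):

In [ ]:
# Added check: the deep features predictions agree with the true labels from Out[9].
print list(deep_feature_model.predict(image_test[0:3])) == list(image_test[0:3]['label'])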

In [ ]: