Image features exercise

Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.

We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.

All of your work for this exercise will be done in this notebook.


In [1]:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

Load data

Similar to previous exercises, we will load CIFAR-10 data from disk.


In [2]:
from cs231n.features import color_histogram_hsv, hog_feature

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
  # Load the raw CIFAR-10 data
  cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
  X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
  
  # Subsample the data
  mask = range(num_training, num_training + num_validation)
  X_val = X_train[mask]
  y_val = y_train[mask]
  mask = range(num_training)
  X_train = X_train[mask]
  y_train = y_train[mask]
  mask = range(num_test)
  X_test = X_test[mask]
  y_test = y_test[mask]

  return X_train, y_train, X_val, y_val, X_test, y_test

X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()

Extract Features

For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.

Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section.

The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.


In [3]:
from cs231n.features import *

num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)

# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat

# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat

# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])


Done extracting features for 1000 / 49000 images
Done extracting features for 2000 / 49000 images
Done extracting features for 3000 / 49000 images
Done extracting features for 4000 / 49000 images
Done extracting features for 5000 / 49000 images
Done extracting features for 6000 / 49000 images
Done extracting features for 7000 / 49000 images
Done extracting features for 8000 / 49000 images
Done extracting features for 9000 / 49000 images
Done extracting features for 10000 / 49000 images
Done extracting features for 11000 / 49000 images
Done extracting features for 12000 / 49000 images
Done extracting features for 13000 / 49000 images
Done extracting features for 14000 / 49000 images
Done extracting features for 15000 / 49000 images
Done extracting features for 16000 / 49000 images
Done extracting features for 17000 / 49000 images
Done extracting features for 18000 / 49000 images
Done extracting features for 19000 / 49000 images
Done extracting features for 20000 / 49000 images
Done extracting features for 21000 / 49000 images
Done extracting features for 22000 / 49000 images
Done extracting features for 23000 / 49000 images
Done extracting features for 24000 / 49000 images
Done extracting features for 25000 / 49000 images
Done extracting features for 26000 / 49000 images
Done extracting features for 27000 / 49000 images
Done extracting features for 28000 / 49000 images
Done extracting features for 29000 / 49000 images
Done extracting features for 30000 / 49000 images
Done extracting features for 31000 / 49000 images
Done extracting features for 32000 / 49000 images
Done extracting features for 33000 / 49000 images
Done extracting features for 34000 / 49000 images
Done extracting features for 35000 / 49000 images
Done extracting features for 36000 / 49000 images
Done extracting features for 37000 / 49000 images
Done extracting features for 38000 / 49000 images
Done extracting features for 39000 / 49000 images
Done extracting features for 40000 / 49000 images
Done extracting features for 41000 / 49000 images
Done extracting features for 42000 / 49000 images
Done extracting features for 43000 / 49000 images
Done extracting features for 44000 / 49000 images
Done extracting features for 45000 / 49000 images
Done extracting features for 46000 / 49000 images
Done extracting features for 47000 / 49000 images
Done extracting features for 48000 / 49000 images

Train SVM on features

Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.


In [5]:
# Use the validation set to tune the learning rate and regularization strength

from cs231n.classifiers.linear_classifier import LinearSVM

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]

results = {}
best_val = -1
best_svm = None

pass
################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained classifer in best_svm. You might also want to play          #
# with different numbers of bins in the color histogram. If you are careful    #
# you should be able to get accuracy of near 0.44 on the validation set.       #
################################################################################
pass
for learning_rate in learning_rates:
    for regularization_strength in regularization_strengths:
        svm = LinearSVM()
        loss_hist = svm.train(X_train_feats, y_train, learning_rate=learning_rate, reg=regularization_strength,
                      num_iters=400, verbose=True)
        y_train_pred = svm.predict(X_train_feats)
        y_val_pred = svm.predict(X_val_feats)
        current_val = np.mean(y_val == y_val_pred)
        if current_val > best_val:
            best_val = current_val
            best_svm = svm
        results[(learning_rate, regularization_strength)] = (np.mean(y_train == y_train_pred),  current_val)
        
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy)
    
print 'best validation accuracy achieved during cross-validation: %f' % best_val


iteration 0 / 400: loss 82.668244
iteration 100 / 400: loss 79.778498
iteration 200 / 400: loss 76.999469
iteration 300 / 400: loss 74.342322
iteration 0 / 400: loss 781.561704
iteration 100 / 400: loss 526.632695
iteration 200 / 400: loss 355.851430
iteration 300 / 400: loss 241.406962
iteration 0 / 400: loss 8411.412038
iteration 100 / 400: loss 156.780581
iteration 200 / 400: loss 11.599411
iteration 300 / 400: loss 9.045744
iteration 0 / 400: loss 86.773988
iteration 100 / 400: loss 61.133068
iteration 200 / 400: loss 43.933105
iteration 300 / 400: loss 32.390354
iteration 0 / 400: loss 728.864476
iteration 100 / 400: loss 21.662442
iteration 200 / 400: loss 9.222616
iteration 300 / 400: loss 9.003914
iteration 0 / 400: loss 7928.404012
iteration 100 / 400: loss 8.999997
iteration 200 / 400: loss 8.999997
iteration 300 / 400: loss 8.999997
iteration 0 / 400: loss 88.881400
iteration 100 / 400: loss 10.406188
iteration 200 / 400: loss 9.024586
iteration 300 / 400: loss 9.000215
iteration 0 / 400: loss 792.186446
iteration 100 / 400: loss 8.999980
iteration 200 / 400: loss 8.999976
iteration 300 / 400: loss 8.999975
iteration 0 / 400: loss 7635.473855
iteration 100 / 400: loss 7635.479409
iteration 200 / 400: loss 7635.408338
iteration 300 / 400: loss 7635.430304
lr 1.000000e-09 reg 1.000000e+05 train accuracy: 0.111204 val accuracy: 0.119000
lr 1.000000e-09 reg 1.000000e+06 train accuracy: 0.116551 val accuracy: 0.123000
lr 1.000000e-09 reg 1.000000e+07 train accuracy: 0.098224 val accuracy: 0.089000
lr 1.000000e-08 reg 1.000000e+05 train accuracy: 0.124388 val accuracy: 0.127000
lr 1.000000e-08 reg 1.000000e+06 train accuracy: 0.253286 val accuracy: 0.280000
lr 1.000000e-08 reg 1.000000e+07 train accuracy: 0.404204 val accuracy: 0.402000
lr 1.000000e-07 reg 1.000000e+05 train accuracy: 0.404306 val accuracy: 0.404000
lr 1.000000e-07 reg 1.000000e+06 train accuracy: 0.384347 val accuracy: 0.370000
lr 1.000000e-07 reg 1.000000e+07 train accuracy: 0.093408 val accuracy: 0.106000
best validation accuracy achieved during cross-validation: 0.404000

In [6]:
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy


0.413

In [7]:
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".

examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()


Inline question 1:

Describe the misclassification results that you see. Do they make sense?

The mistaken result are somewhat related to the correct class template. For the plane class, many picture with monochromatic background and long object body. Trucks are often mistaken as cars, and birds class results contain images with green backgrounds. Animals are mistaken with each other more often, so as human-made transportation.

Neural Network on image features

Earlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.

For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.


In [8]:
print X_train_feats.shape


(49000, 155)

In [12]:
from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10

net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.                                              #
################################################################################
pass
for learning_rate in np.arange(1e-4, 1e-3, 4e-4):
    for num_iters in [1000, 1500]:
        for reg in [.1, .3, .5]:
             net = TwoLayerNet(input_dim, hidden_dim, num_classes)
             stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                            num_iters=num_iters, batch_size=200,
                            learning_rate=learning_rate, learning_rate_decay=0.95,
                            reg=reg, verbose=False)
            # Predict on the validation set
             val_acc = (net.predict(X_val_feats) == y_val).mean()
             print 'Validation accuracy: ', val_acc
             if val_acc > best_val:
                 best_val = val_acc
                 best_net = net
################################################################################
#                              END OF YOUR CODE                                #
################################################################################


Validation accuracy:  0.079
Validation accuracy:  0.079
Validation accuracy:  0.098
Validation accuracy:  0.079
Validation accuracy:  0.078
Validation accuracy:  0.096
Validation accuracy:  0.078
Validation accuracy:  0.119
Validation accuracy:  0.102
Validation accuracy:  0.087
Validation accuracy:  0.079
Validation accuracy:  0.087
Validation accuracy:  0.102
Validation accuracy:  0.087
Validation accuracy:  0.098
Validation accuracy:  0.078
Validation accuracy:  0.113
Validation accuracy:  0.078

In [13]:
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.

test_acc = (net.predict(X_test_feats) == y_test).mean()
print test_acc


0.09

Bonus: Design your own features!

You have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.

For bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.

Bonus: Do something extra!

Use the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!