Softmax exercise

Complete and hand in this worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details, see the assignments page on the course website.

This exercise is analogous to the SVM exercise. You will:

  • implement a fully-vectorized loss function for the Softmax classifier
  • implement the fully-vectorized expression for its analytic gradient
  • check your implementation with numerical gradient
  • use a validation set to tune the learning rate and regularization strength
  • optimize the loss function with SGD
  • visualize the final learned weights
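
A quick refresher before you start: the softmax classifier exponentiates the raw class scores, normalizes them into probabilities, and the loss is the negative log-probability of the correct class. Below is a minimal sketch of that computation for a single example (the function name is illustrative, not part of the assignment API); subtracting the maximum score before exponentiating avoids numeric overflow without changing the probabilities.

import numpy as np

def softmax_cross_entropy_single(scores, correct_class):
    # scores: 1-D array of raw class scores; correct_class: integer label
    shifted = scores - np.max(scores)                  # numerical stability
    probs = np.exp(shifted) / np.sum(np.exp(shifted))  # class probabilities
    return -np.log(probs[correct_class])               # cross-entropy loss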

In [1]:
from __future__ import print_function

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

In [2]:
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.  
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
    
    # subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    mask = np.random.choice(num_training, num_dev, replace=False)
    X_dev = X_train[mask]
    y_dev = y_train[mask]
    
    # Preprocessing: reshape the image data into rows
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))
    X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
    
    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis = 0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image
    
    # add bias dimension and transform into columns
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
    
    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev


# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)


Train data shape:  (49000, 3073)
Train labels shape:  (49000,)
Validation data shape:  (1000, 3073)
Validation labels shape:  (1000,)
Test data shape:  (1000, 3073)
Test labels shape:  (1000,)
dev data shape:  (500, 3073)
dev labels shape:  (500,)

Softmax Classifier

Your code for this section will all be written inside cs231n/classifiers/softmax.py.


In [7]:
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.

from cs231n.classifiers.softmax import softmax_loss_naive
import time

# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))


loss: 2.330779
sanity check: 2.302585

Inline Question 1:

Why do we expect our loss to be close to -log(0.1)? Explain briefly.

Your answer: Fill this in
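
For orientation before the gradient check below, here is a minimal sketch of what a nested-loop softmax_loss_naive could look like. It assumes the (D, C) weight layout used above, the numpy import from the setup cell, and the same L2 regularization convention as the SVM exercise (loss term reg * sum(W * W), gradient term 2 * reg * W); it is an illustrative sketch, not the official solution.

def softmax_loss_naive(W, X, y, reg):
    # W: (D, C) weights; X: (N, D) data; y: (N,) integer labels; reg: L2 strength
    loss = 0.0
    dW = np.zeros_like(W)
    num_train, num_classes = X.shape[0], W.shape[1]
    for i in range(num_train):
        scores = X[i].dot(W)
        scores -= np.max(scores)                        # numerical stability
        probs = np.exp(scores) / np.sum(np.exp(scores))
        loss += -np.log(probs[y[i]])
        for j in range(num_classes):
            # d(loss_i)/dW[:, j] = (p_j - 1{j == y_i}) * x_i
            dW[:, j] += (probs[j] - (j == y[i])) * X[i]
    loss = loss / num_train + reg * np.sum(W * W)
    dW = dW / num_train + 2 * reg * W
    return loss, dW

The per-class term (p_j - 1{j == y_i}) * x_i is exactly what the numerical check in the next cell should confirm.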


In [18]:
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# Similar to the SVM case, do another gradient check with regularization.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)


numerical: -0.243426 analytic: -0.243426, relative error: 5.400341e-08
numerical: -3.670198 analytic: -3.670198, relative error: 5.260364e-09
numerical: 1.120211 analytic: 1.120210, relative error: 1.580881e-08
numerical: 3.075610 analytic: 3.075610, relative error: 1.679090e-09
numerical: -1.872674 analytic: -1.872675, relative error: 1.523375e-08
numerical: -1.561098 analytic: -1.561098, relative error: 1.554188e-08
numerical: -0.660366 analytic: -0.660366, relative error: 2.611285e-08
numerical: -3.457356 analytic: -3.457356, relative error: 1.943059e-08
numerical: -4.203186 analytic: -4.203186, relative error: 1.563455e-08
numerical: -0.236900 analytic: -0.236900, relative error: 4.959811e-08
numerical: 1.329285 analytic: 1.329285, relative error: 6.795195e-08
numerical: -0.650009 analytic: -0.650009, relative error: 2.876717e-08
numerical: -3.272960 analytic: -3.272960, relative error: 3.822477e-09
numerical: 0.249259 analytic: 0.249259, relative error: 1.046018e-07
numerical: -2.236934 analytic: -2.236934, relative error: 9.979275e-09
numerical: -0.702286 analytic: -0.702286, relative error: 1.330391e-09
numerical: -0.028009 analytic: -0.028009, relative error: 5.569374e-08
numerical: 3.579758 analytic: 3.579758, relative error: 4.694451e-09
numerical: 0.824018 analytic: 0.824018, relative error: 3.039356e-08
numerical: 1.892937 analytic: 1.892937, relative error: 1.680149e-09
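
As a reminder of what grad_check_sparse compares against: each numerical entry above is a centered finite difference of the loss at one randomly chosen coordinate of W. A minimal sketch of that idea, assuming the numpy import and the W, X_dev, y_dev, grad variables from the cells above (the helper name numeric_grad_at is illustrative):

def numeric_grad_at(f, W, index, h=1e-5):
    # Centered-difference estimate of d f / d W[index] for a scalar-valued f(W)
    old_value = W[index]
    W[index] = old_value + h
    fxph = f(W)                       # loss with W[index] nudged up
    W[index] = old_value - h
    fxmh = f(W)                       # loss with W[index] nudged down
    W[index] = old_value              # restore the original entry
    return (fxph - fxmh) / (2 * h)

# Compare against the analytic gradient at one random coordinate.
ix = tuple(np.random.randint(d) for d in W.shape)
numeric = numeric_grad_at(lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0], W, ix)
rel_error = abs(numeric - grad[ix]) / (abs(numeric) + abs(grad[ix]))
print('numeric: %f analytic: %f, relative error: %e' % (numeric, grad[ix], rel_error))

Relative errors around 1e-7 or smaller, as in the output above, are a strong sign that the analytic gradient is correct.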

In [24]:
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))

from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))

# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)


naive loss: 2.330779e+00 computed in 0.306717s
vectorized loss: 2.330779e+00 computed in 0.017724s
Loss difference: 0.000000
Gradient difference: 0.000000
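
For reference, a fully vectorized version computes all N rows of scores with one matrix multiply and forms the gradient from a single X.T.dot(...) product. The sketch below is illustrative (not the official solution) and uses the same shapes and regularization convention as the naive sketch earlier:

def softmax_loss_vectorized(W, X, y, reg):
    num_train = X.shape[0]
    scores = X.dot(W)                                   # (N, C) scores
    scores -= np.max(scores, axis=1, keepdims=True)     # numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    loss = -np.sum(np.log(probs[np.arange(num_train), y])) / num_train
    loss += reg * np.sum(W * W)

    dscores = probs.copy()
    dscores[np.arange(num_train), y] -= 1               # p_j - 1{j == y_i}
    dW = X.T.dot(dscores) / num_train + 2 * reg * W
    return loss, dW

The only per-example work left is the fancy indexing probs[np.arange(num_train), y], which picks out the probability assigned to each correct class.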

In [27]:
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        # Train a softmax classifier for this (learning rate, regularization) pair
        clf = Softmax()
        clf.train(X_train, y_train, learning_rate=lr, reg=reg,
                  num_iters=1000, verbose=True)
        y_train_pred = clf.predict(X_train)
        y_val_pred = clf.predict(X_val)
        train_acc = np.mean(y_train == y_train_pred)
        val_acc = np.mean(y_val == y_val_pred)
        results[(lr, reg)] = (train_acc, val_acc)
        # Keep the classifier with the best validation accuracy
        if val_acc > best_val:
            best_val = val_acc
            best_softmax = clf
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
    
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))
    
print('best validation accuracy achieved during cross-validation: %f' % best_val)


iteration 0 / 1000: loss 786.916200
iteration 100 / 1000: loss 288.993660
iteration 200 / 1000: loss 107.151616
iteration 300 / 1000: loss 40.582767
iteration 400 / 1000: loss 16.133427
iteration 500 / 1000: loss 7.210398
iteration 600 / 1000: loss 3.975678
iteration 700 / 1000: loss 2.748336
iteration 800 / 1000: loss 2.357780
iteration 900 / 1000: loss 2.211608
iteration 0 / 1000: loss 1537.610992
iteration 100 / 1000: loss 207.055282
iteration 200 / 1000: loss 29.503943
iteration 300 / 1000: loss 5.846908
iteration 400 / 1000: loss 2.596489
iteration 500 / 1000: loss 2.193284
iteration 600 / 1000: loss 2.119779
iteration 700 / 1000: loss 2.102660
iteration 800 / 1000: loss 2.125780
iteration 900 / 1000: loss 2.141603
iteration 0 / 1000: loss 784.791289
iteration 100 / 1000: loss 6.983516
iteration 200 / 1000: loss 2.100496
iteration 300 / 1000: loss 2.066497
iteration 400 / 1000: loss 2.089499
iteration 500 / 1000: loss 2.067634
iteration 600 / 1000: loss 2.175303
iteration 700 / 1000: loss 2.122115
iteration 800 / 1000: loss 2.097006
iteration 900 / 1000: loss 2.106875
iteration 0 / 1000: loss 1555.499840
iteration 100 / 1000: loss 2.227616
iteration 200 / 1000: loss 2.145034
iteration 300 / 1000: loss 2.166736
iteration 400 / 1000: loss 2.127339
iteration 500 / 1000: loss 2.137198
iteration 600 / 1000: loss 2.141441
iteration 700 / 1000: loss 2.115738
iteration 800 / 1000: loss 2.096106
iteration 900 / 1000: loss 2.128386
lr 1.000000e-07 reg 2.500000e+04 train accuracy: 0.322878 val accuracy: 0.335000
lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.301673 val accuracy: 0.314000
lr 5.000000e-07 reg 2.500000e+04 train accuracy: 0.330245 val accuracy: 0.352000
lr 5.000000e-07 reg 5.000000e+04 train accuracy: 0.297143 val accuracy: 0.306000
best validation accuracy achieved during cross-validation: 0.352000
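
Since the best setting above sits on the edge of the 2x2 grid (largest learning rate, smallest regularization), a second, finer pass is often worthwhile: sample learning rates and regularization strengths log-uniformly around the best coarse values and rerun the same training loop. The ranges below are illustrative, not tuned values.

# Illustrative follow-up ranges for a finer random search; reuse the loop above.
fine_learning_rates = 10 ** np.random.uniform(-7.0, -6.0, 5)
fine_regularization_strengths = 10 ** np.random.uniform(4.0, 4.7, 5)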

In [28]:
# Evaluate the best softmax classifier on the test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))


softmax on raw pixels final test set accuracy: 0.345000

In [29]:
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)

w_min, w_max = np.min(w), np.max(w)

classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    
    # Rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])



In [ ]: