Softmax exercise

Complete and hand in this worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details, see the assignments page on the course website.

This exercise is analogous to the SVM exercise. You will:

  • implement a fully-vectorized loss function for the Softmax classifier
  • implement the fully-vectorized expression for its analytic gradient
  • check your implementation with numerical gradient
  • use a validation set to tune the learning rate and regularization strength
  • optimize the loss function with SGD
  • visualize the final learned weights

In [1]:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

In [2]:
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
  """
  Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
  it for the linear classifier. These are the same steps as we used for the
  SVM, but condensed to a single function.  
  """
  # Load the raw CIFAR-10 data
  cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
  X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
  
  # subsample the data
  mask = range(num_training, num_training + num_validation)
  X_val = X_train[mask]
  y_val = y_train[mask]
  mask = range(num_training)
  X_train = X_train[mask]
  y_train = y_train[mask]
  mask = range(num_test)
  X_test = X_test[mask]
  y_test = y_test[mask]
  mask = np.random.choice(num_training, num_dev, replace=False)
  X_dev = X_train[mask]
  y_dev = y_train[mask]
  
  # Preprocessing: reshape the image data into rows
  X_train = np.reshape(X_train, (X_train.shape[0], -1))
  X_val = np.reshape(X_val, (X_val.shape[0], -1))
  X_test = np.reshape(X_test, (X_test.shape[0], -1))
  X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
  
  # Normalize the data: subtract the mean image
  mean_image = np.mean(X_train, axis = 0)
  X_train -= mean_image
  X_val -= mean_image
  X_test -= mean_image
  X_dev -= mean_image
  
  # add a bias dimension (an extra column of ones) to each data matrix
  X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
  X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
  X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
  X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
  
  return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev


# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
print 'dev data shape: ', X_dev.shape
print 'dev labels shape: ', y_dev.shape


Train data shape:  (49000L, 3073L)
Train labels shape:  (49000L,)
Validation data shape:  (1000L, 3073L)
Validation labels shape:  (1000L,)
Test data shape:  (1000L, 3073L)
Test labels shape:  (1000L,)
dev data shape:  (500L, 3073L)
dev labels shape:  (500L,)

Softmax Classifier

Your code for this section will all be written inside cs231n/classifiers/softmax.py.
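
For reference, the (unregularized) softmax loss for a single example x_i with label y_i is the cross-entropy

    L_i = -log( exp(f_{y_i}) / sum_j exp(f_j) ),    where f = x_i.dot(W) are the class scores,

and the total loss is the average of L_i over the training examples plus the regularization penalty on W. Since exp can easily overflow, implementations typically shift the scores by their maximum (which does not change the probabilities) before exponentiating.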


In [3]:
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.

from cs231n.classifiers.softmax import softmax_loss_naive
import time

# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As a rough sanity check, our loss should be something close to -log(0.1).
print 'loss: %f' % loss
print 'sanity check: %f' % (-np.log(0.1))


loss: 2.319763
sanity check: 2.302585
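
The cell above assumes softmax_loss_naive(W, X, y, reg) returns the pair (loss, dW). One possible loop-based implementation is sketched below; it is only a sketch (not necessarily the exact code behind the numbers in this worksheet), and the regularization convention (the 0.5 * reg factor here) should match whatever you used for the SVM.

def softmax_loss_naive(W, X, y, reg):
  """
  Naive softmax loss and gradient using explicit loops.
  W: (D, C) weights; X: (N, D) data; y: (N,) labels in 0..C-1.
  """
  loss = 0.0
  dW = np.zeros_like(W)
  num_train, num_classes = X.shape[0], W.shape[1]
  for i in xrange(num_train):
    scores = X[i].dot(W)
    scores -= np.max(scores)                      # shift scores for numerical stability
    probs = np.exp(scores) / np.sum(np.exp(scores))
    loss += -np.log(probs[y[i]])
    for j in xrange(num_classes):
      # gradient of the per-example loss w.r.t. column j of W
      dW[:, j] += (probs[j] - (j == y[i])) * X[i]
  loss = loss / num_train + 0.5 * reg * np.sum(W * W)
  dW = dW / num_train + reg * W
  return loss, dW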

Inline Question 1:

Why do we expect our loss to be close to -log(0.1)? Explain briefly.

Your answer: With small random weights the class scores are all close to zero, so the softmax probabilities are roughly uniform over the 10 classes. The probability assigned to the correct class is therefore about 0.1, giving a loss of about -log(0.1).
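
For the gradient implemented and checked in the next cell, the key identity is that the derivative of the per-example softmax loss with respect to score f_j is

    dL_i/df_j = p_j - 1(j == y_i),    where p_j = exp(f_j) / sum_k exp(f_k),

so each example contributes (p_j - 1(j == y_i)) * x_i to column j of dW, and the regularization term adds its own gradient on top.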


In [4]:
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# Similar to the SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)


numerical: -1.595254 analytic: -1.595254, relative error: 2.976931e-08
numerical: 0.654537 analytic: 0.654537, relative error: 5.389168e-08
numerical: 0.353732 analytic: 0.353732, relative error: 1.652855e-08
numerical: -0.879983 analytic: -0.879983, relative error: 4.808942e-08
numerical: 2.604907 analytic: 2.604907, relative error: 3.086584e-08
numerical: 0.480543 analytic: 0.480543, relative error: 1.168150e-07
numerical: -0.470106 analytic: -0.470106, relative error: 5.136744e-08
numerical: 0.294344 analytic: 0.294344, relative error: 2.115843e-07
numerical: 1.927960 analytic: 1.927960, relative error: 2.385262e-08
numerical: -0.827883 analytic: -0.827883, relative error: 2.114048e-09
numerical: -2.052653 analytic: -2.052653, relative error: 2.471790e-09
numerical: 1.668631 analytic: 1.668631, relative error: 3.582667e-08
numerical: -1.597597 analytic: -1.597597, relative error: 1.104140e-08
numerical: 1.590456 analytic: 1.590456, relative error: 3.098170e-08
numerical: -1.132914 analytic: -1.132914, relative error: 3.057214e-08
numerical: -2.612797 analytic: -2.612797, relative error: 3.331214e-09
numerical: -1.684667 analytic: -1.684667, relative error: 1.191162e-08
numerical: -0.097458 analytic: -0.097458, relative error: 2.151376e-07
numerical: -4.199895 analytic: -4.199895, relative error: 1.685242e-09
numerical: -1.240784 analytic: -1.240785, relative error: 4.772572e-08
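
As an extra, self-contained sanity check (independent of grad_check_sparse), a centered-difference estimate for a few individual entries of W can be compared against the analytic gradient directly. This is only a sketch of that idea, re-computing the reg=0.0 loss and gradient so the two quantities match:

# Hand-rolled centered-difference check on a few random entries of W.
h = 1e-5
loss0, grad0 = softmax_loss_naive(W, X_dev, y_dev, 0.0)
for _ in xrange(3):
  ix = (np.random.randint(W.shape[0]), np.random.randint(W.shape[1]))
  Wp, Wm = W.copy(), W.copy()
  Wp[ix] += h
  Wm[ix] -= h
  g_num = (softmax_loss_naive(Wp, X_dev, y_dev, 0.0)[0]
           - softmax_loss_naive(Wm, X_dev, y_dev, 0.0)[0]) / (2 * h)
  print 'entry %s: numerical %f, analytic %f' % (ix, g_num, grad0[ix])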

In [5]:
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'naive loss: %e computed in %fs' % (loss_naive, toc - tic)

from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)

# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'Loss difference: %f' % np.abs(loss_naive - loss_vectorized)
print 'Gradient difference: %f' % grad_difference


naive loss: 2.319763e+00 computed in 0.131000s
vectorized loss: 2.319763e+00 computed in 0.000000s
Loss difference: 0.000000
Gradient difference: 0.000000
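
For comparison, a fully vectorized implementation needs no Python loops at all. The sketch below assumes the same (W, X, y, reg) -> (loss, dW) contract and the same regularization convention as the naive version; it is not necessarily the exact code that produced the timings above.

def softmax_loss_vectorized(W, X, y, reg):
  """
  Vectorized softmax loss and gradient.
  W: (D, C) weights; X: (N, D) data; y: (N,) labels in 0..C-1.
  """
  num_train = X.shape[0]
  scores = X.dot(W)                                  # (N, C)
  scores -= np.max(scores, axis=1, keepdims=True)    # numerical stability
  exp_scores = np.exp(scores)
  probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
  loss = -np.sum(np.log(probs[np.arange(num_train), y])) / num_train
  loss += 0.5 * reg * np.sum(W * W)
  dscores = probs.copy()
  dscores[np.arange(num_train), y] -= 1              # p_j - 1(j == y_i)
  dW = X.T.dot(dscores) / num_train + reg * W
  return loss, dW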

In [6]:
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None

### Try different hyperparameters below
learning_rates = np.logspace(-7, -4, 4)
regularization_strengths = np.logspace(-3, 4, 8)

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        softmax = Softmax()
        loss_hist = softmax.train(X_train, y_train, learning_rate=lr, reg=reg,
                                  num_iters=1500, verbose=True)
        training_accuracy = np.mean(softmax.predict(X_train) == y_train)
        validation_accuracy = np.mean(softmax.predict(X_val) == y_val)
        if best_val < validation_accuracy:
            best_val = validation_accuracy
            best_softmax = softmax
        results[(lr, reg)] = (training_accuracy, validation_accuracy)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
    
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy)
    
print 'best validation accuracy achieved during cross-validation: %f' % best_val


iteration 0 / 1500: loss 5.286110
iteration 100 / 1500: loss 4.205975
iteration 200 / 1500: loss 3.687268
iteration 300 / 1500: loss 3.440013
iteration 400 / 1500: loss 3.206917
iteration 500 / 1500: loss 3.160448
iteration 600 / 1500: loss 2.957978
iteration 700 / 1500: loss 3.101380
iteration 800 / 1500: loss 3.029590
iteration 900 / 1500: loss 2.840128
iteration 1000 / 1500: loss 2.860221
iteration 1100 / 1500: loss 2.679336
iteration 1200 / 1500: loss 2.852229
iteration 1300 / 1500: loss 2.594600
iteration 1400 / 1500: loss 2.748703
iteration 0 / 1500: loss 5.117019
iteration 100 / 1500: loss 4.173704
iteration 200 / 1500: loss 3.544794
iteration 300 / 1500: loss 3.348661
iteration 400 / 1500: loss 3.267185
iteration 500 / 1500: loss 3.274752
iteration 600 / 1500: loss 3.012565
iteration 700 / 1500: loss 2.748014
iteration 800 / 1500: loss 2.620795
iteration 900 / 1500: loss 2.863127
iteration 1000 / 1500: loss 2.604532
iteration 1100 / 1500: loss 2.715151
iteration 1200 / 1500: loss 2.682197
iteration 1300 / 1500: loss 2.789639
iteration 1400 / 1500: loss 2.842594
iteration 0 / 1500: loss 6.337205
iteration 100 / 1500: loss 3.822574
iteration 200 / 1500: loss 3.543173
iteration 300 / 1500: loss 3.287425
iteration 400 / 1500: loss 3.026748
iteration 500 / 1500: loss 2.983586
iteration 600 / 1500: loss 2.820273
iteration 700 / 1500: loss 3.184293
iteration 800 / 1500: loss 2.675225
iteration 900 / 1500: loss 2.937895
iteration 1000 / 1500: loss 2.857066
iteration 1100 / 1500: loss 2.621716
iteration 1200 / 1500: loss 2.694296
iteration 1300 / 1500: loss 2.588945
iteration 1400 / 1500: loss 2.472076
iteration 0 / 1500: loss 6.282650
iteration 100 / 1500: loss 4.666122
iteration 200 / 1500: loss 3.831162
iteration 300 / 1500: loss 3.478032
iteration 400 / 1500: loss 3.608809
iteration 500 / 1500: loss 3.454816
iteration 600 / 1500: loss 3.037519
iteration 700 / 1500: loss 2.911631
iteration 800 / 1500: loss 2.727305
iteration 900 / 1500: loss 2.769898
iteration 1000 / 1500: loss 2.788938
iteration 1100 / 1500: loss 3.107562
iteration 1200 / 1500: loss 2.712681
iteration 1300 / 1500: loss 2.780486
iteration 1400 / 1500: loss 2.460965
iteration 0 / 1500: loss 5.435726
iteration 100 / 1500: loss 4.380597
iteration 200 / 1500: loss 4.022970
iteration 300 / 1500: loss 3.514586
iteration 400 / 1500: loss 3.482999
iteration 500 / 1500: loss 3.460035
iteration 600 / 1500: loss 3.284053
iteration 700 / 1500: loss 3.294309
iteration 800 / 1500: loss 3.113828
iteration 900 / 1500: loss 3.120375
iteration 1000 / 1500: loss 2.808608
iteration 1100 / 1500: loss 3.034136
iteration 1200 / 1500: loss 2.964480
iteration 1300 / 1500: loss 2.708266
iteration 1400 / 1500: loss 2.553513
iteration 0 / 1500: loss 7.492105
iteration 100 / 1500: loss 5.809937
iteration 200 / 1500: loss 4.791476
iteration 300 / 1500: loss 4.732164
iteration 400 / 1500: loss 4.588691
iteration 500 / 1500: loss 4.519712
iteration 600 / 1500: loss 4.693537
iteration 700 / 1500: loss 4.518772
iteration 800 / 1500: loss 4.261757
iteration 900 / 1500: loss 4.372720
iteration 1000 / 1500: loss 4.376302
iteration 1100 / 1500: loss 4.169687
iteration 1200 / 1500: loss 4.149408
iteration 1300 / 1500: loss 4.143099
iteration 1400 / 1500: loss 4.139568
iteration 0 / 1500: loss 20.931112
iteration 100 / 1500: loss 19.301256
iteration 200 / 1500: loss 18.206232
iteration 300 / 1500: loss 17.652634
iteration 400 / 1500: loss 17.092341
iteration 500 / 1500: loss 16.742127
iteration 600 / 1500: loss 16.221514
iteration 700 / 1500: loss 15.865673
iteration 800 / 1500: loss 15.572861
iteration 900 / 1500: loss 15.266052
iteration 1000 / 1500: loss 14.721533
iteration 1100 / 1500: loss 14.610695
iteration 1200 / 1500: loss 14.251522
iteration 1300 / 1500: loss 13.901901
iteration 1400 / 1500: loss 13.905075
iteration 0 / 1500: loss 159.487329
iteration 100 / 1500: loss 129.741630
iteration 200 / 1500: loss 105.693975
iteration 300 / 1500: loss 86.827429
iteration 400 / 1500: loss 71.271871
iteration 500 / 1500: loss 58.646325
iteration 600 / 1500: loss 48.287413
iteration 700 / 1500: loss 39.708887
iteration 800 / 1500: loss 32.816911
iteration 900 / 1500: loss 27.224461
iteration 1000 / 1500: loss 22.535238
iteration 1100 / 1500: loss 18.765868
iteration 1200 / 1500: loss 15.644170
iteration 1300 / 1500: loss 13.176062
iteration 1400 / 1500: loss 11.202799
iteration 0 / 1500: loss 4.901783
iteration 100 / 1500: loss 2.665877
iteration 200 / 1500: loss 2.744710
iteration 300 / 1500: loss 2.516615
iteration 400 / 1500: loss 2.201400
iteration 500 / 1500: loss 2.364080
iteration 600 / 1500: loss 2.211282
iteration 700 / 1500: loss 1.991853
iteration 800 / 1500: loss 2.261258
iteration 900 / 1500: loss 2.220120
iteration 1000 / 1500: loss 2.009766
iteration 1100 / 1500: loss 2.171672
iteration 1200 / 1500: loss 1.958542
iteration 1300 / 1500: loss 1.952763
iteration 1400 / 1500: loss 1.769754
iteration 0 / 1500: loss 4.958313
iteration 100 / 1500: loss 3.090838
iteration 200 / 1500: loss 2.463325
iteration 300 / 1500: loss 2.477525
iteration 400 / 1500: loss 2.029573
iteration 500 / 1500: loss 2.095451
iteration 600 / 1500: loss 2.348378
iteration 700 / 1500: loss 2.150453
iteration 800 / 1500: loss 2.301631
iteration 900 / 1500: loss 2.006041
iteration 1000 / 1500: loss 2.152169
iteration 1100 / 1500: loss 2.032981
iteration 1200 / 1500: loss 1.901603
iteration 1300 / 1500: loss 1.973788
iteration 1400 / 1500: loss 2.020305
iteration 0 / 1500: loss 5.170529
iteration 100 / 1500: loss 2.549236
iteration 200 / 1500: loss 2.675337
iteration 300 / 1500: loss 2.385279
iteration 400 / 1500: loss 2.365217
iteration 500 / 1500: loss 2.404772
iteration 600 / 1500: loss 2.178595
iteration 700 / 1500: loss 2.060265
iteration 800 / 1500: loss 2.191136
iteration 900 / 1500: loss 1.901707
iteration 1000 / 1500: loss 2.042744
iteration 1100 / 1500: loss 1.925940
iteration 1200 / 1500: loss 1.976329
iteration 1300 / 1500: loss 1.862855
iteration 1400 / 1500: loss 2.156395
iteration 0 / 1500: loss 5.747443
iteration 100 / 1500: loss 2.561205
iteration 200 / 1500: loss 2.386634
iteration 300 / 1500: loss 2.284225
iteration 400 / 1500: loss 2.146066
iteration 500 / 1500: loss 2.106666
iteration 600 / 1500: loss 2.237640
iteration 700 / 1500: loss 2.018110
iteration 800 / 1500: loss 2.195790
iteration 900 / 1500: loss 2.219202
iteration 1000 / 1500: loss 2.004385
iteration 1100 / 1500: loss 2.009026
iteration 1200 / 1500: loss 2.072135
iteration 1300 / 1500: loss 2.108632
iteration 1400 / 1500: loss 1.966782
iteration 0 / 1500: loss 5.598497
iteration 100 / 1500: loss 3.127693
iteration 200 / 1500: loss 2.604067
iteration 300 / 1500: loss 2.668781
iteration 400 / 1500: loss 2.565107
iteration 500 / 1500: loss 2.563070
iteration 600 / 1500: loss 2.534730
iteration 700 / 1500: loss 2.352610
iteration 800 / 1500: loss 2.146446
iteration 900 / 1500: loss 2.233870
iteration 1000 / 1500: loss 2.258031
iteration 1100 / 1500: loss 2.092170
iteration 1200 / 1500: loss 2.028748
iteration 1300 / 1500: loss 2.046234
iteration 1400 / 1500: loss 2.142480
iteration 0 / 1500: loss 7.006053
iteration 100 / 1500: loss 4.206389
iteration 200 / 1500: loss 3.971408
iteration 300 / 1500: loss 3.807506
iteration 400 / 1500: loss 3.589918
iteration 500 / 1500: loss 3.471997
iteration 600 / 1500: loss 3.407598
iteration 700 / 1500: loss 3.303056
iteration 800 / 1500: loss 3.294188
iteration 900 / 1500: loss 3.198771
iteration 1000 / 1500: loss 3.145535
iteration 1100 / 1500: loss 3.097511
iteration 1200 / 1500: loss 3.010227
iteration 1300 / 1500: loss 3.110239
iteration 1400 / 1500: loss 3.177773
iteration 0 / 1500: loss 20.438395
iteration 100 / 1500: loss 15.084462
iteration 200 / 1500: loss 12.425783
iteration 300 / 1500: loss 10.357658
iteration 400 / 1500: loss 8.755901
iteration 500 / 1500: loss 7.292757
iteration 600 / 1500: loss 6.293411
iteration 700 / 1500: loss 5.497281
iteration 800 / 1500: loss 4.758970
iteration 900 / 1500: loss 4.184158
iteration 1000 / 1500: loss 3.777651
iteration 1100 / 1500: loss 3.637936
iteration 1200 / 1500: loss 3.176419
iteration 1300 / 1500: loss 2.941268
iteration 1400 / 1500: loss 2.538991
iteration 0 / 1500: loss 160.906679
iteration 100 / 1500: loss 22.399505
iteration 200 / 1500: loss 4.664877
iteration 300 / 1500: loss 2.326442
iteration 400 / 1500: loss 2.053133
iteration 500 / 1500: loss 1.896632
iteration 600 / 1500: loss 2.079204
iteration 700 / 1500: loss 1.995931
iteration 800 / 1500: loss 2.020123
iteration 900 / 1500: loss 1.850506
iteration 1000 / 1500: loss 1.949593
iteration 1100 / 1500: loss 1.885962
iteration 1200 / 1500: loss 2.024795
iteration 1300 / 1500: loss 1.866664
iteration 1400 / 1500: loss 1.893471
iteration 0 / 1500: loss 5.151206
iteration 100 / 1500: loss 3.903185
iteration 200 / 1500: loss 2.569884
iteration 300 / 1500: loss 2.420673
iteration 400 / 1500: loss 2.705887
iteration 500 / 1500: loss 2.661448
iteration 600 / 1500: loss 2.546931
iteration 700 / 1500: loss 3.015874
iteration 800 / 1500: loss 4.129143
iteration 900 / 1500: loss 3.559166
iteration 1000 / 1500: loss 2.300789
iteration 1100 / 1500: loss 3.559606
iteration 1200 / 1500: loss 2.981869
iteration 1300 / 1500: loss 3.000468
iteration 1400 / 1500: loss 2.603594
iteration 0 / 1500: loss 5.092839
iteration 100 / 1500: loss 5.328205
iteration 200 / 1500: loss 3.422072
iteration 300 / 1500: loss 2.351522
iteration 400 / 1500: loss 3.980728
iteration 500 / 1500: loss 2.964015
iteration 600 / 1500: loss 4.189287
iteration 700 / 1500: loss 2.845130
iteration 800 / 1500: loss 2.248301
iteration 900 / 1500: loss 3.617486
iteration 1000 / 1500: loss 2.832260
iteration 1100 / 1500: loss 2.430958
iteration 1200 / 1500: loss 3.038887
iteration 1300 / 1500: loss 2.539361
iteration 1400 / 1500: loss 2.379750
iteration 0 / 1500: loss 5.048351
iteration 100 / 1500: loss 2.929414
iteration 200 / 1500: loss 2.509299
iteration 300 / 1500: loss 2.774580
iteration 400 / 1500: loss 2.522824
iteration 500 / 1500: loss 2.863767
iteration 600 / 1500: loss 2.474957
iteration 700 / 1500: loss 3.536570
iteration 800 / 1500: loss 2.213946
iteration 900 / 1500: loss 2.320838
iteration 1000 / 1500: loss 3.187223
iteration 1100 / 1500: loss 2.671701
iteration 1200 / 1500: loss 3.146156
iteration 1300 / 1500: loss 2.646435
iteration 1400 / 1500: loss 2.957298
iteration 0 / 1500: loss 5.402419
iteration 100 / 1500: loss 3.748157
iteration 200 / 1500: loss 2.368774
iteration 300 / 1500: loss 2.398448
iteration 400 / 1500: loss 3.608959
iteration 500 / 1500: loss 2.241908
iteration 600 / 1500: loss 3.849809
iteration 700 / 1500: loss 2.626239
iteration 800 / 1500: loss 2.628146
iteration 900 / 1500: loss 2.370541
iteration 1000 / 1500: loss 2.219168
iteration 1100 / 1500: loss 2.389193
iteration 1200 / 1500: loss 2.925570
iteration 1300 / 1500: loss 3.214038
iteration 1400 / 1500: loss 2.438564
iteration 0 / 1500: loss 6.225698
iteration 100 / 1500: loss 2.743738
iteration 200 / 1500: loss 3.221966
iteration 300 / 1500: loss 2.416528
iteration 400 / 1500: loss 3.420051
iteration 500 / 1500: loss 3.182121
iteration 600 / 1500: loss 2.501501
iteration 700 / 1500: loss 3.413312
iteration 800 / 1500: loss 2.701945
iteration 900 / 1500: loss 3.005072
iteration 1000 / 1500: loss 2.478632
iteration 1100 / 1500: loss 3.833054
iteration 1200 / 1500: loss 3.842808
iteration 1300 / 1500: loss 3.053127
iteration 1400 / 1500: loss 2.761802
iteration 0 / 1500: loss 7.769458
iteration 100 / 1500: loss 3.452395
iteration 200 / 1500: loss 3.611177
iteration 300 / 1500: loss 4.101410
iteration 400 / 1500: loss 4.076071
iteration 500 / 1500: loss 2.688310
iteration 600 / 1500: loss 4.087048
iteration 700 / 1500: loss 2.610926
iteration 800 / 1500: loss 3.560754
iteration 900 / 1500: loss 2.701827
iteration 1000 / 1500: loss 2.568535
iteration 1100 / 1500: loss 2.355929
iteration 1200 / 1500: loss 3.056105
iteration 1300 / 1500: loss 3.226170
iteration 1400 / 1500: loss 2.763098
iteration 0 / 1500: loss 20.746003
iteration 100 / 1500: loss 5.537666
iteration 200 / 1500: loss 3.744819
iteration 300 / 1500: loss 2.610591
iteration 400 / 1500: loss 2.570764
iteration 500 / 1500: loss 3.697643
iteration 600 / 1500: loss 2.839917
iteration 700 / 1500: loss 3.097601
iteration 800 / 1500: loss 4.039046
iteration 900 / 1500: loss 3.796106
iteration 1000 / 1500: loss 3.279864
iteration 1100 / 1500: loss 4.097733
iteration 1200 / 1500: loss 2.963869
iteration 1300 / 1500: loss 3.146953
iteration 1400 / 1500: loss 2.783273
iteration 0 / 1500: loss 157.923674
iteration 100 / 1500: loss 3.844778
iteration 200 / 1500: loss 4.718402
iteration 300 / 1500: loss 3.970691
iteration 400 / 1500: loss 4.664274
iteration 500 / 1500: loss 3.163936
iteration 600 / 1500: loss 4.659812
iteration 700 / 1500: loss 4.239939
iteration 800 / 1500: loss 4.570399
iteration 900 / 1500: loss 5.084806
iteration 1000 / 1500: loss 4.608230
iteration 1100 / 1500: loss 6.233337
iteration 1200 / 1500: loss 4.979935
iteration 1300 / 1500: loss 3.750144
iteration 1400 / 1500: loss 3.794940
iteration 0 / 1500: loss 6.366929
iteration 100 / 1500: loss 40.986039
iteration 200 / 1500: loss 21.956902
iteration 300 / 1500: loss 29.895830
iteration 400 / 1500: loss 22.113148
iteration 500 / 1500: loss 23.564981
iteration 600 / 1500: loss 25.750129
iteration 700 / 1500: loss 35.635554
iteration 800 / 1500: loss 30.114289
iteration 900 / 1500: loss 42.793521
iteration 1000 / 1500: loss 33.084218
iteration 1100 / 1500: loss 31.156761
iteration 1200 / 1500: loss 14.511136
iteration 1300 / 1500: loss 25.980406
iteration 1400 / 1500: loss 25.721642
iteration 0 / 1500: loss 5.347603
iteration 100 / 1500: loss 36.962273
iteration 200 / 1500: loss 29.482578
iteration 300 / 1500: loss 36.733626
iteration 400 / 1500: loss 38.418271
iteration 500 / 1500: loss 27.276598
iteration 600 / 1500: loss 32.940141
iteration 700 / 1500: loss 35.881753
iteration 800 / 1500: loss 20.387461
iteration 900 / 1500: loss 21.791596
iteration 1000 / 1500: loss 22.378971
iteration 1100 / 1500: loss 37.640743
iteration 1200 / 1500: loss 22.201011
iteration 1300 / 1500: loss 22.523809
iteration 1400 / 1500: loss 21.376463
iteration 0 / 1500: loss 4.387403
iteration 100 / 1500: loss 23.835813
iteration 200 / 1500: loss 25.016132
iteration 300 / 1500: loss 35.769541
iteration 400 / 1500: loss 28.336261
iteration 500 / 1500: loss 28.800784
iteration 600 / 1500: loss 23.128674
iteration 700 / 1500: loss 26.499505
iteration 800 / 1500: loss 20.484155
iteration 900 / 1500: loss 25.917246
iteration 1000 / 1500: loss 17.241236
iteration 1100 / 1500: loss 28.893562
iteration 1200 / 1500: loss 30.043794
iteration 1300 / 1500: loss 24.744453
iteration 1400 / 1500: loss 33.436272
iteration 0 / 1500: loss 6.360422
iteration 100 / 1500: loss 23.106647
iteration 200 / 1500: loss 35.941186
iteration 300 / 1500: loss 28.803493
iteration 400 / 1500: loss 23.557038
iteration 500 / 1500: loss 29.288655
iteration 600 / 1500: loss 23.887642
iteration 700 / 1500: loss 38.350957
iteration 800 / 1500: loss 33.103777
iteration 900 / 1500: loss 22.798567
iteration 1000 / 1500: loss 22.016051
iteration 1100 / 1500: loss 29.204093
iteration 1200 / 1500: loss 24.283982
iteration 1300 / 1500: loss 31.556391
iteration 1400 / 1500: loss 29.837465
iteration 0 / 1500: loss 5.566175
iteration 100 / 1500: loss 42.921259
iteration 200 / 1500: loss 38.506362
iteration 300 / 1500: loss 31.876903
iteration 400 / 1500: loss 19.966464
iteration 500 / 1500: loss 33.244232
iteration 600 / 1500: loss 51.227229
iteration 700 / 1500: loss 51.149373
iteration 800 / 1500: loss 15.791577
iteration 900 / 1500: loss 44.239780
iteration 1000 / 1500: loss 37.151737
iteration 1100 / 1500: loss 27.250394
iteration 1200 / 1500: loss 20.331616
iteration 1300 / 1500: loss 26.144921
iteration 1400 / 1500: loss 34.168244
iteration 0 / 1500: loss 6.932495
iteration 100 / 1500: loss 26.170964
iteration 200 / 1500: loss 34.723987
iteration 300 / 1500: loss 22.824417
iteration 400 / 1500: loss 30.599790
iteration 500 / 1500: loss 25.382763
iteration 600 / 1500: loss 48.200218
iteration 700 / 1500: loss 35.678974
iteration 800 / 1500: loss 24.049367
iteration 900 / 1500: loss 29.539141
iteration 1000 / 1500: loss 26.207612
iteration 1100 / 1500: loss 21.687086
iteration 1200 / 1500: loss 25.514694
iteration 1300 / 1500: loss 27.244702
iteration 1400 / 1500: loss 25.310236
iteration 0 / 1500: loss 20.614783
iteration 100 / 1500: loss 60.103538
iteration 200 / 1500: loss 34.159703
iteration 300 / 1500: loss 45.316629
iteration 400 / 1500: loss 38.929275
iteration 500 / 1500: loss 46.568785
iteration 600 / 1500: loss 25.250633
iteration 700 / 1500: loss 36.864303
iteration 800 / 1500: loss 42.305148
iteration 900 / 1500: loss 21.700592
iteration 1000 / 1500: loss 52.802330
iteration 1100 / 1500: loss 36.394998
iteration 1200 / 1500: loss 52.793503
iteration 1300 / 1500: loss 43.974008
iteration 1400 / 1500: loss 33.058689
iteration 0 / 1500: loss 162.586902
iteration 100 / 1500: loss 169.357695
iteration 200 / 1500: loss 163.087823
iteration 300 / 1500: loss 178.252260
iteration 400 / 1500: loss 160.429784
iteration 500 / 1500: loss 158.214068
iteration 600 / 1500: loss 158.565675
iteration 700 / 1500: loss 171.017319
iteration 800 / 1500: loss 169.635945
iteration 900 / 1500: loss 188.021585
iteration 1000 / 1500: loss 151.210685
iteration 1100 / 1500: loss 187.775549
iteration 1200 / 1500: loss 160.146835
iteration 1300 / 1500: loss 150.324544
iteration 1400 / 1500: loss 163.723680
lr 1.000000e-07 reg 1.000000e-03 train accuracy: 0.247388 val accuracy: 0.232000
lr 1.000000e-07 reg 1.000000e-02 train accuracy: 0.247041 val accuracy: 0.250000
lr 1.000000e-07 reg 1.000000e-01 train accuracy: 0.243898 val accuracy: 0.247000
lr 1.000000e-07 reg 1.000000e+00 train accuracy: 0.239735 val accuracy: 0.269000
lr 1.000000e-07 reg 1.000000e+01 train accuracy: 0.243388 val accuracy: 0.235000
lr 1.000000e-07 reg 1.000000e+02 train accuracy: 0.247980 val accuracy: 0.228000
lr 1.000000e-07 reg 1.000000e+03 train accuracy: 0.256510 val accuracy: 0.259000
lr 1.000000e-07 reg 1.000000e+04 train accuracy: 0.329082 val accuracy: 0.337000
lr 1.000000e-06 reg 1.000000e-03 train accuracy: 0.342041 val accuracy: 0.344000
lr 1.000000e-06 reg 1.000000e-02 train accuracy: 0.348286 val accuracy: 0.333000
lr 1.000000e-06 reg 1.000000e-01 train accuracy: 0.347102 val accuracy: 0.321000
lr 1.000000e-06 reg 1.000000e+00 train accuracy: 0.349673 val accuracy: 0.334000
lr 1.000000e-06 reg 1.000000e+01 train accuracy: 0.347878 val accuracy: 0.338000
lr 1.000000e-06 reg 1.000000e+02 train accuracy: 0.358694 val accuracy: 0.344000
lr 1.000000e-06 reg 1.000000e+03 train accuracy: 0.401755 val accuracy: 0.403000
lr 1.000000e-06 reg 1.000000e+04 train accuracy: 0.367633 val accuracy: 0.380000
lr 1.000000e-05 reg 1.000000e-03 train accuracy: 0.315163 val accuracy: 0.314000
lr 1.000000e-05 reg 1.000000e-02 train accuracy: 0.347878 val accuracy: 0.329000
lr 1.000000e-05 reg 1.000000e-01 train accuracy: 0.315980 val accuracy: 0.292000
lr 1.000000e-05 reg 1.000000e+00 train accuracy: 0.344837 val accuracy: 0.308000
lr 1.000000e-05 reg 1.000000e+01 train accuracy: 0.319449 val accuracy: 0.293000
lr 1.000000e-05 reg 1.000000e+02 train accuracy: 0.346755 val accuracy: 0.313000
lr 1.000000e-05 reg 1.000000e+03 train accuracy: 0.285306 val accuracy: 0.278000
lr 1.000000e-05 reg 1.000000e+04 train accuracy: 0.210633 val accuracy: 0.216000
lr 1.000000e-04 reg 1.000000e-03 train accuracy: 0.303245 val accuracy: 0.285000
lr 1.000000e-04 reg 1.000000e-02 train accuracy: 0.296878 val accuracy: 0.297000
lr 1.000000e-04 reg 1.000000e-01 train accuracy: 0.243551 val accuracy: 0.231000
lr 1.000000e-04 reg 1.000000e+00 train accuracy: 0.283306 val accuracy: 0.273000
lr 1.000000e-04 reg 1.000000e+01 train accuracy: 0.268633 val accuracy: 0.264000
lr 1.000000e-04 reg 1.000000e+02 train accuracy: 0.281204 val accuracy: 0.283000
lr 1.000000e-04 reg 1.000000e+03 train accuracy: 0.198061 val accuracy: 0.207000
lr 1.000000e-04 reg 1.000000e+04 train accuracy: 0.073041 val accuracy: 0.076000
best validation accuracy achieved during cross-validation: 0.403000
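
The grid above peaks near lr = 1e-6, reg = 1e3 (validation accuracy 0.403). If you wanted to push a bit higher, one option is a finer random search around that region. The snippet below is only a sketch of that idea (it was not run for this worksheet); it reuses the Softmax API and the results/best_val bookkeeping from the cell above.

# Optional: finer random search around the best coarse-grid point (sketch only).
for _ in xrange(10):
    lr = 10 ** np.random.uniform(-6.5, -5.5)
    reg = 10 ** np.random.uniform(2.5, 3.5)
    softmax = Softmax()
    softmax.train(X_train, y_train, learning_rate=lr, reg=reg,
                  num_iters=1500, verbose=False)
    train_acc = np.mean(softmax.predict(X_train) == y_train)
    val_acc = np.mean(softmax.predict(X_val) == y_val)
    if val_acc > best_val:
        best_val, best_softmax = val_acc, softmax
    results[(lr, reg)] = (train_acc, val_acc)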

In [7]:
# Evaluate the best softmax on the test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )


softmax on raw pixels final test set accuracy: 0.384000

In [8]:
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)

w_min, w_max = np.min(w), np.max(w)

classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
  plt.subplot(2, 5, i + 1)
  
  # Rescale the weights to be between 0 and 255
  wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
  plt.imshow(wimg.astype('uint8'))
  plt.axis('off')
  plt.title(classes[i])



In [ ]: