Classification

Given an input with $D$ dimensions and $K$ classes, the goal of classification is to find a function $f$ that maps inputs to class labels: $$ f : X \rightarrow K $$

Linear Classification

The simplest such function $f$ is linear, of the form $$ f(X) = WX + B $$

Given an input $X$ as a 1-D column vector of shape $X_{D \times 1}$, the goal is to find a weight matrix $W_{K \times D}$ and a bias vector $B_{K \times 1}$ so that $f(X) = WX + B$ yields one score per class. For convenience, the bias can be folded into the weight matrix by appending a constant $1$ to the input (the bias trick).


In [1]:
import numpy as np
import matplotlib.pyplot as plt
import math
from scipy.stats import mode

%matplotlib inline
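
As a quick sketch (with made-up dimensions and random numbers), the bias trick appends a $1$ to the input so that $W$ and $B$ combine into a single matrix and produce identical scores:

```python
import numpy as np

D, K = 3, 4  # hypothetical input dimension and number of classes
rng = np.random.default_rng(0)
W = rng.normal(size=(K, D))   # weights: one row of scores-per-feature per class
B = rng.normal(size=K)        # bias: one entry per class
x = rng.normal(size=D)        # a single input vector

scores = W @ x + B            # K class scores

# Bias trick: fold B into W as an extra column, append a constant 1 to x
W_ext = np.hstack([W, B[:, None]])   # shape (K, D+1)
x_ext = np.append(x, 1.0)            # shape (D+1,)
scores_trick = W_ext @ x_ext

assert np.allclose(scores, scores_trick)
```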

Multiclass SVM Loss

The multiclass SVM loss makes use of the hinge loss $J(x) = \max(0, x)$: for a score vector $s$ with correct class $y$, $$ L = \sum_{j \neq y} \max(0, s_j - s_y + \Delta) $$ where $\Delta$ is the margin.


In [18]:
def svm_loss(scores, y, delta=1):
    # Sum hinge losses over all classes; the j == y term contributes
    # exactly delta, so subtract it rather than masking it out.
    return np.sum(np.maximum(scores - scores[y] + delta, 0)) - delta
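As a quick worked example (with made-up scores, repeating the function so the snippet is self-contained): with correct class $0$ and margin $1$, only class $1$ violates the margin, contributing $5.1 - 3.2 + 1 = 2.9$:

```python
import numpy as np

def svm_loss(scores, y, delta=1):
    # The j == y term contributes exactly delta, so subtract it.
    return np.sum(np.maximum(scores - scores[y] + delta, 0)) - delta

scores = np.array([3.2, 5.1, -1.7])  # hypothetical class scores
# y = 0: max(0, 5.1 - 3.2 + 1) + max(0, -1.7 - 3.2 + 1) = 2.9 + 0
loss = svm_loss(scores, y=0)
```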

Cross Entropy Loss

The cross-entropy loss replaces the hinge loss with the negative log-likelihood; the raw scores are first converted to probabilities with the softmax function.


In [12]:
def softmax(scores, y):
    # Shift scores for numerical stability without mutating the caller's array
    scores = scores - np.max(scores)
    norm_sum = np.sum(np.exp(scores))
    return np.exp(scores[y]) / norm_sum

In [13]:
def crossentropy(scores, y):
    prob = softmax(scores, y)
    return -np.log(prob)
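As a quick check (with made-up scores, repeating the two functions so the snippet is self-contained): the cross-entropy loss equals the log-sum-exp of the scores minus the correct-class score, since $-\log \frac{e^{s_y}}{\sum_j e^{s_j}} = \log \sum_j e^{s_j} - s_y$:

```python
import numpy as np

def softmax(scores, y):
    scores = scores - np.max(scores)  # stability shift, leaves the result unchanged
    return np.exp(scores[y]) / np.sum(np.exp(scores))

def crossentropy(scores, y):
    return -np.log(softmax(scores, y))

scores = np.array([1.0, 2.0, 3.0])  # hypothetical class scores
loss = crossentropy(scores, y=2)

# Equivalent closed form: log-sum-exp minus the correct-class score
assert np.isclose(loss, np.log(np.sum(np.exp(scores))) - scores[2])
```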

In [11]:
def l2_regulariser(w):
    # L2 penalty: sum of squared weights, discourages large weight values
    return np.sum(np.power(w, 2))
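Putting the pieces together (a sketch with made-up weights and a hypothetical regularisation strength `reg_strength`, repeating the functions so the snippet is self-contained), the total loss is the data loss plus the weighted L2 penalty:

```python
import numpy as np

def svm_loss(scores, y, delta=1):
    return np.sum(np.maximum(scores - scores[y] + delta, 0)) - delta

def l2_regulariser(w):
    return np.sum(np.power(w, 2))

W = np.array([[0.1, -0.5],
              [0.2,  0.4]])          # made-up 2x2 weight matrix
x = np.array([1.0, 2.0])             # made-up input
scores = W @ x                       # [-0.9, 1.0]

reg_strength = 0.1                   # hypothetical regularisation strength
total_loss = svm_loss(scores, y=1) + reg_strength * l2_regulariser(W)
# data loss is 0 (correct class wins by more than the margin),
# so only the penalty 0.1 * 0.46 = 0.046 remains
```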
