$\newcommand{\trace}[1]{\operatorname{tr}\left\{#1\right\}}$ $\newcommand{\Norm}[1]{\lVert#1\rVert}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\inner}[2]{\langle #1, #2 \rangle}$ $\newcommand{\DD}{\mathscr{D}}$ $\newcommand{\grad}[1]{\operatorname{grad}#1}$ $\DeclareMathOperator*{\argmin}{arg\,min}$
Setting up the environment
We use the SciPy implementation of the logistic sigmoid function, rather than (naively) implementing it ourselves, to avoid numerical issues such as overflow in the exponential.
In [1]:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.optimize as opt
from scipy.special import expit # The logistic sigmoid function
%matplotlib inline
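As a brief illustration of the numerical issue mentioned above (this check is not part of the original notebook), a naively implemented sigmoid triggers an overflow warning in np.exp for inputs of large magnitude, whereas expit evaluates cleanly:

z = np.array([-1000.0, 0.0, 1000.0])
naive = 1.0 / (1.0 + np.exp(-z))  # np.exp(1000.0) overflows to inf and raises a RuntimeWarning
print(naive)                      # still [0., 0.5, 1.], but only because 1/(1 + inf) == 0
print(expit(z))                   # [0., 0.5, 1.] with no warnings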
We will predict the incidence of diabetes based on various measurements (see description). Instead of directly using the raw data, we use a normalised version, where the label to be predicted (the incidence of diabetes) is in the first column. Download the data from mldata.org.
Read in the data using pandas.
In [2]:
names = ['diabetes', 'num preg', 'plasma', 'bp', 'skin fold', 'insulin', 'bmi', 'pedigree', 'age']
data = pd.read_csv('diabetes_scale.csv', header=None, names=names)
data['diabetes'] = data['diabetes'].replace(-1, 0)  # The target variable needs to be 1 or 0, not 1 or -1
data.head()
Out[2]:
Implement binary classification using logistic regression for a data set with two classes. Make sure you use appropriate python style and docstrings.
Use scipy.optimize.fmin_bfgs to minimise your cost function. fmin_bfgs requires the cost function to be minimised and the gradient of this cost function. Implement these two functions as cost and grad by following the equations in the lectures.
Implement the function train that takes a matrix of examples and a vector of labels, and returns the maximum likelihood weight vector for logistic regression. Also implement a function test that takes this maximum likelihood weight vector and a matrix of examples, and returns the predictions. See the section Putting everything together below for expected usage.
We add an extra column of ones to represent the constant basis.
In [3]:
data['ones'] = np.ones((data.shape[0], 1)) # Add a column of ones
data.head()
data.shape
Out[3]:
We have 9 input variables $x_0, \dots, x_8$ where $x_0$ is the dummy input variable fixed at 1. (The fixed dummy input variable could easily be $x_5$ or $x_8$; its index is unimportant.) We set the basis functions to the simplest choice $\phi_0(\mathbf{x}) = x_0, \dots, \phi_8(\mathbf{x}) = x_8$. Our model then has the form $$ y(\mathbf{x}) = \sigma\left(\sum_{j=0}^{8} w_j x_j\right) = \sigma(\mathbf{w}^T \mathbf{x}). $$ Here we have a dataset $\{(\mathbf{x}_n, t_n)\}_{n=1}^{N}$ where $t_n \in \{0, 1\}$, with $N=768$ examples. We train our model by finding the parameter vector $\mathbf{w}$ which minimizes the (data-dependent) cross-entropy error function $$ E_D(\mathbf{w}) = - \sum_{n=1}^{N} \{t_n \ln \sigma(\mathbf{w}^T \mathbf{x}_n) + (1 - t_n)\ln(1 - \sigma(\mathbf{w}^T \mathbf{x}_n))\}. $$ The gradient of this function is given by $$ \nabla E_D(\mathbf{w}) = \sum_{n=1}^{N} (\sigma(\mathbf{w}^T \mathbf{x}_n) - t_n)\mathbf{x}_n. $$
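For reference, the gradient above follows from the chain rule together with the identity $\sigma'(a) = \sigma(a)(1 - \sigma(a))$: writing $y_n = \sigma(\mathbf{w}^T \mathbf{x}_n)$, $$ \nabla E_D(\mathbf{w}) = -\sum_{n=1}^{N} \left( \frac{t_n}{y_n} - \frac{1 - t_n}{1 - y_n} \right) y_n (1 - y_n) \mathbf{x}_n = \sum_{n=1}^{N} (y_n - t_n) \mathbf{x}_n. $$ In the implementation below we also allow an optional sum-of-squares regularization term $\frac{c}{2}\Norm{\mathbf{w}}^2$ added to $E_D$, which contributes an extra $c\,\mathbf{w}$ to the gradient.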
In [4]:
def cost(w, X, y, c=0):
    """
    Returns the cross-entropy error function with (optional) sum-of-squares regularization term.

    w -- parameters
    X -- dataset of features where each row corresponds to a single sample
    y -- dataset of labels where each row corresponds to a single sample
    c -- regularization coefficient (default = 0)
    """
    outputs = expit(X.dot(w))  # Vector of outputs (or predictions)
    return -(y.transpose().dot(np.log(outputs)) + (1 - y).transpose().dot(np.log(1 - outputs))) + c * 0.5 * w.dot(w)


def grad(w, X, y, c=0):
    """
    Returns the gradient of the cross-entropy error function with (optional) sum-of-squares regularization term.
    """
    outputs = expit(X.dot(w))
    return X.transpose().dot(outputs - y) + c * w


def train(X, y, c=0):
    """
    Returns the vector of parameters which minimizes the error function via the BFGS algorithm.
    """
    initial_values = np.zeros(X.shape[1])  # An error occurs if initial_values is set too high
    return opt.fmin_bfgs(cost, initial_values, fprime=grad, args=(X, y, c))


def predict(w, X):
    """
    Returns a vector of predictions.
    """
    return expit(X.dot(w))
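As a quick sanity check (not part of the assignment), the analytic gradient can be compared with a finite-difference approximation via scipy.optimize.check_grad; the toy inputs below are arbitrary, and the returned discrepancy should be close to zero:

# Sketch of a gradient check on random data; a small return value indicates
# that grad is consistent with cost.
rng = np.random.RandomState(0)
w_check = rng.normal(scale=0.1, size=9)
X_check = rng.normal(size=(20, 9))     # hypothetical toy feature matrix
y_check = rng.randint(0, 2, size=20)   # hypothetical toy labels
print(opt.check_grad(cost, grad, w_check, X_check, y_check, 1.0))  # extra args: X, y, c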
There are many ways to measure the performance of a binary classifier. The key concept is the confusion matrix, or contingency table:
| | Label +1 | Label -1 |
|---|---|---|
| Prediction +1 | TP | FP |
| Prediction -1 | FN | TN |
where TP, FP, FN, and TN denote the numbers of true positives, false positives, false negatives, and true negatives, respectively.
Implement three functions: the first returns the confusion matrix for comparing two lists (one set of predictions and one set of labels); the other two take the confusion matrix as input and return the accuracy and the balanced accuracy, respectively. The balanced accuracy is the average of the per-class accuracies.
In [5]:
def confusion_matrix(predictions, y):
    """
    Returns the confusion matrix [[tp, fp], [fn, tn]].

    predictions -- dataset of predictions (or outputs) from a model
    y -- dataset of labels where each row corresponds to a single sample
    """
    tp, fp, fn, tn = 0, 0, 0, 0
    predictions = predictions.round().values  # Converts to numpy.ndarray
    y = y.values
    for prediction, label in zip(predictions, y):
        if prediction == label:
            if prediction == 1:
                tp += 1
            else:
                tn += 1
        else:
            if prediction == 1:
                fp += 1
            else:
                fn += 1
    return np.array([[tp, fp], [fn, tn]])


def accuracy(cm):
    """
    Returns the accuracy, (tp + tn)/(tp + fp + fn + tn).
    """
    return cm.trace()/cm.sum()
def positive_pred_value(cm):
    """
    Returns the positive predictive value, tp/(tp + fp).
    """
    return cm[0, 0]/(cm[0, 0] + cm[0, 1])


def negative_pred_value(cm):
    """
    Returns the negative predictive value, tn/(tn + fn).
    """
    return cm[1, 1]/(cm[1, 0] + cm[1, 1])


def balanced_accuracy(cm):
    """
    Returns the balanced accuracy, (tp/(tp + fn) + tn/(tn + fp))/2,
    i.e. the average of the per-class accuracies.
    """
    return (cm[0, 0]/(cm[0, 0] + cm[1, 0]) + cm[1, 1]/(cm[0, 1] + cm[1, 1]))/2
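For intuition (a toy example, not drawn from the diabetes data), consider a degenerate classifier that predicts every example as negative on an imbalanced sample: its accuracy can look respectable while its balanced accuracy collapses to 0.5:

# Hypothetical confusion matrix [[tp, fp], [fn, tn]]: 10 actual positives, all
# misclassified; 90 actual negatives, all correct.
toy_cm = np.array([[0, 0], [10, 90]])
[accuracy(toy_cm), balanced_accuracy(toy_cm)]  # [0.9, 0.5]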
In [6]:
y = data['diabetes']
X = data[['num preg', 'plasma', 'bp', 'skin fold', 'insulin', 'bmi', 'pedigree', 'age', 'ones']]
theta_best = train(X, y)
print(theta_best)
pred = predict(theta_best, X)
cmatrix = confusion_matrix(pred, y)
[accuracy(cmatrix), balanced_accuracy(cmatrix)]
Out[6]:
To aid our discussion, we also give the positive predictive value (PPV) and the negative predictive value (NPV).
In [8]:
[positive_pred_value(cmatrix), negative_pred_value(cmatrix)]
Out[8]:
Split the data into two halves: train on one half and report performance on the second half. By repeating this experiment for different values of the regularization parameter $\lambda$, we can get a feel for the variability in the performance of the classifier due to regularization. Plot the values of accuracy and balanced accuracy for at least 3 different choices of $\lambda$. Note that you may have to update your implementation of logistic regression to include the regularization parameter.
In [9]:
def split_data(data):
    """
    Randomly split data into two equal groups.
    """
    np.random.seed(1)
    N = len(data)
    idx = np.arange(N)
    np.random.shuffle(idx)
    train_idx = idx[:int(N/2)]
    test_idx = idx[int(N/2):]
    X_train = data.loc[train_idx].drop('diabetes', axis=1)
    t_train = data.loc[train_idx]['diabetes']
    X_test = data.loc[test_idx].drop('diabetes', axis=1)
    t_test = data.loc[test_idx]['diabetes']
    return X_train, t_train, X_test, t_test


def reg_coefficient_comparison(reg_coefficients, X_train, t_train, X_test, t_test):
    """
    Returns the accuracy and balanced accuracy for the given regularization coefficient values.

    reg_coefficients -- list of regularization coefficient values
    X_train -- the input dataset used for training
    t_train -- the dataset of labels used for training
    X_test -- the input dataset used to make predictions from the trained model
    t_test -- dataset of labels for performance assessment
    """
    summary = []
    for c in reg_coefficients:
        w_best = train(X_train, t_train, c)
        predictions = predict(w_best, X_test)
        cm = confusion_matrix(predictions, t_test)
        summary.append([c, accuracy(cm), balanced_accuracy(cm)])
    return pd.DataFrame(summary, columns=["regularization coefficient", "accuracy", "balanced accuracy"])
X_train, t_train, X_test, t_test = split_data(data)
reg_coefficients = [0, 0.01, 0.1, 0.25, 0.5, 1, 1.5, 1.75, 2, 5, 9, 10, 11, 20, 100, 150]
reg_coefficient_comparison(reg_coefficients, X_train, t_train, X_test, t_test)
Out[9]:
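One possible way to plot this sweep, as asked for above (a sketch; the variable name results is our own and simply stores the DataFrame returned by reg_coefficient_comparison):

# Plot accuracy and balanced accuracy against the regularization coefficient.
results = reg_coefficient_comparison(reg_coefficients, X_train, t_train, X_test, t_test)
plt.plot(results['regularization coefficient'], results['accuracy'], marker='o', label='accuracy')
plt.plot(results['regularization coefficient'], results['balanced accuracy'], marker='s', label='balanced accuracy')
plt.xlabel(r'regularization coefficient $\lambda$')
plt.ylabel('score')
plt.legend()
plt.show()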
Here we discuss possible approaches to improve our predictions. We made the decision to set the basis functions to the simplest choice $\phi_0(\mathbf{x}) = x_0, \dots, \phi_8(\mathbf{x}) = x_8$. It is possible that making use of nonlinear basis functions, for instance polynomial basis functions, may improve our predictive ability (a brief sketch appears at the end of this section). This raises the question of how to choose appropriate basis functions, given that for data in more than 2 or 3 dimensions it is difficult to make choices based on straightforward visualization. From the description of the dataset we also know that there was missing data:
"Until 02/28/2011 this web page indicated that there were no missing values in the dataset. As pointed out by a repository user, this cannot be true: there are zeros in places where they are biologically impossible, such as the blood pressure attribute. It seems very likely that zero values encode missing data. However, since the dataset donors made no such statement we encourage you to use your best judgement and state your assumptions."
It is likely that if our dataset were more complete, our model would have stronger predictive abilities.
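A minimal sketch of one such nonlinear expansion, appending squared features to the design matrix and reusing the functions defined above (the helper add_quadratic_features and the choice c=1 are our own, and whether this actually improves performance is not evaluated here):

def add_quadratic_features(X):
    """
    Appends element-wise squares of the non-constant columns (hypothetical helper).
    """
    squares = X.drop('ones', axis=1) ** 2
    squares.columns = [name + '^2' for name in squares.columns]
    return pd.concat([X, squares], axis=1)


X_train_quad = add_quadratic_features(X_train)
X_test_quad = add_quadratic_features(X_test)
w_quad = train(X_train_quad, t_train, c=1)
cm_quad = confusion_matrix(predict(w_quad, X_test_quad), t_test)
[accuracy(cm_quad), balanced_accuracy(cm_quad)]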