In this example, we'll be training a neural network using particle swarm optimization. For this we'll be using the standard global-best PSO pyswarms.single.GlobalBestPSO for optimizing the network's weights and biases. This aims to demonstrate how the API is capable of handling custom-defined objective functions.
For this example, we'll try to classify the three iris species in the Iris Dataset.
In [1]:
# Import modules
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
# Import PySwarms
import pyswarms as ps
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
First, we'll load the dataset from scikit-learn. The Iris Dataset contains 3 classes, one for each of the iris species (iris setosa, iris virginica, and iris versicolor). It has 50 samples per class, with 150 samples in total, making it a very balanced dataset. Each sample is characterized by four features (or dimensions): sepal length, sepal width, petal length, and petal width.
In [9]:
data = load_iris()
In [10]:
# Store the features as X and the labels as y
X = data.data
y = data.target
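As a quick sanity check (a small sketch, not one of the original notebook cells), we can confirm the shapes described above:

# Sanity check: 150 samples, 4 features, labels in {0, 1, 2}
print(X.shape)       # (150, 4)
print(y.shape)       # (150,)
print(np.unique(y))  # [0 1 2]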
Recall that a neural network can simply be seen as a mapping function from one space to another. For now, we'll build a simple neural network with the following characteristics:
- Input layer size: 4
- Hidden layer size: 20 (activation: tanh)
- Output layer size: 3 (activation: softmax)
Things we'll do:
- Create a forward_prop method that will do forward propagation for one particle.
- Create an f() method that will compute forward_prop() for the whole swarm.

What we'll be doing then is to create a swarm with a number of dimensions equal to the number of weights and biases. We will unroll these parameters into an n-dimensional array, and have each particle take on different values. Thus, each particle represents a candidate neural network with its own weights and biases. When feeding back to the network, we will reconstruct the learned weights and biases.
When rolling back the parameters into weights and biases, it is useful to recall the shapes of the weight and bias matrices:
- W1 (input-to-hidden weights): shape (4, 20)
- b1 (hidden layer biases): shape (20,)
- W2 (hidden-to-output weights): shape (20, 3)
- b2 (output layer biases): shape (3,)
By unrolling them together, we have $(4 * 20) + (20 * 3) + 20 + 3 = 163$ parameters, or 163 dimensions for each particle in the swarm.
The negative log-likelihood will be used to compute the error between the ground-truth values and the predictions. Also, because PSO doesn't rely on gradients, we won't be performing backpropagation (which may be a good or a bad thing depending on the circumstances).
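Concretely, if $p_{i, y_i}$ is the predicted (softmax) probability of the correct class for sample $i$ and $N$ is the number of samples, the loss computed in the code below is

$$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \log p_{i, y_i}$$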
Now, let's write the forward propagation procedure as our objective function. Let $X$ be the input, $z_l$ the pre-activation at layer $l$, and $a_l$ the activation for layer $l$:

$$z_1 = X W_1 + b_1, \qquad a_1 = \tanh(z_1), \qquad z_2 = a_1 W_2 + b_2$$

The logits $z_2$ are then passed through a softmax to obtain class probabilities.
In [11]:
n_inputs = 4
n_hidden = 20
n_classes = 3
num_samples = 150
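The roll-back code below slices a flat parameter vector at fixed offsets. As a reference, here is a small sketch (derived from the layer sizes above) of where each block of parameters lives:

# Offsets into the flat 163-dimensional parameter vector
end_W1 = n_inputs * n_hidden            # 80:  W1 is p[0:80],    reshaped to (4, 20)
end_b1 = end_W1 + n_hidden              # 100: b1 is p[80:100],  shape (20,)
end_W2 = end_b1 + n_hidden * n_classes  # 160: W2 is p[100:160], reshaped to (20, 3)
end_b2 = end_W2 + n_classes             # 163: b2 is p[160:163], shape (3,)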
In [16]:
def logits_function(p):
    """Roll back the weights and biases and calculate the logits

    Inputs
    ------
    p: np.ndarray
        The dimensions should include an unrolled version of the
        weights and biases.

    Returns
    -------
    numpy.ndarray of logits for layer 2
    """
    # Roll-back the weights and biases
    W1 = p[0:80].reshape((n_inputs, n_hidden))
    b1 = p[80:100].reshape((n_hidden,))
    W2 = p[100:160].reshape((n_hidden, n_classes))
    b2 = p[160:163].reshape((n_classes,))

    # Perform forward propagation
    z1 = X.dot(W1) + b1       # Pre-activation in Layer 1
    a1 = np.tanh(z1)          # Activation in Layer 1
    logits = a1.dot(W2) + b2  # Pre-activation in Layer 2
    return logits             # Logits for Layer 2
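As a quick check (a sketch, not part of the original notebook), feeding a random 163-dimensional vector through logits_function should produce one row of logits per sample:

# Sketch: logits for a random parameter vector have shape (num_samples, n_classes)
p_test = np.random.randn(163)
print(logits_function(p_test).shape)  # (150, 3)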
In [22]:
# Forward propagation
def forward_prop(params):
    """Forward propagation as objective function

    This computes the forward propagation of the neural network, as
    well as the loss.

    Inputs
    ------
    params: np.ndarray
        The dimensions should include an unrolled version of the
        weights and biases.

    Returns
    -------
    float
        The computed negative log-likelihood loss given the parameters
    """
    logits = logits_function(params)

    # Compute the softmax of the logits
    exp_scores = np.exp(logits)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    # Compute the negative log-likelihood
    correct_logprobs = -np.log(probs[range(num_samples), y])
    loss = np.sum(correct_logprobs) / num_samples
    return loss
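Similarly (again just a sketch), the objective for a single random particle is a scalar loss:

# Sketch: the loss for one random particle is a single positive float
print(forward_prop(np.random.randn(163)))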
Now that we have a method to do forward propagation for one particle (or for one set of dimensions), we can then create a higher-level method to compute forward_prop() for the whole swarm:
In [23]:
def f(x):
    """Higher-level method to do forward_prop in the
    whole swarm.

    Inputs
    ------
    x: numpy.ndarray of shape (n_particles, dimensions)
        The swarm that will perform the search

    Returns
    -------
    numpy.ndarray of shape (n_particles, )
        The computed loss for each particle
    """
    n_particles = x.shape[0]
    j = [forward_prop(x[i]) for i in range(n_particles)]
    return np.array(j)
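A quick way to check the shape contract (a sketch, assuming a swarm of 10 random particles):

# Sketch: evaluating the objective on a random swarm gives one loss per particle
swarm_test = np.random.randn(10, 163)
print(f(swarm_test).shape)  # (10,)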
In [ ]:
%%time
# Initialize swarm
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
dimensions = (n_inputs * n_hidden) + (n_hidden * n_classes) + n_hidden + n_classes
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options)
# Perform optimization
cost, pos = optimizer.optimize(f, iters=1000)
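If you want to inspect convergence, the optimizer records the best cost per iteration. A minimal sketch using PySwarms' plotting helper (assuming pyswarms.utils.plotters is available in your installed version):

# Sketch: plot the best cost found at each iteration
from pyswarms.utils.plotters import plot_cost_history
plot_cost_history(cost_history=optimizer.cost_history)
plt.show()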
We can then check the accuracy by performing forward propagation once again to create a set of predictions, and then checking which predictions are correct. For the logits, we take the argmax. Recall that the softmax function returns probabilities that sum to 1 over the classes; we simply take the class with the highest probability and treat it as the network's prediction.
Moreover, we let the best position vector found by the swarm be the weight and bias parameters of the network.
In [20]:
def predict(pos):
    """
    Use the trained weights to perform class predictions.

    Inputs
    ------
    pos: numpy.ndarray
        Position matrix found by the swarm. Will be rolled
        into weights and biases.
    """
    logits = logits_function(pos)
    y_pred = np.argmax(logits, axis=1)
    return y_pred
And from this we can just compute the accuracy. We perform predictions, compare them with the ground-truth values y, and take the mean.
In [21]:
(predict(pos) == y).mean()
Out[21]: