This notebook illustrates how to train a Support Vector Machine (SVM) classifier using Shogun. The LibSVM class of Shogun is used for binary classification. Multiclass classification is also demonstrated, using GMNPSVM.
Support Vector Machines (SVMs) are a learning method used for binary classification. The basic idea is to find a hyperplane which separates the data into its two classes. However, since example data is often not linearly separable, SVMs operate in a kernel-induced feature space, i.e., the data is embedded into a higher-dimensional space where it is linearly separable.
In a supervised learning problem, we are given a labeled set of input-output pairs $\mathcal{D}=\{(x_i,y_i)\}^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in\{-1,+1\}$. SVM is a binary classifier that tries to separate objects of different classes by finding a (hyper-)plane such that the margin between the two classes is maximized. A hyperplane in $\mathbb{R}^D$ can be parameterized by a vector $\bf{w}$ and a constant $\text b$, expressed in the equation: $${\bf w}\cdot{\bf x} + \text{b} = 0$$ Given such a hyperplane ($\bf w$,b) that separates the data, the discriminating function is: $$f(x) = \text {sign} ({\bf w}\cdot{\bf x} + {\text b})$$
If the training data are linearly separable, we can select two hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations $$({\bf w}\cdot{\bf x} + {\text b}) = 1$$ $$({\bf w}\cdot{\bf x} + {\text b}) = -1$$ The distance between these two hyperplanes is $\frac{2}{\|\mathbf{w}\|}$, so we want to minimize $\|\mathbf{w}\|$: $$ \arg\min_{(\mathbf{w},b)}\frac{1}{2}\|\mathbf{w}\|^2 \qquad\qquad(1)$$ This gives us a hyperplane that maximizes the geometric distance to the closest data points. As we also have to prevent data points from falling into the margin, we add the following constraint: for each ${i}$ either $$({\bf w}\cdot{x}_i + {\text b}) \geq 1$$ or $$({\bf w}\cdot{x}_i + {\text b}) \leq -1,$$ which is equivalent to $${y_i}({\bf w}\cdot{x}_i + {\text b}) \geq 1 \quad \forall i$$
Lagrange multipliers are used to modify equation $(1)$ and the corresponding dual of the problem can be shown to be:
\begin{eqnarray*} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j {\bf x_i} \cdot {\bf x_j}\\ \mbox{s.t.} && \alpha_i\geq 0\\ && \sum_{i}^{N} \alpha_i y_i=0\\ \end{eqnarray*}
From the derivation of these equations, it can be seen that the optimal hyperplane can be written as: $$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i. $$ Here most $\alpha_i$ turn out to be zero, which means that the solution is a sparse linear combination of the training data.
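To make the preceding formulas concrete, here is a tiny NumPy-only sketch (not part of the Shogun workflow; the $\alpha_i$, labels and points are purely hypothetical) that reconstructs $\mathbf{w}$ from the dual coefficients, evaluates the discriminant $f(x)$ and computes the margin width $\frac{2}{\|\mathbf{w}\|}$.
In [ ]:
import numpy as np

# Hypothetical toy values, purely illustrative (not from any trained model)
X = np.array([[1.0, 2.0], [2.0, 0.5], [-1.0, -1.5]])  # rows are training points x_i
y = np.array([1.0, 1.0, -1.0])                        # labels y_i in {-1, +1}
alpha = np.array([0.3, 0.0, 0.3])                     # dual coefficients (most are zero)
b = -0.5                                              # bias term

# w = sum_i alpha_i * y_i * x_i  (the sparse expansion above)
w = (alpha * y) @ X

def f(x):
    # discriminating function: sign(w . x + b)
    return np.sign(w @ x + b)

print("w =", w, " margin width 2/||w|| =", 2/np.linalg.norm(w))
print("f([2, 2]) =", f(np.array([2.0, 2.0])))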
Now let us see how one can train a linear Support Vector Machine with Shogun. Two-dimensional data (with two attributes, say attribute1 and attribute2) is now sampled to demonstrate the classification.
In [ ]:
import matplotlib.pyplot as plt
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import matplotlib.patches as patches
#To import all shogun classes
import shogun as sg
import numpy as np
#Generate some random data
X = 2 * np.random.randn(10,2)
traindata=np.r_[X + 3, X + 7].T
feats_train=sg.features(traindata)
trainlab=np.concatenate((np.ones(10),-np.ones(10)))
labels=sg.BinaryLabels(trainlab)
# Plot the training data
plt.figure(figsize=(6,6))
plt.gray()
_=plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.title("Training Data")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
p1 = patches.Rectangle((0, 0), 1, 1, fc="k")
p2 = patches.Rectangle((0, 0), 1, 1, fc="w")
plt.legend((p1, p2), ["Class 1", "Class 2"], loc=2)
plt.gray()
LibLinear, a library for large-scale linear learning with a focus on SVMs, is used to do the classification. It supports different solver types.
In [ ]:
#parameters to svm
#parameter C is described in a later section.
C=1
epsilon=1e-3
svm=sg.machine('LibLinear', C1=C, C2=C, liblinear_solver_type='L2R_L2LOSS_SVC', epsilon=epsilon)
#train
svm.put('labels', labels)
svm.train(feats_train)
w=svm.get('w')
b=svm.get('bias')
We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The weight vector and the bias are retrieved with svm.get('w') and svm.get('bias').
In [ ]:
#solve for w.x+b=0
x1=np.linspace(-1.0, 11.0, 100)
def solve(x1):
    return -((w[0]*x1 + b)/w[1])
x2=list(map(solve, x1))
#plot
plt.figure(figsize=(6,6))
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.plot(x1,x2, linewidth=2)
plt.title("Separating hyperplane")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
The classifier is now applied to an X-Y grid of points to get predictions.
In [ ]:
size=100
x1_=np.linspace(-5, 15, size)
x2_=np.linspace(-5, 15, size)
x, y=np.meshgrid(x1_, x2_)
#Generate X-Y grid test data
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#Distance from hyperplane
z=predictions.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
#Class predictions
z=predictions.get('labels').reshape((size, size))
#plot
plt.subplot(122)
plt.title("Separating hyperplane")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
If the data set is not linearly separable, a non-linear mapping $\Phi:{\bf x} \rightarrow \Phi({\bf x}) \in \mathcal{F} $ is used. This maps the data into a higher-dimensional space where it is linearly separable. Our equation requires only the dot products ${\bf x_i}\cdot{\bf x_j}$, so it can equally be defined in terms of the dot products $\Phi({\bf x_i}) \cdot \Phi({\bf x_j})$ instead. Since $\Phi({\bf x_i})$ occurs only in dot products with $ \Phi({\bf x_j})$, it is sufficient to know the formula (kernel function): $$K({\bf x_i, x_j} ) = \Phi({\bf x_i}) \cdot \Phi({\bf x_j})$$ without dealing with the mapping directly (a small numerical check of this identity follows the list of kernels below). The transformed optimisation problem is:
\begin{eqnarray*} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && \alpha_i\geq 0\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \qquad\qquad(2)\\ \end{eqnarray*}Shogun provides many options for the above-mentioned kernel functions. Kernel is the base class for kernels. Some commonly used kernels:
Gaussian kernel : Popular Gaussian kernel computed as $k({\bf x},{\bf x'})= \exp(-\frac{||{\bf x}-{\bf x'}||^2}{\tau})$
Linear kernel : Computes $k({\bf x},{\bf x'})= {\bf x}\cdot {\bf x'}$
Polynomial kernel : Polynomial kernel computed as $k({\bf x},{\bf x'})= ({\bf x}\cdot {\bf x'}+c)^d$
Sigmoid kernel : Computes $k({\bf x},{\bf x'})=\mbox{tanh}(\gamma {\bf x}\cdot{\bf x'}+c)$
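As a quick numerical check of the kernel identity above (a sketch independent of Shogun, using two hypothetical 2-D points), the degree-2 polynomial kernel $({\bf x}\cdot{\bf x'}+c)^2$ equals an ordinary dot product after an explicit feature map $\Phi$:
In [ ]:
import numpy as np

def poly2_kernel(x, xp, c=1.0):
    # k(x, x') = (x . x' + c)^2
    return (x @ xp + c)**2

def phi(x, c=1.0):
    # explicit feature map for the degree-2 polynomial kernel on 2-D inputs
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2)*x1*x2,
                     np.sqrt(2*c)*x1, np.sqrt(2*c)*x2, c])

x  = np.array([0.5, -1.0])     # hypothetical points, just for the check
xp = np.array([2.0,  0.3])
print(poly2_kernel(x, xp))     # kernel value
print(phi(x) @ phi(xp))        # the same value via the explicit map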
Some of these kernels are initialised below.
In [ ]:
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(100))
#Polynomial kernel of degree 2
poly_kernel=sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
Just for fun we compute the kernel matrices and display them. There are clusters visible that are smooth for the Gaussian and polynomial kernels and block-wise for the linear one. The Gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
In [ ]:
plt.jet()
def display_km(kernels, svm):
    plt.figure(figsize=(20,6))
    plt.suptitle('Kernel matrices for different kernels', fontsize=12)
    for i, kernel in enumerate(kernels):
        kernel.init(feats_train, feats_train)
        plt.subplot(1, len(kernels), i+1)
        plt.title(kernel.get_name())
        km=kernel.get_kernel_matrix()
        plt.imshow(km, interpolation="nearest")
        plt.colorbar()

display_km(kernels, svm)
Now we train an SVM with a Gaussian kernel. We use LibSVM, but we could use any of the other SVMs in Shogun. They all utilize the same kernel framework and so are drop-in replacements.
In [ ]:
C=1
epsilon=1e-3
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train()
We can now check a number of properties, such as the value of the objective function returned by the particular SVM learning algorithm, or the explicitly computed primal and dual objective functions.
In [ ]:
libsvm_obj = svm.get('objective')
primal_obj, dual_obj = sg.as_svm(svm).compute_svm_primal_objective(), sg.as_svm(svm).compute_svm_dual_objective()
print(libsvm_obj, primal_obj, dual_obj)
and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of the convergence quality of the SVM training algorithm. In theory it is 0 at the optimum and in practice at least close to 0.
In [ ]:
print("duality_gap", dual_obj-primal_obj)
Let's now apply the trained SVM to the X-Y grid data and plot the results.
In [ ]:
out=svm.apply(grid)
z=out.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
z=out.get('labels').reshape((size, size))
plt.subplot(122)
plt.title("Decision boundary")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Calibrated probabilities can be generated in addition to class predictions using the scores_to_probabilities() method of BinaryLabels, which implements the method described in [3]. This should only be used in conjunction with an SVM. A parametric form of a sigmoid function $$\frac{1}{1+\exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, and $a$ and $b$ are the parameters of the sigmoid. This gives us the posterior probabilities $p(y=1|f(x))$.
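The mapping itself is just a logistic function applied to the signed distance. The following sketch uses hypothetical values for $a$ and $b$ (in the actual workflow they are fitted from the SVM outputs by scores_to_probabilities(), as in the cell further below):
In [ ]:
import numpy as np

def platt_sigmoid(f_x, a, b):
    # p(y=1 | f(x)) = 1 / (1 + exp(a*f(x) + b))
    return 1.0/(1.0 + np.exp(a*f_x + b))

# hypothetical sigmoid parameters; the real ones are fitted from the SVM outputs
a_toy, b_toy = -2.0, 0.0
scores = np.linspace(-3, 3, 7)                 # signed distances from the hyperplane
print(platt_sigmoid(scores, a_toy, b_toy))     # increases monotonically with the score for a < 0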
Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
In [ ]:
n=10
x1t_=np.linspace(-5, 15, n)
x2t_=np.linspace(-5, 15, n)
xt, yt=np.meshgrid(x1t_, x2t_)
#Generate X-Y grid test data
test_grid=sg.features(np.array((np.ravel(xt), np.ravel(yt))))
labels_out=svm.apply(test_grid)
#Get values (Distance from hyperplane)
values=labels_out.get('current_values')
#Get probabilities
labels_out.scores_to_probabilities()
prob=labels_out.get('current_values')
#plot
plt.gray()
plt.figure(figsize=(10,6))
p1=plt.scatter(values, prob)
plt.title('Probabilistic outputs')
plt.xlabel('Distance from hyperplane')
plt.ylabel('Probability')
plt.legend([p1], ["Test samples"], loc=2)
If there is no clear separation possible using a hyperplane, we still want to classify the data as well as possible while accounting for the misclassified samples. To do this, the concept of a soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$: $$ y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
Introducing a linear penalty function leads to $$\arg\min_{\mathbf{w},\mathbf{\xi}, b } \left(\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^{N} \xi_i\right)$$
In its dual form this leads to a slightly modified version of equation $(2)$: \begin{eqnarray*} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && 0\leq\alpha_i\leq C\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \\ \end{eqnarray*}
The result is that a soft-margin SVM may choose a decision boundary with non-zero training error even if the dataset is linearly separable, but it is less likely to overfit.
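To illustrate how the slack variables enter the primal objective, here is a small NumPy-only sketch (independent of Shogun) that computes $\xi_i = \max(0,\, 1 - y_i(\mathbf{w}\cdot\mathbf{x}_i + b))$ and the penalized objective for a hypothetical hyperplane and a few hypothetical points:
In [ ]:
import numpy as np

# hypothetical hyperplane, regularization constant and data, purely for illustration
w_toy, b_toy, C_toy = np.array([1.0, -1.0]), 0.0, 1.0
X_toy = np.array([[2.0, 0.0], [0.5, 0.2], [-1.0, 1.0], [0.2, -0.1]])  # rows are points x_i
y_toy = np.array([1.0, 1.0, -1.0, -1.0])

margins = y_toy*(X_toy @ w_toy + b_toy)              # y_i (w . x_i + b)
xi = np.maximum(0.0, 1.0 - margins)                  # slack: zero for points outside the margin
objective = 0.5*(w_toy @ w_toy) + C_toy*xi.sum()     # (1/2)||w||^2 + C * sum_i xi_i

print("slacks:", xi)
print("soft-margin objective:", objective)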
Here's an example using LibSVM on the data set used above. Highlighted points show the support vectors. This should visually show the impact of C and how it controls the number of outliers on the wrong side of the hyperplane.
In [ ]:
def plot_sv(C_values):
    plt.figure(figsize=(20,6))
    plt.suptitle('Soft and hard margins with varying C', fontsize=12)
    for i in range(len(C_values)):
        plt.subplot(1, len(C_values), i+1)
        linear_kernel=sg.LinearKernel(feats_train, feats_train)
        svm1 = sg.machine('LibSVM', C1=C_values[i], C2=C_values[i], kernel=linear_kernel, labels=labels)
        svm1 = sg.as_svm(svm1)
        svm1.train()
        vec1=svm1.get_support_vectors()
        X_=[]
        Y_=[]
        new_labels=[]
        for j in vec1:
            X_.append(traindata[0][j])
            Y_.append(traindata[1][j])
            new_labels.append(trainlab[j])
        out1=svm1.apply(grid)
        z1=out1.get_labels().reshape((size, size))
        plt.jet()
        c=plt.pcolor(x1_, x2_, z1)
        plt.contour(x1_ , x2_, z1, linewidths=1, colors='black')
        plt.colorbar(c)
        plt.gray()
        plt.scatter(X_, Y_, c=new_labels, s=150)
        plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=20)
        plt.title('Support vectors for C=%.2f'%C_values[i])
        plt.xlabel('attribute1')
        plt.ylabel('attribute2')

C_values=[0.1, 1000]
plot_sv(C_values)
You can see that a lower value of C causes the classifier to sacrifice linear separability in order to gain stability, in the sense that the influence of any single datapoint is now bounded by C. For a hard-margin SVM, the support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to a hard-margin SVM, and you can see that the highlighted points are the ones that touch the margin. In high dimensions this might lead to overfitting. For a soft-margin SVM with a lower value of C, it is easier to explain them in terms of the dual variables (equation $(2)$): support vectors are the datapoints from the training set which are included in the predictor, i.e., the ones with a non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
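As a rough check of this interpretation (a sketch, not part of the original workflow), we can compare the support vector indices reported by LibSVM with the training points satisfying $y_i f(x_i) \le 1$; it reuses only calls already seen above (sg.machine, sg.as_svm, get_support_vectors, apply, get_values):
In [ ]:
# Sketch: up to numerical tolerance, the support vectors should be exactly the
# training points with y_i * f(x_i) <= 1, i.e. margin points and margin errors.
lin_kernel_check = sg.kernel('LinearKernel')
lin_kernel_check.init(feats_train, feats_train)
for C_check in [0.1, 1000]:
    svm_check = sg.as_svm(sg.machine('LibSVM', C1=C_check, C2=C_check,
                                     kernel=lin_kernel_check, labels=labels))
    svm_check.train()
    f_train = svm_check.apply(feats_train).get_values()   # signed outputs f(x_i)
    in_or_on_margin = np.where(trainlab*f_train <= 1 + 1e-6)[0]
    print('C=%g: %d support vectors, %d training points with y*f(x) <= 1'
          % (C_check, len(svm_check.get_support_vectors()), len(in_or_on_margin)))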
Two-dimensional Gaussians are generated as data for this section: $$x_-\sim{\cal N_2}(0,1)-d$$ $$x_+\sim{\cal N_2}(0,1)+d$$ together with corresponding negative and positive labels. We create traindata and trainlab with num points each being negatively and positively labelled. For that we utilize Shogun's Gaussian Mixture Model class (GMM), from which we sample the data points and plot them.
In [ ]:
num=50;
dist=1.0;
gmm=sg.GMM(2)
gmm.set_nth_mean(np.array([-dist,-dist]),0)
gmm.set_nth_mean(np.array([dist,dist]),1)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),1)
gmm.put('m_coefficients', np.array([1.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
gmm.set_coef(np.array([0.0,1.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
traindata=np.concatenate((xntr,xptr), axis=1)
trainlab=np.concatenate((-np.ones(num), np.ones(num)))
#shogun format features
feats_train=sg.features(traindata)
labels=sg.BinaryLabels(trainlab)
In [ ]:
gaussian_kernel = sg.kernel("GaussianKernel", log_width=np.log(10))
#Polynomial kernel of degree 2
poly_kernel = sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel = sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
In [ ]:
#train machine
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train(feats_train)
Now let's plot the contour output on a $-5...+5$ grid, both for the SVM classification output and, for comparison, for the original distribution of the Gaussian mixture model.
In [ ]:
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
grid_out=svm.apply(grid)
z=grid_out.get('labels').reshape((size, size))
plt.jet()
plt.figure(figsize=(16,5))
z=grid_out.get_values().reshape((size, size))
plt.subplot(121)
plt.title('Classification')
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
plt.subplot(122)
plt.title('Original distribution')
gmm.put('m_coefficients', np.array([1.0,0.0]))
gmm.set_features(grid)
grid_out=gmm.get_likelihood_for_all_examples()
zn=grid_out.reshape((size, size))
gmm.set_coef(np.array([0.0,1.0]))
grid_out=gmm.get_likelihood_for_all_examples()
zp=grid_out.reshape((size, size))
z=zp-zn
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black')
plt.colorbar(c)
And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function that maximizes the margin, the SVM output at least remotely resembles the original distribution of the Gaussian mixture model.
Let us visualise the output using different kernels.
In [ ]:
def plot_outputs(kernels):
    plt.figure(figsize=(20,5))
    plt.suptitle('Binary Classification using different kernels', fontsize=12)
    for i in range(len(kernels)):
        plt.subplot(1,len(kernels),i+1)
        plt.title(kernels[i].get_name())
        svm.put('kernel', kernels[i])
        svm.train()
        grid_out=svm.apply(grid)
        z=grid_out.get_values().reshape((size, size))
        c=plt.pcolor(x, y, z)
        plt.contour(x, y, z, linewidths=1, colors='black')
        plt.colorbar(c)
        plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)

plot_outputs(kernels)
Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel-based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel normalization is not, strictly speaking, a form of preprocessing, since it is not applied directly to the input vectors, but it can be seen as a kernel interpretation of preprocessing. The KernelNormalizer class provides tools for kernel normalization. Some of the kernel normalizers in Shogun:
SqrtDiagKernelNormalizer : This normalization in the feature space amounts to defining a new kernel $k'({\bf x},{\bf x'}) = \frac{k({\bf x},{\bf x'})}{\sqrt{k({\bf x},{\bf x})k({\bf x'},{\bf x'})}}$ (a small NumPy sketch of this formula is shown below)
AvgDiagKernelNormalizer : Scaling with a constant $k({\bf x},{\bf x'})= \frac{1}{c}\cdot k({\bf x},{\bf x'})$
ZeroMeanCenterKernelNormalizer : Centers the kernel in feature space and ensures that each feature has zero mean after centering.
The set_normalizer() method of Kernel is used to add a normalizer.
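To see what the SqrtDiagKernelNormalizer formula does numerically, here is a small sketch that applies it directly to a kernel matrix of the 2-D toy data still held in feats_train, using get_kernel_matrix() as earlier; after normalization all diagonal entries equal 1:
In [ ]:
# Sketch: apply k'(x, x') = k(x, x') / sqrt(k(x, x) * k(x', x')) to a kernel matrix
poly_kernel.init(feats_train, feats_train)
K = poly_kernel.get_kernel_matrix()
d = np.sqrt(np.diag(K))
K_normalized = K/np.outer(d, d)
print(np.diag(K_normalized))   # all ones: every point has unit norm in feature space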
Let us try it out on the ionosphere dataset, where we use a small training set of 30 samples to train our SVM. A Gaussian kernel is used with and without normalization. See reference [1] for details.
In [ ]:
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/ionosphere/ionosphere.data'))
mat = []
labels = []
# read data from file
for line in f:
    words = line.rstrip().split(',')
    mat.append([float(i) for i in words[0:-1]])
    if str(words[-1])=='g':
        labels.append(1)
    else:
        labels.append(-1)
f.close()
mat_train=mat[:30]
mat_test=mat[30:110]
lab_train=sg.BinaryLabels(np.array(labels[:30]).reshape((30,)))
lab_test=sg.BinaryLabels(np.array(labels[30:110]).reshape((len(labels[30:110]),)))
feats_train = sg.features(np.array(mat_train).T)
feats_test = sg.features(np.array(mat_test).T)
In [ ]:
#without normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
gaussian_kernel.init(feats_train, feats_train)
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=lab_train)
_=svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error:', error)
#set normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
# TODO: currently there is a bug that makes it impossible to use Gaussian kernels and kernel normalisers
# See github issue #3504
#gaussian_kernel.set_normalizer(sg.SqrtDiagKernelNormalizer())
gaussian_kernel.init(feats_train, feats_train)
svm.put('kernel', gaussian_kernel)
svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error with normalization:', error)
Multiclass classification can be done with SVMs by reducing the problem to binary classification. More on multiclass reductions can be found in this notebook. The GMNPSVM class provides built-in one-vs-rest multiclass classification using GMNPlib. Let us see classification using it on four classes (a hand-rolled sketch of the one-vs-rest reduction is also shown below, once the data has been generated). The GMM class is used to sample the data.
In [ ]:
num=30;
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1.5,1.5]
means[1]=[1.5,-1.5]
means[2]=[-1.5,-1.5]
means[3]=[1.5,1.5]
covs=np.array([[1.0,0.0],[0.0,1.0]])
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.put('m_coefficients', np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
xnte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in range(num)]).T
xnte1=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
xpte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in range(num)]).T
xpte1=np.array([gmm.sample() for i in range(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
l0 = np.array([0.0 for i in range(num)])
l1 = np.array([1.0 for i in range(num)])
l2 = np.array([2.0 for i in range(num)])
l3 = np.array([3.0 for i in range(num)])
trainlab=np.concatenate((l0,l1,l2,l3))
testlab=np.concatenate((np.zeros(5000), np.ones(5000), 2*np.ones(5000), 3*np.ones(5000)))
plt.title('Toy data for multiclass classification')
plt.jet()
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=75)
In [ ]:
feats_train=sg.features(traindata)
labels=sg.MulticlassLabels(trainlab)
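Before handing the problem to GMNPSVM, here is a minimal hand-rolled sketch of the one-vs-rest reduction (for illustration only; GMNPSVM performs the multiclass handling internally): one binary LibSVM per class, trained on "this class vs. the rest", with prediction by taking the class whose machine produces the largest output. It only reuses calls already used in this notebook.
In [ ]:
# Sketch of one-vs-rest: train one binary SVM per class and predict via argmax
def one_vs_rest_predict(train_feats, train_lab, test_feats, num_classes, C=1.0):
    scores = []
    for k in range(num_classes):
        # binary problem: class k vs. the rest
        binary_lab = sg.BinaryLabels(np.where(train_lab == k, 1.0, -1.0))
        kern = sg.kernel("GaussianKernel", log_width=np.log(2))
        kern.init(train_feats, train_feats)
        machine = sg.machine('LibSVM', C1=C, C2=C, kernel=kern, labels=binary_lab)
        machine.train()
        scores.append(machine.apply(test_feats).get_values())   # signed outputs
    return np.argmax(np.array(scores), axis=0)

pred = one_vs_rest_predict(feats_train, trainlab, feats_train, num_components)
print('one-vs-rest training accuracy: %.2f' % np.mean(pred == trainlab))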
Let us try the multiclass classification for different kernels.
In [ ]:
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(2))
poly_kernel=sg.kernel('PolyKernel', degree=4, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
In [ ]:
svm=sg.GMNPSVM(1, gaussian_kernel, labels)
_=svm.train(feats_train)
size=100
x1=np.linspace(-6, 6, size)
x2=np.linspace(-6, 6, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
def plot_outputs(kernels):
    plt.figure(figsize=(20,5))
    plt.suptitle('Multiclass Classification using different kernels', fontsize=12)
    for i in range(len(kernels)):
        plt.subplot(1,len(kernels),i+1)
        plt.title(kernels[i].get_name())
        svm.set_kernel(kernels[i])
        svm.train(feats_train)
        grid_out=svm.apply(grid)
        z=grid_out.get_labels().reshape((size, size))
        c=plt.pcolor(x, y, z)
        plt.contour(x, y, z, linewidths=1, colors='black')
        plt.colorbar(c)
        plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)

plot_outputs(kernels)
The distinguishing properties of the kernels are visible in these classification outputs.
[1] Classification in a Normalized Feature Space Using Support Vector Machines - Arnulf B. A. Graf, Alexander J. Smola, and Silvio Borer - IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 14, NO. 3, MAY 2003
[2] Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge University Press. ISBN 978-0-521-83378-3.
[3] Lin, H., Lin, C., and Weng, R. (2007). A note on Platt's probabilistic outputs for support vector machines.