# [Supervised Learning] SVM – Support Vector Machine explained with examples

This post is the first part of a two-part series about SVMs. This part clarifies how SVMs work in theory, with two fully developed examples. The second part (not published yet) will explain how to solve the problem with a computer: Quadratic Programming and SMO.


## Basic definition

Support vector machines (SVMs from now on) are supervised learning models used for classification and regression. The basic idea of SVMs in binary classification is to find the boundary that lies as far as possible from the closest samples of each class (the maximum-margin boundary).

Therefore, evaluating a simple mathematical function of each sample's coordinates (features) tells whether the sample belongs to one region (class) or the other. The number of input features determines the dimension of the problem. To keep it simple, the explanation uses examples in 2 dimensions.

## Mathematical explanation

The vector $\overrightarrow{w}$ is perpendicular to the boundary, but since the boundary's coefficients are unknown, $\overrightarrow{w}$'s coefficients are unknown as well. What we want to do is to calculate the boundary's coefficients with respect to a sample $\overrightarrow{x}$, because we do have its coordinates (the sample's coordinates).

If both vectors are multiplied (dot product $\overrightarrow{w} \cdot \overrightarrow{x}$), the result corresponds to the projection of the sample on the perpendicular direction (the purple vector in the figure).

Let us say that:

$$\overrightarrow{w} \cdot \overrightarrow{x_+} + b \geq 1$$

$$\overrightarrow{w} \cdot \overrightarrow{x_-} + b \leq -1$$

Where $\overrightarrow{x_+}$ is a positive sample (class A) and $\overrightarrow{x_-}$ is a negative sample (class B).
A new variable $y_i$ is now introduced:

$$y_i = +1 \text{ for positive samples}, \qquad y_i = -1 \text{ for negative samples}$$

Multiply each of them by the previous equations:

$$y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) \geq 1$$

The result is the same equation in both cases. Therefore, we only need the previous formula:

$$y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 \geq 0$$

Finally, we add an additional constraint so that the samples that fulfill it fall exactly on the edges of the gap between the two regions as depicted (green zone):

$$y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 = 0$$

The next step is to maximize the projection of $\overrightarrow{x_+} - \overrightarrow{x_-}$ on $\overrightarrow{w}$ (the black vector perpendicular to the boundary) to keep the samples of each class as far apart as possible. I assume that you know about scalar projection, but if you don't, you can check out Appendix A. The length of the projection (the width of the margin) is given by the following formula:

$$width = (\overrightarrow{x_+} - \overrightarrow{x_-}) \cdot \frac{\overrightarrow{w}}{\lVert \overrightarrow{w} \rVert}$$

From the previous formula $y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 = 0$, let us now substitute both the positive and the negative samples $\overrightarrow{x_+}$ and $\overrightarrow{x_-}$, so that:

$$\overrightarrow{x_+} \cdot \overrightarrow{w} = 1 - b$$

$$\overrightarrow{x_-} \cdot \overrightarrow{w} = -1 - b$$

Therefore:

$$width = \frac{(1 - b) - (-1 - b)}{\lVert \overrightarrow{w} \rVert} = \frac{2}{\lVert \overrightarrow{w} \rVert}$$

The goal is to maximize $\frac{2}{\lVert \overrightarrow{w} \rVert}$, which is the same as minimizing $\lVert \overrightarrow{w} \rVert$ or, to make it more mathematically convenient, minimizing $\frac{1}{2} \lVert \overrightarrow{w} \rVert^2$.

Thus, we have a function to minimize with a constraint ($y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 = 0$), so Lagrange multipliers are applied. In case you want to know more about Lagrange multipliers, you can check Appendix B.

First we have the function we want to minimize, and then the constraints:

$$\min \; \frac{1}{2} \lVert \overrightarrow{w} \rVert^2 \qquad \text{subject to} \qquad y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 = 0$$

Plug these two functions into the Lagrange function $L$:

$$L = \frac{1}{2} \lVert \overrightarrow{w} \rVert^2 - \sum_i \alpha_i \left[ y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 \right]$$

Setting the partial derivatives with respect to $\overrightarrow{w}$ and $b$ to zero gives:

$$\frac{\partial L}{\partial \overrightarrow{w}} = \overrightarrow{w} - \sum_i \alpha_i y_i \overrightarrow{x_i} = 0 \;\Rightarrow\; \overrightarrow{w} = \sum_i \alpha_i y_i \overrightarrow{x_i} \qquad \qquad \frac{\partial L}{\partial b} = - \sum_i \alpha_i y_i = 0$$

Hence, substituting $\overrightarrow{w}$ back, we aim to minimize:

$$- \sum_i \alpha_i + \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j (\overrightarrow{x_i} \cdot \overrightarrow{x_j})$$

The optimization now depends only on the $\alpha_i$ and on the dot products of pairs of samples, $\overrightarrow{x_i} \cdot \overrightarrow{x_j}$.
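To make the dual form tangible before the SMO post, here is a minimal sketch that hands the objective above to a generic optimizer. Everything in it is an illustrative assumption: the four 2-D points are made up, and scipy's SLSQP solver stands in for the QP/SMO machinery that the second part will cover.

```python
# Minimal sketch: solve the SVM dual numerically for a tiny, made-up dataset.
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-D samples and labels (y in {+1, -1}); not from the post.
X = np.array([[1.0, 1.0], [2.0, 3.0], [3.0, 3.0], [4.0, 1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

# Coefficients y_i * y_j * (x_i . x_j) that multiply each alpha_i * alpha_j pair.
H = (y[:, None] * y[None, :]) * (X @ X.T)

def dual_objective(alpha):
    # -sum_i alpha_i + 1/2 * sum_ij alpha_i alpha_j y_i y_j (x_i . x_j)
    return -alpha.sum() + 0.5 * alpha @ H @ alpha

constraints = {"type": "eq", "fun": lambda a: a @ y}   # sum_i alpha_i y_i = 0
bounds = [(0, None)] * len(y)                          # alpha_i >= 0
res = minimize(dual_objective, np.zeros(len(y)), bounds=bounds, constraints=constraints)

alpha = res.x
w = (alpha * y) @ X                        # w = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                          # support vectors have alpha_i > 0
b = np.mean(y[sv] - X[sv] @ w)             # from y_i (x_i . w + b) - 1 = 0
print("alpha:", alpha.round(3), "w:", w.round(3), "b:", round(float(b), 3))
```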

## Kernel trick

One of the most interesting properties of SVMs is that we can transform a problem from its original number of dimensions into a higher-dimensional space. This flexibility, also known as the kernel trick, allows SVMs to classify problems that are not linearly separable.

The following example shows how to graphically solve the XOR problem using 3 dimensions.

Now it is not difficult to imagine a plane that separates the blue samples from the red ones.
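To make the trick concrete, here is a small sketch that lifts the four XOR points into 3 dimensions with the feature map $(x_1, x_2) \mapsto (x_1, x_2, x_1 x_2)$. This particular map is an assumption chosen for simplicity; the figure above may use a different one, but the effect is the same: the classes become separable by a plane.

```python
# Sketch: the XOR points are not linearly separable in 2-D, but adding a
# third feature x3 = x1 * x2 makes them separable by a single plane.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])              # XOR: +1 when exactly one input is 1

X3 = np.column_stack([X, X[:, 0] * X[:, 1]])   # map to (x1, x2, x1*x2)

# In 3-D, the plane x1 + x2 - 2*x3 = 0.5 puts each class on one side:
w, b = np.array([1.0, 1.0, -2.0]), -0.5
print(np.sign(X3 @ w + b))                # [-1.  1.  1. -1.], matching y
```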

## Example 1: 2 points in a plane

Points and class (coordinate x (x1), coordinate y (x2), class/output (y)):

Point 1:

Coordinates:
Class (output):

Point 2:

Coordinates:
Class (output):

We want to minimize:

$$- \sum_i \alpha_i + \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j (\overrightarrow{x_i} \cdot \overrightarrow{x_j})$$

We know that:

$$\overrightarrow{w} = \sum_i \alpha_i y_i \overrightarrow{x_i}$$

and

$$\sum_i \alpha_i y_i = 0$$

Let us calculate the second part of the function we want to minimize first to keep it simple:

Ergo:

Now let us calculate $\overrightarrow{w}$ (using $\overrightarrow{w} = \sum_i \alpha_i y_i \overrightarrow{x_i}$):

Now we have to figure out the bias $b$, using $y_i (\overrightarrow{x_i} \cdot \overrightarrow{w} + b) - 1 = 0$ on a support vector:

Solution =
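For comparison with the by-hand calculation, here is a sketch of the same two-point exercise done with a library solver. The coordinates below are hypothetical (the post's original points are not reproduced here), so the printed $\overrightarrow{w}$, $b$ and alphas only illustrate the procedure.

```python
# Sketch: a hard-margin SVM on two hypothetical points, via scikit-learn.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [3.0, 3.0]])    # hypothetical point 1 and point 2
y = np.array([-1, 1])                      # their classes

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ hard margin
print("w =", clf.coef_[0])                    # for these points: ~[0.5, 0.5]
print("b =", clf.intercept_[0])               # for these points: ~ -2.0
print("alpha_i * y_i =", clf.dual_coef_[0])   # dual coefficients of the support vectors
```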

## Example 2: 3 points in a plane

Points and class (coordinate x (x1), coordinate y (x2), class/output (y)):

Point 1:

Coordinates:
Class (output):

Point 2:

Coordinates:
Class (output):

Point 3:

Coordinates:
Class (output):

First let us calculate the second part of the function we want to minimize. You can see each pair of alphas being multiplied by two numbers. The first number is the product of $y_i$ and $y_j$. The second number is the dot product between the two coordinates, $\overrightarrow{x_i} \cdot \overrightarrow{x_j}$.

Result:
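To see that bookkeeping in code, the sketch below builds the table of coefficients $y_i y_j (\overrightarrow{x_i} \cdot \overrightarrow{x_j})$ that multiplies each $\alpha_i \alpha_j$ pair, using three hypothetical points (not the ones from this example).

```python
# Sketch: the coefficient that multiplies each alpha_i * alpha_j pair is
# y_i * y_j times the dot product x_i . x_j, here for three made-up points.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [4.0, 4.0]])   # hypothetical coordinates
y = np.array([-1.0, -1.0, 1.0])                      # hypothetical classes

yy = y[:, None] * y[None, :]      # products y_i * y_j
dots = X @ X.T                    # dot products x_i . x_j
print(yy * dots)                  # 3x3 table of coefficients
```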

## Appendix A: Scalar Projection of Vectors

We have two points and the vector between them:

We want to calculate the length of the vector's projection (purple) on the orange vector (which, by the way, is perpendicular to both green lines). The result is the blue vector, which lies along the orange one within the green region.

For this, we have to solve the formula (where $\overrightarrow{a}$ is the vector being projected and $\overrightarrow{b}$ is the vector we project onto):

$$length = \frac{\overrightarrow{a} \cdot \overrightarrow{b}}{\lVert \overrightarrow{b} \rVert} = \lVert \overrightarrow{a} \rVert \cos\theta$$

Now with the length and the angle, we can calculate the coordinates using sine and cosine functions.

B was in [9,7], so the point on the other side of the projection is
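As a quick numerical sketch of the same computation (with made-up vectors, since the figure's exact values are not reproduced here):

```python
# Sketch: scalar projection of a onto b, and the coordinates of the projection.
import numpy as np

a = np.array([4.0, 2.0])     # hypothetical vector to project
b = np.array([1.0, 1.0])     # hypothetical vector we project onto

length = a @ b / np.linalg.norm(b)            # scalar projection = |a| cos(theta)
projection = length * b / np.linalg.norm(b)   # its coordinates along b
print(length, projection)                     # 4.2426..., [3. 3.]
```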

## Appendix B: Lagrange Multipliers

The method of Lagrange multipliers is a strategy to find local maxima and minima of a function subject to equality constraints, i.e. the maximum of $f(x, y)$ subject to $g(x, y) = c$. $f$ and $g$ need to have continuous first partial derivatives.

If $(x_0, y_0)$ is a maximum of $f(x, y)$ for the original constrained problem, then there exists $\lambda_0$ such that $(x_0, y_0, \lambda_0)$ is a stationary point of the Lagrange function $\mathcal{L}(x, y, \lambda) = f(x, y) - \lambda \, (g(x, y) - c)$ (so its gradient is 0).

In mathematics, a stationary point of a differentiable function of one variable is a point of the domain where the derivative is zero. For a function of several variables, a stationary point is an input at which all partial derivatives are zero (zero gradient). Stationary points correspond to local maxima, local minima or saddle points.

To make it clear, let us say that we have a surface $g(x, y) = c$ whose gradient $\nabla g$ is perpendicular to it at every point. At a constrained maximum of $f$, the gradient $\nabla f$ must also be perpendicular to that surface (otherwise we could move along the surface and keep increasing $f$). Hence we can say that the gradient of $f$ and the gradient of $g$ point in the same direction, so $\nabla f = \lambda \nabla g$ (they are proportional). $\lambda$ is called the Lagrange multiplier.

#### Example 1: 1 constraint, 2 dimensions

Maximize $f(x, y)$ on the unit circle $x^2 + y^2 = 1$.

Now we plug these results into the original equation.

Therefore we have two points:
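As a numerical sanity check of the recipe, the sketch below maximizes an assumed objective $f(x, y) = 3x + 4y$ on the unit circle with scipy. The function is an assumption (it happens to reproduce the value 5 discussed in the comments below); the post's own equation is not reproduced here.

```python
# Sketch: maximize an assumed f(x, y) = 3x + 4y subject to x^2 + y^2 = 1
# by minimizing -f with an equality constraint (SLSQP under the hood).
from scipy.optimize import minimize

f = lambda p: 3 * p[0] + 4 * p[1]
circle = {"type": "eq", "fun": lambda p: p[0] ** 2 + p[1] ** 2 - 1}

res = minimize(lambda p: -f(p), x0=[1.0, 0.0], constraints=[circle])
print(res.x, f(res.x))        # approx [0.6, 0.8] and 5.0
```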

#### Example 2: 1 constraint, 2 dimensions

Find the rectangle of maximal perimeter that can be inscribed in the ellipse. We want to maximize the perimeter, $P = 4x + 4y$ (with $(x, y)$ the corner of the rectangle in the first quadrant), subject to the equation of the ellipse.

Now plug it into the original equation.

(we take the positive root because we are looking for a maximum),

Then,
So the maximum perimeter is:
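For completeness, here is how the derivation works out if we assume the ellipse is written in the standard form $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ (the post's specific ellipse is not reproduced here), with $(x, y)$ the corner of the rectangle in the first quadrant and $P = 4x + 4y$:

$$\nabla P = \lambda \nabla g: \qquad 4 = \lambda \frac{2x}{a^2}, \qquad 4 = \lambda \frac{2y}{b^2} \;\Rightarrow\; x = \frac{2a^2}{\lambda}, \; y = \frac{2b^2}{\lambda}$$

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 \;\Rightarrow\; \frac{4a^2 + 4b^2}{\lambda^2} = 1 \;\Rightarrow\; \lambda = 2\sqrt{a^2 + b^2}$$

$$x = \frac{a^2}{\sqrt{a^2 + b^2}}, \qquad y = \frac{b^2}{\sqrt{a^2 + b^2}}, \qquad P_{max} = 4(x + y) = 4\sqrt{a^2 + b^2}$$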

#### Example 3: 2 constraints, 3 dimensions

Now plug it into both constraints:

Since this is a parabola in 3 dimensions, it has no maximum, so the stationary point is a minimum.


## 7 thoughts on “[Supervised Learning] SVM – Support Vector Machine explained with examples”

1. atul says:

In example 1, the last line says
(x,y) = (3/5, 4/5). Then f(x,y) = 3(x + y) = 21/5 = 4.2. How is the answer shown as 5? Am I missing something?

1. lipman says:

That would be absolutely true, but there was a typo in that part of the text and I updated it:

I wrote:
3 = 2xλ
4 = 2yλ
And it is supposed to be:
3 = 2xλ
3 = 2yλ

Thank you for noticing the error!

2. Iván Camilo Morales Buitrago says:

Miguel, thanks a lot for your post, it is really useful.

3. Shujaat Khan says:

Very useful and nice explanation.
In the mathematical explanation section, after “(From the previous formula y_i(\overrightarrow{x_i} \cdot \overrightarrow{w}+b) -1 = 0 now let us substitute both positive and negative samples x_+ and x_- so that:)”, I guess there is a typo in the second equation. It should be “x_-” instead of “x_+”.
Am I right?

1. lipman says:

You are right! I forgot to modify the index after copypasting 🙂

Now it’s fixed, thanks!