Summarizing Papers

In this section I collect posts about papers I have read and written a short summary of. I sometimes quote sentences literally from the papers when I think they are important and cannot be summarized better. Papers are sorted by year.

. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger. 2017. On Calibration of Modern Neural Networks.

. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang. 2017. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima.

. Ian J. Goodfellow, Oriol Vinyals, Andrew M. Saxe. 2014. Qualitatively Characterizing Neural Network Optimization Problems.

. Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio. 2013. Maxout Networks.

. Matthew A. Turk and Alex P. Pentland. 1991. Face recognition using Eigenfaces.

[Explanation] Face Recognition using Eigenfaces

This post comes from an assignment for my “Cognitive Science I” course. Enjoy it.

Brief Introduction

This paper is one of the most relevant papers on face recognition. Nowadays it is difficult to find a real-life implementation of this old algorithm, but later research has been built upon it. In addition, the simplicity and effectiveness of this algorithm make it very beautiful.

Algorithm

To train the model:
1.-Flatten the black and white images of the training set (from matrices to vectors)
2.-Calculate the mean.
3.-Normalize the training set: for each image, subtract the mean.
4.-Calculate the covariance matrix: multiply the matrix of normalized images by its transpose.
5.-Extract the eigenvectors of the covariance matrix.
6.-Calculate eigenfaces: eigenvectors x normalized pictures.
7.-Choose the most significant eigenfaces.
8.-Calculate weights: chosen eigenfaces x normalized pictures.

To detect a face:
9.-Vectorize and normalize this picture: subtract the calculated mean from the picture.
10.-Calculate the weights: multiply the eigenfaces x normalized picture.
11.-Interpret the distance of this weight vector in the face space: if it is too far, it is not a face (a threshold must be established).

Explanation of the algorithm

Training set used:
[Figure: training set]

1.-Flattening the image
This algorithm works with vectors because we later have to calculate the covariance. For this reason, each image needs to be in vector form, for example:

1 2 3
4 5 6
7 8 9
flattened column by column becomes: 1 4 7 2 5 8 3 6 9
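In Matlab this flattening is just a reshape, which reads the matrix column by column. A minimal sketch with the 3x3 example above:

A = [1 2 3; 4 5 6; 7 8 9];
v = reshape(A, 1, []);   % column-major flattening: 1 4 7 2 5 8 3 6 9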

2.-Calculate the mean
The mean is just the sum of all of the pictures divided by the number of pictures. As a result, we will have an “average” face.
[Figure: the average face]

3.-Normalize the training set
To normalize the training set, we simply subtract from each picture in the training set the mean calculated in the previous step.

This is necessary because we want to create a system that is able to represent any face. Therefore, we calculated the elements that all faces have in common (the mean). If we subtract this average from the pictures, the features that distinguish each picture from the rest of the set become visible.
[Figures: normalized training pictures]

4.-Calculate the covariance
The covariance represents how two variables change together. After the previous step, we have a set of images with different features, so now we want to see how these features in each individual picture change in relation to the rest of the pictures.

For this purpose, we put all the flat normalized pictures together in a matrix. My training set consists of 16 pictures whose dimensions are 235×235 pixels. Therefore, the resulting matrix will be 55225×16. The covariance is the multiplication of this matrix by its transpose, and if we order the product correctly, the resulting matrix will be 16×16:

16x55225 * 55225x16 = 16×16
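A minimal Matlab sketch of this step, assuming A is the hypothetical 55225x16 matrix with one flattened, mean-subtracted image per column:

% A: 55225x16, one normalized image per column
C = A' * A;   % 16x16 surrogate covariance instead of a huge 55225x55225 matrix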

5.-Extract eigenvectors
From the covariance matrix we can extract the eigenvectors. Fortunately, there is a Matlab function that helps us in this step (you can see it in the code). There is plenty of information on the internet about eigenvectors (2) (3), but the general idea is that the eigenvectors of the covariance matrix describe the directions of the data. The first eigenvector describes more information than the second, and so on. For this reason, we later have to pick only the first eigenvectors generated (avoiding noise).

6.-Calculate eigenfaces
Each eigenvector is multiplied by the whole normalized training set matrix (the 55225×16 matrix); as a result, we will have as many eigenfaces as images in our training set.
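Combining steps 5 and 6, a minimal sketch (continuing with the same hypothetical A and C as above; the final normalization to unit length is an extra step that simplifies the later reconstruction):

[V, D] = eig(C);                       % eigenvectors of the 16x16 covariance matrix
[~, idx] = sort(diag(D), 'descend');   % order them by decreasing eigenvalue
V = V(:, idx);
eigenfaces = A * V;                    % 55225x16: one eigenface per column
eigenfaces = bsxfun(@rdivide, eigenfaces, sqrt(sum(eigenfaces.^2, 1)));   % unit length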
[Figure: eigenfaces]

7.-Choose the most significant eigenfaces.
The first eigenfaces represent more information than the last ones. Actually, the last eigenfaces only add noise to the model, so it is necessary to discard them. Therefore, only the most significant eigenfaces are chosen. For this, there are many heuristic algorithms, but it can also be done by looking at the pictures. In my code I only used 16 different pictures, and since the training set is tiny, all of the eigenfaces represent important features.

Some simple heuristic algorithms are shown in the code but they are not used. I preferred to manually select the amount of eigenfaces to see the difference in the algorithm’s performance.

Among these heuristics are:
1) To select those eigenvectors whose eigenvalues are above 1.
2) To choose eigenvectors until the cumulative sum of the eigenvalues reaches around 95% of the total.


8.-Calculate weights
Each normalized face in the training set is multiplied by each eigenface. Consequently, there will be N sets of weights with M elements each (N = number of pictures in the training set, M = number of chosen eigenfaces).

After this procedure, we can theoretically represent each face as a linear combination of the chosen eigenfaces. This means that each picture in the training set can be recalculated by a sum of each eigenface multiplied by the corresponding weight plus the mean.
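A minimal sketch of these weights and of the reconstruction, assuming eigenfaces holds the chosen unit-length eigenfaces as columns (as in the sketch above) and meanFace is the mean vector from step 2:

W = eigenfaces' * A;                               % one column of weights per training image
reconstructed = meanFace + eigenfaces * W(:, 1);   % approximate rebuild of the first image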


Recognition part: 9.-Vectorize and normalize the picture
Reshape the test picture into a vector and subtract the mean calculated in 2) from it.

Recognition part: 10.-Calculate the weights
The same as 8) but with the test picture.

Recognition part: 11.-Interpret the distance
Now we have all the weights from our training set and the weights of the picture that we want to classify. The final step is to determine whether the picture is a face or not, given the distance. This can be a bit confusing: the most obvious approach might be to calculate the mean of the distances and, if it exceeds a predetermined threshold, decide that the picture is not a face. Nonetheless, this might lead to errors when using, for example, one of the faces from the training set.

Since the model was trained using that same image, it should be obvious for the system to categorize this picture as a face. Unfortunately, that is not the case. The reason is that the distance between this picture and the training sample that describes its features might be 0 (because it is exactly the same picture), but the distances between this picture and the rest of the images in the training set might be much greater, so if we take the mean of the distances, the overall result could still be over the threshold that we determined. Taking the minimum distance instead of the mean avoids this problem.
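A minimal sketch of this final step, assuming wTest is the weight vector of the test picture, W holds the training weights as columns, and threshold is chosen by hand:

d = sqrt(sum(bsxfun(@minus, W, wTest).^2, 1));   % distance to each training face
isFace = min(d) < threshold;                     % compare the closest match, not the mean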


The code is provided in the Source code section.
You can also access the slides of the presentation.

References

1. Matthew A. Turk & Alex P. Pentland. 1991. “Face recognition using Eigenfaces”.
2. “Principal Component Analysis 4 Dummies: Eigenvectors, Eigenvalues and Dimension Reduction”. https://georgemdallas.wordpress.com/2013/10/30/principal-component-analysis-4-dummies-eigenvectors-eigenvalues-and-dimension-reduction/ (Accessed 5-10-2015)
3. Eigenvectors and Eigenvalues Explained Visually. http://setosa.io/ev/eigenvectors-and-eigenvalues/ (Accessed 5-10-2015)

[Hough Transform] Ellipse detection and space reduction

This is the last entry regarding the Hough Transform. I previously wrote about Line Detection and Circle Detection, including some source code, but in this case I will just write about it. The reason why I did not write any code is that it can be found in [1] and that it is very similar to the Circle Detector.

Brief Introduction

Ellipse detection is another useful tool that may have various applications in the field of recognition. Let us not forget that, due to camera movement when a picture is taken, circles sometimes cannot be captured correctly, so ellipses appear instead. Ellipses are also a very simple shape that can be interesting to recognize since we live surrounded by objects with that shape. As an example, self-driving cars are intended to recognize traffic signs. In particular, round traffic signs may look elliptical when they are seen from a specific point of view.

[Figure: traffic sign seen at an angle]

Simple way (5 dimensions)

This approach is very similar to the Circle Detector. Circumferences need 3 parameters to be defined: the radius and the center coordinates. Likewise, it can be said that ellipses need 5 parameters: the center (2), the size along both axes (2) and the rotation (1).

[Figure: the five parameters of an ellipse]

Implementing a 5-dimensional accumulator will make us run into the curse of dimensionality. This approach can be reasonable when the algorithm faces a known environment and some parameters can therefore be drastically constrained. For instance, if a fixed camera aims to capture the movement of the moon, its apparent radius will not change very much, and as the movement can also be predicted, one can simply modify a few parameters to adapt the algorithm and focus on a certain region.

The implementation, as stated above, is similar to the circle detector: using the trigonometric properties of the ellipse one can establish relationships between its parameters and iterate over each of them to try all combinations along the whole image. There is Matlab code in [1] (page 209).

Space Reduction

Space reduction is applied in a very similar way to the Circle Detector. Nonetheless, as ellipses are figures a bit more complex than circumferences, it can be more problematic to reach the solution. Let us recall that for circumferences it is only necessary to take the middle point of the chord between two prospective points that might belong to the circumference, and draw a line perpendicular to the chord passing through that middle point. This is possible due to the orthogonality between these two lines, but it is not the case when analyzing ellipses. The equivalent of that perpendicular line must be found in another way, for instance, using the tangent lines at those two chosen points, as we can see below.

[Figures: Circle detection | Ellipse detection]

The algorithm shown in the book [1] is similar to the Circle Detector: it iterates over the whole picture trying to find a black pixel. When one is detected, it looks in the neighborhood for another one to form a chord, and therefore the pixel in the middle of the chord. The proposed way to find the point outside the ellipse is by intersecting the tangents at those two chosen points. The author proposes that these tangents can be obtained before the Non-Maximum Suppression step: when the Canny edge detector is used, the gradient angles computed by the Sobel filter can be reused to generate those tangents.

Finally, we just need to obtain the maxima of the accumulator, and draw the consequent ellipse.

References

1. M. Nixon and A. Aguado. 2008. “First order edge detection operators”, Feature Extraction & Image Processing.

[Hough Transform] Circle detection and space reduction

Brief Introduction

The Hough Transform was already used for line detection and it showed how powerful it can be. This time, the main goal will be detecting circles. Detecting this basic shape may be interesting in the field of recognition since many objects to be classified have a circular shape, such as the iris of the eye, coins or even cells under a microscope.

[Figure: iris of an eye]

Simple way (3 dimensional matrix)

This first method is easier to understand but very inefficient compared to the next (after space reduction is applied). A circle needs three parameters: x,y values for the center location and the radius of the circumference. Thus the accumulator matrix will have 3 dimensions, one for each parameter, covering all possibilities.

Given a certain radius, when a pixel is detected, the algorithm will increment the accumulator elements corresponding to the circumference that can be drawn using that pixel and radius as characteristics of the circle.

The first two parameters of the 3-D accumulator are x,y values corresponding to the coordinates of the whole picture. The third parameter is the radius. In the following picture, we are using a fixed radius to make everything easier to understand, and the accumulator is incremented in those coordinates in which a red pixel is located.

[Figure: accumulator increments for a fixed radius]

As you can see, the red pixels barely coincide at any point when iterating over the same coordinate. During the algorithm execution, many circles will be drawn and a peak will appear in the center of the real circle we want to detect, as depicted below. They all meet in the center of the circumference (green points).

[Figure: circles drawn from edge pixels meeting at the center]

When the algorithm finishes iterating over all pixels and radii, we only need to find where that maximum is located to get the parameters of the circumference. Finally, we can draw it.

[Figure: detected circle plotted]

The most problematic characteristic of this algorithm is that it needs to iterate over all radii, so it will lose a considerable amount of time on this task. For this reason, in the algorithm I developed I establish a minimum and a maximum radius, to avoid checking radii that are extremely short or too large. Below you can see how it looks depending on the situation.

[Figures a), b), c)]

a) When the circumference has a larger radius than expected.
b) When it finds more figures.
c) When it only finds an ellipse.

The algorithm is this:
Iterate over columns (x)
Iterate over rows (y)
If an edge is detected
Iterate over all radius (r)
For angles between 0 and 360 (m)
Calculate pixel for the generated circumference (y,x,r)
If that pixel is not out of bounds, increase accumulator
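A minimal Matlab sketch of this accumulator, assuming edges is a binary edge image and rmin, rmax are the chosen radius bounds:

[rows, cols] = size(edges);
acc = zeros(rows, cols, rmax);                 % 3-D accumulator: y, x, radius
for x = 1:cols
  for y = 1:rows
    if edges(y, x)                             % edge pixel detected
      for r = rmin:rmax
        for m = 0:359                          % angles in degrees
          yc = round(y - r * sin(m * pi / 180));
          xc = round(x - r * cos(m * pi / 180));
          if yc >= 1 && yc <= rows && xc >= 1 && xc <= cols
            acc(yc, xc, r) = acc(yc, xc, r) + 1;
          end
        end
      end
    end
  end
end
[~, idx] = max(acc(:));                        % the peak gives the circle parameters
[yc, xc, r] = ind2sub(size(acc), idx);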

Space Reduction

Space reduction in this case consists of removing the problematic radius parameter.

When iterating over the picture, if a pixel is detected, the algorithm tries to look for more pixels in an enclosed neighborhood. When a pixel in the neighborhood is detected, we have a chord of the circumference. Given these two pixels, we have a slope and a middle point (blue). We need to find the perpendicular line passing through that middle point (red), which represents the cells incremented in the accumulator.

[Figure: chord, middle point and perpendicular line]

As the algorithm iterates, it keeps increasing the accumulator, and a peak is generated at the center of the real circumference.

[Figures: After some iterations | Accumulator plotted]

There is another method to obtain the center explained in [1], but the algorithm written in the book is an implementation of the method already explained. However, the author of the book does not give a hint about how to obtain the radius once we have the center.

I used a 1-dimensional array (a vector) as an accumulator of the distances from each edge pixel in the image to the center. A peak is generated in the bin where the distances of all the pixels belonging to the circumference accumulate, and that bin corresponds to the radius of the circumference. It may be improved by starting in the neighborhood of the center and stopping the count at some point.
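A minimal sketch of this radius accumulator, assuming edges is the binary edge image and (yc, xc) is the center found before:

[rows, cols] = size(edges);
radAcc = zeros(1, round(sqrt(rows^2 + cols^2)));   % one bin per possible radius
[ys, xs] = find(edges);                            % coordinates of all edge pixels
for k = 1:numel(ys)
  r = round(sqrt((ys(k) - yc)^2 + (xs(k) - xc)^2));
  if r >= 1 && r <= numel(radAcc)
    radAcc(r) = radAcc(r) + 1;
  end
end
[~, radius] = max(radAcc);                         % the peak bin is the radius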

The execution time differs between the improved and the non-improved algorithm. For the same picture, the non-improved algorithm takes around 0.39 seconds whereas the improved version takes around 0.019 seconds.

The code is provided in the Source code section.

References

1. M. Nixon and A. Aguado. 2008. “First order edge detection operators”, Feature Extraction & Image Processing.

[High and low pass filters] The Einstein-Monroe picture

There is an Einstein-Monroe picture wandering around the Internet that I recently saw. It is a nice example of how human vision works, as well as an excuse to build high- and low-pass filters from scratch in order to extract both images.

[Figure: the Einstein-Monroe picture]

This picture blends a low-frequency picture of Monroe with a high-frequency picture of Einstein. Human vision notices the details of its environment when objects are close (high frequency). That means that when we are close enough (a normal distance) to this picture, we should be able to see Einstein's face; otherwise you should check your sight. If you take 3 steps back and look at the same picture, you should see Monroe's face because your vision is no longer able to catch the small details of the picture. Instead, it will get a general idea of the picture (a bit blurry).

Since this is about high- and low-frequency pictures, we can build high- and low-pass filters to extract the frequencies we want. Thus it is theoretically possible to extract each original image.

First of all, we can see what the Fourier transform looks like. In order to obtain it, we have to perform the 2-D Fast Fourier Transform, shift it, and scale it.
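A minimal sketch of these three operations (the file name is just a placeholder):

img = double(rgb2gray(imread('einsteinmonroe.jpg')));   % hypothetical file name
F = fftshift(fft2(img));       % 2-D FFT with the zero frequency moved to the center
imshow(log(1 + abs(F)), []);   % log scale so the spectrum is visible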

[Figure: shifted 2-D FFT]

The low-frequency data of the image is in the center of the previous image. Thus a simple low-pass filter can be built by keeping the center and removing the rest, as the picture below shows. If we want to extract the opposite, we can invert the mask.

[Figures: Low-pass filter | High-pass filter]

The radius of the circle needs to be manually adjusted depending on the output. When we rebuild the image using each of those filtered Fourier transforms, we get the original images.
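A minimal sketch of both filters and the reconstruction, assuming F is the shifted transform from above and radius is the manually adjusted cut-off:

[rows, cols] = size(F);
[X, Y] = meshgrid(1:cols, 1:rows);
mask = sqrt((X - cols/2).^2 + (Y - rows/2).^2) <= radius;   % circular low-pass mask
lowImg  = real(ifft2(ifftshift(F .* mask)));    % low-frequency image (Monroe)
highImg = real(ifft2(ifftshift(F .* ~mask)));   % high-frequency image (Einstein)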

[Figures: Low frequency image | High frequency image]

Since on my secondary screen I can barely see Einstein's face, I increased the contrast to make it easier to see in case you have the same issue.

[Figure: Einstein image with increased contrast]

The code is provided in the Source code section.

Interesting Links

1. S. Lehar. “An Intuitive Explanation of Fourier Theory”, http://cns-alumni.bu.edu/~slehar/fourier/fourier.html.

[Hough transform] Line detection (Cartesian, Polar and Space reduction)

Brief Introduction

Line detection is one of the most important and basic feature extraction methods. Many currently developing and promising fields, such as self-driving cars, may use line detection to detect lanes. Thus, it is important to understand how it works (both the mathematics and the implementation).

As we are using a 2D plane (an image) we can use Cartesian or Polar parameterization. Polar parameterization is useful not only because of its own advantages, but also because it allows the algorithm to reduce costs by space reduction.

Cartesian

Let us keep in mind the line equation:
[latex]y = mx + c[/latex]
In homogeneous form:
[latex]Ay+Bx +1 = 0 \quad[/latex] where [latex]A = -1/c, B = m/c[/latex]

To determine the line we must find [latex]m, c \quad \text{(or A, B)}[/latex]

The way the Hough Transform works is by simply counting the potential solutions in an accumulator, tracing all possible lines for each point within the main iteration. Hence, finding the maximum in the accumulator means finding the line with the highest probability.

When iterating, after checking that a black pixel (typically corresponding to an edge) has been detected, it iterates over two different “for” loops. The first loop corresponds to angles between -45 and 45 degrees (both inclusive) and the second loop between 45 and 135. It is necessary to separate them because for slopes whose degrees are larger than 45 or lower than -45, [latex]c[/latex] (intersection with y-axis) may take large values. Thus, an additional accumulator for angles between 45 and 135 is needed, which will store a similar [latex]c[/latex] variable whose value is the intersection with x-axis rather than y-axis.

As we can see in the image below for the case when angles are between -45 and 45, when [latex]c[/latex] is out of bounds (a) (bounds are 0 and the height), the accumulator is not increased. Otherwise, the accumulator is increased for all those angles within the allowed boundaries (b) (green). Note that the angles that fall outside are later taken into account when examining the x-axis.

[Figure: accumulator bounds for angles between -45 and 45]

In the following picture, we have 5 points that may compose a line. In the green zone on the left side you can see how the region in the middle is getting larger and larger. That region represents the accumulator values for those angles and intersections with the y-axis, and it shows that there is a high probability of finding a line, as there is. Likewise, we can also see that this algorithm is robust against noise and occlusion. As a drawback, two large matrices are needed.

[Figure: accumulator for 5 collinear points]

Cartesian Algorithm

Iterate over rows (y)
Iterate over columns(x)
If an edge is detected
For angles between -45 and 45 (m)
Calculate c (y-axis intersections)
If c is between the bounds, increase accumulatorA
For angles between 45 and 135 (m)
Calculate c (x-axis intersections)
If c is between the bounds, increase accumulatorB

Cartesian Examples

[Figures: Cartesian examples]

Correction

The previous algorithm, taken from [1], was used to understand the Hough Transform for Cartesian coordinate systems; however, it failed to detect some lines such as the following:
[Figure: line the original algorithm fails to detect]

At first I thought that the problem could be my implementation of how to draw those lines given the accumulators, but after a deep study of the algorithm I realized what the mistake was. In addition to fixing the author's mistake, I added a small improvement that may help to understand HT in Cartesian coordinate systems.

First of all, in the author's algorithm the accumulators are split depending on the angles that are being examined. The first accumulator stores the intersections with the y-axis for angles between -45 and 45 whereas the second is responsible for the x-axis intersections for angles between 45 and 135.

If a nearly horizontal line is examined, the first accumulator (vertical intersections) will have a higher maximum than the second accumulator. This can be seen in the image above the algorithm: in the vertical accumulator a peak around the center will be created. In contrast, the horizontal accumulator will grow, but in a very flat way. Let us recall that the accumulator represents the pixel where the line should start (the intersection with the axis) and the angle. Once this is completely understood, one can imagine many problematic scenarios such as the one depicted previously.

In the previous image, the line that should be generated must start on the x-axis. This means that the horizontal accumulator (the second) should have a higher peak than the vertical one. However, the angle corresponding to that line is between -45 and 45 degrees, so it is only examined by the first accumulator. Thus, the assumption that one accumulator should be in charge of a certain range of angles while the second takes care of the rest is extremely naïve. The solution to this problem is simply to compute the range from -45 to 135 degrees in both accumulators. It is worth saying that the computation time is barely affected, but a higher amount of memory is needed.

The improvement that helps to better understand the algorithm is more related to the later line drawing. Because of how the "for" loops and the pixel detection work, the image is examined from top to bottom and left to right. This moves our coordinate system from the typical 0,0 in the bottom-left corner to the top-left corner. This shift raises problematic issues regarding the formulas, especially the one used for the second accumulator (x-axis crossing detection). The formula used for detecting these crossings is:

[latex size="1.5"]b = \text{round}\left( x - \frac{y}{\tan(m \pi / 180)} \right)[/latex]

Where [latex]m[/latex] represents the angle and [latex]x, y[/latex] represent the coordinates. This formula may work when the 0,0 is in the bottom-left corner:

[Figure: origin 0,0 in the bottom-left corner]

But if the coordinate origin is moved, it will not work anymore. For this reason, instead of using [latex]y[/latex] when [latex]b[/latex] is calculated, I decided to compute [latex]yInv = rows – y[/latex]. An alternative solution would be to change the formula.

Final algorithm:
Iterate over rows (y)
Iterate over columns(x)
If an edge is detected
For angles between -45 and 135 (m)
Calculate yInv ([latex]yInv = rows – y[/latex])
Calculate c (y-axis intersections)
If c is between the bounds, increase accumulatorA
Calculate c (x-axis intersections)
If c is between the bounds, increase accumulatorB
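A minimal Matlab sketch of this corrected version, assuming edges is the binary edge image (the y-axis crossing follows from y = mx + c; accumulator sizes and rounding are simplified):

[rows, cols] = size(edges);
accA = zeros(rows, 181);                    % y-axis crossings, angles -45..135
accB = zeros(cols, 181);                    % x-axis crossings, angles -45..135
for y = 1:rows
  for x = 1:cols
    if edges(y, x)
      yInv = rows - y;                      % move the origin to the bottom-left corner
      for m = -45:135
        idx = m + 46;                       % shift the angle into a valid column index
        c = round(yInv - tan(m * pi / 180) * x);     % y-axis crossing
        if c >= 1 && c <= rows
          accA(c, idx) = accA(c, idx) + 1;
        end
        b = round(x - yInv / tan(m * pi / 180));     % x-axis crossing
        if b >= 1 && b <= cols
          accB(b, idx) = accB(b, idx) + 1;
        end
      end
    end
  end
end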

Both algorithms (the original and the improved one) are in the Source code section.

Polar

The polar coordinate system is an alternative to the Cartesian one in which a radius and an angle are needed to locate a single point, rather than X-Y coordinates. The maximum length of the radius can be obtained with the Pythagorean formula: [latex]\sqrt{2} N[/latex], where N is the largest dimension (width or height).

[Figure: maximum radius of the image]

In contrast with the Cartesian case, in the Polar algorithm we only need one accumulator. The first dimension of the matrix is the radius, which is between 0 and [latex]\sqrt{2} N[/latex], and the second dimension is the angle (0-180). It works in the same way as the Cartesian one: examining each edge point individually, the accumulator is increased in those cells which may generate the objective line. The final line will be drawn by finding the maximum value in the accumulator and using the radius and angle where it is located. It may seem a bit confusing how to calculate prospective points in the polar coordinate system, so I tried to explain it using some drawings.

Imagine that we have the point 2,4. It is not difficult to calculate its angle and radius.

[Figure: the point in polar coordinates]

[latex]r = \sqrt{x^2 + y^2} = \sqrt{4^2 + 2^2} = 4.47 \\
\sin{\theta} = \frac{a}{c}; \theta = 26.5[/latex]

However, for the same point, it looks a bit confusing when examining different angles. This is how it looks when we try to figure out the radius of the same point for a 45º angle.

[Figure: radius of the same point for a 45º angle]

And here is an example of a straight line: 4 purple points in a row. If we examine all the angles, we will realize that for the 90º angle they all reach the same point (3), so the accumulator will be maximum there.

[Figure: 4 collinear points examined at different angles]

Polar Algorithm

Initialize max value of radius [latex]\sqrt{2} N[/latex]
Iterate over columns (x)
Iterate over rows (y)
If an edge is detected
For angles between 1 and 180 (m)
Calculate the radius (*)
If radius is between the bounds (0 and maximum), increase accumulator
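A minimal sketch of this accumulator, assuming edges is the binary edge image and that the radius in (*) is computed with the usual normal parameterization r = x·cos(m) + y·sin(m):

[rows, cols] = size(edges);
rMax = round(sqrt(2) * max(rows, cols));   % maximum possible radius
acc = zeros(rMax, 180);                    % radius x angle accumulator
for x = 1:cols
  for y = 1:rows
    if edges(y, x)
      for m = 1:180
        r = round(x * cos(m * pi / 180) + y * sin(m * pi / 180));
        if r >= 1 && r <= rMax
          acc(r, m) = acc(r, m) + 1;
        end
      end
    end
  end
end
[~, idx] = max(acc(:));
[r, m] = ind2sub(size(acc), idx);          % line parameters with the most votes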

Polar Examples

The same pictures as Cartesian Examples.

Correction

As with the Cartesian algorithm given in [1], the Polar algorithm contains a mistake related to the one found in the Cartesian case. This algorithm seems to always work except for one particular case: when the line points from top to bottom following a north-west to south-east direction.

[Figures: Ok | Fail | Ok]

Again, the error is to naïvely assume that the previously studied radius-angle combinations cover all the possibilities.

The error is illustrated in the picture below. Given that angle (around 150º), the corresponding radius is negative and it is therefore discarded (the radius must be between 0 and [latex]\sqrt{2} N[/latex]). The reason why the radius is negative is very simple: the intersection is on the other side of the line it is supposed to intersect (the red line with the arrow). Given this scenario, the solution could be a negative radius with 150º or a positive radius with a negative angle (150-180 = -30º). Neither scenario can be stored in the accumulator for obvious reasons. The real (and more memory-expensive) solution is to extend the accumulator from 1-180 to 1-360 degrees to cover all cases. In this case, this line would be detected at the angle -30+360 = 330º and the radius would be positive because of the orientation of the arrow.

[Figure: negative radius case]

Nonetheless, I tried to study how good or bad the assumption of checking only 1-180 degrees was. For this, I ran the algorithm on a 150×150 black image so that it fires on every pixel and tries each combination. After that, I checked which parts of the accumulator were 0, meaning that they will never be modified. I did the same for the 1-360 case to see the cases that cannot be reached by the 1-180º implementation. Black represents the elements that are never increased whereas white represents a zone that can be modified. The width indicates the angle, so in the first case the picture has 180 columns and in the second one 360.

[Figures: accumulator coverage for 1-180º and 1-360º]

The conclusion drawn from these pictures is that it is possible to make a more efficient algorithm to iterate only over those cases that may be meaningful.

Space Reduction

The space reduction is a modification of the polar algorithm in which the accumulator matrix is reduced from [latex]\text{max},180[/latex] to two vectors: [latex]180,1 \quad \text{and} \quad \text{max},1[/latex]. The huge saving is obvious, but again, the naïve approach of restricting the problem to 1-180 degrees has the same consequences as in the previous section.

The algorithm presented in the book does not iterate over all angles. Instead, it checks whether a point is located in a certain neighborhood (a 5×5 window) and it calculates the angle for that point.

This approach is more statistical than the previous algorithms, so instead of the accuracy given by knowing the coordinates which correspond to the characteristics of the line, it decides the parameters of the line given the statistics from the accumulator.

The code given in [1], page 211, hardly works for any line. To make it work, one needs to fix it as in the previous section: extending the range to 360 degrees.

Additional notes for further improvements

·The accuracy is strongly related to the counter (accumulator) values.
·Instead of taking the maximum, you can take the 2 or 3 largest values, since more lines may be found.
·Instead of 2 or 3 maxima, you can establish a threshold.
·It is also possible to study only the lines in a certain region of the picture. For instance, if you want to make a lane recognizer you should focus on the bottom half.

The code is provided in the Source code section.

References

1. M. Nixon and A. Aguado. 2008. “First order edge detection operators”, Feature Extraction & Image Processing.

[Image Processing] Thresholding and Subtracting

One of the simplest methods to extract features from a picture is subtracting and thresholding. Imagine that, given a static background, we want to get any figure that appears in front of it. For instance, let us say that we want to measure the height of different people who will lean against a wall. At first, the background (the wall) is observed. Then someone appears and, by subtracting the new picture from the background, we are able to extract the person who appeared. After that, it is not difficult to guess the height. The thresholding operation comes from the fact that we have to decide how strong the change must be to consider that a pixel indeed changed.

The most impressive advantage of this method is that it is really easy to implement and fast, similar to the temporal median which can actually be used to generate the background. The biggest drawback is that it is very sensitive to noise and luminosity changes as we will see.

I cropped a couple of pictures used in the temporal median to try this out.

[Figures: Background | Background + figure]

And these are the results after applying thresholding and subtraction. The first picture has a lot of noise as you can see, but I think it is very easy to remove in this case. I think this noise comes from the fact that when these pictures were taken there was a bit of wind, so the pixels are not exactly the same. In order to remove the noise, in the second attempt I first applied a Gaussian filter. By the way, both pictures were first converted to grayscale to make the process simpler and faster, since the background and the foreground have different colors, although the Gaussian version took much longer than the first one. In any case, in the original image my right arm almost blends with the background; that is why it is not well detected.

[Figures: Simple thresholding and subtracting | Using a Gaussian filter first]

When subtracting you can be very imaginative and try different things depending on the background, the foreground and the application:
a) Having a single threshold for each color channel
b) Having a global threshold made from the sum of the differences of each channel
c) Combining a) and b)
d) Using gray scale
e) Applying a template convolution (Gaussian or any other)
f) Checking the neighborhood
and so on.

I wanted to try using the webcam to process and show the results "in real time". Although my computer does not compute very fast and I could only take a picture every 1 or 1.5 seconds, it is still interesting. I also wrote a post about how to take pictures from the webcam with Matlab.

This is the algorithm of the code I wrote:

Initialize webcam, max_times
Wait 2 seconds before taking a picture of the background (to let me hide)
Take a picture of the background
Wait 1 second
while max_times>=0
Take a picture
result = blackPicture (background color)
Iterate over each pixel (y,x)
If background(y,x,redChannel) – picture(y,x,redChannel) > threshold || background(y,x,greenChannel) – picture(y,x,greenChannel) > threshold || background(y,x,blueChannel) – picture(y,x,blueChannel) > threshold
result(y,x,channels) = newColor
End if

Show picture (or store it)
Decrease max_times
End while
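A vectorized Matlab sketch of the same idea, using the absolute difference per channel (background and picture are RGB frames of the same size, threshold is the chosen sensitivity):

delta = abs(double(background) - double(picture));   % per-channel absolute difference
changed = any(delta > threshold, 3);                 % pixel changed in at least one channel
result = zeros(size(picture), 'uint8');
result(repmat(changed, [1 1 3])) = 255;              % mark changed pixels in white
imshow(result);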

Results

Sometimes we can see that the whole area is green. This is because the luminosity changes depending on where the camera is focusing, and the focus changes depending on where the camera detects something. As the luminosity of the whole picture changes, everything turns green. Shadows also make the luminosity change.

An example of how luminosity changes depending on where the webcam is focused.

[Figures: Background picture | Note the luminosity of the walls]

The code is provided in the Source code section.

References

1. M. Nixon and A. Aguado. 2008. “First order edge detection operators”, Feature Extraction & Image Processing.

How to take pictures from the webcam with Matlab

In Image Processing, taking pictures as well as recording videos is an important task since it is the first step before processing the images. Thus, I consider it interesting to write an entry about configuring the webcam in Matlab. I have to say that I was extremely astonished by how easy this was. I was expecting to have to install external drivers or add-ons in Matlab (actually that was my first, wrong step), but nothing like that is needed. You have to, at least, have the camera installed on your PC, which seems quite obvious if you want to use it. The only other requirement is having the Image Acquisition Toolbox installed, which came by default with my Matlab R2013a version.

At first, we will use imaqhwinfo to see the adaptors recognized by Matlab:

>> imaqhwinfo

ans =

    InstalledAdaptors: {'gentl'  'gige'  'matrox'  'winvideo'}
        MATLABVersion: '8.1 (R2013a)'
          ToolboxName: 'Image Acquisition Toolbox'
       ToolboxVersion: '4.5 (R2013a)'

“winvideo” seems to be the adaptor I want to use, so I will try to get more information about it:

>> devices = imaqhwinfo('winvideo')

devices =

       AdaptorDllName: [1x81 char]
    AdaptorDllVersion: '4.5 (R2013a)'
          AdaptorName: 'winvideo'
            DeviceIDs: {[1]  [2]}
           DeviceInfo: [1x2 struct]

We can see in “DeviceIDs” that I have two cameras connected: my laptop camera and a USB camera. If I want to get more information about each camera, I will print out DeviceInfo.

>> devices.DeviceInfo(1)

ans =

             DefaultFormat: 'YUY2_160x120'
       DeviceFileSupported: 0
                DeviceName: 'USB2.0 Camera'
                  DeviceID: 1
     VideoInputConstructor: 'videoinput('winvideo', 1)'
    VideoDeviceConstructor: 'imaq.VideoDevice('winvideo', 1)'
          SupportedFormats: {1x5 cell}

>> devices.DeviceInfo(2)

ans =

             DefaultFormat: 'MJPG_1280x1024'
       DeviceFileSupported: 0
                DeviceName: '1.3M HD WebCam'
                  DeviceID: 2
     VideoInputConstructor: 'videoinput('winvideo', 2)'
    VideoDeviceConstructor: 'imaq.VideoDevice('winvideo', 2)'
          SupportedFormats: {1x18 cell}

When we are using a camera in Matlab, we need to determine which format will be used. I decided that I want to use the built-in camera, so let's check which formats it supports.

>> device2 = devices.DeviceInfo(2);
>> device2.SupportedFormats

ans =

  Columns 1 through 5

    'MJPG_1280x1024'    'MJPG_1280x720'    'MJPG_1280x800'    'MJPG_1280x960'    'MJPG_160x120'

  Columns 6 through 10

    'MJPG_176x144'    'MJPG_320x240'    'MJPG_352x288'    'MJPG_640x480'    'YUY2_1280x1024'

  Columns 11 through 15

    'YUY2_1280x720'    'YUY2_1280x800'    'YUY2_1280x960'    'YUY2_160x120'    'YUY2_176x144'

  Columns 16 through 18
    'YUY2_320x240'    'YUY2_352x288'    'YUY2_640x480'

My recommendation is to use MJPG rather than YUY2, and to use a small size like 640×480 or lower, because the lower the resolution, the faster it is to process. In addition, if you want to test your camera in Matlab, you can also type imaqtool and the Image Acquisition Tool will pop up. Now we have to configure Matlab, specifying which camera and format will be used, the frames captured per trigger, and the color space:

vid = videoinput('winvideo', 2, 'MJPG_640x480');
vid.FramesPerTrigger = 1;
vid.ReturnedColorspace = 'rgb';

When we want to capture a picture with the camera, we simply have to trigger the acquisition:

start(vid);

To retrieve the captured picture(s):

picture = getdata(vid);

When using RGB, we would expect the variable picture to have 3 dimensions (height, width, color channel), but if FramesPerTrigger is more than 1, this variable will have 4 dimensions. The 4th dimension indexes the frames, so if we want to save the first and last frames taken, we simply do:

vid = videoinput('winvideo', 2, 'MJPG_640x480');
vid.FramesPerTrigger = 30;
vid.ReturnedColorspace = 'rgb';
start(vid)
frames = getdata(vid);
imwrite(frames(:,:,:,1),'firstframe.jpg');
imwrite(frames(:,:,:,end),'lastframe.jpg');

SVM algorithm improvements

In the previous post I talked about an SVM implementation in Matlab. I consider that post and implementation really interesting since it is not easy to find a simple SVM implementation. Instead, I found tons of files which may implement a very interesting algorithm but are insanely difficult to examine in order to learn how they work. This is the main reason why I put so much effort into this implementation, which I developed thanks to the algorithm in [1].

First improvement

After I implemented that algorithm everything seemed to work:

Example 1:

[Figure: Example 1]

Data points coordinates and target:
[latex]
data = \begin{bmatrix} -1 & -4 & -1 \\
-4 & 5 & -1 \\
6 & 7 & 1 \end{bmatrix}
[/latex]

Distance to the border from each point:
[latex]
dist = \begin{bmatrix} 5.0598 \\
5.0597 \\
5.0596 \end{bmatrix}
[/latex]

Very acceptable results, right? However, when I added more points I experienced an odd behavior.

Example 2:

[Figure: Example 2]

[latex]
data = \begin{bmatrix} -1 & -4 & -1 \\
-4 & 5 & -1 \\
9 & 12 & 1 \\
7 & 12 & 1 \\
6 & 7 & 1 \end{bmatrix}
[/latex]

[latex]
dist = \begin{bmatrix} 9.0079 \\
7.3824 \\
7.3824 \\
5.6215 \\
3.3705 \end{bmatrix}
[/latex]

It clearly fails at finding the optimum boundary; however, the funny thing here is that the distances from the second and third points to the boundary are the same. This means that the rest of the samples were ignored and the algorithm focused only on those two. Actually, [latex]\alpha_2[/latex] and [latex]\alpha_3[/latex] were the only nonzero values.

After debugging the code, everything seemed right according to the algorithm I followed [1], but after many trials I saw what finally brought me to discover the error. In my trials the first two elements belong to class -1 whereas the rest of them belong to the other one. As you can see in the following examples, when I changed the order of the elements in the second class, the boundary changed depending only on the first element of the second class, that is, the third sample.

Example 3:

[Figure: Example 3]

[latex]
data = \begin{bmatrix} -1 & -4 & -1 \\
-4 & 5 & -1 \\
7 & 12 & 1 \\
9 & 12 & 1 \\
6 & 7 & 1 \end{bmatrix}
[/latex]

[latex]
dist = \begin{bmatrix} 8.8201 \\
6.5192 \\
6.5192 \\
8.2065 \\
2.9912 \end{bmatrix}
[/latex]

Example 4:

[Figure: Example 4]

[latex]
data = \begin{bmatrix} -1 & -4 & -1 \\
-4 & 5 & -1 \\
6 & 7 & 1 \\
7 & 12 & 1 \\
9 & 12 & 1 \end{bmatrix}
[/latex]

[latex]
dist = \begin{bmatrix} 5.0598 \\
5.0597 \\
5.0596 \\
7.5894 \\
9.4868 \end{bmatrix}
[/latex]

In this last trial we get the best solution because in this case the algorithm has to focus on the third sample, which is the closest one to the other class. However, this is not always the case, so I needed to fix it. The fix is very simple but was not easy to find (at least quickly).

When I was debugging the code, I realized that the first loop (iterating over [latex]i[/latex]) never reached the 4th and 5th samples. The reason was easy to understand: after calculating the temporary boundary (even if it is not the best one, which is why it is called "temporary"), there were no errors because the algorithm classified those samples correctly, so it never entered the loop guarded by the "if" that checks the tolerance. In other words, if there is no error, it does not try to fix anything because it is able to classify the sample correctly (and this actually makes sense).

If the samples were intentionally not visited in the [latex]i[/latex] loop, then they should be visited in the other loop. Surprisingly, the algorithm did not encounter any of them in the inner loop. After I checked that I had written the code according to the algorithm [1], I thought that there had to be a mistake in the algorithm itself. And the mistake was the "Continue to [latex]\text{next i}[/latex]". Because of that line, the rest of the [latex]j[/latex]'s were ignored, so it should be "Continue to [latex]\text{next j}[/latex]".

Thus, the fix in the Matlab code was pretty simple: changing "break" to "continue". Break stops iterating over the current loop and therefore execution continues in the outer loop, whereas continue skips to the next value of the current loop.

Second improvement

After the first improvement was implemented, it seemed that it worked for many trials, but when I tried more complex examples, it failed again.

[Figure: Example 5]

[latex]
data = \begin{bmatrix} -7 & -4 & -1 \\
-9 & -8 & -1 \\
2 & 5 & -1 \\
-3 & -10 & -1 \\
9 & 7 & 1 \\
3 & 8 & 1 \\
8 & 11 & 1 \\
8 & 9 & 1 \end{bmatrix}
[/latex]

The original algorithm [1] uses the variable [latex]\text{num\_changed\_alphas}[/latex] to see whether the alphas changed. If no alphas change during [latex]\text{max\_passes}[/latex] consecutive iterations, the algorithm stops. I think the idea of iterating several times over the main algorithm is correct, but the algorithm must focus on those samples that help build the boundary. After I implemented the modification, the algorithm iterated fewer times than the original algorithm. Additionally, the original algorithm implementation seemed to fail in many cases whereas my implementation works.

When the algorithm iterates once, the alphas are updated such that the nonzero alphas correspond to the samples that help build the boundary. In this example, after the first iteration, the alpha values are the following:

[latex]
\alpha = \begin{bmatrix} 0 \\
0 \\
0.2 \\
0 \\
0 \\
0.2 \\
0 \\
0 \end{bmatrix}
[/latex]
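A minimal sketch of that filtering step, assuming data holds one sample per row (with its label in the last column) and alphas comes from the previous pass:

keep = alphas > 0;        % nonzero alphas mark the samples that shape the boundary
data = data(keep, :);     % keep only those samples for the next pass
alphas = alphas(keep);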

Therefore, in the next iteration it will reduce the samples to focus only on sample #3 and sample #6. After this change was implemented, all the trials I ran worked perfectly. This is the result for the same problem:

[Figure: result after the second improvement]

Algorithm

This is the algorithm [1] after both improvements:

Initialize [latex]\alpha_i = 0, \forall i, b = 0[/latex]
Initialize [latex]\text{counter} = 0[/latex]
[latex]\text{while } ((\exists x \in \alpha \mid x = 0 ) \text{ } \& \& \text{ } (\text{counter} < \text{max\_iter}))[/latex]
Initialize input and [latex]\alpha[/latex]
[latex]\text{for } i = 1, … numSamples[/latex]
Calculate [latex]E_i = f(x^{(i)}) – y^{(i)}[/latex] using (2)
[latex]\text{if } ((y^{(i)} E_i < -tol \quad \& \& \quad \alpha_i < C) \| (y^{(i)} E_i > tol \quad \& \& \quad \alpha_i > 0))[/latex]

[latex]\text{for } j = 1, … numSamples \quad \& \quad j \neq i[/latex]
Calculate [latex]E_j = f(x^{(j)}) – y^{(j)}[/latex] using (2)
Save old [latex]\alpha[/latex]’s: [latex]\alpha_i^{(old)} = \alpha_i, \alpha_j^{(old)} = \alpha_j[/latex]
Compute [latex]L[/latex] and [latex]H[/latex] by (10) and (11)
[latex]\text{if } (L == H)[/latex]
Continue to [latex]\text{next j}[/latex]
Compute [latex]\eta[/latex] by (14)
[latex]\text{if } (\eta \geq 0)[/latex]
Continue to [latex]\text{next j}[/latex]
Compute and clip new value for [latex]\alpha_j[/latex] using (12) and (15)
[latex]\text{if } (| \alpha_j – \alpha_j^{(old)}| < 10^{-5})[/latex] (*A*)
Continue to [latex]\text{next j}[/latex]
Determine value for [latex]\alpha_i[/latex] using (16)
Compute [latex]b_1[/latex] and [latex]b_2[/latex] using (17) and (18) respectively
Compute [latex]b[/latex] by (19)
[latex]\text{end for}[/latex]
[latex]\text{end if}[/latex]
[latex]\text{end for}[/latex]
[latex]\text{counter } = \text{ counter}++[/latex]
[latex]\text{data } = \text{ useful\_data}[/latex] (*B*)
[latex]\text{end while}[/latex]

Algorithm Legend

(*A*): If the difference between the new [latex]\alpha[/latex] and [latex]\alpha^{(old)}[/latex] is negligible, it makes no sense to update the rest of variables.
(*B*): Useful data are those samples whose [latex]\alpha[/latex] had a nonzero value during the previous algorithm iteration.
(2): [latex]f(x) = \sum_{i=1}^m \alpha_i y^{(i)} \langle x^{(i)},x \rangle +b[/latex]
(10): [latex]\text{If } y^{(i)} \neq y^{(j)}, \quad L = \text{max }(0, \alpha_j – \alpha_i), H = \text{min } (C,C+ \alpha_j – \alpha_i)[/latex]
(11): [latex]\text{If } y^{(i)} = y^{(j)}, \quad L = \text{max }(0, \alpha_i + \alpha_j – C), H = \text{min } (C,C+ \alpha_i + \alpha_j)[/latex]
(12): [latex] \alpha_j := \alpha_j – \frac{y^{(j)}(E_i – E_j)}{ \eta }[/latex]
(14): [latex]\eta = 2 \langle x^{(i)},x^{(j)} \rangle – \langle x^{(i)},x^{(i)} \rangle – \langle x^{(j)},x^{(j)} \rangle[/latex]
(15): [latex]\alpha_j := \begin{cases} H \quad \text{if } \alpha_j > H \\
\alpha_j \quad \text{if } L \leq \alpha_j \leq H \\
L \quad \text{if } \alpha_j < L \end{cases}[/latex]
(16): [latex]\alpha_i := \alpha_i + y^{(i)} y^{(j)} (\alpha_j^{(old)} - \alpha_j)[/latex]
(17): [latex]b_1 = b - E_i - y^{(i)} (\alpha_i^{(old)} - \alpha_i) \langle x^{(i)},x^{(i)} \rangle - y^{(j)} (\alpha_j^{(old)} - \alpha_j) \langle x^{(i)},x^{(j)} \rangle[/latex]
(18): [latex]b_2 = b - E_j - y^{(i)} (\alpha_i^{(old)} - \alpha_i) \langle x^{(i)},x^{(j)} \rangle - y^{(j)} (\alpha_j^{(old)} - \alpha_j) \langle x^{(j)},x^{(j)} \rangle[/latex]
(19): [latex]b := \begin{cases} b_1 \quad \quad \text{if } 0 < \alpha_i < C \\ b_2 \quad \quad \text{if } 0 < \alpha_j < C \\ (b_1 + b_2)/2 \quad \text{otherwise} \end{cases}[/latex]

The code is provided in the Source code section.

References

1. The Simplified SMO Algorithm http://cs229.stanford.edu/materials/smo.pdf

[SVM Matlab code implementation] SMO (Sequential Minimal Optimization) and Quadratic Programming explained

This post is the second and last part of a double entry about how SVMs work (theoretical, in practice, and implemented). You can check the first part, SVM – Support Vector Machine explained with examples.


Introduction

Since the algorithm uses Lagrange multipliers to optimize a function subject to certain constraints, we need a computer-oriented algorithm to implement SVMs. This algorithm, called Sequential Minimal Optimization (SMO from now on), was developed by John Platt in 1998. Since it is based on Quadratic Programming (QP from now on), I decided to learn and write about QP as well. I think it is not strictly necessary to read this part, but if you want to fully understand SMO, it is recommended to understand QP, which is very easy given the example I will briefly describe.

Optimization and Quadratic Programming

QP is a special type of mathematical optimization problem. It describes the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables. Convex functions are used for optimization since they have only one optimum, which is the global one.
Optimality conditions:

[latex]\nabla f(x^*) = 0 \quad \text{(gradient is zero, first derivative)} \\
\nabla^2 f(x^*) \geq 0 \quad \text{(Hessian is positive semidefinite)}[/latex]

If the Hessian is positive definite, we have a unique global minimizer. If it is indefinite (or negative definite), then we do not have a unique minimizer.

Example

Minimize [latex]x_1^2+x_2^2-8x_1-6x_2[/latex]
[latex]\text{subject to } \quad -x_1 \leq 0 \\
\text{ } \quad \quad \quad \quad \quad -x_2 \leq 0 \\
\text{ } \quad \quad \quad \quad \quad x_1 + x_2 \leq 5
[/latex]

Initial point (where we start iterating) [latex]x_{(0)} = \begin{bmatrix}
0 \\
0
\end{bmatrix}[/latex]
Initial active set [latex]s^{(0)} = \left\{ 1,2 \right\}[/latex] (we will only take into account these constraints).

Since a quadratic function looks like: [latex]f(x) = \frac{1}{2}x^T Px + q^T x[/latex]

We have to configure the variables as:
[latex]
P = \begin{bmatrix}
2 & 0 \\
0 & 2
\end{bmatrix}

q = \begin{bmatrix}
-8 \\
-6
\end{bmatrix}
\text{(minimize)} \\

A_0 = \begin{bmatrix}
-1 & 0 \\
0 & -1 \\
1 & 1
\end{bmatrix}

b_0 = \begin{bmatrix}
0 \\
0 \\
5
\end{bmatrix}
\text{(constraints)}
[/latex]

[Figure: the QP problem]

This picture represents what the problem looks like. The function we want to minimize is represented by the red circumference, the green lines represent the constraints, and the blue mark shows where the center and the minimum are located.
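As a sanity check, the same problem can be solved directly with Matlab's quadprog (a sketch assuming the Optimization Toolbox is available), which minimizes 1/2 x'Px + q'x subject to Ax <= b:

P = [2 0; 0 2];
q = [-8; -6];
A = [-1 0; 0 -1; 1 1];
b = [0; 0; 5];
x = quadprog(P, q, A, b)   % returns approximately [3; 2], the optimum found below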

Iteration 1
[latex]s^{(0)} = \left\{ 1,2 \right\} \quad \quad x_{(0)} = \begin{bmatrix}
0 \\
0
\end{bmatrix}[/latex]

Solve EQP defined by [latex]s^{(0)}[/latex], so we have to deal with the first two constraints. KKT method is used since it calculates the Lagrangian multipliers.

[latex]KKT = \begin{bmatrix}
P & A^T \\
A & 0
\end{bmatrix}

\quad \quad

\begin{bmatrix}
P & A^T \\
A & 0
\end{bmatrix}
\begin{bmatrix}
x^* \\
v^*
\end{bmatrix}
=
\begin{bmatrix}
-q \\
b
\end{bmatrix} \\

\begin{bmatrix}
2 & 0 & -1 & 0 \\
0 & 2 & 0 & -1 \\
-1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{bmatrix}
\cdot
\begin{bmatrix}
x_1^* \\
x_2^* \\
v_1^* \\
v_2^*
\end{bmatrix}
=
\begin{bmatrix}
8 \\
6 \\
0 \\
0
\end{bmatrix}
\to
\text{Solution: }
x_{EQP}^* = \begin{bmatrix}
0 \\
0
\end{bmatrix}
v_{EQP}^* = \begin{bmatrix}
-8 \\
-6
\end{bmatrix}

[/latex]
[latex]x_{EQP}^*[/latex] indicates the next point to which the algorithm will move.
Now it is necessary to check whether this solution is feasible: [latex]A_0 \cdot x_{EQP}^* \leq b_0[/latex]

[latex]
\begin{bmatrix}
-1 & 0\\
0 & -1 \\
1 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
0\\
0
\end{bmatrix}
\leq
\begin{bmatrix}
0\\
0 \\
5
\end{bmatrix}
\to
\begin{bmatrix}
1\\
1 \\
1
\end{bmatrix}
[/latex]

It is clear that this point was going to be feasible because the point (0,0) is where we started.
Now, we remove a constraint with a negative [latex]v_{EQP}^*[/latex] and move to the next value [latex]x_{EQP}^*[/latex].

[latex]S^{(1)} = S^{(0)} – \left\{\ 1 \right\} = \left\{ 2 \right\} \quad \text{constraint 1 is removed} \\
X^{(1)} = x_{EQP}^* = \begin{bmatrix}
0\\
0
\end{bmatrix}[/latex]

Graphically, we started at (0,0) and we stay at the same point (0,0). The removal of the constraint indicates along which axis and direction we are going to move: up or right. Since we took the first constraint out, we will move to the right. Now we need to minimize subject only to the second constraint.

Iteration 2
[latex]s^{(1)} = \left\{ 2 \right\} \quad \quad x_{(1)} = \begin{bmatrix}
0 \\
0
\end{bmatrix}[/latex]

Solve EQP defined with [latex]A = \begin{bmatrix}
0 & -1
\end{bmatrix} \quad b = [0][/latex]

[latex]
K = \begin{bmatrix}
2 & 0 & 0 \\
0 & 2 & -1 \\
0 & -1 & 0
\end{bmatrix}
\cdot
\begin{bmatrix}
x_1^* \\
x_2^* \\
v_1^*
\end{bmatrix}
=
\begin{bmatrix}
8 \\
6 \\
0
\end{bmatrix}
\to
\text{Solution: }
x_{EQP}^* = \begin{bmatrix}
4 \\
0
\end{bmatrix}
\quad
v_{EQP}^* = \begin{bmatrix}
-6
\end{bmatrix}

[/latex]

Check whether this is feasible: [latex]A_0 \cdot x_{EQP}^* \leq b_0[/latex] ?

[latex]
\begin{bmatrix}
-1 & 0\\
0 & -1 \\
1 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
4 \\
0
\end{bmatrix}
=
\begin{bmatrix}
-4 \\
0 \\
4
\end{bmatrix}
\leq
\begin{bmatrix}
0\\
0 \\
5
\end{bmatrix}
\to
\begin{bmatrix}
1\\
1 \\
1
\end{bmatrix}
\quad
\text{It is feasible}
[/latex]

[latex]S^{(2)} = S^{(1)} – \left\{\ 2 \right\} = \varnothing \quad \quad \text{constraint 2 is removed} \\
X^{(2)} = \begin{bmatrix}
4\\
0
\end{bmatrix}[/latex]

Iteration 3
[latex]s^{(2)} = \varnothing \quad \quad x_{(2)} = \begin{bmatrix}
4 \\
0
\end{bmatrix}[/latex]

Solve EQP defined with [latex]A = [] \quad b = [][/latex]

[latex]
K = \begin{bmatrix}
2 & 0 \\
0 & 2
\end{bmatrix}
\cdot
\begin{bmatrix}
x_1^* \\
x_2^*
\end{bmatrix}
=
\begin{bmatrix}
8 \\
6
\end{bmatrix}
\to
\text{Solution: }
x_{EQP}^* = \begin{bmatrix}
4 \\
3
\end{bmatrix}

[/latex]

Check whether this is feasible: [latex]A_0 * x_{EQP}^* \leq b_0 = \begin{bmatrix}
1\\
1 \\
0
\end{bmatrix}[/latex]

It is not feasible: the 3rd constraint is violated (that is why the third element is 0). Since it is not feasible, we have to maximize [latex]\text{t} \quad \text{s.t.} \quad x^{(2)}+t(x_{EQP}^* – x^{(2)}) \in \text{Feasible}[/latex]

[latex]A_0(\begin{bmatrix}
4\\
0
\end{bmatrix}+t(\begin{bmatrix}
4\\
3
\end{bmatrix}-\begin{bmatrix}
4\\
0
\end{bmatrix})) \leq b_0 \\
\vspace{2em}
[/latex]
 
[latex]
\begin{bmatrix}
-1 & 0 \\
0 & -1 \\
1 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
4 \\
3t
\end{bmatrix}
\leq
\begin{bmatrix}
0 \\
0 \\
5
\end{bmatrix}

\\
-4 \leq 0 \quad \text{This is correct} \\
-3t \leq 0 \quad \text{This is correct, since} \quad 0 \leq t \leq 1 \\
4 + 3t \leq 5 \quad \to \quad t = \frac{1}{3} \quad \text{(the largest allowed)}
[/latex]

So, [latex]x^{(3)} = x^{(2)} + \frac{1}{3} (x_{EQP}^* – x^{(2)}) = \begin{bmatrix}
4 \\
1
\end{bmatrix}[/latex]

Finally we add the constraint that was violated.

[latex]S^{(3)} = S^{(2)} + \left\{\ 3 \right\} = \left\{\ 3 \right\}[/latex]

Iteration 4
[latex]s^{(3)} = \left\{ 3 \right\} \quad \quad x_{(3)} = \begin{bmatrix}
4 \\
1
\end{bmatrix}[/latex]

Solve EQP defined with [latex]A = \begin{bmatrix}
1 & 1
\end{bmatrix} \quad b = [5][/latex]

[latex]
K = \begin{bmatrix}
2 & 0 & 1 \\
0 & 2 & 1 \\
1 & 1 & 0
\end{bmatrix}
\cdot
\begin{bmatrix}
x_1^* \\
x_2^* \\
v_1^*
\end{bmatrix}
=
\begin{bmatrix}
8 \\
6 \\
5
\end{bmatrix}
\to
\text{Solution: } x_{EQP}^* = \begin{bmatrix}
3 \\
2
\end{bmatrix}
\quad
v_{EQP}^* = \begin{bmatrix}
3
\end{bmatrix}

[/latex]

Check whether this is feasible: [latex]A_0 \cdot x_{EQP}^* \leq b_0 = \begin{bmatrix}
1\\
1 \\
1
\end{bmatrix}
\quad \text{It is feasible}
[/latex]

[latex]v^* \geq 0 \quad \to \quad \text{We found the optimal}[/latex]

Iterations and steps are drawn on the plane below. It is graphically easy to see that the 4th step was illegal since it was out of the boundaries.

[Figure: iterations drawn on the plane]

Sequential Minimal Optimization (SMO) algorithm

SMO is an algorithm for solving the QP problem that arises during SVM training. The algorithm works as follows: it finds a Lagrange multiplier [latex]\alpha_i[/latex] that violates the KKT conditions of the optimization problem, picks a second multiplier [latex]\alpha_j[/latex], optimizes the pair [latex](\alpha_i,\alpha_j)[/latex], and repeats this until convergence. The algorithm iterates over all [latex](\alpha_i,\alpha_j)[/latex] pairs twice to make it easier to understand, but it can be improved by choosing [latex]\alpha_i[/latex] and [latex]\alpha_j[/latex] more cleverly.

Because [latex]\sum_{i=0} y_i \alpha_i = 0[/latex] we have that for a pair [latex](\alpha_1,\alpha_2): y_1 \alpha_1 + y_2 \alpha_2 = y_1 \alpha_1^{old} + y_2 \alpha_2^{old}[/latex]. This confines the optimization to lie on the lines shown below:

[Figure: optimization confined to a line]

Let [latex]s = y_1 y_2[/latex] (assuming that [latex]y_i \in \left\{ -1,1 \right\}[/latex])
[latex]y_1 \alpha_1 + y_2 \alpha_2 = \text{constant} = \alpha_1 + s \alpha_2 \quad \to \quad \alpha_1 = \gamma – s \alpha_2 \\
\gamma \equiv \alpha_1 + s \alpha_2 = \alpha_1^{old} + s \alpha_2^{old} = \text{constant}[/latex]

We want to optimize:
[latex]L = \frac{1}{2} \| w \| ^2 – \sum \alpha_i [y_i (\overrightarrow{w} \cdot \overrightarrow{x_i} +b)-1] = \sum \alpha_i – \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j \overrightarrow{x_i} \cdot \overrightarrow{x_j}[/latex]

If we pull out [latex]\alpha_1[/latex] and [latex]\alpha_2[/latex] we have:
[latex]L = \alpha_1 + \alpha_2 + \text{const.} – \frac{1}{2}(y_1 y_1 \overrightarrow{x_1}^T \overrightarrow{x_1} \alpha_1^2 + y_2 y_2 \overrightarrow{x_2}^T \overrightarrow{x_2} \alpha_2^2 + 2 y_1 y_2 \overrightarrow{x_1}^T \overrightarrow{x_2} \alpha_1 \alpha_2 + 2 (\sum_{i=3}^N \alpha_i y_i \overrightarrow{x_i})(y_1 \overrightarrow{x_1} \alpha_1 + y_2 \overrightarrow{x_2} \alpha_2) + \text{const.} )[/latex]

Let [latex]K_{11} = \overrightarrow{x_1}^T\overrightarrow{x_1}, K_{22} = \overrightarrow{x_2}^T\overrightarrow{x_2}, K_{12} = \overrightarrow{x_1}^T\overrightarrow{x_2} \quad \text{(kernel)}[/latex] and:

[latex]v_j = \sum_{i=3}^N \alpha_i y_i \overrightarrow{x_i}^T\overrightarrow{x_j} = \overrightarrow{x_j}^T\overrightarrow{w}^{old} – \alpha_1^{old} y_1 \overrightarrow{x_1}^T \overrightarrow{x_j} – \alpha_2^{old} y_2 \overrightarrow{x_2}^T \overrightarrow{x_j}[/latex]

It represents the original expression [latex]\overrightarrow{x_j}^T\overrightarrow{w}^{old}[/latex] without the contributions of the first and second [latex]\alpha[/latex].

[latex]… = \overrightarrow{x_j}^T\overrightarrow{w}^{old} -b^{old} +b^{old} – \alpha_1^{old} y_1 \overrightarrow{x_1}^T \overrightarrow{x_j} – \alpha_2^{old} y_2 \overrightarrow{x_2}^T \overrightarrow{x_j} [/latex]

Let [latex]u_j^{old} = \overrightarrow{x_j}^T\overrightarrow{w}^{old} -b^{old}[/latex]. This means that [latex]u_j^{old}[/latex] is the output of [latex]\overrightarrow{x_j}[/latex] under old parameters.

[latex]u_j^{old} + b^{old} – \alpha_1^{old} y_1 \overrightarrow{x_1}^T \overrightarrow{x_j} – \alpha_2^{old} y_2 \overrightarrow{x_2}^T \overrightarrow{x_j}[/latex]

Now we substitute in our original formula with [latex]\alpha_1[/latex] and [latex]\alpha_2[/latex] pulled out using new variables: [latex]s,\gamma,K_{11},K_{22},K_{12},v_j[/latex].

[latex]L = \alpha_1 + \alpha_2 - \frac{1}{2}(K_{11} \alpha_1^2 + K_{22} \alpha_2^2 + 2 s K_{12} \alpha_1 \alpha_2 + 2 y_1 v_1 \alpha_1 + 2 y_2 v_2 \alpha_2) + \text{const.}[/latex]
Note that here we use the assumption [latex]y_i \in \left\{ -1,1 \right\}[/latex], so [latex]y_1^2 = y_2^2 = 1[/latex].

Now we substitute using [latex]\alpha_1 = \gamma – s \alpha_2[/latex]

[latex]L = \gamma – s \alpha_2 + \alpha_2 – \frac{1}{2} (K_{11}(\gamma – s \alpha_2)^2 + K_{22} \alpha_2^2 + 2 s K_{12}(\gamma – s \alpha_2) \alpha_2 + 2 y_1 v_1 (\gamma – s \alpha_2) +2 y_2 v_2 \alpha_2 ) + \text{const.}[/latex]

The first [latex]\gamma[/latex] is a constant so it will be added to the [latex]\text{const.}[/latex] value.

[latex] L = (1 -s) \alpha_2 – \frac{1}{2} K_{11} (\gamma – s \alpha_2)^2 – \frac{1}{2} K_{22} \alpha_2^2 – s K_{12} (\gamma -s \alpha_2) \alpha_2 – y_1 v_1 (\gamma – s \alpha_2) – y_2 v_2 \alpha_2 + \text{const.} \\
= (1 -s) \alpha_2 – \frac{1}{2} K_{11} \gamma^2 + s K_{11} \gamma \alpha_2 – \frac{1}{2} K_{11} s^2 \alpha_2^2 – \frac{1}{2} K_{22} \alpha_2^2 – s K_{12} \gamma \alpha_2 + s^2 K_{12} \alpha_2^2 – y_1 v_1 \gamma + s y_1 v_1 \alpha_2 – y_2 v_2 \alpha_2 + \text{const.}[/latex]

Constant terms are grouped and [latex]y_2 = \frac{s}{y_1}[/latex] is applied:

[latex](1 -s) \alpha_2 + s K_{11} \gamma \alpha_2 – \frac{1}{2} K_{11} \alpha_2^2 – \frac{1}{2} K_{22} \alpha_2^2 – s K_{12} \gamma \alpha_2 + K_{12} \alpha_2^2 + y_2 v_1 \alpha_2 – y_2 v_2 \alpha_2 + \text{const.} \\
= (- \frac{1}{2} K_{11} – \frac{1}{2} K_{22} + K_{12}) \alpha_2^2 + (1-s+s K_{11} \gamma – s K_{12} \gamma + y_2 v_1 – y_2 v_2) \alpha_2 + \text{const.} \\
= \frac{1}{2}(2 K_{12} – K_{11} – K_{22}) \alpha_2^2 + (1 -s +s K_{11} \gamma – s K_{12} \gamma + y_2 v_1 – y_2 v_2) \alpha_2 + \text{const.}
[/latex]

Let [latex]\eta \equiv 2 K_{12} – K_{11} – K_{22}[/latex]. Now, the formula is reduced to [latex]\frac{1}{2} \eta \alpha_2^2 + (\dots)\alpha_2 + \text{const.}[/latex]. Let us focus on the second part (the coefficient “[latex]\dots[/latex]”). Remember that [latex]\gamma = \alpha_1^{old} + s \alpha_2^{old}[/latex]

[latex]1 -s +s K_{11} \gamma – s K_{12} \gamma + y_2 v_1 – y_2 v_2 \\
= 1 – s + s K_{11}(\alpha_1^{old}+s \alpha_2^{old}) – s K_{12} (\alpha_1^{old}+s \alpha_2^{old}) \\
\hspace{6em} \text{ } + y_2 (u_1^{old}+b^{old} – \alpha_1^{old} y_1 K_{11} – \alpha_2^{old} y_2 K_{12}) \\
\hspace{6em} \text{ } – y_2 (u_2^{old}+b^{old} – \alpha_1^{old} y_1 K_{12} – \alpha_2^{old} y_2 K_{22}) \\
= 1 – s + s K_{11}\alpha_1^{old}+ K_{11} \alpha_2^{old} – s K_{12} \alpha_1^{old} – K_{12} \alpha_2^{old} \\
\hspace{6em} \text{ } + y_2 u_1^{old} + y_2 b^{old} – s \alpha_1^{old} K_{11} – \alpha_2^{old} K_{12} \\
\hspace{6em} \text{ } – y_2 u_2^{old} – y_2 b^{old} + s \alpha_1^{old} K_{12} + \alpha_2^{old} K_{22} \\
= 1 -s +(s K_{11} – s K_{12} – s K_{11} + s K_{12}) \alpha_1^{old} \\
\hspace{6em} \text{ } + (K_{11} – 2 K_{12} + K_{22}) \alpha_2^{old} + y_2 (u_1^{old} – u_2^{old})
[/latex]

Since [latex]y_2^2 = 1[/latex], we can rewrite [latex]1 - s[/latex] as [latex]y_2^2 - y_1 y_2[/latex] (a mathematical convenience):

[latex]y_2^2 – y_1 y_2 + (K_{11} – 2 K_{12} + K_{22}) \alpha_2^{old} + y_2(u_1^{old} – u_2^{old}) \\
= y_2 (y_2 – y_1 + u_1^{old} – u_2^{old}) – \eta \alpha_2^{old} \\
= y_2 ((u_1^{old} – y_1) – (u_2^{old} – y_2)) – \eta \alpha_2^{old} \\
= y_2 (E_1^{old} – E_2^{old}) – \eta \alpha_2^{old}
[/latex]

Therefore, the objective function is:
[latex]\frac{1}{2} \eta \alpha_2^2 + (y_2 (E_1^{old} – E_2^{old}) – \eta \alpha_2^{old}) \alpha_2 + \text{const.}[/latex]

First derivative: [latex]\frac{\partial L}{\partial \alpha_2} = \eta \alpha_2 + (y_2 (E_1^{old} – E_2^{old}) – \eta \alpha_2^{old})[/latex]

Second derivative: [latex]\frac{\partial^2 L}{\partial \alpha_2^2} = \eta[/latex]

Note that [latex]\eta = 2 K_{12} – K_{11} – K_{22} \leq 0[/latex]. Proof: Let [latex]K_{11} = \overrightarrow{x_1}^T \overrightarrow{x_1}, K_{12} = \overrightarrow{x_1}^T \overrightarrow{x_2}, K_{22} = \overrightarrow{x_2}^T \overrightarrow{x_2}[/latex]. Then [latex]\eta = – (\overrightarrow{x_2} – \overrightarrow{x_1})^T (\overrightarrow{x_2} – \overrightarrow{x_1}) = -\|\overrightarrow{x_2} – \overrightarrow{x_1} \|^2 \leq 0[/latex]
This is important to keep in mind because when we want to use other kernels, this property must still hold.
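A quick numerical check of this property for the linear kernel (my own snippet, with arbitrary points):

import numpy as np

x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, -1.0])
eta = 2 * x1 @ x2 - x1 @ x1 - x2 @ x2      # 2*K12 - K11 - K22
print(eta)                                  # -13.0
print(-np.sum((x2 - x1) ** 2))              # -13.0, i.e. -||x2 - x1||^2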

[latex]\text{Let} \quad \frac{\partial L}{\partial \alpha_2} = 0, \text{then we have that} \\
\text{ } \quad \alpha_2^{new} = \frac{\eta \alpha_2^{old} - y_2 (E_1^{old} - E_2^{old})}{\eta}[/latex]

Therefore, this is the formula to get a maximum:

[latex]\alpha_2^{new} = \alpha_2^{old} + \frac{y_2 (E_2^{old} – E_1^{old})}{\eta}[/latex]
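As a tiny illustration (my own sketch, with made-up numbers), the unclipped update is just:

eta = -2.0                        # 2*K12 - K11 - K22, always <= 0
y2 = 1.0
E1_old, E2_old = 0.4, -0.6        # errors of the two chosen samples
alpha2_old = 0.25
alpha2_new = alpha2_old + y2 * (E2_old - E1_old) / eta
print(alpha2_new)                 # 0.25 + (-1.0)/(-2.0) = 0.75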

While performing the algorithm, after calculating [latex]\alpha_j[/latex] (or [latex]\alpha_2[/latex]) it is necessary to clip its value, since it is constrained by [latex]0 \leq \alpha_i \leq C[/latex]. This box constraint comes from the KKT conditions of the soft-margin SVM.

So we have that: [latex]\text{ } \quad s=y_1 y_2 \quad \gamma = \alpha_1^{old} + s \alpha_2^{old}[/latex]
If [latex]s=1[/latex], then [latex]\alpha_1 + \alpha_2 = \gamma[/latex]
If [latex]\gamma > C[/latex], then [latex]\text{max} \quad \alpha_2 = C, \text{min} \quad \alpha_2 = \gamma – C[/latex]
If [latex]\gamma < C[/latex], then [latex]\text{min} \quad \alpha_2 = 0, \text{max} \quad \alpha_2 = \gamma[/latex]
If [latex]s=-1[/latex], then [latex]\alpha_1 – \alpha_2 = \gamma[/latex]
If [latex]\gamma > 0[/latex], then [latex]\text{min} \quad \alpha_2 = 0, \text{max} \quad \alpha_2 = C – \gamma[/latex]
If [latex]\gamma < 0[/latex], then [latex]\text{min} \quad \alpha_2 = - \gamma, \text{max} \quad \alpha_2 = C[/latex]

In other words: (L = lower bound, H = upper bound)
[latex]
\text{If} \quad y^{(i)} \neq y^{(j)}, L = max(0,\alpha_j – \alpha_i), H = min(C, C+ \alpha_j – \alpha_i) \\
\text{If} \quad y^{(i)} = y^{(j)}, L = max(0,\alpha_i + \alpha_j – C), H = min(C, \alpha_i + \alpha_j)
[/latex]
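These two cases translate directly into a small helper (my own sketch):

def compute_bounds(y_i, y_j, alpha_i, alpha_j, C):
    # lower (L) and upper (H) clipping bounds for alpha_j
    if y_i != y_j:
        return max(0.0, alpha_j - alpha_i), min(C, C + alpha_j - alpha_i)
    return max(0.0, alpha_i + alpha_j - C), min(C, alpha_i + alpha_j)

# With the example used below (alpha_1 = 0.3, alpha_2 = 0.4, C = 1):
print(compute_bounds(1, 1, 0.3, 0.4, 1.0))    # (0.0, 0.7)  same labels
print(compute_bounds(1, -1, 0.3, 0.4, 1.0))   # (0.1, 1.0)  different labels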

Why do we need to clip our [latex]\alpha_2[/latex] value?
We can understand it using a very simple example.
Remember that if [latex]y_1 = y_2[/latex], then [latex]s = 1[/latex] and [latex]\gamma = \alpha_1^{old} + s \alpha_2^{old}[/latex] always remains constant. If [latex]y_1 \neq y_2[/latex], then [latex]s = -1[/latex] and it is the difference [latex]\alpha_1 - \alpha_2[/latex] that stays constant, so both multipliers can grow or shrink as much as they want as long as their difference does not change. In the first case, the numbers are balanced: if [latex]\alpha_1 = 0.2[/latex] and [latex]\alpha_2 = 0.5[/latex], they sum 0.7, and they can change while keeping that sum, so they may end up being [latex]\alpha_1 = 0.4, \alpha_2 = 0.3[/latex].

If [latex]\alpha_1 = 0.3, \alpha_2 = 0.4, C = 1[/latex]:
alphas

1) As explained before, and taking into account that [latex]\Delta \alpha_1 = -s \Delta \alpha_2[/latex], [latex]\alpha_2[/latex] can only increase by 0.3 because [latex]\alpha_1[/latex] can only decrease by 0.3 while keeping [latex]0 \leq \alpha_1 \leq C[/latex] true. Likewise, [latex]\alpha_2[/latex] can only decrease by 0.4 because it cannot go below 0.

2) If the labels are different, [latex]\alpha_2[/latex] can grow up to 1 (the value of C and hence the upper boundary), because [latex]\alpha_1[/latex] increases by the same amount and stays below C. [latex]\alpha_2[/latex] can only decrease to 0.1, because if it decreased to 0, [latex]\alpha_1[/latex] would be -0.1 and the constraint would be violated.

Remember that the reason why [latex]\alpha_1[/latex] has to satisfy [latex]0 \leq \alpha_1 \leq C[/latex] is, again, the KKT conditions of the soft-margin SVM.

clipped

For example, with [latex]C = 1[/latex]: if [latex]\alpha_i = 0, \alpha_j = 1, s = -1[/latex], then [latex]L = 1[/latex] and [latex]H = 1[/latex].
If [latex]L == H[/latex], it means that we cannot balance or proportionally increase/decrease between [latex]\alpha_i[/latex] and [latex]\alpha_j[/latex] at all. In this case, we skip this [latex]\alpha_i - \alpha_j[/latex] combination and try a new one.

After updating [latex]\alpha_i[/latex] and [latex]\alpha_j[/latex] we still need to update b.
[latex]E (x,y) = \sum_{i=1}^N \alpha_i y_i \overrightarrow{x_i}^T\overrightarrow{x} -b -y[/latex] (the increment of a variable is given by the increments of all the variables that are part of it).

[latex]\Delta E (x,y) = \Delta \alpha_1 y_1 \overrightarrow{x_1}^T\overrightarrow{x} + \Delta \alpha_2 y_2 \overrightarrow{x_2}^T\overrightarrow{x} – \Delta b[/latex]

The change in the threshold can be computed by forcing [latex]E_1^{new} = 0 \text{ if } 0 < \alpha_1^{new} < C \text{ (or } E_2^{new} = 0 \text{ if } 0 < \alpha_2^{new} < C \text{)}[/latex]:

[latex]E (x,y)^{new} = 0 \\
E(x,y)^{old} + \Delta E (x,y) = E (x,y)^{old} + \Delta \alpha_1 y_1 \overrightarrow{x_1}^T \overrightarrow{x} + \Delta \alpha_2 y_2 \overrightarrow{x_2}^T \overrightarrow{x} - \Delta b = 0[/latex]

So we have:

[latex]\Delta b = E(x,y)^{old} + \Delta \alpha_1 y_1 \overrightarrow{x_1}^T \overrightarrow{x} + \Delta \alpha_2 y_2 \overrightarrow{x_2}^T \overrightarrow{x}[/latex]
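As a small sketch (my own, with illustrative numbers), the threshold update then reads:

import numpy as np

x1, x2, x = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
y1, y2 = 1.0, -1.0
d_alpha1, d_alpha2 = 0.1, -0.1     # increments of the two multipliers
E_old = 0.2                        # error at x under the old parameters
delta_b = E_old + d_alpha1 * y1 * (x1 @ x) + d_alpha2 * y2 * (x2 @ x)
print(delta_b)                     # 0.2 + 0.1 + 0.1 = 0.4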

Algorithm

The code I provide in the Source code section was developed by me, but I followed the algorithm shown in [1]. I will copy the algorithm here since it made my life much easier and saved me from quite a few headaches. All the credit definitely goes to the writer.

Algorithm: Simplified SMO
Note: if you check [1] you will see that this algorithm differs from the original one written in that paper. The reason is that the original one had mistakes I wanted to fix and improve. I talk about that in this post.

Input:
C: regularization parameter
tol: numerical tolerance
max_passes: max # of times to iterate over [latex]\alpha[/latex]’s without changing
[latex](x^{(1)},y^{(1)}),…,(x^{(m)},y^{(m)})[/latex]: training data

Output:
[latex]\alpha \in \mathbb{R}^m[/latex]: Lagrange multipliers for the solution
[latex]b \in \mathbb{R}[/latex]: threshold for the solution

Algorithm:
Initialize [latex]\alpha_i = 0, \forall i, b = 0[/latex]
Initialize [latex]\text{counter} = 0[/latex]
[latex]\text{while} ((\exists x \in \alpha | x = 0 ) \text{ } \& \& \text{ } (\text{counter} < \text{max\_iter}))[/latex] Initialize input and [latex]\alpha[/latex]
[latex]\text{for } i = 1, … numSamples[/latex]
Calculate [latex]E_i = f(x^{(i)}) – y^{(i)}[/latex] using (2)
[latex]\text{if } ((y^{(i)} E_i < -tol \quad \& \& \quad \alpha_i < C) \| (y^{(i)} E_i > tol \quad \& \& \quad \alpha_i > 0))[/latex]

[latex]\text{for } j = 1, … numSamples \quad \& \quad j \neq i[/latex]
Calculate [latex]E_j = f(x^{(j)}) – y^{(j)}[/latex] using (2)
Save old [latex]\alpha[/latex]’s: [latex]\alpha_i^{(old)} = \alpha_i, \alpha_j^{(old)} = \alpha_j[/latex]
Compute [latex]L[/latex] and [latex]H[/latex] by (10) and (11)
[latex]\text{if } (L == H)[/latex]
Continue to [latex]\text{next j}[/latex]
Compute [latex]\eta[/latex] by (14)
[latex]\text{if } (\eta \geq 0)[/latex]
Continue to [latex]\text{next j}[/latex]
Compute and clip new value for [latex]\alpha_j[/latex] using (12) and (15)
[latex]\text{if } (| \alpha_j – \alpha_j^{(old)} | < 10^{-5})[/latex] (*A*)
Continue to [latex]\text{next j}[/latex]
Determine value for [latex]\alpha_i[/latex] using (16)
Compute [latex]b_1[/latex] and [latex]b_2[/latex] using (17) and (18) respectively
Compute [latex]b[/latex] by (19)
[latex]\text{end for}[/latex]
[latex]\text{end if}[/latex]
[latex]\text{end for}[/latex]
[latex]\text{counter } = \text{ counter} + 1[/latex]
[latex]\text{data } = \text{ useful\_data}[/latex] (*B*)
[latex]\text{end while}[/latex]

Algorithm Legend

(*A*): If the difference between the new [latex]\alpha[/latex] and [latex]\alpha^{(old)}[/latex] is negligible, it makes no sense to update the rest of the variables.
(*B*): Useful data are those samples whose [latex]\alpha[/latex] had a nonzero value during the previous algorithm iteration.
(2): [latex]f(x) = \sum_{i=1}^m \alpha_i y^{(i)} \langle x^{(i)},x \rangle +b[/latex]
(10): [latex]\text{If } y^{(i)} \neq y^{(j)}, \quad L = \text{max }(0, \alpha_j – \alpha_i), H = \text{min } (C,C+ \alpha_j – \alpha_i)[/latex]
(11): [latex]\text{If } y^{(i)} = y^{(j)}, \quad L = \text{max }(0, \alpha_i + \alpha_j – C), H = \text{min } (C, \alpha_i + \alpha_j)[/latex]
(12): [latex] \alpha_j := \alpha_j – \frac{y^{(j)}(E_i – E_j)}{ \eta }[/latex]
(14): [latex]\eta = 2 \langle x^{(i)},x^{(j)} \rangle – \langle x^{(i)},x^{(i)} \rangle – \langle x^{(j)},x^{(j)} \rangle[/latex]
(15): [latex]\alpha_j := \begin{cases} H \quad \text{if } \alpha_j > H \\
\alpha_j \quad \text{if } L \leq \alpha_j \leq H \\
L \quad \text{if } \alpha_j < L \end{cases}[/latex]
(16): [latex]\alpha_i := \alpha_i + y^{(i)} y^{(j)} (\alpha_j^{(old)} - \alpha_j)[/latex]
(17): [latex]b_1 = b - E_i - y^{(i)} (\alpha_i - \alpha_i^{(old)}) \langle x^{(i)},x^{(i)} \rangle - y^{(j)} (\alpha_j - \alpha_j^{(old)}) \langle x^{(i)},x^{(j)} \rangle[/latex]
(18): [latex]b_2 = b - E_j - y^{(i)} (\alpha_i - \alpha_i^{(old)}) \langle x^{(i)},x^{(j)} \rangle - y^{(j)} (\alpha_j - \alpha_j^{(old)}) \langle x^{(j)},x^{(j)} \rangle[/latex]
(19): [latex]b := \begin{cases} b_1 \quad \quad \text{if } 0 < \alpha_i < C \\
b_2 \quad \quad \text{if } 0 < \alpha_j < C \\
(b_1 + b_2)/2 \quad \text{otherwise} \end{cases}[/latex]
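The author's actual implementation is the one in the Source code section; what follows is only a compact NumPy sketch I wrote of the simplified SMO pseudocode above (equations (2), (10)-(19)), using the random choice of j and the max_passes loop from [1] rather than the author's modified loop.

import numpy as np

def f(x, X, y, alpha, b):
    # (2): f(x) = sum_i alpha_i * y_i * <x_i, x> + b   (linear kernel)
    return float(np.sum(alpha * y * (X @ x)) + b)

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=10):
    m = X.shape[0]
    alpha, b, passes = np.zeros(m), 0.0, 0
    while passes < max_passes:
        num_changed = 0
        for i in range(m):
            E_i = f(X[i], X, y, alpha, b) - y[i]
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                j = np.random.choice([k for k in range(m) if k != i])
                E_j = f(X[j], X, y, alpha, b) - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                if y[i] != y[j]:                                               # (10)
                    L, H = max(0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
                else:                                                          # (11)
                    L, H = max(0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
                if L == H:
                    continue
                eta = 2 * X[i] @ X[j] - X[i] @ X[i] - X[j] @ X[j]              # (14)
                if eta >= 0:
                    continue
                alpha[j] = np.clip(alpha[j] - y[j] * (E_i - E_j) / eta, L, H)  # (12), (15)
                if abs(alpha[j] - a_j_old) < 1e-5:
                    continue
                alpha[i] = alpha[i] + y[i] * y[j] * (a_j_old - alpha[j])       # (16)
                b1 = (b - E_i - y[i] * (alpha[i] - a_i_old) * (X[i] @ X[i])    # (17)
                      - y[j] * (alpha[j] - a_j_old) * (X[i] @ X[j]))
                b2 = (b - E_j - y[i] * (alpha[i] - a_i_old) * (X[i] @ X[j])    # (18)
                      - y[j] * (alpha[j] - a_j_old) * (X[j] @ X[j]))
                if 0 < alpha[i] < C:                                           # (19)
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                num_changed += 1
        passes = passes + 1 if num_changed == 0 else 0
    return alpha, b

# Example: the 3D data from the Results section below
data = np.array([[0,0,3,-1],[0,3,3,-1],[3,0,0,1],[3,3,0,1]], dtype=float)
alpha, b = simplified_smo(data[:, :3], data[:, 3])
w = (alpha * data[:, 3]) @ data[:, :3]    # recovered weight vector (should be close to the reported W)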

Source Code Legend

(*1*): This error function arises when we compare our output with the target: [latex]E_i = f(x_i) – y_i[/latex], where [latex]f(x_i) = \overrightarrow{w}^T \overrightarrow{x_i} + b[/latex]. As [latex]\overrightarrow{w} = \sum_{j=1}^N \alpha_j y_j \overrightarrow{x_j}[/latex], we get that [latex]E_i = \sum_{j=1}^N \alpha_j y_j \overrightarrow{x_j} \cdot \overrightarrow{x_i} + b – y_i[/latex]
(*2*): This line has 4 parts: if ((a && b) || (c && d)). Parts A and C ensure that you do not enter the branch when the error is within the tolerance. Honestly, I do not fully understand parts B and D. I tried running this code on several problems and the result is always the same with and without those parts. I know that alpha is constrained to lie within that range, but I do not see the relationship between parts A and B, or between C and D, separately, because that double constraint (greater than 0 and lower than C) should always be checked.

Results

Given a 3D space we have two samples from each class. The last column indicates the class:

[latex]data = \begin{bmatrix}
0 & 0 & 3 & -1 \\
0 & 3 & 3 & -1 \\
3 & 0 & 0 & 1 \\
3 & 3 & 0 & 1
\end{bmatrix}[/latex]

Solution:
[latex]W = [0.3333 \quad 0 \quad -0.3333] \to f(x,y,z) = x-z[/latex]
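A quick check (my own snippet) that this hyperplane separates the training data, assuming the threshold b is 0, which is consistent with [latex]f(x,y,z) = x - z[/latex]:

import numpy as np

data = np.array([[0, 0, 3, -1],
                 [0, 3, 3, -1],
                 [3, 0, 0,  1],
                 [3, 3, 0,  1]], dtype=float)
X, y = data[:, :3], data[:, 3]
W = np.array([0.3333, 0.0, -0.3333])
b = 0.0                                  # assumed threshold
pred = np.sign(X @ W + b)
print(pred)                              # [-1. -1.  1.  1.]
print(np.all(pred == y))                 # True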

plane

The code is provided in the Source code section.

References

1. The Simplified SMO Algorithm http://cs229.stanford.edu/materials/smo.pdf
2. Sequential Minimal Optimization for SVM http://www.cs.iastate.edu/~honavar/smo-svm.pdf
3. Inequality-constrained Quadratic Programming – Example https://www.youtube.com/watch?v=e6jDGxNZ-kk