Support Vector Regression and Its Mathematical Implementation


Let’s continue our learning (DAY-9)

In this article, I will cover these topics:
What is SVR?
A deep dive into the algorithm.
Mathematical formulation for linear and non-linear SVR.

Support vector machines are well known for classification problems but not as well documented for regression problems; that is where SUPPORT VECTOR REGRESSION comes into play.

WHAT IS SVR?

Support Vector Regression (SVR) is a supervised machine learning algorithm that can be used for regression problems. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. For example:
If you have two features, the data is plotted in two-dimensional space; if you have three features, in three-dimensional space; and so on.

SVR gives us the flexibility to define how much error is acceptable in our model and finds an appropriate line (or hyperplane in higher dimensions) to fit the data. In other words, Support Vector Regression tries to fit the best line within a predefined error threshold.
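
As a minimal sketch of this idea (assuming scikit-learn; the toy data below is made up for illustration), the acceptable error is exactly what the epsilon parameter of sklearn.svm.SVR controls:

```python
import numpy as np
from sklearn.svm import SVR

# Toy data for illustration: y = 2x plus a little noise
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = 2 * X.ravel() + np.random.normal(0, 0.5, 50)

# epsilon sets the acceptable error: points inside the
# epsilon-tube around the fitted function incur no penalty
model = SVR(kernel="linear", C=1.0, epsilon=0.5)
model.fit(X, y)

print(model.predict([[5.0]]))  # prediction close to 10
```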


SOME IMPORTANT DEFINITIONS:

1. Kernel: A kernel is a function used to map lower-dimensional data points into higher-dimensional ones. Since SVR performs linear regression in a higher dimension, this function is crucial. There are many types of kernels, such as the Polynomial Kernel, Gaussian Kernel, Sigmoid Kernel, etc. (a small numeric sketch follows this list).

2. Hyperplane: In a Support Vector Machine, a hyperplane is the line used to separate two classes of data, possibly in a higher dimension than the original one. In SVR, the hyperplane is the line that is used to predict the continuous value.

3. Boundary Lines: Two lines drawn parallel to the hyperplane, one on each side, at the error threshold 𝞮 (epsilon), are known as the boundary lines. The boundary line in the positive region is known as the positive hyperplane, and the one in the negative region as the negative hyperplane. These lines create a margin around the data points.

4. Support Vectors: The data points that lie closest to (or on) the boundary lines; these points are at the least distance from the boundaries and determine the position of the hyperplane.
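
To make the kernel idea from definition 1 concrete, here is a minimal NumPy sketch of the Gaussian (RBF) kernel; the sample vectors and the gamma value are made up for illustration:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.5):
    """Gaussian (RBF) kernel: exp(-gamma * ||x1 - x2||^2)."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

# Two toy feature vectors
a = np.array([1.0, 2.0])
b = np.array([2.0, 3.0])

# Similar points give values near 1, distant points near 0
print(rbf_kernel(a, b))  # exp(-0.5 * 2) ~= 0.3679
print(rbf_kernel(a, a))  # exp(0) = 1.0
```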

MATHEMATICAL FORMULATION:


In case of LINEAR SVR:
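
In its standard 𝞮-insensitive form, linear SVR solves the following optimization problem, where the slack variables ξᵢ, ξᵢ* absorb errors larger than 𝞮, and C trades off the flatness of the model against those errors:

```latex
\min_{w,\,b,\,\xi,\,\xi^{*}} \;
  \frac{1}{2}\lVert w \rVert^{2}
  + C \sum_{i=1}^{n} \left( \xi_i + \xi_i^{*} \right)
\quad \text{subject to} \quad
\begin{cases}
  y_i - (w \cdot x_i + b) \le \varepsilon + \xi_i \\
  (w \cdot x_i + b) - y_i \le \varepsilon + \xi_i^{*} \\
  \xi_i,\; \xi_i^{*} \ge 0
\end{cases}
```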

In case of NON-LINEAR SVR:
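
Non-linear SVR keeps the same optimization but first maps each input x to a higher-dimensional feature vector φ(x). The standard dual form then expresses the predictor entirely through a kernel K(xᵢ, x) = φ(xᵢ)·φ(x) and the dual coefficients αᵢ, αᵢ*:

```latex
f(x) = \sum_{i=1}^{n} \left( \alpha_i - \alpha_i^{*} \right) K(x_i, x) + b,
\qquad
K(x_i, x) = \varphi(x_i) \cdot \varphi(x)
```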

The kernel functions transform the data into a higher-dimensional feature space, where the linear separation becomes possible.

Kernel functions:
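
The commonly used kernels named earlier have the following standard forms (σ, c, d, and κ are kernel hyperparameters):

```latex
\begin{aligned}
\text{Linear:}\quad         & K(x, y) = x \cdot y \\
\text{Polynomial:}\quad     & K(x, y) = (x \cdot y + c)^{d} \\
\text{Gaussian (RBF):}\quad & K(x, y) = \exp\!\left( -\frac{\lVert x - y \rVert^{2}}{2\sigma^{2}} \right) \\
\text{Sigmoid:}\quad        & K(x, y) = \tanh(\kappa\, x \cdot y + c)
\end{aligned}
```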

Logic Behind Support Vector Regression:

The problem of regression is to find a function that approximates the mapping from an input domain to real numbers on the basis of a training sample. So let's now dive deep and understand how SVR actually works.

Assuming that the equation of the hyperplane is as follows:

Y = wx + b (equation of the hyperplane)

Then the equations of the decision boundaries will be:

wx + b = +a

wx + b = -a

Let these be the lines at distances '+a' and '-a' from the hyperplane. This 'a' is what is referred to as epsilon.

Any hyperplane that satisfies our SVR should satisfy:

-a < Y - (wx + b) < +a

Our main aim here is to decide on a decision boundary at distance 'a' from the original hyperplane. Hence, we are going to take only those points that lie within the decision boundary.
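
Putting this together, here is a hedged end-to-end sketch (assuming scikit-learn; the data and hyperparameters are made up for illustration) that fits a linear SVR with an epsilon-tube of width a = 0.5 and then checks which training points satisfy -a < Y - (wx + b) < +a:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy data: a noisy line, made up for illustration
X = np.linspace(0, 10, 40).reshape(-1, 1)
y = 1.5 * X.ravel() + 2.0 + rng.normal(0, 0.3, 40)

a = 0.5  # the error threshold, the 'a' (epsilon) in the text
model = SVR(kernel="linear", C=10.0, epsilon=a).fit(X, y)

# Residuals relative to the fitted hyperplane Y = wx + b
w, b = model.coef_[0][0], model.intercept_[0]
residuals = y - (w * X.ravel() + b)

# Points inside the tube satisfy -a < Y - (wx + b) < +a;
# the support vectors are the points on or outside the tube
inside = np.abs(residuals) < a
print(f"{inside.sum()} of {len(y)} points lie inside the epsilon-tube")
print(f"{len(model.support_)} support vectors define the fit")
```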

Here comes the end of this blog. I hope you liked it.

THANKS FOR YOUR VALUABLE TIME.