Support vector machines (SVMs) are a particularly powerful
and flexible class of supervised learning algorithms for both
classification and regression.
The main aim of an SVM is to find a separating hyperplane
that maximizes the margin on both sides of the hyperplane.
Let us consider
that we have data points belonging to two classes, say $w_1$ and $w_2$.
Let \[a^Tx + b = 0\]
be the equation of
the hyperplane. Then for data points of one class $a^Tx+b > 0$, and for data points of the other class $a^Tx+b < 0$, where $b$ is the bias (it fixes the position of the plane) and $a$ gives the orientation of the plane.
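As a quick illustration, here is a minimal sketch of this decision rule in Python; the vector $a$ and the bias $b$ below are made-up values, not learned from data.

```python
import numpy as np

# Minimal sketch of the decision rule: the sign of a^T x + b decides the class.
# The weight vector `a` and bias `b` are made-up illustrative values.
a = np.array([2.0, -1.0])   # orientation of the hyperplane
b = 0.5                     # bias: position of the hyperplane

def classify(x):
    """Assign x to class w1 if a^T x + b > 0, otherwise to class w2."""
    return "w1" if a @ x + b > 0 else "w2"

print(classify(np.array([1.0, 0.0])))   # 2*1 - 1*0 + 0.5 =  2.5 > 0 -> w1
print(classify(np.array([-1.0, 1.0])))  # 2*(-1) - 1*1 + 0.5 = -2.5 < 0 -> w2
```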
Let $M$ be the margin;
we want to maximize it subject to some constraints,
i.e. \[y_i(x_i^T\beta+\beta_0) \geq M\]
for $i = 1, 2, 3, \ldots, N$, where $y_i \in \{-1, +1\}$ is the class label of $x_i$. We also impose the constraint
that $||\beta||$ should be one,
because we do not want the solution to blow up arbitrarily.
Every data point
must be at least a distance $M$ away from the hyperplane.
So the condition
will be \[\frac{y_i(x_i^T\beta+\beta_0)}{||\beta||} \geq M.\]
Here I can remove the condition that $||\beta|| = 1$, since dividing by $||\beta||$ makes the constraint invariant to rescaling of $\beta$ and $\beta_0$.
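As a quick numerical check of this condition, here is a small sketch for a single toy point; $\beta$, $\beta_0$, the point and its label are made-up illustrative values.

```python
import numpy as np

# Sketch: checking the condition y_i (x_i^T beta + beta_0) / ||beta|| >= M
# for one toy point. beta, beta_0, the point and its label are made-up values.
beta = np.array([1.0, 1.0])
beta_0 = -1.0
x_i, y_i = np.array([2.0, 1.0]), +1

signed_distance = y_i * (x_i @ beta + beta_0) / np.linalg.norm(beta)
print(signed_distance)  # distance of x_i from the hyperplane; must be >= M
```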
So here I can
arbitrarily set $||\beta|| = \frac{1}{M}$; then I
can say that $y_i(x_i^T\beta+\beta_0) \geq 1$ and the margin will be $M = \frac{1}{||\beta||}$.
Then I am left
with the problem of minimizing $\frac{||\beta||^2}{2}$ subject to the constraint $y_i(x_i^T\beta+\beta_0) \geq 1$.
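For concreteness, here is a minimal sketch of this primal problem handed to a generic constrained optimizer on a tiny made-up, linearly separable dataset; real SVM implementations use dedicated quadratic-programming solvers instead.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the primal problem: minimize ||beta||^2 / 2 subject to
# y_i (x_i^T beta + beta_0) >= 1, on a tiny made-up, linearly separable dataset.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(w):
    beta = w[:2]                      # w packs [beta_1, beta_2, beta_0]
    return 0.5 * beta @ beta

# One inequality constraint per data point: y_i (x_i^T beta + beta_0) - 1 >= 0
constraints = [{"type": "ineq",
                "fun": lambda w, xi=xi, yi=yi: yi * (xi @ w[:2] + w[2]) - 1}
               for xi, yi in zip(X, y)]

res = minimize(objective, x0=np.zeros(3), constraints=constraints)
beta, beta_0 = res.x[:2], res.x[2]
print("beta =", beta, " beta_0 =", beta_0, " margin =", 1 / np.linalg.norm(beta))
```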
The constraints
define the margin around the linear decision boundary of thickness $\frac{1}{||\beta||}$.
Hence we choose $\beta$ and $\beta_0$ to maximize
its thickness. The Lagrangian (primal) function that is to be minimized w.r.t. $\beta$ and $\beta_0$
is \[L_p = \frac{1}{2}||\beta||^2 - \sum_{i=1}^{N}\alpha_i[y_i(x_i^T\beta+\beta_0)-1]\]
Setting the
derivatives to zero, we obtain: \[\beta = \sum_{i=1}^N \alpha_iy_ix_i,\]
\[0 = \sum_{i=1}^N \alpha_iy_i\]
Substituting
these values back into $L_p$ above, we obtain the so-called Wolfe dual,
which is to be maximized subject to the constraints $\alpha_i \geq 0$: \[L_D = \sum_{i=1}^N \alpha_i - \frac{1}{2}\sum_{i=1}^N\sum_{k=1}^N\alpha_i\alpha_ky_iy_kx_i^Tx_k\]
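For completeness, the substitution works out as follows: plugging $\beta = \sum_i \alpha_i y_i x_i$ into $L_p$ and using $\sum_i \alpha_i y_i = 0$ (so the $\beta_0$ term drops out) gives
\[\frac{1}{2}\Big\|\sum_{i}\alpha_i y_i x_i\Big\|^2 - \sum_{i}\alpha_i y_i x_i^T\Big(\sum_{k}\alpha_k y_k x_k\Big) + \sum_{i}\alpha_i = \sum_{i}\alpha_i - \frac{1}{2}\sum_{i}\sum_{k}\alpha_i\alpha_k y_i y_k x_i^Tx_k,\]
which is exactly $L_D$.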
The solution is
obtained by maximizing $L_D$ in the positive orthant.
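As a rough sketch of this maximization, one could hand the negated dual to a generic solver and then recover $\beta$ from $\beta = \sum_i \alpha_i y_i x_i$; the data below are the same made-up points as in the primal sketch above, and SVM libraries use specialized QP/SMO solvers instead.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: maximize L_D over alpha_i >= 0 with sum_i alpha_i y_i = 0
# (the equality comes from the beta_0 derivative above) by minimizing -L_D.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = (y[:, None] * X) @ (y[:, None] * X).T      # entries y_i y_k x_i^T x_k

def neg_dual(alpha):
    return 0.5 * alpha @ K @ alpha - alpha.sum()

res = minimize(neg_dual, x0=np.zeros(len(y)),
               bounds=[(0, None)] * len(y),               # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x
beta = (alpha * y) @ X                         # beta = sum_i alpha_i y_i x_i
print("alpha =", alpha.round(3), " beta =", beta)
```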