Regularization

Regularization, in mathematics and statistics, and particularly in the fields of machine learning and inverse problems, refers to a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. This information is usually in the form of a penalty for complexity, such as restrictions on smoothness or bounds on the vector space norm. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. The same idea arose in many fields of science. For example, the least-squares method can be viewed as a very simple form of regularization. A simple form of regularization applied to integral equations, generally termed Tikhonov regularization after Andrey Nikolayevich Tikhonov, is essentially a trade-off between fitting the data and reducing the norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular.
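To make that trade-off concrete, here is a minimal NumPy sketch of Tikhonov-regularized least squares. The function name, the α value, and the synthetic data are illustrative assumptions, not part of any standard library; increasing α shrinks the norm of the solution at the cost of a slightly worse data fit.

```python
import numpy as np

def tikhonov_least_squares(X, y, alpha):
    """Minimize ||X w - y||^2 + alpha * ||w||^2 (illustrative sketch)."""
    n_features = X.shape[1]
    # Closed-form solution: w = (X^T X + alpha * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Synthetic data, made up purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)
print(tikhonov_least_squares(X, y, alpha=1.0))
```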

Regularization in machine learning:  In machine learning, regularization is used for model selection, in particular to prevent overfitting by penalizing models with extreme parameter values. The most common variants in machine learning are L₁ and L₂ regularization, which can be added to learning algorithms that minimize a loss function E(X, Y) by instead minimizing E(X, Y) + α‖w‖, where w is the model’s weight vector, ‖·‖ is either the L₁ norm or the squared L₂ norm, and α is a free parameter that needs to be tuned empirically. This method applies to many models. When applied in linear regression, the resulting models are termed lasso or ridge regression, but regularization is also employed in (binary and multiclass) logistic regression, neural nets, support vector machines, conditional random fields, and some matrix decomposition methods. L₂ regularization may also be called “weight decay”, in particular in the setting of neural nets. L₁ regularization is often preferred because it produces sparse models and thus performs feature selection within the learning algorithm, but since the L₁ norm is not differentiable, it may require changes to learning algorithms, in particular gradient-based learners.

Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting that do not involve regularization include cross-validation.

Regularization can also be used to fine-tune model complexity by combining an augmented error function with cross-validation. As model complexity increases, the training error keeps decreasing while the validation error levels off or begins to rise, which is the signature of overfitting. Regularization introduces a second term that penalizes more complex models, so the effective penalty grows with model complexity and counterbalances the ever-improving fit to the training data.
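For instance, scikit-learn exposes L₂- and L₁-regularized linear regression as Ridge and Lasso. The sketch below uses made-up data and arbitrary α values, chosen only for illustration; the point is that the L₁ penalty drives several coefficients exactly to zero, i.e. performs feature selection, while the L₂ penalty merely shrinks them.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only three features actually matter; the rest are noise.
y = 3 * X[:, 0] - 2 * X[:, 3] + X[:, 7] + 0.1 * rng.normal(size=100)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks all weights
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: zeroes out weak weights

print("ridge:", np.round(ridge.coef_, 2))
print("lasso:", np.round(lasso.coef_, 2))
```

Raising α strengthens the penalty: in the lasso case more coefficients are zeroed out, while ridge only pulls them closer to zero.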
Different regularization methods can be applied to the same linear model; for example, elastic net regularization combines the lasso’s L₁ penalty and ridge regression’s L₂ penalty in a single linear combination.
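As a sketch of that combination (data and parameters again chosen arbitrarily), scikit-learn’s ElasticNet mixes the two penalties through its l1_ratio parameter:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=100)

# Objective: (1/(2n))||y - Xw||^2 + alpha*l1_ratio*||w||_1
#            + 0.5*alpha*(1 - l1_ratio)*||w||_2^2
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print("elastic net coefficients:", enet.coef_.round(2))
```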
