In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. They are highly scalable, requiring a number of parameters linear in the number of features (predictors) in a learning problem.

Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. It is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable.
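Concretely, that assumption lets the classifier score a candidate class as P(class) multiplied by the product of P(feature value | class) over the features, with each conditional estimated on its own. The short Python sketch below illustrates the computation on a tiny, made-up fruit table; the rows, feature values, and add-one smoothing are purely illustrative, not taken from any real dataset.

    from collections import defaultdict

    # Tiny made-up training set: each row is (color, shape, size, class label).
    data = [
        ("red",    "round", "large", "apple"),
        ("red",    "round", "large", "apple"),
        ("green",  "round", "large", "apple"),
        ("yellow", "long",  "large", "banana"),
        ("yellow", "long",  "large", "banana"),
        ("red",    "round", "small", "cherry"),
    ]

    class_count = defaultdict(int)    # how often each class appears
    feature_count = defaultdict(int)  # how often (feature position, value, class) appears
    values_seen = defaultdict(set)    # distinct values per feature position (for smoothing)
    for *features, label in data:
        class_count[label] += 1
        for i, value in enumerate(features):
            feature_count[(i, value, label)] += 1
            values_seen[i].add(value)

    def score(features, label):
        # P(class) times the product of P(feature_i = value | class),
        # treating every feature as independent given the class.
        # Add-one (Laplace) smoothing keeps unseen values from zeroing the product.
        p = class_count[label] / len(data)
        for i, value in enumerate(features):
            p *= (feature_count[(i, value, label)] + 1) / (class_count[label] + len(values_seen[i]))
        return p

    x = ("red", "round", "large")
    print(max(class_count, key=lambda label: score(x, label)))  # -> apple

Note that the per-feature conditionals are estimated independently of one another; nothing in the sketch models how color, shape, and size co-vary, which is exactly the "naive" part.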
For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that the fruit is an apple, regardless of any possible correlations between the color, roundness, and diameter features.

For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods. An advantage of naive Bayes is that it only requires a small amount of training data to estimate the parameters necessary for classification.
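As a concrete illustration of the maximum-likelihood route, scikit-learn's GaussianNB estimates a per-class mean and variance for each feature directly from the training data, and it works even with very few samples. The toy measurements below are invented for illustration only:

    from sklearn.naive_bayes import GaussianNB

    # Invented toy data: [redness score, roundness score, diameter in cm] per fruit.
    X = [[0.9, 0.95, 9.5],   # apple
         [0.8, 0.90, 8.0],   # apple
         [0.2, 0.10, 18.0],  # banana
         [0.3, 0.15, 20.0]]  # banana
    y = ["apple", "apple", "banana", "banana"]

    model = GaussianNB()
    model.fit(X, y)  # per-class feature means and variances, estimated by maximum likelihood
    print(model.predict([[0.85, 0.90, 9.0]]))  # -> ['apple']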
If you have any doubts, let me know.