# What are the characteristics of Naive Bayes Classifiers?


Bayesian classifiers are statistical classifiers. They can predict class membership probabilities, such as the probability that a given sample belongs to a particular class. Bayesian classifiers have also exhibited high accuracy and speed when applied to large databases.

Because the classes are predefined, the system must infer the rules that govern the classification; in other words, it must be able to discover a description of each class. The description must be stated in terms of the predicting attributes of the training set, so that only the positive instances satisfy it, not the negative instances. A rule is said to be correct if its description covers all the positive examples of a class and none of the negative examples.

Assuming that the contributions of all attributes are independent and that each contributes equally to the classification problem yields a simple classification scheme called Naïve Bayes classification.

Naïve Bayes classification is called naïve because it assumes class-conditional independence: the effect of an attribute value on a given class is independent of the values of the other attributes. This assumption is made to reduce computational cost and is therefore considered naïve.
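Under this assumption, the posterior for a class is proportional to the class prior times the product of per-attribute conditional probabilities. A minimal sketch of this scheme for discrete attributes (the function names and toy usage are illustrative, and no smoothing is applied):

```python
from collections import Counter, defaultdict

def train_naive_bayes(X, y):
    """Estimate priors P(y) and conditionals P(x_i = v | y) by counting."""
    n = len(y)
    priors = {c: cnt / n for c, cnt in Counter(y).items()}
    cond = defaultdict(lambda: defaultdict(Counter))  # cond[class][attr][value]
    for row, c in zip(X, y):
        for i, v in enumerate(row):
            cond[c][i][v] += 1
    return priors, cond

def predict(priors, cond, row):
    """Score each class by P(y) * prod_i P(x_i = v | y); return the argmax."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for i, v in enumerate(row):
            counts = cond[c][i]
            score *= counts[v] / sum(counts.values())  # P(x_i = v | y = c)
        scores[c] = score
    return max(scores, key=scores.get)

# Toy data: attribute values track the class label exactly on attribute 0.
priors, cond = train_naive_bayes([[1, 1], [1, 0], [0, 1], [0, 0]], [1, 1, 0, 0])
```

Because the scores are compared only against each other, the normalizing constant of Bayes' rule can be dropped, which is what makes the scheme so cheap.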

Several algorithms exist for learning the network topology from training records when the variables are observable; the problem is one of discrete optimization. Human experts usually have a good grasp of the direct conditional dependencies that hold in the domain under analysis, which helps in designing the network. The experts must then specify the conditional probabilities for the nodes that participate in these direct dependencies.

These probabilities can then be used to compute the remaining probability values. If the network topology is known and the variables are observable, training the network is straightforward: it consists of computing the CPT (conditional probability table) entries, just as is done when estimating the probabilities involved in naïve Bayesian classification.
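When the topology is known and every variable is observed, each CPT entry P(child | parents) is just a conditional relative frequency. A rough sketch of that counting step (the record format and the name `estimate_cpt` are hypothetical):

```python
from collections import Counter

def estimate_cpt(records, child, parents):
    """Estimate the CPT entries P(child | parents) from fully observed
    records (dicts mapping variable name -> value) by frequency counting."""
    joint = Counter()    # counts of (parent values, child value)
    margin = Counter()   # counts of parent values alone
    for r in records:
        pv = tuple(r[p] for p in parents)
        joint[(pv, r[child])] += 1
        margin[pv] += 1
    # Each entry is count(parents, child) / count(parents).
    return {key: cnt / margin[key[0]] for key, cnt in joint.items()}

records = [{'rain': 1, 'wet': 1}, {'rain': 1, 'wet': 1},
           {'rain': 1, 'wet': 0}, {'rain': 0, 'wet': 0}]
cpt = estimate_cpt(records, 'wet', ['rain'])
```

A naïve Bayes classifier is the special case where the class variable is the sole parent of every attribute node.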

There are various characteristics of Naïve Bayes classifiers, which are as follows −

They are robust to isolated noise points because such points are averaged out when the conditional probabilities are estimated from the data. Naïve Bayes classifiers can also handle missing values by ignoring the example during model building and classification.
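A common alternative to discarding an example with a missing value is to skip just that attribute in the product at classification time. A minimal sketch of this variant, with hypothetical names and hand-set probability tables:

```python
def predict_with_missing(priors, cond_prob, row):
    """Naive Bayes scoring that skips attributes whose value is
    missing (None) instead of discarding the whole example."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for i, v in enumerate(row):
            if v is None:   # missing value: this attribute contributes nothing
                continue
            score *= cond_prob[c][i][v]
        scores[c] = score
    return max(scores, key=scores.get)

# Hand-set tables: attribute 0 is informative, attribute 1 is not.
priors = {0: 0.5, 1: 0.5}
cond_prob = {
    0: {0: {0: 0.8, 1: 0.2}, 1: {0: 0.5, 1: 0.5}},  # P(x_i = v | Y = 0)
    1: {0: {0: 0.2, 1: 0.8}, 1: {0: 0.5, 1: 0.5}},  # P(x_i = v | Y = 1)
}
```

Because each attribute enters the score as an independent factor, leaving one factor out still yields a valid comparison between classes.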

They are robust to irrelevant attributes. If Xi is an irrelevant attribute, then P(Xi | Y) becomes almost uniformly distributed, and the class-conditional probability for Xi has no impact on the overall calculation of the posterior probability.
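This claim can be checked numerically: multiplying in a uniformly distributed conditional scales both class scores by the same factor and leaves the posterior ratio unchanged. A small sketch with made-up likelihood values:

```python
def posterior_ratio(likelihoods_y0, likelihoods_y1, prior0=0.5, prior1=0.5):
    """Ratio P(Y=0|X) / P(Y=1|X): the normalizer cancels, leaving the
    ratio of prior-times-likelihood products."""
    p0, p1 = prior0, prior1
    for l in likelihoods_y0:
        p0 *= l
    for l in likelihoods_y1:
        p1 *= l
    return p0 / p1

# An informative attribute shifts the ratio away from 1...
base = posterior_ratio([0.9], [0.3])
# ...but appending an irrelevant attribute with uniform conditionals
# P(Xi|Y=0) = P(Xi|Y=1) = 0.5 leaves the ratio unchanged.
with_irrelevant = posterior_ratio([0.9, 0.5], [0.3, 0.5])
```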

Correlated attributes can degrade the performance of naive Bayes classifiers because the conditional independence assumption no longer holds for such attributes. For example, consider the following probabilities −

P(A = 0 | Y = 0) = 0.4, P(A = 1 | Y = 0) = 0.6,

P(A = 0 | Y = 1) = 0.6, P(A = 1 | Y = 1) = 0.4,

where A is a binary attribute and Y is a binary class variable. Suppose there is another binary attribute B that is perfectly correlated with A when Y = 0, but independent of A when Y = 1. For simplicity, assume that the class-conditional probabilities for B are the same as for A.
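Working this example through (assuming equal class priors, which the text does not state) exposes the failure mode: for A = 0, B = 0 the naive Bayes posterior favors Y = 1, while the true posterior favors Y = 0, because the perfectly correlated evidence is counted twice:

```python
# Class-conditional probabilities from the text (equal priors assumed).
P_A = {0: {0: 0.4, 1: 0.6},   # P(A = a | Y = 0)
       1: {0: 0.6, 1: 0.4}}   # P(A = a | Y = 1)
P_B = P_A                     # B has the same class-conditionals as A

def nb_posterior_y0(a, b):
    """P(Y=0 | A=a, B=b) under the (false) independence assumption."""
    s0 = 0.5 * P_A[0][a] * P_B[0][b]
    s1 = 0.5 * P_A[1][a] * P_B[1][b]
    return s0 / (s0 + s1)

def true_posterior_y0(a, b):
    """True P(Y=0 | A=a, B=b): B == A when Y = 0; B independent of A when Y = 1."""
    s0 = 0.5 * (P_A[0][a] if a == b else 0.0)   # perfect correlation given Y = 0
    s1 = 0.5 * P_A[1][a] * P_B[1][b]            # genuine independence given Y = 1
    return s0 / (s0 + s1)

# For A = 0, B = 0: naive Bayes gives P(Y=0) = 0.16/0.52 ≈ 0.31 (predicts Y = 1),
# but the true posterior is 0.4/0.76 ≈ 0.53 (Y = 0 is actually more likely).
```

The two factors of 0.4 in the naive score treat A and B as independent pieces of evidence for Y = 0, even though under Y = 0 the second attribute carries no new information at all.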