# What Are Bayesian Belief Networks?

The naïve Bayesian classifier makes the assumption of class-conditional independence; that is, given the class label of a tuple, the values of the attributes are assumed to be conditionally independent of one another. This simplifies computation.

When the assumption holds true, the naïve Bayesian classifier is highly accurate in comparison with other classifiers. In practice, however, dependencies can exist between variables. Bayesian belief networks specify joint conditional probability distributions.

They allow class-conditional independencies to be defined between subsets of variables. They provide a graphical model of causal relationships, on which learning can be performed. Trained Bayesian belief networks can be used for classification. Bayesian belief networks are also known as belief networks, Bayesian networks, and probabilistic networks.

A belief network is defined by two components: a directed acyclic graph (DAG) and a set of conditional probability tables. Each node in the DAG represents a random variable. The variables may be discrete- or continuous-valued.

They may correspond to actual attributes given in the data or to “hidden variables” believed to form a relationship (e.g., in the case of medical records, a hidden variable may indicate a syndrome, representing a number of symptoms that, together, characterize a specific disease).

Each arc represents a probabilistic dependence. If an arc is drawn from a node Y to a node Z, then Y is a parent or immediate predecessor of Z, and Z is a descendant of Y. Each variable is conditionally independent of its non-descendants in the graph, given its parents.
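The parent and descendant relationships above can be sketched as a simple adjacency map. The network below (family history and smoking as causes of lung cancer, which in turn causes a positive X-ray) is an illustrative assumption, not a structure given in the text:

```python
# A minimal sketch: a belief network's DAG stored as a parent list.
# The node names and arcs here are illustrative assumptions.
parents = {
    "FamilyHistory": [],
    "Smoker": [],
    "LungCancer": ["FamilyHistory", "Smoker"],
    "PositiveXRay": ["LungCancer"],
}

def descendants(node):
    """Return every descendant of `node` by following arcs forward."""
    # Invert the parent list to get each node's children.
    children = {n: [] for n in parents}
    for child, ps in parents.items():
        for p in ps:
            children[p].append(child)
    # Depth-first walk from `node` collects all descendants.
    result, stack = set(), list(children[node])
    while stack:
        n = stack.pop()
        if n not in result:
            result.add(n)
            stack.extend(children[n])
    return result
```

Here `LungCancer` and `PositiveXRay` are descendants of `Smoker`, while `FamilyHistory` is a non-descendant of `Smoker`, so given the parents of `FamilyHistory`, the two are conditionally independent.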

A belief network has one conditional probability table (CPT) for each variable. The CPT for a variable Y specifies the conditional distribution P(Y | Parents(Y)), where Parents(Y) are the parents of Y.
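Because each variable is conditionally independent of its non-descendants given its parents, the joint probability of a full assignment factorizes as the product of the CPT entries, P(x1, ..., xn) = Π P(xi | Parents(xi)). A minimal sketch, assuming a toy two-node network `Smoker → LungCancer` with made-up probability values:

```python
# CPTs for binary variables, indexed by a tuple of parent values.
# The structure and all numbers here are illustrative assumptions.
parents = {"Smoker": [], "LungCancer": ["Smoker"]}
cpt = {
    "Smoker": {(): {True: 0.3, False: 0.7}},
    "LungCancer": {
        (True,):  {True: 0.10, False: 0.90},  # P(LungCancer | Smoker=True)
        (False,): {True: 0.01, False: 0.99},  # P(LungCancer | Smoker=False)
    },
}

def joint(assignment):
    """P(x1,...,xn) = product over variables of P(xi | Parents(xi))."""
    p = 1.0
    for var, val in assignment.items():
        key = tuple(assignment[pa] for pa in parents[var])
        p *= cpt[var][key][val]
    return p
```

For example, `joint({"Smoker": True, "LungCancer": True})` multiplies P(Smoker=True) by P(LungCancer=True | Smoker=True), giving 0.3 × 0.10 = 0.03.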

A node within the network can be chosen as an “output” node, representing a class label attribute. There may be more than one output node. Several algorithms for inference and learning can be applied to the network. Rather than returning a single class label, the classification process can return a probability distribution that gives the probability of each class.
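A simple way to obtain such a distribution is inference by enumeration: sum the joint probability over the unobserved variables for each value of the output node, then normalize. The two-node network and its probabilities below are illustrative assumptions:

```python
from itertools import product

# Hypothetical network: Smoker (output node) -> PositiveXRay (evidence).
# All structure and numbers here are made-up assumptions.
parents = {"Smoker": [], "PositiveXRay": ["Smoker"]}
cpt = {
    "Smoker": {(): {True: 0.3, False: 0.7}},
    "PositiveXRay": {
        (True,):  {True: 0.60, False: 0.40},
        (False,): {True: 0.02, False: 0.98},
    },
}
order = ["Smoker", "PositiveXRay"]  # topological order of the DAG

def joint(assignment):
    """Joint probability via the chain-rule factorization over CPTs."""
    p = 1.0
    for var in order:
        key = tuple(assignment[pa] for pa in parents[var])
        p *= cpt[var][key][assignment[var]]
    return p

def classify(output_var, evidence):
    """Return the distribution P(output_var | evidence) by summing the
    joint over the unobserved variables and normalizing."""
    hidden = [v for v in order if v != output_var and v not in evidence]
    dist = {}
    for out_val in (True, False):
        total = 0.0
        for combo in product((True, False), repeat=len(hidden)):
            a = dict(evidence, **dict(zip(hidden, combo)))
            a[output_var] = out_val
            total += joint(a)
        dist[out_val] = total
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}
```

Calling `classify("Smoker", {"PositiveXRay": True})` returns a probability for each class value rather than a single label, matching the behavior described above. Enumeration is exponential in the number of hidden variables; practical systems use algorithms such as variable elimination instead.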

Belief networks can be used to model many well-known problems. One example is genetic linkage analysis, such as the mapping of genes onto a chromosome. By casting the gene linkage problem as inference on Bayesian networks, and by using state-of-the-art algorithms, the scalability of such analyses has advanced significantly.

Many applications have benefited from the use of belief networks, including computer vision, image restoration and stereo vision, document and text analysis, decision-support systems, and sensitivity analysis. The ease with which many applications can be reduced to Bayesian network inference is beneficial in that it curbs the need to create specialized algorithms for each application.