Bayes' rule provides a way to update our beliefs when new, relevant evidence arrives. For example, if we wanted the probability that a given person has cancer, we would initially estimate it as whatever percentage of the population has cancer. Given extra evidence, such as the fact that the person is a smoker, we can update this probability, since the probability of having cancer is greater for smokers. Bayes' rule therefore lets us combine prior knowledge with new evidence to improve our probability estimates.

The rule is explained below -

$$P(C \mid D)=\frac{P(D \mid C)\,P(C)}{P(D)}$$

In this formula, C is the event we want the probability of, and D is the new evidence that is related to C in some way.

**P(C|D)** is the posterior; this is what we are trying to estimate. In the example above, it is the probability of having cancer given that the person is a smoker.

**P(D|C)** is the likelihood; this is the probability of observing the new evidence given our hypothesis. In the example above, it is the probability of being a smoker given that the person has cancer.

**P(C)** is the prior; this is the probability of our hypothesis before accounting for the extra evidence. In the example above, it is the probability of having cancer.

**P(D)** is the marginal likelihood; this is the total probability of observing the evidence. In the example above, it is the probability of being a smoker. In many applications of Bayes' rule this term is dropped, since it only serves as a normalizing constant.
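The formula can be sketched in a few lines of Python. The numbers below (1% prior, 30% likelihood, 15% marginal) are illustrative assumptions, not real medical statistics:

```python
def posterior(likelihood, prior, marginal):
    """Bayes' rule: P(C|D) = P(D|C) * P(C) / P(D)."""
    return likelihood * prior / marginal

# Hypothetical numbers for the cancer/smoker example:
p_cancer = 0.01                 # prior P(C): assume 1% of people have cancer
p_smoker_given_cancer = 0.30    # likelihood P(D|C): assumed
p_smoker = 0.15                 # marginal P(D): assumed

p_cancer_given_smoker = posterior(p_smoker_given_cancer, p_cancer, p_smoker)
print(p_cancer_given_smoker)    # posterior P(C|D) = 0.30 * 0.01 / 0.15 = 0.02
```

Note how the evidence doubles the estimate from the 1% prior to a 2% posterior, which is exactly the belief-updating behavior described above.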
