__Bayesian Machine Learning (Part 3)__

__Bayesian Modelling__

In this post we will go through the methodology of building Bayesian models. In my previous post I used a Bayesian model for linear regression; the model looks like:

So, let us first understand the construction of the above model:

- When there is an arrow pointing from one node to another, it implies that the start node causes the end node. For example, in the above case the **Target** node depends on the **Weights** node as well as the **Data** node.
- The start node is known as the **Parent** and the end node is known as the **Child**.
- Most importantly, cycles must be avoided while building a Bayesian model.
- These structures are normally built from the given data and experience.
- The mathematical representation of a Bayesian model is obtained using the chain rule. For example, in the above diagram the chain rule is applied as follows:

P(y, w, x) = P(y | w, x) * P(w) * P(x)

The generalized chain rule looks like:

P(x1, x2, ..., xn) = P(x1 | pa(x1)) * P(x2 | pa(x2)) * ... * P(xn | pa(xn)),

where pa(xk) denotes the set of parent nodes of xk in the graph.

- Bayesian models are built based upon the subject-matter expertise and experience of the developer.

**An Example**

Problem Statement: Given are three variables: **Sprinkle**, **Rain**, and **Wet Grass**, where Sprinkle and Rain are predictors and Wet Grass is the target variable. Design a Bayesian model over them.

Solution:

The theory behind the above model:

- **Sprinkle** is used to wet the grass; therefore **Sprinkle** causes **Wet Grass**, so the **Sprinkle** node is a parent of the **Wet Grass** node.
- **Rain** also wets the grass; therefore **Rain** causes **Wet Grass**, so the **Rain** node is a parent of the **Wet Grass** node.
- If there is rain, there is no need to sprinkle, so there is a negative relation between **Sprinkle** and **Rain**. Hence the **Rain** node is a parent of the **Sprinkle** node.

The chain rule implementation is:

P(S, R, W) = P(W | S, R) * P(S | R) * P(R)
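This factorization is easy to turn into code. Below is a minimal sketch of the sprinkler network's joint distribution; note that all probability values are made-up numbers for illustration and do not come from this post.

```python
# Joint probability of the sprinkler network via the chain rule:
# P(S, R, W) = P(W | S, R) * P(S | R) * P(R)
# All probability values below are illustrative assumptions.

P_R = {True: 0.2, False: 0.8}                      # P(Rain)
P_S_given_R = {True: {True: 0.01, False: 0.99},    # P(Sprinkle | Rain)
               False: {True: 0.4, False: 0.6}}
P_W_given_SR = {                                   # P(WetGrass | Sprinkle, Rain)
    (True, True): {True: 0.99, False: 0.01},
    (True, False): {True: 0.9, False: 0.1},
    (False, True): {True: 0.8, False: 0.2},
    (False, False): {True: 0.0, False: 1.0},
}

def joint(s, r, w):
    """Chain-rule factorization P(S=s, R=r, W=w) of the sprinkler model."""
    return P_W_given_SR[(s, r)][w] * P_S_given_R[r][s] * P_R[r]

# Sanity check: the eight joint probabilities must sum to 1.
total = sum(joint(s, r, w)
            for s in (True, False)
            for r in (True, False)
            for w in (True, False))
print(round(total, 10))  # 1.0
```

Because each factor is a proper conditional distribution, the joint automatically sums to one over all eight assignments.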

__Latent Variable Introduction__

*Wiki definition:* In statistics, **latent variables** (from Latin *lateo*, “lie hidden”, as opposed to observable variables) are variables that are not directly observed but are rather inferred (through a mathematical model) from other variables that are observed (directly measured).

*In my words:* Latent variables are hidden variables, i.e. they are not observed. They are instead inferred, and are thought of as the cause of the observed variables. In Bayesian models they are mostly used when we end up with a cycle in our model. Latent variables help us simplify the mathematical solution of our problem, but this assumption is not always correct.

**Let us see with some examples**

Suppose we have a problem with three variables: **hunger**, **eat**, and **work**. If we create a Bayesian model, the model looks like:

The above model reads: if we **work**, we feel **hunger**; if we feel **hunger**, we **eat**; if we **eat**, we have energy to **work**.

Now the above model has a cycle in it, so if the chain rule is applied to it, the chain becomes infinitely long. The above Bayesian model is therefore not valid. Thus, we need to introduce a **latent variable** here; let us call it **T**.

Now the above model states that **T** is responsible for **eat**, **hunger**, and **work** happening. This variable **T** is not observed but can be inferred as the cause of work, eat, and hunger. This assumption also seems reasonable: in a biological body, something resides that pushes it to eat and work, even though we cannot observe it physically.

Let us write the chain rule equation for the above model:

P(W, E, H, T) = P(W | T) * P(E | T) * P(H | T) * P(T)
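Since **T** is never observed, the distribution over the observed variables is recovered by summing it out: P(W, E, H) = Σ_T P(W | T) * P(E | T) * P(H | T) * P(T). A minimal sketch of that marginalization, assuming a binary **T** and entirely made-up probability values:

```python
# Marginalizing out the latent variable T in
# P(W, E, H, T) = P(W | T) * P(E | T) * P(H | T) * P(T).
# T is assumed binary here; all numbers are illustrative assumptions.

P_T = {True: 0.6, False: 0.4}
P_W_given_T = {True: {True: 0.9, False: 0.1}, False: {True: 0.3, False: 0.7}}
P_E_given_T = {True: {True: 0.8, False: 0.2}, False: {True: 0.4, False: 0.6}}
P_H_given_T = {True: {True: 0.7, False: 0.3}, False: {True: 0.2, False: 0.8}}

def joint_observed(w, e, h):
    """P(W, E, H): sum the chain-rule factorization over both values of T."""
    return sum(P_W_given_T[t][w] * P_E_given_T[t][e] * P_H_given_T[t][h] * P_T[t]
               for t in (True, False))

# Sanity check: marginalizing a proper joint still sums to 1.
total = sum(joint_observed(w, e, h)
            for w in (True, False)
            for e in (True, False)
            for h in (True, False))
print(round(total, 10))  # 1.0
```

This is exactly why the latent variable simplifies the mathematics: the cyclic three-variable model had no valid factorization, while this one needs only a single sum over **T**.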

**Another Example**

Let us see another example. Suppose we have the following variables: **GPA**, **IQ**, and **School**. The model reads: if a person has a good **IQ**, he/she will get into a good **School** and earn a good **GPA**; if he/she got into a good **School**, he/she will have a good **IQ** and may get a good **GPA**; if he/she gets a good **GPA**, he/she may have a good **IQ** and may be from a good **School**. The model looks like:

From the above description, every node is connected to every other node, which creates cycles, so the chain rule cannot be applied to this model. We therefore need to introduce a **latent variable** here. Let us call the new latent variable **I**. The new model looks like:

We now read the above model as: the latent variable **I** is responsible for all the other three variables. The chain rule can now easily be applied, and it looks like:

P(S, G, Q, I) = P(S | I) * P(G | I) * P(Q | I) * P(I)
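The latent variable can also be inferred from the observed ones via Bayes' rule: P(I | S, G, Q) ∝ P(S | I) * P(G | I) * P(Q | I) * P(I). A small sketch of that computation, assuming a binary **I** (e.g. high intelligence vs. not) and illustrative probability values that do not come from this post:

```python
# Inferring the hidden variable I from the observed ones via Bayes' rule:
# P(I | S, G, Q) ∝ P(S | I) * P(G | I) * P(Q | I) * P(I)
# I is assumed binary; every number below is an illustrative assumption.

P_I = {True: 0.5, False: 0.5}
P_S_given_I = {True: {True: 0.8, False: 0.2}, False: {True: 0.3, False: 0.7}}
P_G_given_I = {True: {True: 0.7, False: 0.3}, False: {True: 0.2, False: 0.8}}
P_Q_given_I = {True: {True: 0.9, False: 0.1}, False: {True: 0.4, False: 0.6}}

def posterior_I(s, g, q):
    """P(I = True | S = s, G = g, Q = q), normalized over both values of I."""
    unnorm = {i: P_S_given_I[i][s] * P_G_given_I[i][g] * P_Q_given_I[i][q] * P_I[i]
              for i in (True, False)}
    return unnorm[True] / (unnorm[True] + unnorm[False])

# Observing a good School, GPA, and IQ makes a "high" I very likely.
print(round(posterior_I(True, True, True), 4))  # 0.9545
```

This posterior computation is a preview of the Bayesian inference discussed in the next post.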

Hence, in this post we saw how to build Bayesian models and when to introduce latent variables. They mostly help in reducing the complexity of the problem.

From the next post onwards we will start the much more interesting part: Bayesian inference, using latent variables wherever required.

Thanks for reading!!!

© 2020 Data Science Central ®
