
Originally posted on LinkedIn.

It's a known fact that bagging (an ensemble technique) works well on unstable algorithms like decision trees and artificial neural networks, and not on stable algorithms like Naive Bayes. The well-known ensemble algorithm random forest thrives on bagging's ability to leverage the 'instability' of decision trees to build a better classifier.

*Even though random forest attempts to handle the issues caused by highly correlated trees, does it completely solve them? Can decision trees be made even more unstable than random forest makes them, so that the learner becomes more accurate?*

1. **Discards pruning:** No more early stopping. If trees are sufficiently deep, they have very low bias.

Since

Mean Squared Error = Variance + (Bias)²,

a deep, unpruned tree trades a small increase in variance (which averaging over many trees washes out) for a large reduction in bias. This explains why discarding pruning works for random forest.
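As a quick numeric check of that decomposition, the sketch below (plain Python, using a toy mean estimator rather than a tree) simulates many fits and verifies that the empirical MSE splits exactly into variance plus squared bias:

```python
import random

random.seed(0)

# Toy setup: estimate a constant signal mu from n noisy samples,
# repeated over many trials, then decompose the estimator's error.
mu, sigma, n, trials = 2.0, 1.0, 10, 20000

estimates = []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n)

mean_est = sum(estimates) / trials
bias_sq = (mean_est - mu) ** 2
variance = sum((e - mean_est) ** 2 for e in estimates) / trials
mse = sum((e - mu) ** 2 for e in estimates) / trials

# The decomposition holds on the empirical quantities:
# MSE = Variance + Bias^2 (the cross term vanishes by construction).
assert abs(mse - (variance + bias_sq)) < 1e-9
print(round(variance, 4), round(bias_sq, 6))
```

The sample mean is unbiased, so nearly all of its MSE is variance; a deep tree plays the opposite card, driving bias toward zero and letting the ensemble average handle the variance.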

2. **Tuning mtry and ntree:** The most important parameters to tune while building a random forest model are 'mtry', the number of variables tried at each split, and 'ntree', the number of trees in the ensemble. An optimal 'mtry' can be estimated using 'tuneRF'. tuneRF starts from the default value, which is the square root of the total number of variables (say 'n') for classification problems and n/3 for regression problems, and calculates the out-of-bag error there. It then searches left and right, setting 'mtry' to (default value)/(step factor) and (default value)*(step factor) respectively, calculates the out-of-bag error in both scenarios, and comes up with the optimal 'mtry'. Since the step factor is provided manually, the chosen 'mtry' is heavily dependent on it, and an inappropriate choice may lead to misleading results. It is advised to keep the step factor low so that more 'mtry' values are searched.

*If the step factor is manually fed anyway, which definitely restricts the search subspace, can this be called an efficient optimization?*

According to Adele Cutler, tuneRF does add bias and may lead to overfitting.
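tuneRF itself lives in R's randomForest package; the hypothetical Python sketch below mimics only its search logic (default starting point, step-factor walk, relative-improvement threshold) against a toy stand-in for the out-of-bag error, and happens to illustrate how a coarse step factor can skip past the true optimum:

```python
import math

def tune_mtry(oob_error, n_features, step_factor=2.0, improve=0.01):
    """Rough sketch of tuneRF's search (tuneRF itself is R code).
    Start at the classification default sqrt(p), then step left and
    right by the step factor while OOB error keeps improving."""
    start = max(1, round(math.sqrt(n_features)))
    best, best_err = start, oob_error(start)
    for direction in (1.0 / step_factor, step_factor):
        m, err = start, best_err
        while True:
            nxt = max(1, min(n_features, round(m * direction)))
            if nxt == m:
                break
            nxt_err = oob_error(nxt)
            if (err - nxt_err) / err <= improve:  # no real gain: stop
                break
            m, err = nxt, nxt_err
            if err < best_err:
                best, best_err = m, err
    return best

# Toy stand-in for OOB error, minimised at mtry = 6 (not a real model).
toy_oob = lambda m: (m - 6) ** 2 / 100 + 0.1

# With 100 features the walk lands on 5 and misses the optimum at 6:
# exactly the step-factor sensitivity discussed above.
print(tune_mtry(toy_oob, n_features=100))  # -> 5
```

Halving the step factor (or switching to a full grid search over 'mtry' with OOB error) removes that blind spot at the cost of more model fits.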

3. **Variable importance and the split criteria:**

Information gain, entropy, and Gini impurity are the usual metrics for selecting the best attribute to split on. Conventional recursive partitioning considers all the variables together and searches the whole space for the best splitting attribute. Random forest does this in a random fashion, considering a random subset of variables of size 'mtry' at each split.

*Can the selection of 'mtry' be randomized as well? Can the splitting criterion be made more random and unstable by choosing one of the top-k variables ranked by information gain, instead of always choosing the best one?*

Remember, adding instability worked well for bagging (e.g. random forest); adding more instability will increase the diversity we seek in an ensemble.
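A minimal sketch of that top-k idea (all helper names here are hypothetical, plain Python): rank the attributes by information gain, then draw the split variable uniformly from the top k rather than always taking the single best:

```python
import math
import random

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def info_gain(rows, labels, feat):
    """Information gain of splitting on a binary feature `feat`."""
    gain = entropy(labels)
    for v in (0, 1):
        subset = [l for r, l in zip(rows, labels) if r[feat] == v]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

def top_k_split(rows, labels, k, rng):
    """Instead of the single best attribute, draw uniformly from the
    top-k attributes ranked by information gain (extra instability)."""
    ranked = sorted(range(len(rows[0])),
                    key=lambda f: info_gain(rows, labels, f),
                    reverse=True)
    return rng.choice(ranked[:k])

# Tiny toy dataset: feature 0 predicts the label perfectly.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
rng = random.Random(0)

# With k=1 the split is deterministic; with k=2 the useless feature 1
# can also be drawn, making the trees more diverse.
picks = {top_k_split(rows, labels, k=2, rng=rng) for _ in range(50)}
print(picks)
```

This is essentially the knob that separates a greedy CART split from the fully randomized splits of methods like extremely randomized trees.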

It's a false notion that the variables estimated to be most significant by the variable-importance process are necessarily the ones most often used to split. This is essentially not true in all cases.

*Can the variables with high variable importance be given more weight when deciding which attribute to split on, alongside the existing criterion?*

4. **Correlation pruning in recursive fashion:**

Highly correlated variables can cause multicollinearity and deviation from orthogonal behavior, which causes issues in classification problems. Random forest avoids multicollinearity to a great extent, even when dealing with correlated variables.

*But what if the trees themselves come out highly correlated?*

*Can a pruning process by backward elimination be put in place to handle correlated attributes and discard correlated trees, excluding them from the final vote?*
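One way such backward elimination might be sketched (hypothetical helper names; trees represented only by their prediction vectors, e.g. on out-of-bag samples): repeatedly drop one tree from the most correlated pair until no pair exceeds a correlation threshold:

```python
def corr(a, b):
    """Pearson correlation of two equal-length prediction vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 1.0

def prune_correlated(preds, threshold=0.9):
    """Backward elimination sketch: while some pair of surviving trees
    is correlated above the threshold, drop the later tree of the most
    correlated pair; the survivors take part in the final vote."""
    keep = list(range(len(preds)))
    while True:
        worst, worst_c = None, threshold
        for i in range(len(keep)):
            for j in range(i + 1, len(keep)):
                c = corr(preds[keep[i]], preds[keep[j]])
                if c > worst_c:
                    worst, worst_c = keep[j], c
        if worst is None:
            return keep
        keep.remove(worst)

# Three 'trees': 0 and 1 vote almost identically, 2 is diverse.
preds = [
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
]
print(prune_correlated(preds, threshold=0.6))  # -> [0, 2]
```

The threshold plays the same role as a significance cutoff in classical backward elimination; set it too low and the ensemble shrinks until the variance-reduction benefit of averaging is lost.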

5. **Brewed/composite features:** Learners are assumed to extract the interaction between attributes and the predictor. Yet even highly advanced learners may miss the interactions of different attributes with one another, and the subsequent interaction of these composite features with the predictor may be the deciding factor in choosing the best attribute to split on. Brewed features may provide information to a tree that would otherwise be left out while growing it.

Composite features, combined with the organic ones during the random subspace search for splits, can add more randomness and diversity to the ensemble built from the votes of the isolated learners.
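A minimal sketch of feature brewing (hypothetical helpers; pairwise products stand in for whatever composition one prefers, such as ratios or sums), which enriches the space that the random split search then samples from:

```python
import itertools
import random

def brew_features(rows):
    """Append simple composite (interaction) features, here the
    pairwise products, to each row of organic features."""
    out = []
    for row in rows:
        extras = [a * b for a, b in itertools.combinations(row, 2)]
        out.append(list(row) + extras)
    return out

def random_subspace(row, mtry, rng):
    """Split-time search over a random subset of the enriched space,
    so composite features compete with organic ones for splits."""
    return rng.sample(range(len(row)), mtry)

rows = [[1.0, 2.0, 3.0], [0.5, 4.0, 2.0]]
brewed = brew_features(rows)
print(brewed[0])  # 3 organic features + 3 pairwise products

rng = random.Random(1)
print(sorted(random_subspace(brewed[0], mtry=4, rng=rng)))
```

Because 'mtry' is now drawn from a larger, partly redundant pool, the per-split candidate sets differ more from tree to tree, which is precisely the extra diversity the post is after.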

These are just the 'thought vectors'; the 'geometry' is soon to be published :)

**About the author:** Ashish Kumar heads the data science initiatives at IGP.com, an e-commerce platform for gifting. He is an IIM alumnus and an author (ML, NLP) associated with publishing houses like Packt Publishing, Springer, and Apress.

LinkedIn profile: http://linkedin.com/in/ashishiimc


Posted 29 March 2021

© 2021 TechTarget, Inc.
