Should you use linear or logistic regression, and in what contexts? There are hundreds of types of regression. Here is an overview for data scientists and other analytics practitioners, to help you decide which regression to use depending on your context. Many of the referenced articles appear, fully edited, in my Wiley data science book.
Note: Jackknife regression has nothing to do with the jackknife, bootstrap, and other re-sampling techniques published by Bradley Efron in 1982; indeed, it is not a re-sampling technique at all.
Other Solutions
Before working on any project, read our article on the lifecycle of a data science project.
Comments
What are folks' thoughts on MARS (Multivariate Adaptive Regression Splines) as a regression technique? R: earth. Python: py-earth. Salford Systems owns the commercial MARS implementation.
http://www.slideshare.net/salfordsystems/evolution-of-regression-ol...
I'd love to see a case study, to show how different methods provide different results.
About R implementations, here is a comment by Alan Parker (see also Amy's comment below):
The CRAN task view "Robust statistical methods" gives a long list of regression methods, including many that Vincent mentions. Here are some that are not mentioned there:
Regression in unusual spaces. This subject is old. It is usually addressed under the title “Compositional data” (see Wikipedia entry). The late John Aitchison founded this area of statistics. Googling his name + “compositional data” gives access to a number of his articles. The R package “compositions” deals with it comprehensively. Another package treats the problem using robust statistics: “robCompositions”.
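A standard tool in Aitchison's compositional-data framework is the centered log-ratio (clr) transform, which maps a composition (positive parts summing to one) into ordinary Euclidean space where usual regression methods apply. Here is a minimal numpy sketch of the clr transform; it is an illustration of the general technique, not code from the packages mentioned above:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition:
    clr(x)_i = log(x_i) - mean(log(x)). The result always sums to zero,
    so clr-transformed data live on a hyperplane of Euclidean space."""
    logs = np.log(x)
    return logs - logs.mean(axis=-1, keepdims=True)

# A 3-part composition (proportions summing to 1)
comp = np.array([0.2, 0.3, 0.5])
z = clr(comp)
```

The R packages "compositions" and "robCompositions" provide this transform (and its inverse) along with regression methods built on top of it.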
Bayesian regression. I find Bayesian stuff conceptually hard, so I am using John Kruschke’s friendly book: “Doing Bayesian data analysis”. Chapter 16 is on linear regression. He provides a free R package to carry out all the analyses in the book. The CRAN view “Bayesian” has many other suggestions. Package BMA does linear regression, but packages for Bayesian versions of many other types of regression are also mentioned.
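For the simplest case — linear regression with Gaussian noise of known variance and a Gaussian prior on the coefficients — the Bayesian posterior is available in closed form. The following numpy sketch illustrates that conjugate update; it is a generic illustration under those assumptions, not code from Kruschke's book or the BMA package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 1 + 2*x + noise
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([1.0, 2.0])
sigma2 = 0.25                      # known noise variance (assumption)
y = X @ true_beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# Prior: beta ~ N(0, tau2 * I). Posterior is Gaussian with:
tau2 = 10.0
post_precision = X.T @ X / sigma2 + np.eye(2) / tau2
post_cov = np.linalg.inv(post_precision)
post_mean = post_cov @ (X.T @ y) / sigma2
```

With a weak prior and moderate n, the posterior mean is close to the ordinary least-squares estimate, and `post_cov` quantifies the remaining uncertainty — the piece that frequentist point estimates do not give you directly.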
I think what Kalyanaraman has in mind is auto-regressive models for time series, like ARIMA processes and Box & Jenkins types of tools to estimate the parameters. A simple form is x(t) = a * x(t-1) + b * x(t-2) + error, where t is the time and a, b are the "regression" coefficients; for the time series to remain stationary rather than explode, a and b must satisfy a + b < 1 (along with b - a < 1 and |b| < 1).
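The AR(2) model above can be simulated and its coefficients recovered by regressing x(t) on its own two lags (conditional least squares, the simplest of the Box & Jenkins estimation tools). A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# AR(2): x(t) = a*x(t-1) + b*x(t-2) + error, with a + b < 1 (stationary)
a, b = 0.5, 0.3
n = 5000
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    x[t] = a * x[t - 1] + b * x[t - 2] + eps[t]

# Estimate (a, b) by least squares on the two lagged series
Y = x[2:]
Z = np.column_stack([x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
```

With a few thousand observations, `coef` lands close to the true (a, b); in practice you would use a dedicated routine (e.g. an ARIMA fitter) that also handles model order selection and the moving-average part.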
Bayesian regression was added later. Here's how to do it in SAS, courtesy of one of our readers, Ralph Winters:
For Bayesian analysis in SAS, you can use PROC MCMC, do some post-hoc Bayesian-type comparisons using PROC GENMOD with the BAYES option, or even use PROC LOGISTIC.
Ralph Winters
Data Architect at EmblemHealth
Here's how to do it in R, courtesy of one of our readers, Blaise F Egan:
Hi Kalyanaraman,
Can you elaborate? Jackknife regression addresses this issue, but you can also transform your data, or use PCA to decorrelate the variables (I don't like PCA because the new variables lack interpretability). But maybe you had another idea in mind.
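Decorrelating variables with PCA amounts to projecting the (centered) data onto the eigenvectors of its covariance matrix; the resulting components are uncorrelated by construction, which also illustrates the interpretability complaint above — each new variable is a mixture of all the originals. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two strongly correlated predictors: x2 is a noisy copy of x1
x1 = rng.normal(size=1000)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=1000)
X = np.column_stack([x1, x2])

# PCA via eigendecomposition of the sample covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Projected scores are uncorrelated: their covariance is diagonal
scores = Xc @ eigvecs
```

The off-diagonal covariance of `scores` is zero up to floating-point error, but each score mixes x1 and x2, so the regression coefficients on the components no longer map back to individual original variables.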
Transforming your data is a bit risky in the context of black-box, automated data science: each time you add enough new data, you need to re-transform the whole data set, which creates some instability. This is not an issue for small transformations involving one observation at a time, but it is for big transformations involving all observations simultaneously (e.g., Mahalanobis transforms).
If the error term in your model is auto-correlated, you might want to stratify your data and perform the ecologic regression mentioned above.
Vincent
© 2014 Data Science Central