*Note: This blog was written by Dr. John Elder and was originally published on www.elderresearch.com/blog.*

______________________________

Elder Research has solved many challenging and previously unsolved technical problems in a wide variety of fields for Government, Commercial and Investment clients, including fraud prevention, insider threat discovery, image recognition, text mining, and oil and gas discovery. But our team got its start with a hedge fund breakthrough (as described briefly in a couple of books^{1,2}), and has remained active in that work, continuing to invent the underlying science necessary to address what is likely the hardest problem of all: accurately anticipating the enormous “ensemble model” of the markets.

It is extremely challenging to extract lasting and actionable patterns from highly volatile and noisy market signals. In theory, timing the market is impossible – and in practice that is a good first approximation. However, small but significant advances we made over the past two decades in three contributing areas, briefly described here, have combined to lead to breakthrough live market timing strategies with high Sharpe ratios and low market exposure.

Because of the power of modern analytic techniques, it is often possible to find apparent (but untrue) predictive correlations in the market. These arise from over-fit, where the complexity of a model overwhelms the data, or, even more dangerously, from over-search, where so many possible relationships are examined that one is found to work by chance. Wrestling with this serious problem over many years and in many fields of application, I refined a powerful resampling method, which I called **Target Shuffling**, to measure the probability that an experimental finding could have occurred by chance. It is far more accurate than traditional statistical tests for findings discovered through extensive search.
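A minimal sketch of the idea (the data, the candidate-signal pool, and the shuffle count are all illustrative, not from the original work): search a pool of pure-noise "signals" for the one best correlated with a target, then shuffle the target repeatedly and repeat the same search to see how often chance alone does as well.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# Hypothetical setting: 20 candidate "signals" of pure noise, one target.
n_obs, n_signals = 50, 20
target = [random.gauss(0, 1) for _ in range(n_obs)]
signals = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_signals)]

# Over-search: keep the best-looking signal after trying all 20.
observed_best = max(abs(corr(s, target)) for s in signals)

# Target shuffling: permuting the target destroys any real link, so
# repeating the search on shuffled targets shows what "best" chance produces.
n_shuffles = 200
count = 0
shuffled = target[:]
for _ in range(n_shuffles):
    random.shuffle(shuffled)
    best_by_chance = max(abs(corr(s, shuffled)) for s in signals)
    if best_by_chance >= observed_best:
        count += 1
p_value = count / n_shuffles  # fraction of shuffles that matched the "finding"
```

Because every signal here is noise, the shuffled searches do about as well as the original one, exposing the "discovery" as a chance artifact.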

Years earlier, to more accurately measure the quality of market timing (or style-switching) strategies, I defined a criterion I called **DAPY**, for “Days Ahead Per Year”. It measures, in days of average-sized returns, the expected excess return of a timing strategy compared to a benchmark similarly exposed to the market. The Sharpe ratio can be thought of as measuring the quality of a strategy’s returns, whereas DAPY measures its timing edge. Together, they are much more useful than Sharpe alone. Most importantly, Elder Research studies have shown DAPY to be better than Sharpe at predicting future performance.
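The post does not give the DAPY formula, so the sketch below assumes one plausible reading: annualize the strategy's excess return over an equally exposed benchmark, then express it in units of the average-sized daily return. All numbers are toy values.

```python
# Toy daily market returns and a toy in/out timing strategy.
daily_market = [0.004, -0.002, 0.006, -0.005, 0.003, 0.001, -0.004, 0.002]
positions = [1, 0, 1, 0, 1, 1, 0, 1]  # 1 = in the market that day, 0 = out

exposure = sum(positions) / len(positions)        # fraction of time in market
strat_ret = sum(p * r for p, r in zip(positions, daily_market))
bench_ret = exposure * sum(daily_market)          # equally exposed benchmark
avg_day = sum(abs(r) for r in daily_market) / len(daily_market)

# Excess return per day, annualized, in units of an average-sized day.
days_per_year = 252
excess_per_day = (strat_ret - bench_ret) / len(daily_market)
dapy = excess_per_day * days_per_year / avg_day
```

A positive DAPY says the timing decisions themselves added value beyond what the same average exposure would have earned passively.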

Even the most modern data science tools typically attempt to minimize squared error, due to its optimization convenience, when forecasting or classifying. But that metric is not well-suited for obtaining market decisions, as the user’s criteria of merit have much more to do with return, drawdown, volatility, exposure, etc., than with strict forecast accuracy. (If one gets the direction right, for instance, it is not bad to be wrong on magnitude, much less its square.) What we need are optimization metrics that reflect our true interests, as well as an algorithm that can find the best values in a noisy, multi-modal, multi-dimensional space.
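A toy illustration of the point, with two hypothetical forecasts of the same returns: forecast `a` gets every direction right with tiny magnitudes, while forecast `b` nails magnitudes but misses one direction. Squared error prefers `b`; a simple trade-on-the-sign criterion prefers `a`.

```python
def mse(pred, actual):
    """Mean squared error: the conventional optimization target."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def trading_return(pred, actual):
    """Return from trading on the forecast's sign: long if positive, short if negative."""
    sign = lambda x: 1 if x > 0 else -1
    return sum(sign(p) * a for p, a in zip(pred, actual))

actual = [0.02, -0.01, 0.03]        # toy realized daily returns
a = [0.001, -0.001, 0.001]          # all directions right, magnitudes tiny
b = [0.021, 0.011, 0.029]           # magnitudes close, one direction wrong

# The two criteria rank the forecasts oppositely.
assert mse(b, actual) < mse(a, actual)
assert trading_return(a, actual) > trading_return(b, actual)
```

The ranking reversal is the whole argument: an optimizer chasing squared error would pick `b`, the forecast a trader would not want.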

The early years of my career working with the markets were marked by continual failure, even after strong success in aerospace and a couple of other difficult fields. I became convinced that a quality search algorithm was needed to allow the design of custom score functions (model metrics). I returned to graduate school and made this the focus of my PhD research, creating a global optimization algorithm, **GROPE** (Global R^d Optimization when Probes are Expensive), which finds the global optimum value (within bounds) for the parameters of a strategy using as few probes (experiments) as possible. By that criterion, it was for many years (and may still be) the world-champion optimization algorithm. (The accompanying figure shows how it represents a nonlinear two-dimensional surface as a set of interconnected triangular planes.)
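GROPE itself is not reproduced here. As a stand-in, the classic Shubert–Piyavskii method below shows the same probe-frugal spirit in one dimension: it maintains a piecewise-linear lower bound on the function (a 1-D analog of interconnected planes) and spends each expensive probe where that bound dips lowest. The test function and Lipschitz constant are assumptions for illustration.

```python
import math

def shubert_minimize(f, lo, hi, lipschitz, n_probes=15):
    """Globally minimize f on [lo, hi] with a fixed budget of expensive probes,
    assuming |f'| <= lipschitz, by always probing where the piecewise-linear
    lower bound is lowest."""
    pts = sorted([(lo, f(lo)), (hi, f(hi))])  # two probes spent on endpoints
    for _ in range(n_probes - 2):
        best = None
        for (xi, fi), (xj, fj) in zip(pts, pts[1:]):
            # Between two probes the lower bound dips to this value...
            bound = (fi + fj) / 2 - lipschitz * (xj - xi) / 2
            # ...at this interior point, where the two bounding lines cross.
            x_new = (fi - fj) / (2 * lipschitz) + (xi + xj) / 2
            if best is None or bound < best[0]:
                best = (bound, x_new)
        x = best[1]
        pts.append((x, f(x)))  # spend one probe at the most promising spot
        pts.sort()
    return min(pts, key=lambda p: p[1])  # best (x, f(x)) found

# A bumpy 1-D test function with several local minima on [0, 4].
f = lambda x: math.sin(3 * x) + 0.3 * (x - 2) ** 2
x_best, f_best = shubert_minimize(f, 0.0, 4.0, lipschitz=6.0, n_probes=15)
```

With only 15 probes the search escapes the shallow local minima and settles into the deep basin, which is the behavior a probe-expensive optimizer must deliver.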

In Elder Research’s investment models the global optimization often works in a second stage after a smallish set (i.e., dozens) of useful inputs have been identified – in a quantitative and not qualitative manner – from thousands of candidate inputs. The winnowing is accomplished in a first stage through regularized model fitting, such as Lasso Regression, to filter out useless variables while allowing unexpected combinations to surface.
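A minimal sketch of such first-stage winnowing, assuming a plain coordinate-descent Lasso (not Elder Research's actual pipeline) on toy data where only two of eight candidate inputs matter:

```python
import random

random.seed(1)

def soft_threshold(rho, lam):
    """Shrink rho toward zero by lam; exactly zero inside [-lam, lam]."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso(X, y, lam, n_iter=50):
    """Coordinate-descent Lasso: minimize 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual
            # (residual with feature j's own contribution added back).
            rho = sum(X[i][j] * (y[i]
                                 - sum(X[i][k] * beta[k] for k in range(p))
                                 + X[i][j] * beta[j]) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: 8 candidate inputs, but only inputs 0 and 3 actually drive y.
n, p = 60, 8
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * X[i][0] - 1.5 * X[i][3] + random.gauss(0, 0.1) for i in range(n)]

beta = lasso(X, y, lam=5.0)
selected = [j for j, b in enumerate(beta) if abs(b) > 1e-6]  # surviving inputs
```

The L1 penalty drives the six useless coefficients exactly to zero, leaving a quantitative shortlist for the second-stage optimization.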

Ensemble methods have been called “the most influential development in Data Mining and Machine Learning in the past decade.” They combine multiple models into one that is often more accurate than the best of its components. Ensembles have provided a critical boost to industrial challenges, from investment timing to drug discovery, and fraud detection to recommendation systems, where predictive accuracy is more vital than model interpretability. In 2010 I had the privilege of co-authoring a book on ensembles with Dr. Giovanni Seni, about a decade and a half after I had been one of the early discoverers and promoters of the idea. The investment system we use, like many of our models for other fields, employs an ensemble of separately-trained models to improve accuracy and robustness.
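A tiny illustration of the core ensemble effect, with three stand-in "models" (each the true value plus independent noise) in place of separately trained models: averaging their predictions can never do worse than their average error, and with independent errors it usually beats every component.

```python
import random

random.seed(2)

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

# Three deliberately different "models" of the same quantity.
true_vals = [0.5 * i for i in range(20)]
models = [[v + random.gauss(0, 1.0) for v in true_vals] for _ in range(3)]

# Simple averaging ensemble: mean prediction across the components.
ensemble = [sum(col) / len(col) for col in zip(*models)]

component_errors = [mse(m, true_vals) for m in models]
ensemble_error = mse(ensemble, true_vals)
```

Because squared error is convex, the averaged prediction's error is bounded by the mean of the component errors (Jensen's inequality); when the components' mistakes are independent, the errors partially cancel and the ensemble does better still.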

Even with these breakthrough technologies, most of the investment models we attempt do not work. The general problem is so hard that our attempts to find repeatable patterns that work out of sample fall apart at some stage of implementation – fortunately before client money is involved! Yet we have had a couple of strong successes, including a system that worked for over a decade with hundreds of millions of dollars and from which every investor came out ahead. The Target Shuffling method not only convinced the main investor at the outset that the edge was significant (a real pocket of inefficiency) but also provided an early warning when that edge was disappearing and it was time to shut the system down. Together, these three technology breakthroughs made the impossible occasionally possible.

^{1} See Chapter 1 of Dr. Eric Siegel’s best-selling book, *Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die*.

^{2} An excerpt from the book *Journeys to Data Mining: Experiences from 15 Renowned Researchers* briefly recounts the start: “The stock market project turned out, against all predictions of investment theory, to be very successful. We had stumbled across a persistent pricing inefficiency in a corner of the market. A slight pattern emerged from the overwhelming noise which, when followed fearlessly, led to roughly a decade of positive returns that were better than the market and had only two-thirds of its standard deviation—a home run as measured by risk-adjusted return. My slender share of the profits provided enough income to let me launch Elder Research in 1995 when my Rice fellowship ended, and I returned to Charlottesville for good. Elder Research was one of the first data mining consulting firms…”
