
Investment Modeling Grounded In Data Science


Note: This blog was written by Dr. John Elder and was originally published on www.elderresearch.com/blog.

______________________________

Elder Research has solved many challenging and previously unsolved technical problems in a wide variety of fields for Government, Commercial and Investment clients, including fraud prevention, insider threat discovery, image recognition, text mining, and oil and gas discovery. But our team got its start with a hedge fund breakthrough (as described briefly in a couple of books [1, 2]), and has remained active in that work, continuing to invent the underlying science necessary to address what is likely the hardest problem of all: accurately anticipating the enormous “ensemble model” of the markets.

It is extremely challenging to extract lasting and actionable patterns from highly volatile and noisy market signals. In theory, timing the market is impossible – and in practice that is a good first approximation. However, small but significant advances we made over the past two decades in three contributing areas, briefly described here, have combined to produce breakthrough live market-timing strategies with high Sharpe ratios and low market exposure.

1. Luck, Skill Or Torture? How To Tell

Because of the power of modern analytic techniques, it is often possible to find apparent (but untrue) predictive correlations in the market, due to over-fit (where the complexity of a model overwhelms the data) or, even more dangerously, to over-search (where so many possible relationships are examined that one is found to work by chance). Wrestling with this serious problem over many years in many fields of application, I refined a powerful resampling method, which I called Target Shuffling, to measure the probability that an experimental finding could have occurred by chance. It is far more accurate than t-tests and other formulaic statistical methods, which do not take into account the vast search performed by modern inductive modeling algorithms. With this tool, one can much more accurately measure the “edge” (or lack thereof) of a proposed investment strategy (or any other model).
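To make the idea concrete, here is a minimal sketch of a target-shuffling test in Python. The linear model, the toy data, and the R² metric are illustrative assumptions, not Elder Research's implementation; the point is simply that the identical modeling process is re-run on many shuffled copies of the target to see how good a result pure chance can produce.

```python
# A minimal sketch of Target Shuffling, assuming a simple linear model
# and in-sample R^2 as the performance metric. The toy data are pure noise.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def best_r2(X, y):
    """Fit the model and return its in-sample R^2 (stand-in for any search/fit process)."""
    model = LinearRegression().fit(X, y)
    return r2_score(y, model.predict(X))

X = rng.normal(size=(250, 20))   # toy candidate predictors
y = rng.normal(size=250)         # toy target (no true relationship here)

actual = best_r2(X, y)

# Re-run the identical modeling process on many shuffled copies of the target.
# Shuffling preserves X's structure but breaks any true X -> y relationship,
# so the shuffled scores show what "success" looks like by chance alone.
n_shuffles = 1000
shuffled_scores = np.array([best_r2(X, rng.permutation(y)) for _ in range(n_shuffles)])

# Empirical p-value: fraction of chance results at least as good as the real one.
p_value = (shuffled_scores >= actual).mean()
print(f"actual R^2 = {actual:.3f}, shuffled 95th pct = "
      f"{np.percentile(shuffled_scores, 95):.3f}, p ~ {p_value:.3f}")
```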

Years earlier, to more accurately measure the quality of market-timing or style-switching strategies, I defined a criterion I called DAPY, for “Days Ahead Per Year”. It measures, in days of average-sized returns, the expected excess return of a timing strategy compared to a benchmark similarly exposed to the market. The Sharpe ratio can be thought of as measuring the quality of a strategy’s returns, whereas DAPY measures its timing edge. Together, they are much more useful than Sharpe alone. Most importantly, Elder Research studies have shown DAPY to be better than Sharpe at predicting future performance.
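The post describes DAPY only qualitatively, so the sketch below is one plausible reading of the definition rather than Elder Research's actual formula: annual excess return over an exposure-matched benchmark, expressed in units of the benchmark's average-sized daily return.

```python
# A rough DAPY-style ("Days Ahead Per Year") calculation. The exact formula is
# an assumption; only the qualitative description above comes from the post.
import numpy as np

def dapy(strategy_daily, benchmark_daily, exposure, trading_days=252):
    """Annual excess return over an exposure-matched benchmark, in 'typical days'."""
    strategy_daily = np.asarray(strategy_daily)
    benchmark_daily = np.asarray(benchmark_daily)

    # Excess return vs. a benchmark scaled to the strategy's average exposure
    excess_per_day = strategy_daily.mean() - exposure * benchmark_daily.mean()
    annual_excess = excess_per_day * trading_days

    # Size of a "typical" benchmark day
    avg_day = np.abs(benchmark_daily).mean()
    return annual_excess / avg_day

# Toy usage: a half-exposed strategy over ~10 years of synthetic daily returns
rng = np.random.default_rng(1)
bench = rng.normal(0.0003, 0.01, 2520)
strat = 0.5 * bench + rng.normal(0.0002, 0.005, 2520)
print(f"DAPY ~ {dapy(strat, bench, exposure=0.5):.1f} days/year")
```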

2. Global Optimization (combined with Simulation and sometimes Complexity Regularization)

Even the most modern data science tools usually minimize squared error when forecasting or classifying, because it is convenient to optimize. But that metric is not well suited for making market decisions, as the user’s criteria of merit have much more to do with return, drawdown, volatility, exposure, etc., than with strict forecast accuracy. (If one gets the direction right, for instance, it is not bad to be wrong on magnitude, much less its square.) What we need are optimization metrics that reflect our true interests, as well as an algorithm that can find the best values in a noisy, multi-modal, multi-dimensional space.
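As a concrete illustration, here is a minimal sketch of a custom score function that rewards return while penalizing drawdown and exposure; the specific penalty terms and weights are illustrative assumptions, not Elder Research's actual criteria. A global optimizer can then search a strategy's parameters to maximize this score directly, rather than minimizing squared forecast error.

```python
# A minimal sketch of a custom strategy score: annualized return, penalized
# for maximum drawdown and for average market exposure. Weights are illustrative.
import numpy as np

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve."""
    peaks = np.maximum.accumulate(equity)
    return ((peaks - equity) / peaks).max()

def strategy_score(daily_returns, positions,
                   dd_weight=2.0, exposure_weight=0.5, trading_days=252):
    """Higher is better: return, minus penalties for drawdown and exposure."""
    daily_returns = np.asarray(daily_returns)
    positions = np.asarray(positions)   # fraction of capital in the market each day

    equity = np.cumprod(1.0 + positions * daily_returns)
    annual_return = equity[-1] ** (trading_days / len(daily_returns)) - 1.0

    return (annual_return
            - dd_weight * max_drawdown(equity)
            - exposure_weight * np.abs(positions).mean())

# Toy usage: a strategy that is in the market on roughly 60% of days
rng = np.random.default_rng(5)
rets = rng.normal(0.0004, 0.01, 1260)
pos = (rng.random(1260) < 0.6).astype(float)
print(f"score = {strategy_score(rets, pos):.3f}")
```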

Early years of my career working with the markets were marked by continual failure, even after strong success in aerospace and a couple of other difficult fields. I became convinced of the need for a quality search algorithm in order to allow the design of custom score functions (model metrics). I returned to graduate school and made this the focus of my PhD research. I created a global optimization algorithm, GROPE (Global Rd Optimization when Probes are Expensive), which finds the global optimum value (within bounds) for the parameters of a strategy, using as few probes (experiments) as possible. By that criterion, it was for many years (and may still be) the world champion optimization algorithm. (It represents a nonlinear 2-dimensional surface, for example, as a set of interconnected triangular planes.)
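The sketch below is a toy illustration of the core idea only: model the response surface from a handful of expensive probes with interconnected triangular planes, then place the next probe where the surrogate looks most promising. It is not the GROPE algorithm itself; the test function, the probe budget, and the centroid-of-the-best-triangle rule are illustrative assumptions.

```python
# A toy illustration of probe-efficient optimization over a piecewise-planar
# (triangulated) surrogate. This is NOT GROPE, just the general flavor.
import numpy as np
from scipy.spatial import Delaunay

def expensive_f(p):
    """Stand-in for a costly evaluation (e.g., a strategy back-test) over two parameters."""
    x, y = p
    return np.sin(3 * x) * np.cos(3 * y) + 0.5 * (x - 0.3) ** 2 + 0.5 * (y + 0.2) ** 2

rng = np.random.default_rng(2)
points = rng.uniform(-1, 1, size=(8, 2))              # initial probes
values = np.array([expensive_f(p) for p in points])

for _ in range(20):                                   # small probe budget
    tri = Delaunay(points)                            # triangular planes over the probes
    # Score each triangle by the average of its corner values (lower is better)
    tri_scores = values[tri.simplices].mean(axis=1)
    best = tri.simplices[np.argmin(tri_scores)]
    candidate = points[best].mean(axis=0)             # centroid of the most promising triangle
    points = np.vstack([points, candidate])
    values = np.append(values, expensive_f(candidate))

best_idx = np.argmin(values)
print("best point:", points[best_idx], "value:", values[best_idx])
```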

In Elder Research’s investment models, the global optimization often works in a second stage, after a smallish set (i.e., dozens) of useful inputs has been identified – in a quantitative, not qualitative, manner – from thousands of candidate inputs. The winnowing is accomplished in a first stage through regularized model fitting, such as Lasso regression, to filter out useless variables while allowing unexpected combinations to surface.
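Here is a minimal sketch of that first-stage winnowing, assuming scikit-learn's LassoCV on synthetic data; the data, the cross-validated penalty, and the rule of keeping only nonzero coefficients are illustrative assumptions.

```python
# A minimal sketch of Lasso-based winnowing: most coefficients shrink to zero,
# leaving a small set of candidate inputs for a later optimization stage.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_days, n_candidates = 1000, 2000
X = rng.normal(size=(n_days, n_candidates))     # thousands of candidate inputs
true_idx = [5, 42, 777]                         # only a few actually matter (by construction)
y = X[:, true_idx] @ np.array([2.0, -1.5, 1.0]) + rng.normal(0, 1.0, n_days)

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, n_alphas=50, max_iter=5000).fit(X_std, y)

selected = np.flatnonzero(lasso.coef_)          # surviving inputs
print(f"kept {selected.size} of {n_candidates} candidates:", selected[:20])
```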

3. Ensemble Models

Ensemble methods have been called “the most influential development in Data Mining and Machine Learning in the past decade.” They combine multiple models into one that is often more accurate than the best of its components. Ensembles have provided a critical boost to industrial challenges—from investment timing to drug discovery, and fraud detection to recommendation systems—where predictive accuracy is more vital than model interpretability. In 2010 I had the privilege of co-authoring a book on Ensembles with Dr. Giovanni Seni, about a decade and a half after I’d been one of the early discoverers and promoters of the idea. The investment system we use, as well as many of our models for other fields, employs an ensemble of separately-trained models to improve accuracy and robustness.
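Here is a minimal sketch of the idea, assuming a simple averaging ensemble of separately-trained scikit-learn regressors on synthetic data; the component models and the plain-mean combination are illustrative assumptions, as the post does not describe how its ensembles are actually built.

```python
# A minimal sketch of an averaging ensemble of separately-trained models.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=30, noise=20.0, random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=4)

models = [
    Ridge(alpha=1.0),
    RandomForestRegressor(n_estimators=200, random_state=4),
    GradientBoostingRegressor(random_state=4),
]

preds = []
for m in models:
    m.fit(X_tr, y_tr)                       # each component model is trained separately
    p = m.predict(X_te)
    preds.append(p)
    print(f"{type(m).__name__:>26}: MAE = {mean_absolute_error(y_te, p):.2f}")

# Combine the separately-trained models by simple averaging
ensemble_pred = np.mean(preds, axis=0)
print(f"{'Averaging ensemble':>26}: MAE = {mean_absolute_error(y_te, ensemble_pred):.2f}")
```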

Even with these breakthrough technologies, most of the investment models we attempt do not work. The general problem is so hard that our attempts to find repeatable patterns that work out of sample fall apart at some stage of implementation – fortunately before client money is involved! Yet we have had a couple of strong successes, including a system that worked for over a decade with hundreds of millions of dollars and for which every investor came out ahead. The Target Shuffling method not only convinced the main investor at the outset that the edge was significant (a real pocket of inefficiency), but also provided an early warning when that edge was disappearing and it was time to shut the system down. Together, these three technology breakthroughs made the impossible occasionally possible.


[1] See Chapter 1 of Dr. Eric Siegel’s best-selling book, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.

[2] An excerpt from the book Journeys to Data Mining: Experiences from 15 Renowned Researchers briefly recounts the start: “The stock market project turned out, against all predictions of investment theory, to be very successful. We had stumbled across a persistent pricing inefficiency in a corner of the market. A slight pattern emerged from the overwhelming noise which, when followed fearlessly, led to roughly a decade of positive returns that were better than the market and had only two-thirds of its standard deviation—a home run as measured by risk-adjusted return. My slender share of the profits provided enough income to let me launch Elder Research in 1995 when my Rice fellowship ended, and I returned to Charlottesville for good. Elder Research was one of the first data mining consulting firms…”