Summary: At least one instance of Real Time Predictive Model development in a streaming data problem has been shown to be more accurate than its batch counterpart. Whether this can be generalized is still an open question. It does challenge the assumption that Time-to-Insight can never be real time.
A few months back I was making my way through the latest literature on “real time analytics” and “in stream analytics” and my blood pressure was rising. The cause was the developer-driven hyperbole claiming that the creation of brand new insights using advanced analytics has become “real time”. My reaction at the time: humbug.
The issue, then as now, is the failure to differentiate between time-to-action and time-to-insight. Not infrequently, statements about “fast data” are accompanied by a diagram that, to me, has a fatal flaw.
The flaw, to my way of thinking, is that there are really two completely different tasks here with very different time frames.
Time-to-Action: Typically the customer-generated trigger comes in one side and enters your transactional platform, where it may be scored, next best offers formulated, recommenders updated, or any number of other automated tasks performed. Based on these predetermined routines, the desired and pre-planned action comes out, often as an action fed back to the customer. Time-to-Action can indeed be real time, right down to milliseconds.
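To make the distinction concrete, here is a minimal sketch of the Time-to-Action side: a model fitted offline (the Time-to-Insight work) is then frozen and used to score each incoming event in milliseconds. The function names, the logistic-regression model, and the synthetic data are all my illustrative assumptions, not anything described in the article.

```python
# Sketch only: an offline-trained model serving real-time scores.
# All names and data here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Offline (Time-to-Insight): fit the model once on historical data.
X_hist = rng.normal(size=(1000, 4))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_hist, y_hist)

# Online (Time-to-Action): score each incoming event with the frozen model.
def score_event(event, model=model):
    """Return a propensity score for one incoming record."""
    return float(model.predict_proba(np.asarray(event).reshape(1, -1))[0, 1])

p = score_event([1.2, 0.4, -0.3, 0.8])
```

The point of the sketch is that only `score_event` runs in the real-time path; everything above it happened earlier, offline.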
Time-to-Insight: These actions and the algorithms that created them were preplanned. That is, a team of data scientists has spent time and effort exploring the data; cleaning, transforming, and perhaps normalizing it; selecting features; and then building a number of models until one proves to their satisfaction that it is sufficiently robust to be implemented in the transactional system. That is Time-to-Insight, and that is decidedly not real time.
Is Real Time Predictive Model Development Even Possible?
Especially now that streaming data processing has become such a hot topic I wanted to revisit my earlier conclusion and see if real time analytics, specifically the creation of new predictive models, could in fact be real-time, and truly on-the-fly.
Batch versus Streaming Fundamentals
First off, let me be clear that we’re talking about streaming data problems, or perhaps more correctly ‘unbounded data’ problems. If it’s not streaming, it’s by definition batch.
The “Correctness” Problem and Lambda Architecture
Whether we are talking about quantitative analytics (sums, medians, top-N, standard deviations, and the like) or actual predictive models, streaming analytics was always said to have a “correctness” problem. That is, calculations produced on a subset of the data could only approximate the accuracy of the same calculation conducted on the whole data set.
This “correctness” argument has come under fire, since both the streaming (subset) calculation and the batch (entire data set) calculation are approximations of the true condition. As a result, Tyler Akidau, a leading developer of streaming systems at Google, says “Streaming systems have long been relegated [unfairly] to a somewhat niche market of providing low-latency, inaccurate/speculative results, often in conjunction with a more capable batch system to provide eventually correct results, i.e. the Lambda Architecture.”
If you’re not familiar with the Lambda Architecture model, Akidau goes on to explain “the basic idea is that you run a streaming system alongside a batch system, both performing essentially the same calculation. The streaming system gives you low-latency, inaccurate results (either because of the use of an approximation algorithm, or because the streaming system itself does not provide correctness), and sometime later a batch system rolls along and provides you with correct output.”
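Akidau’s description can be caricatured in a few lines. In this toy sketch (my own, not his), a “speed layer” serves a fast but approximate count built from a sample of the events, and a “batch layer” later recomputes the exact counts, whose results overwrite the speculative ones in the serving layer:

```python
# Toy Lambda-style sketch (illustrative only): fast approximate view,
# later corrected by an exact batch view.
events = ["a", "b", "a", "c", "a", "b"]

# Speed layer: approximate -- here it samples every other event
# and scales the counts up to compensate.
speed_view = {}
for e in events[::2]:
    speed_view[e] = speed_view.get(e, 0) + 2

# Batch layer: exact, but only available later, over the full data set.
batch_view = {}
for e in events:
    batch_view[e] = batch_view.get(e, 0) + 1

# Serving layer: batch results replace speculative ones once ready.
serving = {**speed_view, **batch_view}
```

The operational cost Akidau objects to is visible even here: two separate computations of the same quantity, plus the bookkeeping to reconcile them.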
The Surprising Finding
With all these factors seemingly raising insurmountable barriers to real-time predictive analytics I was surprised to find one example where it is not only successful, but actually claims greater accuracy than the same model run in batch.
The example comes from UK-based Mentat Innovations (ment.at), which in December 2015 published these results regarding their proprietary predictive modeling package called “Streaming Random Forests”. Without repeating all of their detail, which you can see in the original post, here is a brief summary.
Using a well-known public database (ELEC2) which is used as a benchmark in streaming data literature:
Each record contains a timestamp, as well as four covariates capturing aspects of electricity demand and supply for the Australian New South Wales (NSW) Electricity Market from May 1996 to December 1998.
To quote directly from their findings:
Below we report the error rate (lower is better) achieved by a sliding window implementation of random forests using the randomForest package in R, for 8 different window sizes (left group of bars in the Figure below). When the dataset is ordered by timestamp, the best performing window size is 100, on the lower end of the scale. This is [a] classic case of “more data does not equal more information”: using 100 times more data (w=10,000 vs w=100) almost doubles (175%) the error rate!!
To drive the point home, we took the same data, but presented it to the classifier in random order so that it was no longer possible to take advantage of temporal effects. In this case, without any temporal effects, indeed the accuracy improved with larger window sizes.
The advantage that a well-calibrated streaming method can have over its offline counterpart in a streaming context is quite dramatic: in this case, the best-performing streaming classifier has an error rate of 12%, whereas a random forest trained on the entire dataset (minus 10% withheld for testing) achieves an error rate of 24%. A fraction of the data, double the accuracy!
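The shape of their experiment is easy to reconstruct in miniature. The sketch below is my own reconstruction in Python/scikit-learn on synthetic data with an abrupt concept drift — it is not Mentat’s code, not their “Streaming Random Forests”, and not the ELEC2 data — but it shows the same mechanism: when the underlying relationship drifts over time, a forest retrained on a small recent window beats one trained on a long history that straddles the drift.

```python
# Reconstruction sketch (not Mentat's code): prequential evaluation of a
# sliding-window random forest on a synthetic stream with concept drift.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

n = 300
X = rng.normal(size=(n, 4))
# The decision rule flips halfway through the stream (concept drift).
y = np.where(np.arange(n) < n // 2,
             (X[:, 0] > 0).astype(int),    # early concept
             (X[:, 0] <= 0).astype(int))   # drifted concept

def prequential_error(X, y, w):
    """Train on the last w points, predict the next one, average errors."""
    errs = []
    for t in range(w, len(y)):
        clf = RandomForestClassifier(n_estimators=5, random_state=0)
        clf.fit(X[t - w:t], y[t - w:t])
        errs.append(int(clf.predict(X[t:t + 1])[0] != y[t]))
    return float(np.mean(errs))

err_small = prequential_error(X, y, w=30)    # small, recent window
err_large = prequential_error(X, y, w=150)   # window spanning the drift
```

On this kind of data the small-window error comes out well below the large-window error, which is the “more data does not equal more information” effect Mentat reports.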
Let me repeat the central finding. They have built a more accurate real-time classifier using a very small amount of windowed data, which lends itself to very low-latency streaming systems. And they have demonstrated that its accuracy is greater than for the same data run in batch.
What we have here is a breakthrough in real time predictive analytics. In fairness, presenting any gradient descent tool such as neural nets with the data sorted in time sequence is not a new idea. It is a procedure that has been used for some time to reduce the likelihood of getting stuck in poor local optima, a problem extremely common in gradient descent tools.
Does it Generalize?
Here are the claims that Mentat makes:
These strike me as generally supportable claims based on their demonstration experiment. Providing the data in time order means a lower probability of overfitting. The problem of temporal drift in your models is also eliminated, since each recalculation includes the newest data points and your model remains up to date.
Whether this approach is appropriate for your specific situation is open for consideration.
Taking nothing away from their accomplishment, this is the first and indeed the only example of real time predictive modeling I have personally been able to find. If you know of others, please add a comment and reference to this article.
About the author: Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. He can be reached at:
Comment
As we approach real-time insight, the paradox of synchronous present-shaping-future and future-shaping-present becomes philosophically fascinating! ...the birth of AI just around the corner?
Good discussion! I have a couple of questions: Why wouldn't machine learning techniques based on stochastic gradient descent (SGD) be considered "real-time?" And why wouldn't instance-based methods, such as kNN that does not require global training, be considered real-time predictive?
It is a long and interesting post. However, there is no extrapolation in mathematics, only interpolation; mathematics has considered these topics starting from Gödel.
Then cybernetics came, starting from predicting an aircraft’s location as a target for an artillery gun; there is a literature on this topic.
I think some of this logic could be developed into modern applications for business prediction.
Great article!! really insightful!
Dr. Vorhies, thank you for sharing the insightful information on the latest developments. However, a few clarifications:
1. As only smaller data sets are being used from small window sizes of real-time data, why even build random forests? Why not build some simpler decision trees that may perform similar or better? That may save some computational overhead?
2. More importantly, it appears that a classification/predictive model is being rebuilt on real-time data for every window; then, what is the new data to be used to score against the model? That is, in practical world, we build the predictive model on historical batch of the data, and then use that model to score the real-time event/transaction data (for example, fraud classification/scoring in credit card transactions in real-time). That leads to my next question:
3. To build the classification/prediction model, the data already has to be labeled with an actual known outcome. How practical is it to assume that the real-time event data already comes with the actual outcome? For the same credit card fraud detection example above, while the transaction can be predicted/scored as fraud/not-fraud in real time, the actual outcome is not known for a while... is it not?
© 2019 Data Science Central ® Powered by