
The Next Killer App Waits in Your Data

There is plenty of writing about how data analytics changes business and everything else, but in the end what matters is how you put those analytics into action. This is a software engineering perspective.

Following the success of AlphaGo, the machine learning software that defeated the Go master Lee Sedol, Wired magazine published an article titled “The End of Code” with a provocative highlight:

“Soon we won’t program computers. We’ll train them like dogs.”

This is a very interesting transition. In reality, though, we are not going to train whole large systems like this, but rather more atomic functions. You see, training an ERP system like a dog would be one heck of an agility track.

We are talking about a new class of functions that make a huge leap in what programs can do, but ones that make your average software developer give up and surrender because the rules and logic involved are so hard to pin down. The only hope is to engineer them by training instead of coding. How else would you create a function that tells you whether there is a car in a picture, for example? Or what the topic of a given article is? Or a personalized suggestion for your next meal?

Anatomy of a Predictive Function

These functions perform mainly two classes of actions: they either classify things or they estimate a value. A classification can tell you, for example, whether a received email is junk or not. An estimation, on the other hand, can predict the sales amount for the next month. Both actions involve a fair amount of uncertainty, and that calls for a predictive function.
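To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn; the toy data and feature meanings are invented for illustration. A classifier returns a category, an estimator returns a number.

```python
# A minimal sketch (not from the article) of the two classes of predictive
# functions: classification returns a category, estimation returns a number.
# Assumes scikit-learn; the toy data and feature meanings are invented.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: is the email junk (1) or not (0), given two toy features?
emails = np.array([[8, 1], [7, 1], [1, 0], [0, 0]])   # [spammy words, has attachment]
labels = np.array([1, 1, 0, 0])
classifier = LogisticRegression().fit(emails, labels)
print(classifier.predict([[6, 1]]))                    # -> [1], i.e. junk

# Estimation: predict next month's sales from the two previous months.
history = np.array([[100.0, 110.0], [110.0, 120.0], [120.0, 130.0]])
sales = np.array([120.0, 130.0, 140.0])
estimator = LinearRegression().fit(history, sales)
print(estimator.predict([[130.0, 140.0]]))             # -> roughly [150.]
```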

At the heart of such a function lies a predictive model that is engineered from thousands and thousands of training examples to capture the essence of the process generating those examples. It filters the signal out of the noise, if you will. You show it thousands of pictures with a car and others without, and eventually it can tell the difference. You might not even have to provide the correct answers, because there are ways for a model to self-organize and find the common patterns.
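The self-organizing case can be sketched with clustering, which finds structure in unlabeled examples without being told the correct answers. The feature vectors below are made up, and scikit-learn is again an assumption.

```python
# A hedged sketch of the "self-organize" idea: clustering groups unlabeled
# examples on its own. Assumes scikit-learn; the data is made up.
import numpy as np
from sklearn.cluster import KMeans

examples = np.array([
    [0.9, 0.8], [1.0, 0.7], [0.8, 0.9],   # one natural group
    [0.1, 0.2], [0.2, 0.1], [0.0, 0.3],   # another natural group
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(examples)
print(model.labels_)   # the model separates the two groups without labels
```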


Once a candidate model is calculated, it is scored against new, unseen examples. If the model predicts well, it can then be accepted for now and used in a program, as I will describe later. The problems start when the model does not behave well. As you see, there is a lot to tackle. There are dozens of different methods for calculating the model, each with dozens of parameters that can be changed. You need to observe the right features that might explain the phenomenon, and even if you happen to catch them right, the training data can still be unbalanced, containing only red cars, for example, or the data might have other properties that do not comply with your methods. I am just saying that modeling is serious engineering. It requires a professional, just as programming does.
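The scoring step can be sketched as follows: hold some examples out of training, try a couple of candidate methods, and compare how each predicts on the unseen part. The dataset and the two candidates here are illustrative choices, not the article's setup.

```python
# A minimal sketch of scoring candidate models against unseen examples and
# comparing different methods; dataset and candidates are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {score:.3f}")   # accept a candidate only if it predicts well enough
```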

Find Them and Feed Them

Your model is only as good as the data you train it with. The data can be poor in quality, but usually, when you have more data, you can overcome the minor quality issues. More data also usually beats clever modeling methods. So take another look at that big data store you collected and thought worthless. There might be some diamonds to dig up after all.


You can do the digging with either requirements or data in mind. The requirements may come from a software project that needs a solution to an otherwise infeasible problem. The other possibility is to simply gather all your data and start looking for new possibilities for killer features. Either way, by looking at your data it is possible to see how your world actually works. It may be surprisingly different from what you have imagined. By constructing a network of interdependent data, you can see which things you could potentially estimate or optimize. From there you can capture the candidate training data and see whether it holds a solution for your problem.
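One simple way to start the digging is to look at how the columns of data you already have relate to each other; strong dependencies hint at quantities you might be able to estimate or optimize. The table and column names below are invented, and pandas is an assumption.

```python
# A hedged sketch of the "digging": inspect how existing columns relate to
# each other. The table and column names are invented for illustration.
import pandas as pd

orders = pd.DataFrame({
    "discount":      [0.0, 0.1, 0.2, 0.0, 0.3],
    "delivery_days": [5, 4, 2, 6, 1],
    "reordered":     [0, 0, 1, 0, 1],
})

# Strong correlations hint at things you might be able to predict or optimize.
print(orders.corr())
```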

Proper training data may be hard to acquire, like collecting thousands of representative pictures of cars. Estimating the model may also take a lot of computing time. This makes a well-predicting model a valuable asset in itself, and there might be new business opportunities in selling general, well-predicting models as separate items. For very general phenomena they might predict well for a long time.

Usually, though, the model gets outdated as seasons change and new data becomes available. The initially chosen modeling method or the observed features may not have been the right ones either. This may debunk the promising idea of selling offline prediction models as stock items. A better way is continuous modeling that provides new versions of models as a service. As a data scientist comes up with better algorithms, new parameter tunings, or just a new batch of data, she can make a new, rival generation of the model to compete with the old one. If the new version makes more accurate predictions, it is allowed to replace the old one and automatically enhances the performance of your software.
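The core of that loop is a champion/challenger comparison: the new generation replaces the old one only if it scores better on fresh held-out data. The dataset, models, and promotion rule below are assumptions made for illustration, not the article's implementation.

```python
# A minimal sketch of continuous modeling as champion/challenger: the new
# model replaces the old one only if it scores better on held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

champion = LogisticRegression(max_iter=5000).fit(X_train, y_train)             # current model
challenger = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)  # new generation

def score(model):
    return accuracy_score(y_test, model.predict(X_test))

# Promote the challenger only if it beats the champion; otherwise keep the old one.
current = challenger if score(challenger) > score(champion) else champion
print(type(current).__name__)
```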

Adding the New Magic to Your Applications

In the end, the predictive function is just like any other. You give it the required inputs and receive an output. What is left to decide is where you execute it. There are two principal approaches to adding the data modeling magic to your software. You can either embed the execution in your own codebase or you can outsource both the modeling and the scoring to a cloud service where you have the necessary infrastructure.

It is possible to store the model definition in a file, for example with the Predictive Model Markup Language (PMML), which aims to be a standard for exchanging predictive models. There are also ready-made libraries for model scoring, like JPMML. This really enables you to sell good models as stock items. JPMML only supports well-established techniques, and if that is too restrictive, you can use the free R language and its vast collection of libraries to express the model and do the scoring. You can run R code on a server with Rserve, as the popular visualization tools Tableau and Power BI do.
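As a rough sketch of the PMML route, a model trained in Python can be exported to a PMML file and then scored elsewhere with JPMML. This assumes the sklearn2pmml package (which wraps JPMML-SkLearn and needs a Java runtime at conversion time); the dataset and file name are illustrative.

```python
# A hedged sketch of exporting a scikit-learn model to PMML, assuming the
# sklearn2pmml package is available. Dataset and file name are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)

# The resulting PMML file can be scored from Java with JPMML, or shipped
# as a "stock item" model.
sklearn2pmml(pipeline, "iris_tree.pmml")
```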


As you can see, implementing the model scoring also gets messy pretty quickly. As a software developer you just want to add the function call to your code, execute it, and move on. It makes sense to bundle the continuous modeling and the execution logic behind a separate service, for example a web or REST service. This also widens the range of possible applications, because it allows very lightweight devices and mobile applications to incorporate predictive functions. One of the most interesting applications is robotic process automation (RPA).
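Hiding the predictive function behind a REST endpoint can be sketched like this, assuming Flask and a scikit-learn model; the endpoint name and payload shape are invented for illustration.

```python
# A minimal sketch of wrapping a predictive function in a REST service,
# assuming Flask and scikit-learn; endpoint and payload shape are invented.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

app = Flask(__name__)

# Train (or load) the current champion model when the service starts.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]        # e.g. [5.1, 3.5, 1.4, 0.2]
    label = int(model.predict([features])[0])
    return jsonify({"prediction": label})

if __name__ == "__main__":
    app.run(port=5000)
```

A caller, whether a mobile app or a software robot, then just POSTs its feature values as JSON and reads the prediction from the response, while the modeling behind the endpoint can keep improving independently.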

The purpose of RPA is to use software robots to automate repetitive tasks. Many times this involves legacy desktop applications, for, let's say, document management. You may have a recurring task of sorting documents by topic, or forwarding documents on certain topics onward. Until now this has required a human to cope with the uncertainty the task involves; a software robot workflow is just as helpless with it as traditional procedural code.

What you could do now is gather a good amount of your documents and apply a data analytics method called topic modeling to infer the probable topics of new, unseen documents. This model can then be wrapped up as a function and published for the software robot to call. Now, when the robot uses your old document management system, it can pass each document to the predictive function for classification and then process the document intelligently based on the result. This in effect adds new capabilities to your old application that were never previously thought possible. The revolution may not come overnight; instead it changes your code function by function.
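A topic model of this kind can be sketched with scikit-learn's LDA; the documents, topic count, and wrapper function below are illustrative, not the article's code.

```python
# A hedged sketch of topic modeling with scikit-learn's LDA; documents,
# topic count, and the wrapper function are invented for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "invoice payment due amount bank transfer",
    "payment reminder invoice overdue account",
    "meeting agenda project schedule milestones",
    "project plan schedule resources meeting",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

def probable_topic(text: str) -> int:
    """The predictive function a software robot could call for each document."""
    return int(lda.transform(vectorizer.transform([text])).argmax())

print(probable_topic("overdue invoice payment reminder"))
```

The robot itself never sees the model; it only calls `probable_topic` (or the REST endpoint wrapping it) and routes the document based on the answer.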

For me as a developer, this is not the end of code. It is a new level of what code can do.

Read the originally posted article here.