
As a new sub-discipline of Data Science, I notice that SYSTEMS Analytics is starting to get some traction! There are a couple of graduate-level Analytics programs with *Systems* in their titles (Stevens Institute of Technology and University of North Carolina are the only ones I know of). A web search brings up NO books on *Systems* Analytics. With the publication of my book, “SYSTEMS Analytics: Adaptive Machine Learning workbook”, that gap has now been filled!

 

My last Analytics startup, launched in 2013, explicitly used SYSTEMS Analytics in our Retail Recommendation and Uplift SaaS product; my initial bias toward the Systems approach was confirmed by the success of that product. My book is partly an outcome of this experience and partly of my strong sense that Systems thinking, which has been lacking in Machine Learning (ML) till now, will add a valuable new extension to the theoretical underpinnings and practice of ML.

 

So what is Systems Analytics?  It is a merger of Systems Theory and Machine Learning. One way to quickly relate to this new field is to think of it as a “dynamical” extension to ML theory and practice. What exactly does “dynamical extension” mean? Let us restate what Machine Learning is in a manner suitable for our purpose here . . .

 

Machine Learning = E[ y | x ]

In plain English: What is the likely Class that the measured Attributes belong to?

In Probability speak: What is the Conditional Expectation of Class (y) given Attributes (x)? Or E[ y | x].
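The “probability speak” version can be made concrete with a toy example. Here is a minimal Python sketch (not from the book; the data and binning scheme are invented for illustration) that estimates E[ y | x ] empirically by averaging class labels within bins of the attribute x:

```python
import numpy as np

# Toy binary-class data: attribute x, class label y in {0, 1}.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (x + 0.5 * rng.normal(size=200) > 0).astype(float)

# Empirical E[y | x in bin]: bin x and average y within each bin.
bins = np.digitize(x, np.linspace(-2, 2, 9))
cond_exp = {b: y[bins == b].mean() for b in np.unique(bins)}
# cond_exp[b] approximates P(y = 1 | x in bin b) -- the quantity ML estimates.
```

Any ML classifier is, in this sense, a smarter way of estimating the same conditional expectation without naive binning.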

 

Systems Theory gives us a principled framework to answer the “probability speak” question. It starts with the data models.

 

We are very familiar with Multiple Linear Regression:

y = a0 + a1 x1 + a2 x2 + . . .  + aM xM + w                                                                                     (A)
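As a concrete illustration of the static model (A), here is a minimal Python sketch (not from the book; the coefficients and data are invented) that learns the map once from a Training Set via ordinary least squares:

```python
import numpy as np

# Fit the static model (A): y = a0 + a1*x1 + a2*x2 + w, by ordinary least squares.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
true_a = np.array([1.0, 2.0, -0.5])            # [a0, a1, a2], chosen for illustration
y = true_a[0] + X @ true_a[1:] + 0.1 * rng.normal(size=500)

A = np.column_stack([np.ones(500), X])         # design matrix with intercept column
a_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # static "map" learned once from the data
```

Note that the coefficients are estimated once and frozen; nothing in (A) lets them evolve over time.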

 

Many of us are cool with the Auto-Regressive Moving Average (ARMA) model:

y[n] = - a1[n] y[n-1] - . . . - aD[n] y[n-D] + b1[n] x[n] + . . . + bM[n] x[n-M+1] + e[n]                      (B)
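To make (B) concrete, here is a hedged Python sketch of a constant-coefficient special case (one output lag, one input term; all values invented for illustration), simulated and then re-identified by least squares:

```python
import numpy as np

# Constant-coefficient special case of (B):
# y[n] = -a1*y[n-1] + b1*x[n] + e[n]; simulate it, then recover (a1, b1).
rng = np.random.default_rng(2)
a1, b1 = 0.5, 1.2                      # illustrative values
N = 2000
x = rng.normal(size=N)
e = 0.05 * rng.normal(size=N)
y = np.zeros(N)
for n in range(1, N):
    y[n] = -a1 * y[n - 1] + b1 * x[n] + e[n]

# Regressors: past output and current input.
Phi = np.column_stack([-y[:-1], x[1:]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)   # theta approximates [a1, b1]
```

The key difference from (A): past *outputs* now appear on the right-hand side, which is what makes the model dynamical.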

 

Equation (A) is an example of a “static” data model; equation (B) is an example of a “dynamical” model. A more elaborate and powerful data model is a *time-varying dynamical* one; the State-space model is such a data model.

 

s[n] = A s[n-1] + B x[n] + D q[n-1]                                                                                               (C)

y[n] = H[n] s[n] + r[n]

The first equation of (C) is called the “state” equation and the second, the “measurement” equation. The state equation adds additional “degrees of freedom”, allowing more “dynamics” and a more sophisticated form of ML.
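A minimal Python simulation of (C), with a scalar state and invented values for A, B, D and H, may help fix ideas:

```python
import numpy as np

# Simulate the state-space model (C) with scalar state and measurement.
# Parameter values are invented for illustration.
rng = np.random.default_rng(3)
A, B, D, H = 0.9, 0.5, 1.0, 2.0
N = 100
x = rng.normal(size=N)        # input
q = 0.1 * rng.normal(size=N)  # process noise
r = 0.2 * rng.normal(size=N)  # measurement noise

s = np.zeros(N)
y = np.zeros(N)
for n in range(1, N):
    s[n] = A * s[n - 1] + B * x[n] + D * q[n - 1]  # state equation
    y[n] = H * s[n] + r[n]                         # measurement equation
```

The hidden state s[n] carries the “memory” of the system; the measurement y[n] is all we ever observe.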

 

We defined the ML problem as estimating Conditional Expectation, E[ y | x] above. It turns out that for data models of the form in equation (C), much work has been done in the past 50+ years and powerful solutions are already in hand. So, we have an opportunity to bring this heavy machinery into ML without much of the heavy lifting of discovering them from scratch! This is exactly the subject matter of my book, “Systems Analytics”.

 

The Bayesian estimation approach to finding E[ y | x ] is hugely simplified for the state-space data model in equation (C). There exist “Bayes Filter” algorithms that estimate E[ s | y, x ] WITHOUT first obtaining the conditional pdf explicitly. Then it is a simple matter to obtain the Bayesian estimate we seek in ML: E[ y | x ] = H[n] E[ s | y, x ], since H[n] is a known, non-random quantity.

For different cases, different “Bayes Filters” have to be used. Here is a list –

Bayes Filter algorithms:

  1. Linear Gaussian case – Kalman Filter.
  2. Mild Non-linear Gaussian case – Extended Kalman Filter (EKF).
  3. Non-linear Gaussian case – Cubature Kalman Filter (CKF), Unscented Kalman Filter (UKF).
  4. Non-linear distribution-free case – Particle Filter, Markov Chain Monte Carlo (MCMC) Filter.
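For case 1, the Kalman Filter recursion is short enough to sketch in full. This Python example (parameter values invented for illustration) simulates the scalar form of model (C) and runs the predict/correct recursion; the ML estimate E[ y | x ] is then H times the state estimate:

```python
import numpy as np

# Kalman Filter (case 1) for a scalar form of model (C); values are illustrative.
rng = np.random.default_rng(4)
A, B, H = 0.9, 0.5, 2.0
Q, R = 0.01, 0.04            # process / measurement noise variances
N = 200
x = rng.normal(size=N)

# Simulate the "true" system.
s = np.zeros(N); y = np.zeros(N)
for n in range(1, N):
    s[n] = A * s[n - 1] + B * x[n] + np.sqrt(Q) * rng.normal()
    y[n] = H * s[n] + np.sqrt(R) * rng.normal()

# Kalman recursion: predict from the state equation, correct with the measurement.
s_hat, P = 0.0, 1.0
est = np.zeros(N)
for n in range(1, N):
    s_pred = A * s_hat + B * x[n]                 # time update (state equation)
    P_pred = A * P * A + Q
    K = P_pred * H / (H * P_pred * H + R)         # Kalman gain
    s_hat = s_pred + K * (y[n] - H * s_pred)      # measurement update
    P = (1 - K * H) * P_pred
    est[n] = s_hat                                 # E[y|x] estimate is H * est[n]
```

The filter's state estimate should track the true state more closely than the raw measurement y[n]/H does, because it blends the model's prediction with each noisy observation.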

 

Filters #1 and #2 have a long history of successful applications, from the Apollo space missions to our everyday GPS gadgets! Pure non-linear cases are harder, but over the past decade much progress has been made.

 

NOW, for the most important question – why bother with the sophistication of State-space data models, Kalman Filters and such . . .? This is where the theme of *Dynamical* Machine Learning becomes important.

 

In Machine Learning applications TODAY, almost all *business* Data Science applications learn a static “mapping” between inputs and outputs using Training Set data; then this map is moved into “production”. The implicit assumption is that the relationship between the input & output (= the underlying real “system”) remains unchanged in the future during production usage! This assumption is patently untenable in real-life situations.

 

The other reason to start moving to Dynamical ML/ Systems Analytics is the realization that if learning is the process of “generalization from experience”, we can be more explicit and say that “generalization from past experience AND results of new action” is the true definition of learning! As such, a “static” solution will be inadequate if we want to incorporate the results of new action . . .

 

From a business perspective, I have often said that business solutions are not “one and done”! ML solutions should be administered like flu-shots; adjust the mix and apply on a regular basis . . . or *dynamically* learn and update your ML “map”. This is what SYSTEMS Analytics does . . .
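The “flu-shot” idea of continually updating the ML map can be sketched with recursive least squares with a forgetting factor (a close relative of the Kalman Filter). In this invented Python example, the true input-output relationship changes mid-stream during “production”, and the estimate tracks it:

```python
import numpy as np

# Dynamically updating the ML "map": recursive least squares with a
# forgetting factor tracks a coefficient that drifts during "production".
rng = np.random.default_rng(5)
N = 1000
a_true = np.where(np.arange(N) < 500, 1.0, 2.0)   # relationship changes mid-stream
x = rng.normal(size=N)
y = a_true * x + 0.1 * rng.normal(size=N)

lam = 0.98          # forgetting factor: down-weights old experience
a_hat, P = 0.0, 100.0
for n in range(N):
    K = P * x[n] / (lam + x[n] * P * x[n])
    a_hat = a_hat + K * (y[n] - a_hat * x[n])     # update with results of new action
    P = (P - K * x[n] * P) / lam
```

A static map fit once on the first half of the data would still report a coefficient near 1.0; the adaptive estimate ends near the new value of 2.0.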

 

Clearly, we can only skim the surface in a brief blog like this. If you are sufficiently motivated, especially if you are an “engineering” Data Science practitioner (see What exactly is Data Science? for different “types” of Data Science), specialization in SYSTEMS Analytics may be in your future! A good starting point will be “SYSTEMS Analytics: Adaptive Machine Learning workbook”.

 

My book has two parts: PART I - Machine Learning from multiple perspectives & PART II - SYSTEMS Analytics. Here are a few key chapters – each chapter appendix has MATLAB code which can be downloaded from the book website.

Chapter 2: A quick romp through ML: Many key “traditional” ML methods reviewed with worked out examples.

Chapter 3: Systems Theory, Linear Algebra & Analytics BASICS – old wine in a different bottle: ML practitioners come from STEM as well as Social Sciences backgrounds; this chapter creates a common language for better collaborative work among Data Scientists of different stripes. Linear Algebra IS the lingua franca of ML, no doubt!

Chapter 4: “Modern” Machine Learning: This chapter brings you up to date with current Mathematical aspects of ML.

Chapter 6: State space model & Bayes Filter: Covers the theory, algorithms and use cases of SYSTEMS Analytics.

Chapter 7: Kalman Filter for ADAPTIVE Machine Learning: Kalman Filter algorithm details, including a “recurrent” architecture for ML. Solutions developed in this chapter can be applied to Machine Learning use cases that require static or dynamical, time-invariant or time-varying, linear or non-linear mapping!

 

I conclude my book by noting that while we have established the foundation of SYSTEMS Analytics, many more opportunities to extend the field in theory and business applications await the careful reader of “Systems Analytics: Adaptive Machine Learning workbook”!

PG Madhavan, Ph.D. - “Data Science Player+Coach with deep & balanced track record in Machine Learning algorithms, products & business”

https://www.linkedin.com/in/pgmad

 


Comment by PG Madhavan on August 9, 2016 at 2:02pm

Lennart Ljung's book you refer to is right on my table - a classic! 

Comment by Sione Palu on August 8, 2016 at 5:44pm

Great article.  

Madhavan, I use the Matlab System Identification toolbox for all the dynamical system modelling I do. It's the best tool out there:

"System Identification Toolbox"

http://au.mathworks.com/products/sysid/features.html

The book that the Matlab System Identification is based on:

"System Identification: Theory for the User (2nd Edition)"
https://www.amazon.com/System-Identification-Theory-User-2nd/dp/013...

The author of the book is also the main developer of the Matlab System Identification Toolbox.

Comment by PG Madhavan on August 5, 2016 at 12:39pm

Thanks. Keep learning . . . it is a GOOD thing!

Comment by OG Mack Drama on August 5, 2016 at 12:36pm

I love your article; the math is beyond my comprehension, but I get the gist of it. I agree ML should be monitored and adjusted, because all inputs & outputs are really unstable variables, re: all users are unique but predictable when looking at the data. However, models should be able to adapt to the end user after a period of time, and more sustainable data should be obtained in order for ML predictive models to be more accurate! Trying to explain myself in a way that doesn't sound stupid, forgive me. Here is a recent experience of mine: I am in the music industry, so I was posting a promotional post on one of my music artist Facebook pages. It got flagged as having non-allowable content, violating rules etc. (pure nonsense). There was a dialog box of course saying if you do not agree please explain why, blah blah. I responded & said please adjust your ML algorithms to recognize that my posting is in relation to the person I was posting it to. I even suggested that their ML needs to be "tweaked". After reading your article I clearly see you're talking about the same thing. (You are right, or am I completely off base?) If so, my apologies & please delete my comment. I've been introduced to your wonderful world by my friend Kirk Borne, PhD in Astrophysics, but I am still learning.

PS They did eventually allow my post to be posted!
