
Control: The "Uncle Fester" of the Data Science Family (part 3--Bringing Control Out of the Data Science Closet)

It's me, Gomez! I've tried to be someone I'm not. I live in shame...and the suburbs!

Fester Addams

In Part 2 of this series, we discussed optimization, why it is a hard problem, and how that problem is equivalent to both the regression problem and the root finding problem that are so important in data science. And we had just begun to address the nuances that distinguish optimization from control. We noted that all of our optimization examples were "one shot" problems, where we set the parameters in some way that best suited our objective and then we were done. We said that if we were to turn the knobs over time--e.g. a fuel burn schedule for an airliner--the problem becomes one of control. That's a good segue. It's time to bring control out of the closet.

If we calculate the whole fuel burn schedule all at once, based on the initial state, and then stick with that schedule for better or for worse, we call that controller an "open loop" controller. Solving an open loop control problem is pretty much the same as solving an optimization problem. But if we look every so often to see where we are on our route, how much fuel we have left, etc., and then figure out how much fuel we should burn next, that controller is called a "closed loop" or "feedback" controller. Part 4 of this series will do more slicing and dicing of control theory, and hence have more to say about the different kinds of controllers. Suffice it to say here that feedback controllers are generally considered more reliable than open loop controllers because they allow for corrections due to error or noise. The examples here will be closed loop controllers.
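To make the distinction concrete, here is a toy sketch in Python--emphatically not a real flight controller; the "fuel burn" dynamics and disturbance model are invented for illustration. The open loop controller commits to a schedule computed from the initial state; the closed loop controller replans at every step from the measured state. With random disturbances, the feedback version misses the target by far less on average.

```python
import random
import statistics

TARGET = 100.0   # hypothetical distance we want to cover
STEPS = 20       # number of burn decisions

def open_loop(step, pos):
    """Plan the whole schedule up front from the initial state and never look again."""
    return TARGET / STEPS

def closed_loop(step, pos):
    """At every step, look at where we actually are and replan the remaining burns."""
    remaining = STEPS - step
    return (TARGET - pos) / remaining

def final_error(controller, seed):
    """Fly toward TARGET; each burn is corrupted by an unmodeled 'gust'."""
    rng = random.Random(seed)
    pos = 0.0
    for step in range(STEPS):
        pos += controller(step, pos) + rng.gauss(0, 1.0)
    return abs(pos - TARGET)

# Average the miss distance over many noisy flights.
open_err = statistics.mean(final_error(open_loop, s) for s in range(200))
closed_err = statistics.mean(final_error(closed_loop, s) for s in range(200))
print(f"mean open-loop miss:   {open_err:.2f}")
print(f"mean closed-loop miss: {closed_err:.2f}")
```

The open loop schedule lets 20 independent disturbances accumulate, while the feedback law cancels all but the last one--which is exactly the "corrections due to error or noise" advantage mentioned above.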

One of my favorite examples of a control problem is the Inverted Pendulum Problem. Start with a pendulum at rest. It will point straight down (if it's a balanced pendulum), because straight down is a "fixed point" of the system, meaning that with no additional forces on the system it will remain in that state. This is also called an "equilibrium" state of the system. If you deflect the pendulum slightly, it will eventually return to rest pointing straight down, because that fixed point is a "stable" fixed point. But there is another fixed point, facing straight up, and that one is different. It is "unstable", meaning that with absolutely no forces on the system it will remain in that state, but any small perturbation will cause the pendulum to leave that fixed point (and seek the stable one). The Inverted Pendulum Problem is to keep the pendulum at that unstable fixed point. Because it is still a fixed point, a feedback controller only needs to nudge the pendulum back whenever it detects a deflection from equilibrium; it normally doesn't need to apply a large force.
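For the skeptical reader, the two fixed points are easy to see in a minimal simulation (frictionless pendulum, semi-implicit Euler integration, made-up parameters): the same small nudge stays small near the stable fixed point but falls rapidly away from the unstable one.

```python
import math

def swing(theta0, seconds=1.0, dt=0.001, g_over_l=9.8):
    """Frictionless pendulum. theta is measured from straight DOWN,
    so theta = 0 is the stable fixed point and theta = pi the unstable one."""
    theta, omega = theta0, 0.0
    for _ in range(int(seconds / dt)):
        omega += -g_over_l * math.sin(theta) * dt  # gravity pulls toward "down"
        theta += omega * dt
    return theta

near_down = swing(0.1)            # small nudge away from the stable point
near_up = swing(math.pi - 0.1)    # same small nudge from the unstable point

print(f"started near 'down': still near it, theta = {near_down:.2f} rad")
print(f"started near 'up':   fell away, theta = {near_up:.2f} rad (vs pi = 3.14)")
```

Without friction the "down" case oscillates in place rather than settling, but the contrast is the point: the deflection near the stable fixed point stays bounded, while the deflection near the unstable one grows roughly exponentially.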

Any kid who has played baseball knows this problem well, because all such kids learn to balance a baseball bat in the palm of their hand. The same goes for cricket bats, as I understand it. I used to balance a baseball bat on my nose when I was eight years old, proving that one needn't know calculus to solve the Inverted Pendulum Problem. We seem to have a sort of innate Inverted Pendulum Problem solver in our brains. This probably has something to do with us walking upright. So this problem is important because we are inverted pendulums.

Rocketeers also have to solve an Inverted Pendulum Problem. The rocket is like the baseball bat, and the rocket engine is like the hand holding the bat up. If you push the bottom of the bat up without balancing the top properly, well, the pendulum seeks the stable fixed point, which, if the bat is a rocket, points the pointy end of the rocket down and smashes it into the ground on liftoff. More formally, when the center of gravity is located behind the center of drag, an aerodynamic instability is created. That instability can be modeled as an inverted pendulum. Here is a photograph from an early test of the German V-2 rocket in World War II.

The rocket started its flight pointing straight up, but then, while still in its launch cloud seen at the bottom of the picture, the engine pushed the bat up without balancing the nose, turning the rocket on its side as seen here, and a moment after this picture was taken, the engine smashed the pointy end of the rocket into the ground with great force! Many of the "guidance" issues in rocketry are related to the inverted pendulum.

The Inverted Pendulum Problem is usually solved with a feedback controller. Here is MIT's Ford Professor of Engineering, Alan Oppenheim with a rather remarkable demonstration of an inverted pendulum (although he does use some calculus in his version).

For those with difficulty interpreting the photograph, Professor Oppenheim has a wine glass resting at the top of an inverted pendulum. That's tricky enough. Then he pours a liquid into the wine glass, applying a continuous change to the system's dynamics. The liquid in the wine glass is not just an ever-increasing mass; it sloshes about, changing the forces on the pendulum arm. Professor Oppenheim does even more remarkable things with the inverted pendulum before he is done, and I highly recommend watching the video on MIT OpenCourseWare. But this hopefully gets the point across.

And as we expect, there is an optimization problem involved: minimizing the deflection from vertical. To be sure, we could also build a controller that kept the pendulum at some point other than a fixed point, i.e. a "far from equilibrium" state. We could, for instance, theoretically maintain an angle of 6 degrees left of top-dead-center. But this would require that a constant force be applied, and the cart that controls the pendulum arm would soon run off the rail. Fixed points are usually more interesting targets for control because the system can be bounded. Given a reasonable track length for the cart, we can probably maintain the inverted pendulum at its unstable fixed point indefinitely, and Professor Oppenheim's demonstration pretty much proved that.
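Here is a sketch of what such a feedback controller looks like in code. This is a simple PD (proportional-derivative) feedback law on a pendulum linearized about the upright position, with random disturbances playing the role of the puffs of air and sloshing liquid; the gains and noise level are invented for illustration, and this is nothing like the controller in the actual demonstration. The point is that a modest feedback law keeps the deflection tiny, while the uncontrolled pendulum falls over.

```python
import random

def max_deflection(kp, kd, seconds=10.0, dt=0.001, g_over_l=9.8, seed=1):
    """Inverted pendulum linearized about upright: phi'' = (g/L)*phi + u,
    plus random disturbances. Returns the worst deflection seen."""
    rng = random.Random(seed)
    phi, dphi, worst = 0.05, 0.0, 0.05   # start slightly off vertical
    for _ in range(int(seconds / dt)):
        u = -kp * phi - kd * dphi        # feedback: push back toward upright
        ddphi = g_over_l * phi + u + rng.gauss(0, 0.2)
        dphi += ddphi * dt
        phi += dphi * dt
        worst = max(worst, abs(phi))
    return worst

print(f"worst deflection with feedback (kp=40, kd=10): {max_deflection(40.0, 10.0):.4f} rad")
print(f"worst deflection with no control at all:       {max_deflection(0.0, 0.0):.2e} rad")
```

(The uncontrolled number is absurdly large because the linearization stops being valid once the pendulum falls, but the sign of the result is right: without feedback, the unstable fixed point is lost almost immediately.)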

Control and Data Science

But what does any of this have to do with data science? Well, a typical enterprise also has stable and unstable fixed points. The stable point is at zero, where the enterprise does nothing. It's basically a dead enterprise. But like the inverted pendulum, the unstable fixed points are where the enterprise wants to be. The unstable fixed points are where the enterprise is doing something successfully and wants to keep doing so. And like the inverted pendulum, there are a lot of factors that tend to deflect the enterprise from its unstable fixed points forcing small changes to maintain the preferred state. With the inverted pendulum, a puff of air can move the pendulum off center, as could Professor Oppenheim's pouring of the dark liquid into the wine glass. In an enterprise, it might be a press release from a competitor, or the appearance of a related product in a new Jennifer Lawrence movie. These are all things the enterprise can't reasonably prevent, but it nonetheless needs to control for in order to remain in one of its preferred states. The inverted pendulum has sensors at the hinge between the cart and the pendulum arm. It fuses measurements from those sensors as feedback to estimate the angle of the pendulum arm, and it has motors that control the position of a cart on the rail so that it can remain in the fixed point. The enterprise has a variety of sensors and it fuses these into estimates of enterprise variables. It also has controls that it can apply to remain at its preferred fixed point.

We also talked about controlling the inverted pendulum in a far from equilibrium state (i.e. 6 degrees from center), and how this would require application of a continuous force. There are corresponding states for the enterprise. For instance, the enterprise might be able to compete with the Hudsucker Whirlly-Dingus in the Hula Hoop market, but only if they sell below cost, requiring a continuous infusion of cash to support the enterprise. That enterprise is in a far from equilibrium state, and like with the inverted pendulum 6 degrees from top-dead-center, these states are typically not as useful to control for as fixed points because they require a constant application of force (i.e. cash), but there are cases where, for a limited time, we want to control for a trajectory, to move the business to a fixed point. That's what startups tend to do. Of course, identifying fixed points is a problem of its own (closely related to our friend the root finding problem).

To sum all of this up with a baseball analogy (this time without balancing any bats), control is the "payoff pitch" for data science. Control is how data science navigates an enterprise to its preferred state and maintains the enterprise in its preferred state! That simple statement has profound consequences. It seems obvious, but we have not seen it in the data science literature. That is the main reason we wanted to write this blog. Unfortunately, we couldn't make that statement as clearly and resolutely earlier because we had to develop certain key ideas in order for it to make sense. The question "what are applications of control theory to data science?" was actually asked by Mark Roche on the Data Science StackExchange in early 2015, and was closed after a single answer, which suggested that control theory might be useful to optimize MapReduce implementations. That seems altogether silly as a primary use case. But let's use this as an opportunity to return to the Knowledge Pyramid that we introduced in Part 1, and use it to give a better understanding of why control theory is essential to data science.

We've added a piece to the pyramid that we left out of Part 1 for simplicity: the Operating Environment that the enterprise lives in. That's where the observables are; the observables that constitute the raw materials that data science refines. Without the Operating Environment there is no data science. To make measurements of the observables we need sensors. Those sensors might simply be human sensors typing their desires into a web form, or stock market feeds. The sensors might also be traditional sensors like magnetometers or thermometers. These kinds of sensors, and more complex ones, are important in emerging Internet of Things (IoT) problems. Regardless, the sensors produce the data that data fusion coerces into models of variables of enterprise interest as a first stage of data refinement.

But which observables should we actually observe? We can't answer that question without visiting the other end of the pyramid. The pointy end of the pyramid deals with what the enterprise is trying to do; how the enterprise wishes to influence the environment. If we don't know what the goal is we can't say what the variables of enterprise interest are, and without that we can't discern the relevant observables to measure. So in spite of the direction of the blue arrows in the diagram, the thought process starts at the top of the pyramid and flows down. The refinement process starts at the bottom but the thought process starts at the top. And here at the top, we have effectors to make changes to the Operating Environment consistent with enterprise goals. The effectors may be as abstract as the sensors: a product price is an effector. The assignment of Customer Service Reps to accounts is an effector, etc. Without these effectors, it's just an academic exercise because the enterprise has no influence.
That influence--and many other influences beyond our control--affects the observables that the sensors measure at the beginning of the refinement process, and we start all over again. Data science is a living, breathing part of the enterprise and its interaction with its environment, but only in the context of control. Even if we are repurposing data originally collected for some other use, the pointy end of the pyramid still drives the train. If we are repurposing data, we don't have any say about the sensors, but that makes it even more important to do the right data fusion to transform the data into models of enterprise variables to support the desired control. This was the motivation for calling control "the Uncle Fester of the data science family": like Fester, at first control might not seem to fit, but the more you look at it, the more essential it is to this family.

Enterprise Control

We used the Inverted Pendulum Problem to illustrate an analogy from a classic control problem to enterprise control. It's a pretty good analogy, but analogies are imperfect. Unfortunately, not all enterprise control problems are as simple as the inverted pendulum. For instance, with the inverted pendulum, we relied on a close coupling between estimation and control, and assumed that the sensors gave us the information necessary to get a good estimator for the angle of the pendulum arm. The Certainty Equivalence Principle (CEP) allows us to do this. The CEP, in this case, says, basically, that the noise in the sensor measurements cancels out, and the resulting optimal control is the same as would be obtained if there were no noise in the sensor measurements. Sometimes the CEP holds. Other times it doesn't. For instance, if you estimate your competitor's current state based on sentiment analysis of their press releases, and that competitor uses their press releases to intentionally deceive your strategic planning team, the CEP ceases to hold because "sneaky competitor noise" is biased and doesn't cancel out. We will have more to say about "sneaky competitor noise" in Part 4. Until then, suffice it to say that "sneaky competitor noise" doesn't mean you can't control for sneaky competitors, and it doesn't mean that you need to ignore their press releases. Instead it means that you need a more sophisticated controller, one that is robust to sneaky competitors. We have developed such a controller.
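The distinction fits in a few lines (toy numbers, invented for illustration). Averaging observations corrupted by zero-mean noise recovers the true state--which is the intuition behind designing the controller as if the measurements were clean. Averaging observations corrupted by a consistent bias does not.

```python
import random
import statistics

rng = random.Random(0)
true_state = 5.0   # hypothetical "competitor state" we are trying to estimate

# Honest sensors: zero-mean noise, the kind the CEP assumes.
honest = [true_state + rng.gauss(0, 1.0) for _ in range(1000)]

# "Sneaky competitor noise": a consistent bias injected into what we observe.
deceptive = [true_state + rng.gauss(0, 1.0) + 2.0 for _ in range(1000)]

print(f"estimate from honest sensors:    {statistics.mean(honest):.2f}")
print(f"estimate from deceptive sensors: {statistics.mean(deceptive):.2f}")
```

No amount of additional data fixes the second estimate; more samples just make you more confidently wrong. That is why biased noise calls for a different kind of controller rather than a bigger sample.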

It would be improper for me to take much credit here, as this controller, known as the Deception Robust Controller (DRC), is the brainchild of my colleague Dr. Rajdeep Singh. We worked together on a DARPA research program called Real-time Adversarial Intelligence and Decision-making (RAID), which focused on making rational decisions in the face of a deceptive adversary. This is an extremely challenging problem, and while "sneaky competitor noise" occurs in many contexts, at the time DARPA funded that work--during the second Gulf War and the Afghan War--the sneaky competitors of greatest interest were small "asymmetric" enemies who were using deception tactics to inflict great harm on allied forces. The US military was willing to spend the required millions to address this problem. We summarize the results in the following chart:

This chart records the results of a field exercise conducted by DARPA and the US Army, one purpose of which was to compare the effectiveness of the DRC against expert human commanders trained in adversarial deception tactics. Points above the dashed diagonal reflect tests where the DRC outperformed the expert human; points below it reflect tests where the expert human outperformed the DRC. As the chart shows, the DRC almost always beat the human, and in all cases where the expert human beat the DRC, it was by a nose. In many cases, the DRC substantially outperformed the expert human, and a standard t-test confirmed, at the 95% confidence level, that the DRC outperforms the expert human commanders.

We are justly proud of this result, which has been confirmed theoretically and validated by peer review. Prior to this field exercise, automated controllers had proven to be very poor performers when faced with deceptive tactics. Some proofs of key theorems and even some implementation details are presented in Dr. Singh's dissertation, though this is not an easy read. Of course, the same issues with deception tactics arise frequently in commerce, or wherever valuable intellectual property is involved. There are also issues of self-deception. All of these issues are difficult to guard against with classical controllers, which typically regard the observations as a representative sample of the truth. That's another way of saying that they assume that the CEP holds. The DRC, by contrast, looks for scenarios that are consistent with the observations. In other words, the DRC uses observations to throw out conjectures that are clearly false, whereas the classical controller uses the observations to predict what is probably true. And the reason classical controllers fail so miserably in the face of deception (e.g. sneaky competitors) is that the deceiver's job is to line up the observables so that it looks like the state of the world is different from what it really is. That's almost a definition of a deception, so classical controllers that read the observables as an honest reflection of the real world are naturally suckers for this.
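To make the inversion concrete, here is a toy sketch of the consistency idea--emphatically not Dr. Singh's actual DRC; the scenarios and "hard constraints" below are invented. Observations are used to falsify scenarios, not to score which one is most probable, so a plausible-looking but possibly deceptive observation eliminates nothing, while a hard fact does.

```python
# Candidate scenarios for what a competitor is really doing (hypothetical).
scenarios = {
    "launching a product",
    "business as usual",
    "exiting the market",
}

# Hard constraints: observations each scenario could never produce.
impossible_under = {
    "launching a product": set(),
    "business as usual": set(),
    "exiting the market": {"hiring engineers"},
}

def prune(consistent, observation):
    """Keep only scenarios NOT flatly contradicted by the observation."""
    return {s for s in consistent if observation not in impossible_under[s]}

consistent = set(scenarios)
for obs in ["gloomy press release", "hiring engineers"]:
    consistent = prune(consistent, obs)

print(consistent)
```

The gloomy press release--exactly the kind of observable a deceiver can manufacture--rules out nothing, whereas the hiring data, a hard fact, rules out a market exit. A likelihood-based estimator would instead have let the press release drag its "most probable" answer toward whatever story the deceiver was selling.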

There are, of course, other differences between the Inverted Pendulum Problem and enterprise control problems. The inverted pendulum control relied on the estimators for the arm angle and the cart position being continuous variables. Continuous variables are nice. For one thing, you can apply standard calculus to them. Continuous functions--particularly smooth functions--have nice derivatives. Rates of change are knowable. Discrete variables (i.e. those defined only for certain values, like the number of windows in an office, or the type of a nucleic acid) are usually more difficult to work with. They tend to lead to combinatorial explosions. For instance, there are 20 possible proteins with a single amino acid, 160,000 possible proteins with 4 amino acids, and about 6.6E20 possible proteins with 16 amino acids. When we get to 70 amino acids, the number of possible proteins substantially exceeds the number of fundamental particles in the universe. And that would be a very simple protein. In the real world, typical proteins have hundreds or thousands of amino acids. So combinatorial explosion prevents us from modeling certain aspects of real proteins through simple enumeration.
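The arithmetic behind that explosion is easy to check (using the common ballpark of about 1E80 fundamental particles in the observable universe):

```python
NUM_AMINO_ACIDS = 20

def possible_proteins(length):
    """Number of distinct amino acid sequences of the given length."""
    return NUM_AMINO_ACIDS ** length

print(possible_proteins(1))               # 20
print(possible_proteins(4))               # 160000
print(f"{possible_proteins(16):.1e}")     # about 6.6e+20
# A modest 70-residue protein already dwarfs the particle count:
print(possible_proteins(70) > 10**80)     # True
```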

Common enterprise control problems involve a mix of continuous and discrete controls, which means the combinatorial issues above have to be managed smartly. Some of these problems are significant pieces of work.

Summary

This post has dealt with the subtle differences between optimization problems and control problems, and presented a case for treating control as an essential component of data science. The rationale is basically in two parts:

  1. Bottom-up: Data science is about the refinement of data, and the penultimate refinement transforms data into decisions, which are then acted upon. That is fundamentally a problem of control, and

  2. Top-down: The thought process begins at the pointy end of the pyramid (wisdom: influence and effective actions). These are the business of control. They seek an objective. The better that objective is understood, the better the decisions at the bottom of the pyramid, like what variables provide feedback for the control and how we fuse measurements of observables to model those variables.

We’ve also discussed, briefly, how enterprise control differs from classical control, introducing a bit of terminology. Hopefully this instills some interest.

Part 4 in this series--the final installment--will discuss the nuts and bolts of control theory so that we can engage in meaningful conversation on the subject. We will also discuss "sneaky competitor noise" in greater detail and give it a more formal name ("adversarial noise") and describe why this kind of noise is fundamentally different and how to control for it. Further information on all of these subjects is available from the S3 Data Science web site.

 

© 2019 Data Science Central ®