“The only function of economic forecasting is to make astrology look respectable.” (John Kenneth Galbraith)
Predictive modeling and traditional ratemaking are both exercises in forecasting the future, whether directly or indirectly (indirectly, by generalizing historical lessons to the future). But is such forecasting so hopeless as to be the same as gazing into a crystal ball?
The advantage data scientists and actuaries have is their close contact with other business professionals. That contact gives them leverage to obtain, generate, and apply a great deal of expert judgment to overcome the shortcomings of quantitative modeling. Perhaps that is why such methods have been so successful over time. ‘Data scientist’ and ‘actuary’ will be used synonymously here.
This post highlights the need for sound and deep qualitative understanding as a necessary part of the data scientist’s toolkit. The data scientist’s transition to such qualitative understanding is marked less by the acquisition of new technical skills than by the adoption of new attitudes. The data scientist is still a data scientist, but one whose conclusions are more technically realistic as well as more meaningful to upper management. The new attitude required is that understanding any modeling exercise requires understanding the whole.
Qualitative insights speak to us, but generally we are too constrained within quantitative structures to make appropriate allowance for them. Some data scientists may see qualitative information as harming the objective purity of data science. However, it must be remembered that data-driven methodologies are not pure or precise; rather, they feature an unbiased ignorance of the real-world issues facing the insurance landscape.
Complexity science is given particular importance because, for ratemaking, it is important to learn a system’s dynamics first before predicting that those dynamics will continue to hold in the future.
That is not to say that we should give up any effort to look into the emerging future as futile. What we can aim for is to develop better emotional maturity when forecasting for the identification or ratemaking of any emerging liability.
Maturing our forecasting foresight
Emerging liability ventures into the unknown, into open-ended subjectivity. To sharpen our forecasting skills in this area, we draw on the profound learning provided in Werther’s essay for the SOA. That essay aims to help financial and insurance practitioners better recognize, assess, and respond to large-scale, large-impact rare events (LSLIREs), occurrences often wrongly labeled as unpredictable black swans. The lessons from recognizing LSLIREs can be readily applied to LSLIRE emerging risks and liabilities, since it is the LSLIRE emerging risks that matter most, not emerging risks that will likely have little consequence. The key techniques of pattern recognition for pattern change used to identify LSLIREs follow.
A main shortcoming of predictive modeling is that we change only a few assumptions and keep the rest constant, so our models are not dynamic enough. Of necessity, any line of disciplined inquiry focuses on certain operative variables and determinants, and freezes others. Often the ground thus frozen is the very territory that is problematic from the standpoint of emerging risks.
One very powerful technique for actuaries is to utilize quantitative models and qualitative methods simultaneously. Models and statistics create discipline and uniformity for actuaries and analysts, and are a powerful source of ‘herding’ toward similar opinions. As Carl Jung said, “the statistical method shows us the facts but does not give us a picture of their empirical reality.” The actuary can use quantitative models to arrive at the ‘normal’ state of opinion, and then use qualitative, deep, context-specific explanations to understand and explain deviations from those normal standards.
One aspect of quantitative models that is particularly useful before emerging LSLIREs arise is the breakdown of models and the increasing divergence between the stories analysts tell to explain the resulting deviations. More analysts will start feeling that something is wrong but cannot identify, through their mainstream models, what specifically is wrong.
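The idea of watching for model breakdown can be sketched quantitatively. The toy example below (all numbers made up; the drift, window length, and threshold are illustrative assumptions, not a production monitoring rule) simulates actual-versus-expected residuals that are stable during a calibration period and then drift as the underlying process changes, and flags the periods where recent residuals fall outside the ‘normal’ band:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated actual-vs-expected residuals: stable during calibration,
# then the underlying process shifts and the model starts breaking down.
stable = rng.normal(0.0, 1.0, 60)
shifting = rng.normal(0.0, 1.0, 24) + np.linspace(0.0, 4.0, 24)
residuals = np.concatenate([stable, shifting])

# 'Normal' band established from the stable calibration window.
mu, sigma = stable.mean(), stable.std()
window = 6

# Flag periods where the rolling mean of recent residuals drifts
# beyond three standard errors from the calibration mean.
flags = [
    t for t in range(window, len(residuals))
    if abs(residuals[t - window:t].mean() - mu) > 3 * sigma / np.sqrt(window)
]
print("flagged periods:", flags)
```

The flags alone do not say *what* changed; that is exactly where the qualitative, context-specific explanations described above take over from the model.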
Concerns with Big Data
Ronald Coase said, “If you torture the data long enough, it will confess to anything.” With the advent of big data and its accompanying curse, we no longer even need to torture the data.
MIT’s Alex Pentland provides nuanced views that enhance the potential for better LSLIRE recognition and assessment. First, “The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works.”
Unfortunately for this big data insight, the scientific method as we normally use it never did work well for even normal whole-system change recognition, and especially not for rare event foresight, for the simple reason that just because something formerly couldn’t be measured didn’t make it irrelevant. Recall Kant’s, Jung’s, Berlin’s, Einstein’s and Goethe’s “beyond analysis” critique and advice: Intuition—experience and familiarity—links knowledge to understanding.
Nassim Nicholas Taleb observed in Fooled by Randomness (2001) that “the more data we have, the more likely we are to drown in it.” Likewise, Werther makes a striking assertion: lacking the human inputs of correct intuition, imagination, and understanding, technical knowledge-management approaches like “big data” only provide better paint and brushes (tools). More likely, they will yield greater confusion and crises. “Mastery yet needs better painters.”
The core learning point is that quantitative modeling constructions typically fail when needed most: when something is actually changing. Qualitative profiling can make us better at sensing and responding to change. Werther concludes that until and unless we consider the intention—philosophy, cognitive system, bias, etc.—behind the data, the models, and the experts’ analyses, along with their implications, we are already missing the big picture. Recall Nietzsche’s point that “the decisive value of an action may lie precisely in what is unintentional in it. … The intention is only a sign and a symptom, something which still needs interpretation, and furthermore a sign which carries too many meanings and, thus, by itself alone means almost nothing” (emphasis added).
Once models are generated using agent-based modeling, qualitative profiling, and similar techniques, it is necessary to interpret even their results from a sociology-of-finance perspective, with philosophical maturity and not merely with common sense. Lastly, an open and inquisitive mindset is most necessary for accurate forecasts, as the ‘superforecasting’ literature emphasizes and demonstrates.
More Strategic Notes
In The Black Swan, Taleb describes “Mediocristan” (Quadrants I and II) as a place where Gaussian distributions are applicable. By contrast, he calls Quadrant IV “Extremistan.” It is Extremistan that interests us for understanding complex systems. Actuaries like to build their models on the Gaussian distribution; perhaps we are abdicating professional expertise by fooling ourselves, retreating to the comfort and safety of the womb of Mediocristan instead of facing Extremistan in all its unknown mystery and ambiguity.
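The gap between Mediocristan and Extremistan can be made concrete with a small simulation. The sketch below (purely illustrative; the loss parameters are invented, and the Pareto shape of 3 is an arbitrary choice of heavy-tailed model) draws losses from a Gaussian model and from a heavy-tailed Pareto model calibrated to the same mean, then compares their extremes:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Mediocristan: losses drawn from a thin-tailed Gaussian model.
gaussian_losses = rng.normal(loc=100.0, scale=30.0, size=n)

# Extremistan: classical Pareto losses with shape a = 3, whose scale is
# chosen so the mean matches the Gaussian's: mean = scale * a / (a - 1).
a = 3.0
scale = 100.0 * (a - 1) / a
pareto_losses = scale * (1 + rng.pareto(a, size=n))

for name, losses in [("Gaussian", gaussian_losses), ("Pareto", pareto_losses)]:
    print(f"{name}: mean={losses.mean():.1f}  "
          f"99.9th pct={np.quantile(losses, 0.999):.1f}  "
          f"max={losses.max():.1f}")
```

Both models report roughly the same average loss, yet the heavy-tailed model’s extreme quantiles and maximum dwarf the Gaussian’s, which is exactly the tail risk a Mediocristan model hides.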
To avoid being ambiguity-averse, we can train ourselves to explore the unexplored. As actuaries, perhaps we could make a greater effort to uncover hidden patterns. Actuarial and statistical modeling is a double-edged sword. Applied correctly, it is a very powerful and effective tool for discovering knowledge in data, but in the wrong hands it can be distorted and generate absurd results. It is not only our results that can be absurd, but our risk-averse and ambiguity-averse mentalities as well. As Voltaire said, “Doubt is not a pleasant condition, but certainty is absurd.”
Aristotle explains this further: “It is the mark of an instructed mind to rest satisfied with that degree of precision which the nature of the subject limits, and not to seek exactness where only an approximation of the truth is possible.”
This teaches us to be aware that precision implies confidence. We must be very alert not to fall into this trap. While point estimates are often required (we have to quote and file a specific premium), there are many cases where ranges of estimates are more appropriate. While statistical techniques can sometimes be used to generate precise confidence intervals, for emerging risks such statistical rigor is mostly neither possible nor necessary. By discussing a range of estimates, actuaries can provide more value to their stakeholders by painting a more complete picture of the potential impacts of decisions related to emerging liabilities.
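One simple way to move from a point estimate to a range is a bootstrap. The sketch below is a minimal illustration with invented loss data (the lognormal parameters and the 90% range are arbitrary assumptions): it resamples the historical losses to show the spread of plausible indicated pure premiums around the single point estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical losses for an emerging liability (invented data).
losses = rng.lognormal(mean=8.0, sigma=1.2, size=200)

# Point estimate: the sample mean as an indicated pure premium.
point = losses.mean()

# Bootstrap: resample the data many times to get a range of estimates.
boots = np.array([
    rng.choice(losses, size=losses.size, replace=True).mean()
    for _ in range(5000)
])
low, high = np.quantile(boots, [0.05, 0.95])

print(f"point estimate:       {point:,.0f}")
print(f"90% bootstrap range:  {low:,.0f} to {high:,.0f}")
```

Presenting the range alongside the filed point estimate lets stakeholders see how much the indication could move, which is the "more complete picture" argued for above.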
Finally, we must ensure that actuarial output highlights the fundamental questions at hand for stakeholders instead of confusing them with complicated numbers and a lack of decisiveness. There is obviously a premium to be established, but the management running the company does not care what the actual premium is; they need to know the likely impacts of that premium on the business. From a financial perspective, we should avoid saying that we have priced for a certain margin, because that exact margin is, in the end, going to be exactly wrong. The better approach is to explain the range of possible outcomes and the impacts of each. As Nassim Nicholas Taleb explains: “There are so many errors we can no longer predict, what you can predict is the effect of the error on you!”
In conclusion, this is an exciting and dangerous time for data scientists and actuaries. The proliferation of big data, machine learning techniques, and emerging risks evolving at lightning speed has given us the means and aptitude to solve problems we were previously unable to tackle, but the same advances have brought their own share of technical and mentality challenges.
Stein, Richard. "The Actuary as Product Manager in a Dynamic Product Analysis Environment."
Werther. "Recognizing When Black Swans Aren't: Holistically Training Management to Better Recognize, Assess and Respond to Emerging Extreme Events." SOA, 2013.
Mills, A. "Should Actuaries Get Another Job? Nassim Taleb's Work and Its Significance for Actuaries." SOA Predictive Analytics and Futurism Newsletter, Issue 1, 2009.
Hileman, G. "Roughly Right." SOA Predictive Analytics and Futurism Newsletter, Issue 9, 2014.