Living in a World of Radical Uncertainty
Suppose that three months ago someone had asked you what the probability was that a virus would cause the stock market to crash in 2020 – not a computer virus, mind you, but a microorganism. You might have said the probability was infinitesimal. You might have said one in 10,000. Or you might have done a study – albeit utterly lacking in statistical significance – of what happened to the stock market when diseases spread in the past.
But if you were honest, you would simply say, “I don’t know what that probability is.”
The quantification mania
A few years ago I attended a conference at which most of the talks were about finance. Of course, the topic of risk was central to many of them.
After a talk by a luncheon speaker, there were audience questions and comments. One of the attendees, John Kay, himself a conference speaker, rose and gave his view that risk was defined not by volatility but through “narratives” – of which I will speak later.
Immediately after lunch I attended one of the afternoon’s parallel sessions. It was given by a finance professor and one of his graduate students.
The professor began by saying, “John Kay says risk is defined by narratives.” He then held his hands up, palms upward, shrugged and said, “What can you do with that?”
What he meant, I soon learned, was, “How can you make use of that observation to develop a long series of complicated-looking mathematical formulas in a PowerPoint presentation?”
Narratives can’t be expressed in mathematical formulas. But in this professor’s view – and the view of much of the academic finance field – you’re “doing something” only if you’re developing a series of complicated-looking mathematical formulas.
I noticed that the assumptions behind this professor’s formulas – which were hardly even touched on in the talk – made no sense whatsoever. I communicated with him about this afterwards, but quickly gave up the discussion.
Once they start wielding what they are thrilled to believe is mathematics, there’s no stopping them.
The harm done by models
A new book, “Radical Uncertainty,” co-authored by the same John Kay, a prominent and erudite British economist, and Mervyn King, a former governor of the Bank of England, elucidates at engaging length what Kay meant.
The book takes to task the use of models – the kinds of models that make substitutions of mathematical formulas for real-world concepts, for example the substitution of volatility for risk.
But their critique is finely focused.
There used to be a joke about the difference between neurotics and psychotics: Neurotics build castles in the air, while psychotics live in them.
It’s not the neurotics with whom Kay and King take issue – that is, the creators of thought experiments and bold theories, who build simplified models of “small world” versions of the real world, aiming to help us illuminate and think about how the real world works.
No, it’s not the neurotics, but the psychotics that worry them – the ones who mistake the models for the real world itself.
Their poster boy for this kind of psychotic is David Viniar, who was the chief financial officer of Goldman Sachs before and during the financial crisis. On August 13, 2007, Kay and King recount, after BNP Paribas had suspended redemptions from three of its funds, Viniar told the Financial Times, “We were seeing things that were 25-standard deviation moves, several days in a row.”
I also have it on good authority from a friend who had dinner with Nobel laureate Robert Merton after the meltdown of Long-Term Capital Management – in which Merton was a principal – that during dinner Merton shrugged and said that what hit LTCM was an eight-standard deviation event. The implication was that it couldn’t possibly have been predicted.
A 25-standard deviation event has essentially a zero probability of occurring even once in the lifetime of the universe; an eight-standard deviation event has no more than a vanishingly small one.
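Just how small those probabilities are is easy to check under the normal-distribution assumption embedded in such risk models. The following sketch (my illustration, not from the book) computes the one-sided tail probabilities with Python's standard library; the absurdity it quantifies belongs to the model, not to real markets:

```python
import math

def normal_tail(k: float) -> float:
    """P(Z > k) for a standard normal Z, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

p8 = normal_tail(8)    # roughly 6e-16 -- far rarer than one trading day since the Big Bang
p25 = normal_tail(25)  # roughly 3e-138 -- effectively zero on any conceivable timescale

print(f"P(8-sigma move)  = {p8:.2e}")
print(f"P(25-sigma move) = {p25:.2e}")
```

Even one such move, let alone “several days in a row,” is evidence that the normal model is wrong, not that a near-miracle occurred.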
Both of these people mistook their model for the real world. In the real world the events that overtook them could not have been assigned a probability. They could, however, have been easily imagined by someone who was not blinded by a model, but simply posed themselves the question – as Kay and King repeatedly suggest – “What is going on here?”
The most consequential case of substitution of a model for the real world
Many factors contributed to the financial crisis of 2007-2009, but one stands out – the triple-A ratings given by the bond ratings agencies Moody’s, S&P, and Fitch to collateralized debt obligations (CDOs). If the ratings agencies had not conferred triple-A status on those instruments, the subprime meltdown – and the financial crisis – would not have occurred.
Why did the ratings agencies confer triple-A status on these debt instruments when they turned out to be so bad? Because they blithely assumed the same sort of stationarity in economic processes that is found in physical processes. This is a very bad assumption.
In an April 27, 2008, article in The New York Times, journalist Roger Lowenstein illustrated this mistake very clearly. It was an error committed not actually by mistake but with eyes wide open, in compliance with the insane dictates of the psychotic modeling culture:
Moody’s did not have access to the individual loan files, much less did it communicate with the borrowers or try to verify the information they provided in their loan applications. “We aren’t loan officers,” Claire Robinson, a 20-year veteran who is in charge of asset-backed finance for Moody’s, told me. “Our expertise is as statisticians on an aggregate basis. We want to know, of 1,000 individuals, based on historical performance, what percent will pay their loans?”
Notice the breezy transition from “based on historical performance” to “what percent will pay their loans.” The process was assumed to be stationary over time, without even a mention of that assumption.
Moody’s had only historical data on the rate of defaults on mortgages, which was low. So it assumed in its models that the probability of future defaults in the mortgages contained in CDOs would be similarly low.
All its analysts would have had to do was ask “What is going on here?”, then send a few agents out to visit a random sample of mortgagors and make inquiries. They would quickly have discovered that many of the mortgagors had acquired “liar loans” or “ninja loans” (no income, no job, no assets) and would default if home prices stopped rising.
But if they had done that, their judgment would have been more qualitative than quantitative – and therefore not “evidence-based,” as the term is understood by today’s modeling psychotics.
A delightful read
I found Radical Uncertainty – 444 pages long – highly engaging reading throughout, not only because I agreed with virtually everything in it, but because it is chock full of fascinating asides, brain-teasers, and frequently ironic turns of phrase.
For example, I learned that Edmond Halley, best known for Halley’s Comet, also constructed the first mortality table. (Perhaps I should have known this already, but I did not – unless I had forgotten it.i)
My brain was teased by the following conundrum, intended to illustrate the difficulty of depending on “expected return”: “You are offered the choice of two envelopes and are told that one contains twice as much money as the other. You make your choice, open envelope one, and find that it contains $100. The referee asks if you would prefer envelope two.”
On an expected value basis, you would switch, because envelope two contains $50 or $200 with equal probability. But suppose instead it was envelope two you had opened first. Again the referee asks if you would prefer to switch. Again you would switch based on its expected value. Can this make sense – whichever envelope was opened, you would switch? I had to think about this for a while.ii
The authors make the best satiric use of the Lewis Carroll poem Jabberwocky that I have ever seen. The reader will have to learn the details from the book itself, but it concludes: “But by using words such as ‘output,’ ‘inflation,’ and ‘money,’ which appear to have real counterparts, rather than ‘toves’ and ‘borogoves,’ [economics Nobel laureate Robert] Lucas and his followers elided this distinction between their artificial world and the complex real world, and many users of their analysis were misled.”
Often-repeated but utterly absurd pieces of conventional wisdom are pilloried. It is frequently said that a corporate CEO’s job is to maximize shareholder value. But, “No chief executive knows what will maximize shareholder value, or after the event whether it has indeed been maximized.”iii
Faddish intellectual balloons are punctured, such as the admonition to make decisions and policy “evidence-based.” In a lapidary turn of phrase, the authors say, “evidence-based policy too often reduces … to policy-based evidence.” In other words, the evidence is fashioned to suit the policy, rather than the other way around.
“Behavioral economics” does not escape this scrutiny – nor should it. Kay and King point out that when experiments such as those performed by Nobel laureate Daniel Kahneman and his associate Amos Tversky showed that people don’t conform to the definition of “rationality” assumed by economic models, Kahneman and Tversky concluded that it was a failure of the people the model was intended to describe rather than a failure of the model.
There are so many fallacies that are accepted as truth because of an unshakable belief in economic models that it is beginning to resemble the phlogiston theory of combustion that prevailed for centuries. Perhaps it is true, as physicist Max Planck said, that “science advances one funeral at a time” – we may have to wait for all the current economists to die before we cease to believe so deeply in these models.
As I said near the beginning, Kay and King’s preference is to define risk in terms of “narratives.” “Risk,” they say, “is a failure of a projected narrative, derived from real-life expectations, to unfold as envisaged.”
Insurance, for example – a pre-eminent risk-mitigation measure – “is based not on calculations of expected value, but on the desire to protect the reference narrative of the insured.”
This is true: a calculation of the expected return on almost any insurance purchase – life insurance, fire insurance, simple annuities, etc. – shows it to be inferior to that of virtually any other kind of investment, and usually negative. It is even difficult to explain on an expected-utility basis. Why, then, do people so often purchase insurance? Not because they’re irrational, but to protect their reference narrative. The financial models that supposedly work for other forms of investment have no relevance for this decision.
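The arithmetic behind that claim is easy to verify with invented numbers. Suppose (purely for illustration) a homeowner pays a $1,000 annual premium against a 1-in-500 chance of a $300,000 total loss:

```python
premium = 1_000.0   # hypothetical annual premium
p_loss = 1 / 500    # hypothetical probability of a total loss in a year
payout = 300_000.0  # hypothetical insured value

expected_payout = p_loss * payout                        # $600
expected_return = (expected_payout - premium) / premium  # -40%

print(f"Expected payout: ${expected_payout:,.0f}")
print(f"Expected return on the premium: {expected_return:.0%}")
```

An expected return of minus 40 percent – yet buying the policy is perfectly sensible, because it protects the narrative of remaining in one's home.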
“The key to managing risk,” the authors go on to say, “is the identification of reference narratives which have these properties of robustness and resilience.”
This approach is perfectly suited to the problem of life-cycle financial planning. It resembles the “safety first” approach to financial planning that Wade Pfau and others (including myself) have advocated.
I had a phone conversation with John Kay about how his theory meshes with Pfau’s. Many independent financial advisors undoubtedly apply this approach already. They pose the question “What is going on here?” to the clients they interview. (Those advisors who are only interested in selling a commissioned product, of course, don’t.) They apply models, like Monte Carlo simulation, only as ancillary means of foreseeing possible future scenarios – or “narratives” – but they don’t substitute models for the real world. And they try to identify reference narratives that are robust and resilient, and try to make them a reality while defending against the risk of their failure.
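A minimal sketch of that ancillary role might look like the following: simulate many possible market paths for a retirement plan and report how often the plan's reference narrative fails. All parameters are hypothetical, and a real advisor's model would be far richer:

```python
import random

random.seed(7)

# Hypothetical plan: $1M portfolio, $45k annual real spending, 30-year horizon.
START_BALANCE = 1_000_000.0
ANNUAL_SPEND = 45_000.0
YEARS = 30
MEAN_RETURN, STDEV_RETURN = 0.05, 0.12  # assumed real return distribution
PATHS = 10_000

failures = 0
for _ in range(PATHS):
    balance = START_BALANCE
    for _ in range(YEARS):
        balance = balance * (1 + random.gauss(MEAN_RETURN, STDEV_RETURN)) - ANNUAL_SPEND
        if balance <= 0:  # the plan's reference narrative has failed on this path
            failures += 1
            break

print(f"Share of simulated paths on which the plan fails: {failures / PATHS:.1%}")
```

Used this way, the simulation is a prompt for asking “What is going on here?” about the fragile paths – not a substitute for the real world.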
If creating a reference narrative for every client entails too much time and effort, you might need to create a series of standardized narratives to keep on the shelf, then select the one that best fits a particular client.
Identifying such a robust and resilient reference narrative is at the heart of the task. It may not be easy, and it may not be perfect, and it certainly won’t “optimize,” as the models mistakenly lead us to believe they are doing. But it will serve the purpose.
If we believe too much in models, it will harm our ability to see things clearly – much as too much belief in financial models harmed our ability to prevent a financial crisis.
Modeling is to finance and economics what Harry Potter is to the real world – a parallel universe, one that might be illuminating and even helpful. But it’s not our universe.
Economist and mathematician Michael Edesess is adjunct associate professor and visiting faculty at the Hong Kong University of Science and Technology, chief investment strategist of Compendium Finance, adviser to mobile financial planning software company Plynty, and a research associate of the Edhec-Risk Institute. In 2007, he authored a book about the investment services industry titled The Big Investment Lie, published by Berrett-Koehler. His new book, The Three Simple Rules of Investing, co-authored with Kwok L. Tsui, Carol Fabbri and George Peacock, was published by Berrett-Koehler in June 2014.
i This supplemented my conversation-piece knowledge of the fact that 19th century mathematician Simon Newcomb, who wrote books on economics praised by John Maynard Keynes and Irving Fisher, also constructed the tables of the ephemerides, which listed the daily positions of the planets in the solar system relative to the sun. I made use of the formulas Newcomb used to construct the tables in writing a computerized astrology program as a side job while a graduate student.
ii It does make sense. If you always choose to switch and are offered the choice 100 times, the probability is 97.7% that you’ll have at least 10% more money than if you didn’t switch.
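That figure is easy to corroborate by simulation, under the naive model the puzzle suggests: the opened envelope always holds $100 and the other is equally likely to hold $50 or $200. (This Monte Carlo sketch is mine, not the book's; the exact binomial probability comes out a touch above the 97.7% normal approximation.)

```python
import random

random.seed(42)

TRIALS_PER_RUN = 100
RUNS = 20_000
KEEP_TOTAL = 100 * TRIALS_PER_RUN  # $10,000 if you never switch

wins = 0
for _ in range(RUNS):
    # Always switch: the other envelope holds $50 or $200 with equal probability.
    switch_total = sum(random.choice((50, 200)) for _ in range(TRIALS_PER_RUN))
    if switch_total >= 1.10 * KEEP_TOTAL:  # at least 10% more than never switching
        wins += 1

print(f"P(switching yields 10%+ more) ~= {wins / RUNS:.3f}")  # about 0.98
```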
iii To learn how to maximize shareholder value, one can do no better than to read another of John Kay’s books, “Obliquity.” It shows that goals are often best achieved by approaching them obliquely rather than directly, for example by trying to create the most valuable and useful product rather than trying to maximize shareholder value.