By
Professor Michael Mainelli
Published in the Journal of Risk Finance, The Michael Mainelli Column, Volume 6, Number 2, Emerald Group Publishing Limited, pages 177-181.
This column was inspired by reading a book, meeting its two authors and having a breakfast. Benoit Mandelbrot, inventor of fractal geometry and one of the progenitors of Chaos Theory, is Sterling Professor of Mathematical Sciences at Yale University and a Fellow Emeritus at IBM’s Thomas J. Watson Research Center. Richard L Hudson is former managing editor of The Wall Street Journal’s European edition. Their new book, The (mis)Behaviour of Markets: A Fractal View of Risk, Ruin and Reward [Mandelbrot and Hudson, 2004], is meant to challenge financial people to re-evaluate the standard tools and models of modern financial theory. I recently had the pleasure of discussing the book over breakfast with the authors and a number of leading financial thinkers. Some of the book recapitulates other writings by Mandelbrot, and quite a bit of its material has appeared in similar form earlier, either from Mandelbrot himself or via other authors [e.g. Peters, 1991 & 1994]. Nevertheless, it is wonderful to see Mandelbrot return to finance to remind us how shaky some of our risk management foundations are.
Despite paraphrasing the title of their book for this column, this is not a book review. The book is worth reading for its insights into the non-normal (non-bell curve, or non-Gaussian) distributions encountered in finance, its criticisms of the core assumptions of financial theory, e.g. Markowitz’s Random Walk, and its notes on the limitations and dangers of tools such as the Capital Asset Pricing Model, Sharpe Ratios and Black-Scholes option pricing. At the breakfast promoting the book, it seemed strange that the fundamental criticisms the authors made seemed, at times, less important to the small audience than how their conclusions might help improve financial prediction. In other words, “I don’t care whether it works; how can you help me make money?” I found myself more interested in what my approach should be if I believe that the foundations of risk management are wobbly.
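The fat-tail point can be made concrete with a small simulation (my own illustration, not the authors’): draw 100,000 “daily returns” from a Gaussian and from a fat-tailed Student-t distribution scaled to the same variance, then count the four-sigma days each produces.

```python
import math
import random

random.seed(42)
N = 100_000

# Gaussian "returns" with unit variance
gauss = [random.gauss(0, 1) for _ in range(N)]

def student_t(df):
    # A t-variate built as Gaussian / sqrt(chi-square / df)
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

# Scale t(3) to unit variance (its raw variance is df/(df-2) = 3)
scale = math.sqrt(3.0)
t3 = [student_t(3) / scale for _ in range(N)]

# The bell curve calls a four-sigma day a roughly one-in-16,000 event;
# the fat-tailed distribution of equal variance produces them about a
# hundred times more often.
big_gauss = sum(abs(r) > 4 for r in gauss)
big_t = sum(abs(r) > 4 for r in t3)
print(big_gauss, big_t)
```

Both series have the same mean and variance, so the standard summary statistics cannot tell them apart; only the tails differ, and the tails are where ruin lives.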
If distinguished authors, supported by copious research, are criticising many fundamental risk management tools such as Beta and Value-at-Risk and labelling GARCH or FIGARCH as “patches”, perhaps we should pause for reflection before proceeding. Risk as a subject has witnessed an advancing taxonomy in traditional Aristotelian form. While we have defined huge trees of risk, we still roll ahead of us a pile of un-taxonomised risk. Formerly we believed we understood “market” risk and “credit” risk, so “operational” risk was the pile of remainders. Today we can assume we are understood when we refer to “market”, “credit” or “operational” risk, but the rolling pile still exists. We are being slightly disingenuous when we pretend that everyone surely understands “liquidity” risk, and only recently have we begun paying attention to “reputational” risk or “governance” risk.
Problems posed by chaos, complexity and predictability question whether a systems approach to risk management is appropriate or possible, in whole or in part. Chaos Theory is not so much a theory as a way of approaching problems: it assembles a set of techniques and viewpoints which seem to recur across problems. These techniques try to take account of boundary conditions between order and disorder, between the easily modelled problem and the impossible-to-model problem. Although earlier work by Cantor and Sierpinski was a great influence, “chaos” as an informal intellectual movement began in the early 1960s, largely kicked off by the work of Lorenz and Mandelbrot. “Many systems have now been found where randomness and determinism, or chance and necessity, integrate and coexist” [Peters, 1991, page 41].
Non-linear dynamic models are the poster children of the Chaos Theory movement, as evidenced in numerous coffee-table books of striking images produced from graphs of the models (as well as book covers themselves, such as that of Mandelbrot and Hudson’s). The initial recognition was that simple models can produce apparent complexity. Further, this apparent complexity seems, in many cases, to resemble the apparent complexity found in nature, such as trees, clouds or coastlines. The exploration of fractals rose markedly in the mid-1970s, made significantly easier by the ready availability of computers. Graphs of deterministic non-linear models that appear to generate unpredictable behaviour are popular in Chaos Theory because they illustrate well the existence of a region of models bounded by chaos on one side and the organised simplicity of continuous models on the other. These models tend to be aperiodic and to change structurally given small changes to the model variables, yet at the same time they exhibit an underlying order, with resonances, symmetries and attractors that people perceive. Several authors make the point that linear modelling is a reduced case of non-linear modelling and that fractals, for instance, are truly ubiquitous in a way that linear approaches cannot be. Chaos Theory and its illustrative companions, fractals, are seen to encompass a new science of wholeness. On a cautionary note, and echoing William of Ockham,
“Although it is true that it is the goal of science to discover rules which permit the association and foretelling of facts, this is not its only aim. It also seeks to reduce the connexions discovered to the smallest possible number of mutually independent conceptual elements. It is in this striving after the rational unification of the manifold that it encounters its greatest successes, even though it is precisely this attempt which causes it to run the greatest risk of falling a prey to illusions.” [Einstein, Nature 146, 1940, 605, as taken from van den Beukel, page 83]
So beware of fractals as “illusions”. While there are elements of romanticism within the Chaos Theory literature, concrete analytical work sits alongside. Chaos Theory’s distinction among chaos, order and the boundary between the two is particularly useful as a metaphor. One large theme in Chaos Theory is said to originate with Poincaré: “sensitive dependence on initial conditions” [Peters, 1991, page 135]. “Chaos theory is about limitations on predictability in classical physics, stemming from the fact that almost all classical systems are inherently unstable. The ‘instability’ in question has nothing to do with any tendency to behave violently or disintegrate. It is about an extreme sensitivity to initial conditions.” [Deutsch, 1997, page 201]
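The simplest concrete example of such a model (my illustration, not one from the works quoted above) is the logistic map, a one-line deterministic rule: for some parameter values it settles into order, while a small change to the parameter pushes it into aperiodic, seemingly random behaviour.

```python
# Logistic map: x -> r * x * (1 - x)
def logistic_orbit(x0, r, n):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.8: the orbit converges to the fixed point 1 - 1/r (about 0.6429)
settled = logistic_orbit(0.2, 2.8, 500)[-1]

# r = 3.99: the same one-line model wanders aperiodically between 0 and 1,
# never settling and never exactly repeating
chaotic = logistic_orbit(0.2, 3.99, 500)
print(round(settled, 4))
```

Between those two parameter values the map passes through cycles of period 2, 4, 8 and so on: exactly the structural change under small variable changes, coexisting with underlying order, that the paragraphs above describe.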
In one direction, we frequently examine financial-mathematical models and compare them with reality in hopes of finding a lasting comparison from which the existence of a theory may be inferred. However, this “can be confused with ‘data mining’ or torturing the data until it confesses … Actual results depend on many numerical experiments with varying test parameters. If this sounds unscientific, it is” [Peters, 1991, pp. 171-172]. Another direction is to take a theory, develop a model and compare it with reality.
De Grauwe and Vansenten built a model of the foreign exchange market from generally accepted theory [De Grauwe and Vansenten, 1990]. They demonstrated the chaotic properties of the model and contrasted it with actual exchange rate data, showing numerous statistical similarities between the two. The results illustrate that models can be constructed which may be perceived to mimic actual market behaviour but which are not predictive: minute changes to the model inputs produced new outputs that bore little statistical similarity to actual exchange rate data. The model was sensitive to initial conditions to such a degree that no data could ever be accurate enough to begin forecasting; the degree of variance in normal input data was more than sufficient to change the model output structurally, and trivial facts or events could alter it completely. “Exact” knowledge of the environment would be necessary to use the model predictively, and knowledge to that accuracy is unlikely. Moreover, the model is untestable, as there is no means of obtaining data of sufficient quality to test it.
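Their model is not reproduced here, but the phenomenon itself can be shown with any chaotic map. Below, two trajectories of the logistic map start from values differing by one part in a million, a measurement error far finer than any economic data could achieve, and within a few dozen iterations bear no resemblance to each other.

```python
def step(x, r=3.99):
    # One iteration of the chaotic logistic map x -> r * x * (1 - x)
    return r * x * (1 - x)

a, b = 0.300000, 0.300001   # initial conditions differing by one part in a million
gaps = []
for _ in range(60):
    a, b = step(a), step(b)
    gaps.append(abs(a - b))

# The gap grows roughly exponentially until the two trajectories
# decorrelate completely and forecasting from either is pointless
print(f"after 1 step: {gaps[0]:.1e}, largest gap: {max(gaps):.2f}")
```

Since the map’s values live between 0 and 1, a “largest gap” of order one-tenth or more means the two forecasts have become unrelated: exact knowledge of the starting point would be required, and exact knowledge is unobtainable.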
Niels Bohr’s famous wry comment was, “prediction is extremely difficult, especially about the future.” Working empirically, Makridakis concludes that the answer to “can the future be predicted?” is “it depends” [Makridakis, 1986], pointing out the many problems with setting realistic expectations for forecasts. The Economist proffers an old rule, “predict a figure or a date, but never both” [25 April 1992, page 91]. Many people, including this columnist, are less interested in deterministic models than in predictions from historic statistics. We are interested in seeing whether, while we may not be able to model the system, we can work from input measures such as Key Risk Indicators (KRIs) for banks to predict losses, or from historic prices to identify anomalous trades. We have taken on board some of Mandelbrot’s comments over the years and do include fractal dimensions as coefficients, just as we do volatility or weighted moving averages. We spend our time building correlation engines between inputs and outputs without concerning ourselves overly much with the many sub-systems interacting and creating the outputs from the inputs. “Predictions from models without understanding”, if you will.
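The mechanics of such a correlation engine can be sketched very simply. The data and the single-indicator least-squares fit below are entirely hypothetical and purely illustrative: regress historic losses on a KRI, then read off an expected loss when the indicator moves, with no model of the sub-systems in between.

```python
# Hypothetical monthly data: a Key Risk Indicator (say, failed-trade
# counts) and the operational losses, in millions, that followed.
# All numbers are invented for illustration.
kri    = [12, 15, 11, 20, 25, 18, 30, 14]
losses = [1.0, 1.4, 0.9, 2.1, 2.4, 1.7, 3.2, 1.2]

# Ordinary least squares: losses ~ alpha + beta * kri
n = len(kri)
mean_x = sum(kri) / n
mean_y = sum(losses) / n
beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(kri, losses))
        / sum((x - mean_x) ** 2 for x in kri))
alpha = mean_y - beta * mean_x

# "Prediction without understanding": the expected loss if the KRI
# hits 28, inferred purely from the historic correlation
predicted = alpha + beta * 28
print(round(predicted, 2))
```

A real engine would use many indicators and a richer fitting method, but the stance is the same: the relationship between inputs and outputs is estimated, not explained.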
Interestingly, however improbable prediction may seem as a measure, most organisations feel that a successful plan or system should predict future performance, with a bias to believing that outcomes result from action, not inaction. Ironically, inaction or the initial disposition of forces are decisive factors in many victories. Sun Tzu indicates that there is a strong element of quiet victory in strategic success and that true success is the victory won before an obvious battle, “and therefore the victories won by a master of war gain him neither reputation for wisdom nor merit for valour” [Sun Tzu, 1963]. It can be difficult to distinguish positive inaction from luck, or negative action from misfortune. When organisations try to apply objective measures of success to predictive systems, the results are disappointing, particularly as detailed in Sherden [1998] or Mintzberg [1994, pp. 91-152]. The concept of prediction from historic data entails a number of philosophical problems, such as being unable to predict discontinuities, let alone paradox:
“But it is logically impossible to predict future knowledge: if we could predict future knowledge we would have it now, and it would not be future; if we could predict future discoveries they would be present discoveries. From this it follows that if the future contains any significant discoveries at all it is impossible to predict it scientifically . . . if the future were scientifically predictable it could not, once discovered, remain secret, since it would in principle be rediscoverable by anybody. This would present us with a paradox about the possibility/impossibility of taking evasive action. . . . With the collapse of the notion that the future is scientifically predictable the notion of the totally planned society goes down as well. This is also shown to be logically incoherent in other ways: first, because it cannot give a consistent answer to the question ‘Who plans the planners?’” [Magee, 1973, page 100]
So, we reach a tight spot. Leading mathematicians, such as Mandelbrot, point out that most of our financial data distributions are non-normal, thus questioning the popular tools in our toolkit. Our taxonomies are brittle and immature and can’t handle “sets of risk” [Leitch, 2004]. Our deterministic models are potentially chaotic and unverifiable. Our statistical models can help us in ‘steady state’ situations or at identifying anomalies, but are worthless at predicting discontinuities. Our approach to designing risk management systems assumes that cause (our new, improved risk management system) results in success (reduced risk), but we cannot prove it. We have few tools at all for handling the largest operational risk factor, people [Howitt et al, 2004] [Mainelli, 2004].
People sometimes ask how financial market risk varies from natural risk such as earthquakes or hurricanes. In my opinion, the key distinction is that financial markets incorporate people’s perceptions. People add a very strong feed-forward loop to financial systems. Talking a market up or down frequently moves a market up or down. If people change their perceptions of a risk, e.g. terrorism recently, then that perception change alters future behaviours. My suspicion is that the feed-forward loop of people’s perceptions might well explain the non-normal distributions we encounter in finance.
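That suspicion can at least be shown to be plausible with a toy simulation. Everything below is my own sketch, not a calibrated model: exogenous Gaussian “news” is amplified whenever a perceived trend is running, and the resulting returns show the excess kurtosis, too many extreme days, that a pure bell curve lacks.

```python
import random
random.seed(1)

def simulate(feedback, n=50_000):
    # Each period: Gaussian "news", scaled up when the perceived trend is
    # strong. The trend itself is an exponentially weighted perception of
    # past returns, so perception feeds forward into future behaviour.
    returns, trend = [], 0.0
    for _ in range(n):
        sigma = 1.0 + feedback * abs(trend)
        r = random.gauss(0, sigma)
        trend = 0.9 * trend + 0.1 * r
        returns.append(r)
    return returns

def excess_kurtosis(rs):
    # Zero for a Gaussian; positive means fat tails
    n = len(rs)
    m = sum(rs) / n
    var = sum((r - m) ** 2 for r in rs) / n
    return sum((r - m) ** 4 for r in rs) / (n * var ** 2) - 3.0

k_none = excess_kurtosis(simulate(0.0))   # no perception loop: plain Gaussian
k_loop = excess_kurtosis(simulate(2.0))   # perception loop switched on
print(round(k_none, 2), round(k_loop, 2))
```

With the loop switched off the returns are exactly Gaussian; with it switched on, the same Gaussian news produces fat-tailed returns, purely because participants react to their own perception of the market.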
There is an old Groucho Marx joke that Woody Allen recycled to explain the inevitability of amorous relationships: “Doctor, my brother thinks he’s a chicken.” “Why don’t you turn him in?” “We would, but we need the eggs.” In some ways, risk management systems, checklists or models are inevitable. These risk tools may not work but, as humans, we need the eggs.
So what should risk managers do? Well, firstly, proceed with caution around any certainties in approaches to risk. Secondly, be honest with ourselves and others about the limitations of top-down taxonomies, ticklists, models and systems. Thirdly, keep risk management close to issues of human behaviour; risk management is mostly about changing future behaviours. Fourthly, look for hard evidence that will help you reinforce risk management approaches that are working and revise or remove those that are not - find KRIs that really are related to losses; rethink ticklists until their scores really do reveal higher risks; search for questions and processes that demonstrably add value.
Finally, we need to recognise that we are only in the early stages of research into risk management. Many of today’s accepted tools and techniques will be shown in future to be useless or wrong or even dangerous. We need to avoid erecting such an elaborate establishment that we can’t dismantle large portions for rebuilding. In short, we need to keep our minds open, our spirits inquiring and retain a sense of fun and adventure as we try to solve some of the biggest problems in finance – aligning risk and reward.
[1] DE GRAUWE, Paul and DEWACHTER, Hans, “A Chaotic Monetary Model of the Exchange Rate”, Center for Economic Policy Research Discussion Paper, No. 466, October 1990.
[2] DE GRAUWE, Paul and VANSENTEN, Kris, “Deterministic Chaos in the Foreign Exchange Market”, Center for Economic Policy Research Discussion Paper, No. 370, January 1990.
[3] DEUTSCH, David, The Fabric of Reality, The Penguin Press, 1997.
[4] HOWITT, Jonathan, MAINELLI, Michael and TAYLOR, Charles, “Marionettes, or Masters of the Universe? The Human Factor in Operational Risk”, Operational Risk (A Special Edition of The RMA Journal), pages 52-57, The Risk Management Association (May 2004).
[5] LEITCH, Matthew, “Rethink Your Attitude to Risk – Start to Think About Sets of Risk”, Balance Sheet, The Matthew Leitch Column, Volume 12, Number 5, pages 9-10, Emerald Group Publishing Limited (October 2004).
[6] MAGEE, Bryan, Popper, Fontana Press, 1973 (1985 ed).
[7] MAINELLI, Michael, “Personalities of Risk/Reward: Human Factors of Risk/Reward and Culture”, Journal of Financial Regulation and Compliance, Volume 12, Number 4, pages 340-350, Henry Stewart Publications (November 2004).
[8] MAKRIDAKIS, Spyros, “The Art and Science of Forecasting”, International Journal of Forecasting, Vol. 2, 1986, pp. 15-39.
[9] MANDELBROT, Benoit B and HUDSON, Richard L, The (mis)Behaviour of Markets: A Fractal View of Risk, Ruin and Reward, Profile Books Ltd, 2004.
[10] MINTZBERG, Henry, The Rise and Fall of Strategic Planning, Prentice Hall, 1994.
[11] PETERS, Edgar E., Chaos and Order in the Capital Markets: A New View of Cycles, Prices, and Market Volatility, John Wiley & Sons, 1991.
[12] PETERS, Edgar E., Fractal Markets Analysis: Applying Chaos Theory to Investment and Economics, John Wiley & Sons, 1994.
[13] SHERDEN, William A., The Fortune Sellers, John Wiley & Sons, 1998.
[14] SUN TZU (translated by Samuel Griffith), The Art of War, Oxford University Press, 1963.
[15] VAN DEN BEUKEL, A., More Things in Heaven and Earth: God and the Scientists, SCM Press Ltd, 1991.
I would like to thank the Centre for the Study of Financial Innovation for sponsoring the breakfast with Benoit Mandelbrot and Richard Hudson that inspired this column. Thanks too to Matthew Leitch for improving the clarity of “fourthly”.
Michael Mainelli, PhD FCCA FCMC MBCS CITP MSI, originally did aerospace and computing research followed by seven years as a partner in a large international accountancy practice before a spell as Corporate Development Director of Europe’s largest R&D organisation, the UK’s Defence Evaluation and Research Agency, and becoming a director of Z/Yen (Michael_Mainelli@zyen.com). Z/Yen was awarded a DTI Smart Award 2003 for its risk/reward prediction engine, PropheZy, while Michael was awarded IT Director of the Year 2004/2005 by the British Computer Society for Z/Yen’s work on PropheZy.
Michael’s humorous risk/reward management novel, “Clean Business Cuisine: Now and Z/Yen”, written with Ian Harris, was published in 2000; it was a Sunday Times Book of the Week; Accountancy Age described it as “surprisingly funny considering it is written by a couple of accountants”.
Z/Yen Limited is a risk/reward management firm helping organisations make better choices. Z/Yen undertakes strategy, finance, systems, marketing and intelligence projects in a wide variety of fields (www.zyen.com), such as developing an award-winning risk/reward prediction engine, helping a global charity win a good governance award or benchmarking transaction costs across global investment banks.
Z/Yen Limited, 5-7 St Helen’s Place, London EC3A 6AU, United Kingdom; tel: +44 (0)20 7562 9562.
[An edited version of this article first appeared as "The (Mis)Behaviour of Risk Managers: Recognizing Our Limitations" (implications of chaos and fractal criticisms), Journal of Risk Finance, The Michael Mainelli Column, Volume 6, Number 2, Emerald Group Publishing Limited (April 2005) pages 177-181.]