
Table of Contents
 
Title Page
Copyright Page
Dedication
Acknowledgements
 
Chapter 1 - Introduction
 
1.1 Why We Need Stress Testing
1.2 Plan of the Book
1.3 Suggestions for Further Reading
 
Part I - Data, Models and Reality
Chapter 2 - Risk and Uncertainty - or, Why Stress Testing is Not Enough
 
2.1 The Limits of Quantitative Risk Analysis
2.2 Risk or Uncertainty?
2.3 Suggested Reading
 
Chapter 3 - The Role of Models in Risk Management and Stress Testing
 
3.1 How Did We Get Here?
3.2 Statement of the Two Theses of this Chapter
3.3 Defence of the First Thesis (Centrality of Models)
3.4 Defence of the Second Thesis (Coordination)
3.5 The Role of Stress and Scenario Analysis
3.6 Suggestions for Further Reading
 
Chapter 4 - What Kind of Probability Do We Need in Risk Management?
 
4.1 Frequentist versus Subjective Probability
4.2 Tail Co-dependence
4.3 From Structural Models to Co-dependence
4.4 Association or Causation?
4.5 Suggestions for Further Reading
 
Part II - The Probabilistic Tools and Concepts
Chapter 5 - Probability with Boolean Variables I: Marginal and Conditional Probabilities
 
5.1 The Set-up and What We are Trying to Achieve
5.2 (Marginal) Probabilities
5.3 Deterministic Causal Relationship
5.4 Conditional Probabilities
5.5 Time Ordering and Causation
5.6 An Important Consequence: Bayes’ Theorem
5.7 Independence
5.8 Two Worked-Out Examples
5.9 Marginal and Conditional Probabilities: A Very Important Link
5.10 Interpreting and Generalizing the Factors
5.11 Conditional Probability Maps
 
Chapter 6 - Probability with Boolean Variables II: Joint Probabilities
 
6.1 Conditioning on More Than One Event
6.2 Joint Probabilities
6.3 A Remark on Notation
6.4 From the Joint to the Marginal and the Conditional Probabilities
6.5 From the Joint Distribution to Event Correlation
6.6 From the Conditional and Marginal to the Joint Probabilities?
6.7 Putting Independence to Work
6.8 Conditional Independence
6.9 Obtaining Joint Probabilities with Conditional Independence
6.10 At a Glance
6.11 Summary
6.12 Suggestions for Further Reading
 
Chapter 7 - Creating Probability Bounds
 
7.1 The Lay of the Land
7.2 Bounds on Joint Probabilities
7.3 How Tight are these Bounds in Practice?
 
Chapter 8 - Bayesian Nets I: An Introduction
 
8.1 Bayesian Nets: An Informal Definition
8.2 Defining the Structure of Bayesian Nets
8.3 More About Conditional Independence
8.4 What Goes in the Conditional Probability Tables?
8.5 Useful Relationships
8.6 A Worked-Out Example
8.7 A Systematic Approach
8.8 What Can We Do with Bayesian Nets?
8.9 Suggestions for Further Reading
 
Chapter 9 - Bayesian Nets II: Constructing Probability Tables
 
9.1 Statement of the Problem
9.2 Marginal Probabilities - First Approach
9.3 Marginal Probabilities - Second Approach
9.4 Handling Events of Different Probability
9.5 Conditional Probabilities: A Reasonable Starting Point
9.6 Conditional Probabilities: Checks and Constraints
9.7 Internal Compatibility of Conditional Probabilities: The Need for a ...
 
Part III - Applications
Chapter 10 - Obtaining a Coherent Solution I: Linear Programming
 
10.1 Plan of the Work Ahead
10.2 Coherent Solution with Conditional Probabilities Only
10.3 The Methodology in Practice: First Pass
10.4 The CPU Cost of the Approach
10.5 Illustration of the Linear Programming Technique
10.6 What Can We Do with this Information?
 
Chapter 11 - Obtaining a Coherent Solution II: Bayesian Nets
 
11.1 Solution with Marginal and n-conditioned Probabilities
11.2 An ‘Automatic’ Prescription to Build Joint Probabilities
11.3 What Can We Do with this Information?
 
Part IV - Making It Work In Practice
Chapter 12 - Overcoming Our Cognitive Biases
 
12.1 Cognitive Shortcomings and Bounded Rationality
12.2 Representativeness
12.3 Quantification of the Representativeness Bias
12.4 Causal/Diagnostic and Positive/Negative Biases
12.5 Conclusions
12.6 Suggestions for Further Reading
 
Chapter 13 - Selecting and Combining Stress Scenarios
 
13.1 Bottom Up or Top Down?
13.3 Possible Approaches to a Top-Down Analysis
13.4 Sanity Checks
13.5 How to Combine Stresses - Handling the Dimensionality Curse
13.6 Combining the Macro and Bottom-Up Approaches
 
Chapter 14 - Governance
 
14.1 The Institutional Aspects of Stress Testing
14.2 Lines of Criticism
Appendix - A Simple Introduction to Linear Programming
References
Index


To my parents
To my wife
To my son

Acknowledgements
It gives me great pleasure to acknowledge the support I received while writing this book.
Dr Keating provided very insightful comments and pointed out some ‘loose thinking’. I am very grateful for that.
Many of my colleagues have helped me a lot, by challenging my thoughts, pointing out what did and did not work, and suggesting how I could improve the approach. I am sure that I will forget many, but I certainly extend my thanks to Dr Gary Dunn, Dr Ron Keating, Dr Ronnie Barnes, Mr Daniel Burns, Dr Michael Smith, Mr Paul Fairhurst, Dr Jeremy Broughton, Dr Ed Hayes, Mr Craig Schor, Dr Tom Connor and, for his unflinching criticism, Dr Stephen Laughton. Above all, however, I thank Dr Jan Kwiatkowski, who proposed an earlier version of the Linear Programming approach that I describe in Chapter 10. His contribution has extended well beyond this technical suggestion, as over the years he has become my Bayesian mentor.
I have greatly benefited from discussions that I have had with regulators, at the Boston Fed, at the FSA and at the MSA. I am grateful for the time they spent discussing my ideas on stress testing.
I have presented parts of the material in this book at several conferences in the US and in Europe, and therefore I have received extremely useful and insightful comments from many delegates. The book would be much the poorer without their suggestions: thank you.
At John Wiley & Sons, Ltd, Caitlin Cornish first and then Pete Baker have shown from the start great enthusiasm for the project. I am very grateful for this.
Last but not least, my wife and my parents have given me continuous support and encouragement. In a way, it is to them that I owe my greatest debt of gratitude.
Despite all this help, I am sure that there are still many mistakes and imperfections in the book. I am fully and solely responsible for these.

Chapter 1
Introduction
[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply don’t know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us [. . .] a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed . . . .
Keynes (1937)

1.1 Why We Need Stress Testing

Why a book about stress testing? And why a book about stress testing now? Stress testing has been part of the risk manager’s toolkit for decades.1 What justifies the renewed interest from practitioners and regulators2 for a risk management tool that, truth be told, has always been the poor relation in the family of analytical techniques to control risk?3 And why has stress testing so far been regarded as a second-class citizen?
Understanding the reason for the renewed interest is simple: the financial crisis of 2007-2009 has shown with painful clarity the limitations of the purely statistical techniques (such as Value at Risk (VaR) or Economic Capital) that were supposed to provide the cornerstones of the financial edifice. In the year and a half starting with July 2007, events of once-in-many-thousand-years rarity kept on occurring with disconcerting regularity. Only a risk manager of Stalinist dogmatism could have lived through these events and ‘kept the faith’. Clearly, something more - or, rather, something different - had to be done. But what? And what analytical tools should we employ to fix the problem?
‘Stress testing’ has become the stock answer to these questions. But the unease and suspicion with which this technique has been regarded has not melted away. The frog has not been kissed (yet) into a handsome prince. The current attitude seems one of resigned acceptance of a faute-de-mieux measure of risk: a far cry from an enthusiastic embrace of a new and powerful analytical tool. Two cheers, the mood seems to be, for stress testing. Can we do better? And why has stress testing been regarded as such an ungainly frog in the first place?
If by stress testing we mean the assessment of very severe financial losses arrived at without heavy reliance on statistical techniques, but by deploying instead a large dose of subjective judgement, some answers to the latter question are not difficult to see. Rather than paraphrasing, I would like to quote extensively from an article by Aragones, Blanco and Dowd (2001), who put their fingers exactly on the problem:
. . . traditional stress testing is done on a stand-alone basis, and the results of stress tests are evaluated side-by-side with the results of traditional market risk (or VaR) models. This creates problems for risk managers, who then have to choose which set of risk exposures to ‘believe’. [R]isk managers often don’t know whether to believe their stress test results, because the stress test exercises give them no idea of how likely or unlikely stress-test scenarios might be . . . .
And again:
A related problem is that the results of stress tests are difficult to interpret because they give us no idea of the probabilities of the events concerned, and in the absence of such information we often don’t know what to do with them. Suppose for instance that stress testing reveals that our firm will go bust under a particular scenario. Should we act on this information? The only answer is that we can’t say. If the scenario is very likely, we would be very unwise not to act on it. But if the scenario was extremely unlikely, then it becomes almost irrelevant, because we would not usually expect management to take expensive precautions against events that may be too improbable to worry about. So the extent to which our results matter or not depends on unknown probabilities. As Berkowitz [1999] nicely puts it, this absence of probabilities puts ‘stress testing in a statistical purgatory. We have some loss numbers, but who is to say whether we should be concerned about them?’
The result of this state of affairs is not pretty: we are left with
. . . two sets of separate risk estimates - probabilistic estimates (e.g., such as VaR), and the loss estimates produced by stress tests - and no way of combining them. How can we combine a probabilistic risk estimate with an estimate that such-and-such a loss will occur if such-and-such happens? The answer, of course, is that we can’t. We therefore have to work with these estimates more or less independently of each other, and the best we can do is use one set of estimates to check for prospective losses that the other might have underrated or missed . . .
In modern finance, risk and reward are supposed to be two sides of the same coin. Risk is ‘priced’ in terms of expected return by assigning probabilities4 to outcomes. But when it comes to extreme events, absent any probabilistic assessment, we don’t know how to ‘price’ the outcomes of stress testing. And if our confidence in assigning a probability to extremely rare events has been terminally shaken by the recent market events,5 the state of impasse seems inescapable.
Perhaps there is hope - and it is exactly this ray of hope that this book pursues. First of all, ‘probabilistic statement’ need not be equated with ‘frequentist (i.e., purely data-driven) probabilistic statement’. As I discuss in Chapter 4, there is a different way of looking at probability that takes into account, but is not limited to, the pure analysis of data. I maintain (and I have argued at length elsewhere6) that the subjective view of probability is every bit as ‘respectable’ as the purely-data-driven (frequentist) one. I also believe that it is much better suited to the needs of risk management.
This view, while not mainstream, is not particularly new, especially in the context of financial risk management - see, e.g., Berkowitz (1999). However, the subjective approach brings about an insidious problem. It is all well and good to assign subjective probabilities to stand-alone events. But, if we want to escape from Berkowitz’s purgatory, we will have to do more. We will have to combine different stress scenarios, with different subjective probabilities, into an overall coherent, albeit approximate, stress loss number at a given confidence level (or, perhaps, into a whole stress loss distribution). How is one to do that? How is one to provide subjectively these co-dependences - and tail co-dependences to boot? How is one to ensure that the subjectively-assigned probabilities are reasonable, let alone feasible (i.e., mathematically possible and self-consistent)?
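To give a flavour of what ‘mathematically possible and self-consistent’ means here, consider the classical (Fréchet) bounds on the joint probability of two events, a topic treated properly in Chapter 7. The sketch below is not from the book and uses invented numbers; it simply checks whether a subjectively assigned joint probability can coexist with the assigned marginals:

```python
def frechet_bounds(p_a, p_b):
    """Frechet bounds on P(A and B) implied by the marginals P(A), P(B)."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

def is_consistent(p_a, p_b, p_ab):
    """Is the proposed joint probability mathematically possible?"""
    lower, upper = frechet_bounds(p_a, p_b)
    return lower <= p_ab <= upper

# Hypothetical assignments: P(equity crash) = 0.05, P(credit crunch) = 0.10.
# A proposed joint probability of 0.08 is infeasible, because a joint
# probability can never exceed the smaller of the two marginals.
print(is_consistent(0.05, 0.10, 0.08))  # False
print(is_consistent(0.05, 0.10, 0.03))  # True
```

Even this trivial check already rules out confidently stated but impossible co-dependence assignments; the linear-programming machinery of Chapter 10 generalizes the idea to many events at once.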
This book offers two routes to escape this purgatorial dilemma. The first is the acknowledgement that the risk manager can only make sense of data on the basis of a model (or of competing models) of reality. A risk manager, for instance, should have a conception of the direction of causation between different events: does a dramatic fall in equity prices ‘cause’ an increase in equity implied volatilities? Or is it an increase in implied volatility that ‘causes’ a dramatic fall in equity prices? The answer, at least in this case, may seem obvious. Unfortunately, correlations, and even conditional probabilities, contain no information about the direction of causation. Yet, this information about causation, even if imperfect, is powerful. It is ignored in the frequentist approach at a great loss for the risk manager. Speaking about the sciences in general, Pearl (2009) points out that there is ‘no greater impediment to scientific progress than the prevailing practice of focusing all of our mathematical resources on statistical and probabilistic inferences’. I believe that exactly the same applies in the area of quantitative risk management.
If one is prepared to ‘stick one’s neck out’ and make some reasonable assumptions about the direction of the arrow of causation, just like Dante one can begin to glimpse some light filtering through the thick trees of the selva oscura. In the case of stress testing, the route to salvation is via the provision of information that is not ‘contained in the data’.
Sure enough, even if one can provide this extra information not all is plain sailing. Organizing one’s understanding about how the world might work into a coherent and tractable analytical probabilistic framework is not an easy task. Fortunately, if one is prepared to make some reasonable approximations, there are powerful and intuitive techniques that can offer great help in building plausible and mathematically self-consistent joint distributions of the stress losses that have been identified. These technical tools (Bayesian networks and Linear Programming) have been well known for a long time, but their application to risk management problems, and to stress testing in particular, has been hesitant at best. This is a pity, because I believe that they are not only powerful and particularly well suited to the problem at hand, but also extremely intuitively appealing. And in Section 2.2 of the next chapter I will highlight how important appeal to intuition can be if the recommendations of the risk managers are to be acted upon (as opposed to ‘confined to a stress report’).
Once we accept that we can, approximately but meaningfully, associate stress events with a probabilistic assessment of their likelihood, the questions that opened this chapter begin to find a compelling answer. We need stress testing, and we need stress testing now, because the purely-data-based statistical techniques we have been using have proven unequal to the task when it really mattered. Perhaps the real question should have been instead: ‘How can we do without stress testing?’
Of course, there is a lot more to risk management than predicting the probability of losses large and small. But, even if we look at the management of financial risk through the highly reductive prism of analysing the likelihood of losses, there still is no one single goal for the risk manager. For instance, estimating the kind of profit-and-loss variability that can be expected on a weekly or monthly basis has value and importance. Ensuring that a business line or trading desk effectively ‘diversifies’ the revenue stream from other existing lines of activity under normal market conditions is also obviously important. So is estimating the income variability or the degree of diversification that can be expected from a portfolio of businesses over a business cycle. And recent events have shown the importance of ensuring that a set of business activities do not endanger the survival of a financial institution even under exceptional market conditions. These are all important goals for a risk manager. But it would be extraordinary if the same analytical tools could allow the risk manager to handle all these problems - problems, that is, whose solution hinges on the estimation of probabilities of events that should occur, on average, from once every few weeks to once in several decades. This is where stress testing comes in. Stress testing picks up the baton from VaR and other data-driven statistical techniques as the time horizons become longer and longer and the risk manager wants to explore the impact of events that are not present in her dataset - or, perhaps, that have never occurred before.7
As I explain in Chapter 4, stress testing, by its very nature, can rely much less on a frequentist concept of probability, and almost has to interpret probability in a subjective sense. In Bayesian terms, as the time horizon lengthens and the severity of the events increases, the ‘prior’ acquires a greater and greater weight, and the likelihood function a smaller and smaller one.8 In my opinion, this is a strength, not a weakness, of stress testing. It is also, however, the aspect of the project I propose that requires the most careful handling. Frequentist probability may make little sense when it comes to stress testing, but this does not mean that probability tout court has no place in stress testing. If anything, it is stress testing without any notion of probability that, as Aragones, Blanco and Dowd remind us, is of limited use. The challenge taken up in this book is to provide the missing link between stress events and their approximate likelihood - as explained, an essential prerequisite for action9 - without inappropriately resorting to purely frequentist methods.
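The point about the growing weight of the prior can be made concrete with the textbook Beta-Binomial example (not from the book; the numbers are invented). For a rare event observed over a realistically short data window, the posterior probability barely moves away from the subjective prior, because the likelihood carries so little information:

```python
def posterior_mean(alpha, beta, occurrences, periods):
    """Posterior mean of an event probability under a Beta(alpha, beta)
    prior after observing `occurrences` events in `periods` periods
    (conjugate Beta-Binomial updating)."""
    return (alpha + occurrences) / (alpha + beta + periods)

# Subjective prior: the event happens roughly once a century,
# encoded (loosely) as Beta(1, 99), with prior mean 0.01.
prior = 1 / (1 + 99)

# Ten years of data with no occurrence: the posterior hardly moves.
short_sample = posterior_mean(1, 99, 0, 10)     # about 0.0091

# A thousand years of (unavailable!) data would let the data dominate.
long_sample = posterior_mean(1, 99, 0, 1000)    # about 0.00091
```

For the horizons and severities relevant to stress testing we are always in the ‘short sample’ regime, which is why the subjective prior - and hence the risk manager’s judgement - inevitably does most of the work.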
The enterprise I have briefly sketched therefore gives us some hope of bringing stress losses within the same conceptual framework as the more mundane losses analysed by VaR-like techniques. The approach I suggest in this book bridges the gap between the probabilities that a risk manager can, with some effort, provide (marginal, and simple conditioned probabilities) and the probabilities that she requires (the joint probabilities). It does so by exploiting to the fullest the risk manager’s understanding of the causal links between the individual stress events. By employing the causal, rather than associative, language, it resonates with our intuition and works with, not against, our cognitive grain.
The approach I suggest is therefore intended to give us guidance as to whether and when we should really worry, and to suggest how to act accordingly. It gives, in short, tools to ensure that the stress losses are approximately but consistently ‘priced’. Hopefully, all of this might give us a tool for managing financial risk more effectively than we have been able to do so far.
This is what this book is about, and this is why, I think, it is important.

1.2 Plan of the Book

This book is structured in four parts. The first, which contains virtually no equations, puts stress testing and probabilistic assessments of rare financial events in their context. The second part presents the quantitative ideas and techniques required for the task. Here lots of formulae will be found. The third part deals with the quantitative applications of the concepts introduced in Part II. The fourth and last part deals with practical implementation issues, and equations therefore disappear from sight again.
Let me explain in some detail what is covered in these four parts.
After the optimistic note with which I closed the previous section, in Chapter 2 I move swiftly to dampen the reader’s enthusiasm, by arguing that stress testing is not the solution to all our risk management problems. In particular, I make the important distinction, too often forgotten, between risk and uncertainty and explain what this entails for stress analysis.
With these caveats out of the way, I argue that the expert knowledge of the risk manager is essential in constructing, using and associating probabilities to stress events. This expert knowledge (and the ‘models of reality’ that underpin it) constitutes the link between the past data and the possible future outcomes. In Chapter 3 I therefore try to explain the role played by competing interpretative models of reality in helping the risk manager to ‘conceive of the unconceivable’. Chapter 3 is therefore intended to put into context the specific suggestions about stress testing that I provide in the rest of the book.
In Chapter 4 I describe the different types of probability (frequentist and subjective) that can be used for risk management, and discuss which ‘type of probability’ is better suited to different analytical tasks. The chapter closes with an important distinction between associative and causal descriptions. This distinction is at the basis of the efficient elicitation of conditional probabilities, and of the Bayesian-net approach described later in the book.
In Part II I lay the quantitative foundations required for the applications presented in the rest of the book. Some of the concepts are elementary, others are less well-known. In order to give a unified treatment I deal with both the elementary concepts (Chapter 5) and the somewhat-more-advanced ones (Chapter 6) using the same conceptual framework and formalism. Venn diagrams will play a major role throughout the book. Chapter 7 shows how very useful bounds on joint probabilities can be obtained by specifying marginals and (some) singly-conditioned probabilities. Chapter 8 introduces Bayesian nets, and Chapter 9 explains how to build the conditional probability tables required to use them. This concludes the tool-gathering part of the book. (A simple introduction to Linear Programming can be found in the Appendix.)
Part III is then devoted to the application of the conceptual tools and techniques presented in Part II. This is achieved by introducing two different possible systematic approaches to stress testing, of different ambition and scope, which are described in Chapters 10 and 11.
Finally, in Part IV I address more practical questions: how we can try to overcome the difficulties and the cognitive biases that stand in the way of providing reasonable conditional probabilities (Chapter 12); how we can structure our chain of stress events (Chapter 13); and how we can embed the suggestions of the book into a viable approach in a real financial institution (as opposed to a classroom exercise). Doing so requires taking into account the reality of its governance structure, its reporting lines and the need for independence of a well-functioning risk-management function (Chapter 14).
I have provided Parts II and III with exercises. I have done so not because I see this book necessarily as a text for a formal course, but because I firmly believe that, in order to really understand new quantitative techniques, there is no substitute for getting one’s hands dirty and actually working out some problems in full.

1.3 Suggestions for Further Reading

Stress testing is the subject of a seemingly endless number of white, grey and variously-coloured consultation papers by the BIS and other international bodies. At the time of writing, the latest such paper I am aware of is BIS (2009), but, no doubt, by the time this book reaches the shelves many new versions will have appeared. Good sources of up-to-date references are the publication sites of the BIS, the IIF and the IMF.

Part I
Data, Models and Reality

Chapter 2
Risk and Uncertainty - or, Why Stress Testing is Not Enough
In the introductory chapter I made my pitch as to why stress testing is important and why I believe that the approach I propose can show us the way out of Berkowitz’s (1999) purgatory. I don’t want to convey the impression, however, that stress testing can be the answer to all our risk management questions. The problem, I think, does not lie with the specific approach I suggest in this book - flawed as this may be - but is of a fundamental nature. To present a balanced picture, I must therefore share two important reservations.

2.1 The Limits of Quantitative Risk Analysis

The first reservation is that the quantitative assessment of risk (and I include stress testing in this category) is an important part of risk management, but it is far from being its beginning and end. Many commentators and risk ‘gurus’ have stressed the inadequacies of the current quantitative techniques. The point is taken. But even if the best quantitative assessment of risk were available, a lot more would be required to translate this insight into effective risk management. The purpose of analysis is to inform action. Within a complex organization, effective action can only take place in what I call a favourable institutional environment.10 So, in a favourable institutional environment the output of the quantitative analysis is first escalated, and then understood and challenged by senior management. This is now well accepted. But there is a lot more, and this ‘a lot more’ has very little to do with quantitative risk analysis. The organizational set-up, for instance, must be such that conflicts of interest are minimized (in the real world they can never be totally eliminated). Or, the agency problems that bedevil any large organization, and financial institutions in primis, must be understood and addressed in a satisfactory manner. And again: an effective way must be found to align the interests of the private decision makers of a systemically-relevant institution such as a large bank with those of the regulators - and, more to the point, of society at large. And the list can go on.
VaR & Co have received so much criticism that it sometimes seems that if we had the right analytical tools, all our risk management problems would be solved. If only that were true! The institutional environment in which the risk management decisions are made is where the heart of risk management lies. Yes, the quantitative analysis of risk is part of this ‘institutional environment’ - and perhaps an important one - but it remains, at best, a start.

2.2 Risk or Uncertainty?

My second reservation is about our ability to specify probabilities (frequentist, subjective or otherwise) for extremely rare events when the underlying phenomenon is the behaviour of markets and of the economy. As the reader will appreciate, I make in this book ‘minimal’ probabilistic requirements, often asking the risk manager to estimate no more than the order of magnitude of the likelihood of an event. Nowhere in my book will the reader find the demand to estimate the 99.975th percentile of the loss distribution at a one-year horizon.11 But even my more limited and modest task may be asking too much. Let me explain why I think this may be the case.
One of the applications of stress testing that has been recently put forth is for regulatory capital. Regulatory capital has to do with the viability of a bank as a going concern - the time horizon is, effectively, ‘forever’. I do not know what ‘forever’ means in finance, but certainly it must mean more than two, four or even ten years. When the horizon of required survivability becomes so long, I am not sure that, for matters financial, we truly have the ability to associate probabilities, however approximate, to future events. Perhaps Keynesian (or Knightian) uncertainty provides a better conceptual framework.
What is the difference? ‘Risk’ and ‘uncertainty’ are today used interchangeably in the risk-management literature, but a careful distinction used to be drawn between the two concepts: the word ‘risk’ should be used in those situations where we know for sure the probabilities attaching to future events (and, needless to say, we know exactly what the possible future events may be). We should instead speak of uncertainty when we have no such probabilistic knowledge (but we still know what may hit us tomorrow). Indeed, as far back as in the early 1920s Knight (1921) was writing
. . . the practical difference between [. . .] risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from the statistics of past experience), while in the case of uncertainty, this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique . . . .
This distinction was kept alive for several decades. For instance, as game theory became in the post-war years an increasingly important technique in the economist’s toolkit, Luce and Raiffa (1957) were still clearly pointing out that the two concepts yield very different types of results and ‘solutions’, and devoted different chapters of their game-theory textbook to the two categories. Yet, the boundaries between the two concepts have become increasingly blurred, and the two words are now frequently used interchangeably. So much so that the current prevailing view in economics has become that all probabilities are known (or at least knowable), and that the economy therefore becomes ‘computable’, in the sense that there are no ‘unknown unknowns’. Current mainstream economics firmly endorses a risk-based, not uncertainty-based, view of the world. In the neo-classical synthesis ‘uncertainty plays a minimal role in the decision making of economic agents, since rational utility-maximizing individuals are [assumed] capable of virtually eliminating uncertainty with the historical information at hand’.12
The consequences of this distinction are well presented by Skidelsky (2009) in his discussion of the Keynesian view of probability:
Classical economists believe implicitly, and Neoclassical economists explicitly, that market participants have complete knowledge of all probability distributions over future events. This is equivalent to saying that they face only measurable risk . . .
In basing the calculation of regulatory (or economic) capital on the full knowledge - down to the highest percentiles - of ‘all probability distributions over future events’, the regulators have implicitly embraced the neo-classical view: that is, that when it comes to matters financial, human beings are always faced with risk, not with uncertainty, even when they are dealing with events of such rarity and magnitude that they could bring a bank to its knees.
Why has the concept of risk prevailed over uncertainty, despite the rather extreme assumptions about human cognitive abilities (and the world itself!) that it implies? From an academic perspective, unfortunately, dealing with uncertainty brings about rather ‘unexciting’ analytical results, often based on minimax solutions: we disregard probability completely, and we arrange our actions so as to minimize the damage if the worst (however unlikely) materializes. There is no great edifice of economic thought that can be built on such dull foundations. Succinctly put, ‘in conditions of uncertainty, economic reasoning would be of no value’.13 This is not a good recipe for exciting papers or for getting a tenure-track position at a prestigious university.
Matters are different when it comes to risk - i.e., when we assume that we can know the probabilities of future events perfectly. Dealing with risk rather than uncertainty allows us to speak about trade-offs and non-trivial optimality,14 and opens the door to much more exciting analytical results, such as expected utility maximization, portfolio diversification, rational expectations, the efficient-markets hypothesis, etc. - in short, to modern finance. No wonder risk has won hands down over uncertainty.
In addition, from a practical perspective, speaking of risk provides an illusion of quantifiability and precision that regulators like because of the supposed ‘objectivity’ of the rules it brings about.
But the fact that one set of results is more ‘sexy’, more fun to obtain and more handy to use than the other does not necessarily make that set more true - or more useful. The real question should be: leaving aside which approach is more fun to work with, is the problem at hand better described by a risk- or an uncertainty-based approach? If we are talking about the magnitude of weekly, monthly and perhaps even yearly losses, I believe that the risk (i.e., the known-probability-based) framework can yield very useful results. Traditional (frequentist) statistical techniques such as VaR do provide, in this domain, useful information. But what about much rarer events?
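To make the preceding point concrete, here is a minimal historical-simulation VaR sketch - one of the frequentist techniques the text says works well in the weekly-to-yearly domain. The function and the toy data are my own illustration, not a method prescribed by this book:

```python
import math

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss (as a positive number)
    exceeded on only (1 - confidence) of past days, estimated as an
    empirical quantile of the observed return series."""
    losses = sorted(-r for r in returns)            # losses, ascending
    idx = math.ceil(confidence * len(losses)) - 1   # quantile index
    return losses[idx]

# Toy daily returns: 98 small alternating moves plus two large losses.
returns = [0.001 * ((-1) ** i) for i in range(98)] + [-0.04, -0.07]
print(historical_var(returns, 0.99))  # 0.04: exceeded on 1 day out of 100
```

Within the body of the distribution, where there is plenty of relevant data, an estimate like this is informative; the whole argument of the next paragraphs is about what happens when we ask the same machinery about far rarer events.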
Stress testing tries to graft onto this substratum of frequentist information a less precise, but still probabilistic, assessment of risk. I think that this is useful, but I truly do not know how ‘far in the tail’ even the subjective approach can be pushed. When we deal with the cataclysmic events that can put at risk the whole financial system, I harbour serious doubts that stress testing - and stress testing for capital allocation purposes in particular - may already be trying to stretch well into the Knightian-uncertainty domain. Would an intelligent risk manager have been able in late 2006 to give even an order-of-magnitude assessment of the probabilities attaching to the events that were to unfold in the next 24 months? As an attempt to help the risk manager with such a difficult task, I suggest in the next chapter a conceptual approach that may give some guidance in ‘conceiving the unconceivable’. But the task remains daunting.
These considerations notwithstanding, I have taken a leaf out of Keynes’ book and, as the quote that opens this work recommends, I have decided to follow a pragmatic approach, and to make use of a pared-down probabilistic approach. This is because, if private financial institutions and regulators want to associate capital to stress events - and this appears to be the choice of the moment - under the current conceptual framework some form of probabilistic assessment of losses is unavoidable. Indeed, imposing a link between capital and probability of losses constitutes one of the cornerstones of the current regulatory paradigm. I have some doubts about the wisdom of this approach,15 but, on the other hand, I have no better suggestions. I have therefore taken the approach that, given the present probability-related way of looking at capital, some link between large losses and the probability of their occurrence should be provided. We should therefore ask the risk manager to make use of those types of probabilities (frequentist or subjective) that are better suited to the problem at hand. And we should then set the terms of the problem in such a way that the ‘difficult’ probabilities (the joint distributions) are obtained or approximated from more cognitively resonant quantities (such as marginal or conditional probabilities) using as much information as possible about ‘how the world works’.
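The last idea - obtaining the ‘difficult’ joint probabilities from more cognitively resonant marginal and conditional ones - is developed at length in Part II. As a foretaste, the toy Python sketch below (with invented numbers of my own) builds the full joint table for two Boolean events from one marginal and two conditional probabilities, via the chain rule:

```python
# Invented inputs: a marginal and two conditionals are the kind of
# quantities an expert can plausibly assess directly.
p_a = 0.02           # P(A): say, a severe market event
p_b_given_a = 0.60   # P(B | A): say, a default given that event
p_b_given_not_a = 0.01

# Chain rule: P(A, B) = P(B | A) * P(A), and similarly for the
# other three cells of the joint table.
joint = {
    (True,  True):  p_a * p_b_given_a,
    (True,  False): p_a * (1 - p_b_given_a),
    (False, True):  (1 - p_a) * p_b_given_not_a,
    (False, False): (1 - p_a) * (1 - p_b_given_not_a),
}

# The four joint probabilities sum to one, and the marginal of B
# falls out of the table for free.
p_b = joint[(True, True)] + joint[(False, True)]
print(round(sum(joint.values()), 12))  # 1.0
print(round(p_b, 6))                   # 0.0218
```

The direction of travel is the point: from three quantities a risk manager can reason about, to the four-entry joint distribution that the capital calculation actually needs - rather than attempting to estimate the joint distribution head-on.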
The approach I propose is therefore radically different from the frequentist VaR and Economic Capital methods, which ‘go for the jugular’ and attempt the direct estimation of the key probabilistic quantity, i.e., of the joint distribution. I have taken the approach, in short, that one should use different tools for different problems (on the basis that ‘when your only tool is a hammer, every problem begins to look like a nail’), and that it is better to be approximately right than precisely wrong.

2.3 Suggested Reading

For a clear and very readable discussion of the difference between uncertainty and risk in decision making, a good starting point is Luce and Raiffa (1957). An explanation of what uncertainty (rather than risk) implies for asset prices and economic prediction can be found in Skidelsky (2009), who discusses the issue in the context of Keynes’ views on the matter.

Chapter 3
The Role of Models in Risk Management and Stress Testing
Experience brings out the impossibility of learning anything from facts till they are examined and interpreted by reason; and teaches that the most reckless and treacherous of theorists is he who professes to let fact and figures speak for themselves.
Alfred Marshall quoted in Friedman and Jacobson Schwartz (1963 [2008])
 
In this chapter I deal with two distinct but related topics that have a direct bearing on stress testing.16 These topics are of a more general and abstract nature than the material covered in the rest of the book. They are, however, every bit as important.
The first topic deals with the role of models in arriving at an understanding of financial phenomena and attempting to predict future financial events. A crucial distinction I will make is between reduced-form and micro-specified (structural) models. My claim is that, when it comes to stress testing, micro-specified models are much better suited to the task. Indeed, I will try to explain why the much more commonly used reduced-form models can be particularly misleading in the case of stress testing.
The second topic deals with coordination and positive feedback mechanisms in financial markets. I believe this feature is very important, and helps our understanding of why, in some important circumstances, prices appear to move away from any reasonable understanding of ‘fundamentals’. The relevance of this ‘run-away behaviour’ for stress testing is self-evident. Important as it is, however, the understanding of coordination and feedback mechanisms is not as crucial to my overall argument as the understanding of the role played by models in financial predictions. So, if the reader is not convinced by my second thesis presented in this chapter, but accepts the first, the overall gist of the book will remain valid. If I fail to convince the reader of the validity of the first argument, however, then I will be unlikely to make a convincing case for the main approach proposed in this book.

3.1 How Did We Get Here?

As the saying goes, we learn from our mistakes. If this is true, after the events that unfolded during the 2007-2009 financial crisis the risk management profession should comprise some of the wisest individuals on the planet. How did we end up becoming so wise? And what have we learnt? More concretely: if any of the numerous new risk management ideas that are now being touted as the ‘new best practice’ had been put in place in, say, late 2005, would things have turned out any better? If not, what should we do instead?
Great upheavals prompt a radical rethink of the way we make sense of reality. Finance and economics are no exceptions. The Great Depression of the 1930s gave impetus to the transition from classical to Keynesian economics. The economic woes of the late 1960s and of the 1970s prompted the switch from Keynesianism to neo-classical economics. Similarly, the events of the late 2000s are bringing to the fore a radical rethinking of the neo-classical economics orthodoxy - its reliance, for instance, on the ability of rational investors and private-sector firms to self-regulate economic activities.17
As a consequence, the economics profession is in a state of ‘restlessness’, and vigorous criticism of the orthodox way of thinking is now coming not from mavericks ‘out on the fringes’ or from conspiracy theorists, but from mainstream, establishment economists and central bankers:
The modern risk management paradigm held sway for decades. The whole intellectual edifice, however, collapsed in the summer of last year [2007].18
Change is in the air. For this reason it is particularly important to put the current thinking about quantitative risk management in its intellectual context.
We did not get where we are by accident or ‘from scratch’, i.e., by thinking in the abstract: ‘how should we analyse financial risk?’ The logical underpinning of quantitative risk management as it has been practised in the run-up to the crisis is intimately enmeshed with the prevailing (neo-classical) conceptual framework of financial economics. Take, for instance, one of the cornerstones of the neo-classical finance edifice, Markowitz’s portfolio diversification. It shares obvious conceptual similarities with the risk management statistics, such as VaR or Economic Capital, that underpin the estimation of regulatory capital: indeed, ‘diversification’ is a key word both in Markowitz’s portfolio theory and in contemporary risk management. But efficient asset allocation and modern risk management share a far deeper intellectual legacy: the idea, that is, that Rational Man, equipped with a Perfect Computing Machine, is supposed to be able to estimate all the statistical properties of return distributions (in the case of Economic Capital down to the highest percentiles), and to use this information to make financial decisions under risk.
As the quote above reminds us, this way of looking at financial risk has held sway for decades. But if we are at a juncture when the very foundations of the edifice of neoclassical finance are being questioned - and I believe we are - then our way of looking at the management of financial risk must be revisited as well. That we are at such a turning point is perhaps most clearly shown (again) by the words of the chastened high priest of the Rational Investor school, Chairman Alan Greenspan (2009):
The extraordinary risk-management discipline that developed out of the writings of the University of Chicago’s Harry Markowitz in the 1950s produced insights that won several Nobel prizes in economics. It was widely embraced not only by academia but also by a large majority of finance professionals and global regulators. But in August 2007, the risk management structure cracked. All the sophisticated mathematics and computer wizardry essentially rested on one central premise: that the enlightened self-interest of owners and managers of financial institutions would lead them to maintain a sufficient buffer against insolvency by actively monitoring their firms’ capital and risk positions.
It is silly to claim that we got where we are because the assumptions that underpin this way of looking at financial problems are ‘wrong’. Model assumptions are always wrong, and, to use MacKenzie’s (2006) metaphor, a model is not a passive mirror of an external reality. When it comes to neo-classical finance, perhaps we have all been beguiled by the elegance of the construction, and we have begun taking the scaffolding (i.e., the assumptions) too seriously and literally. But, in the end, it is far too simplistic to claim that the root of all evil was ‘unrealistic assumptions’ or ‘the blind faith in the normal distribution’ - whose failure to describe returns is, by the way, one of the most uncontroversial and universally accepted facts in mainstream financial econometrics. When it comes to quantitative risk analysis, something more fundamental is at play. We do not need another model. What is required is a richer way of thinking about how models interact with financial reality.

3.2 Statement of the Two Theses of this Chapter

What does it mean that we require a new way of thinking about how models interact with financial reality? As I mentioned in the opening paragraphs of this chapter, I present two distinct arguments. Each argument can, in turn, be split into sub-theses, which for clarity I present below.
 
Argument 1: Centrality of Models
1. We must recognize that data (e.g., empirical return distributions) do not speak by themselves, but only make sense in the context of models of reality. Without a plausible (not necessarily a ‘true’) generative model, data analysis is blind.
2. This does not mean, however, that we should look for the Holy Grail of the correct interpretative model of reality. Especially when it comes to quantitative risk analysis, the search for a unique ‘true’ model (i.e., a unique ‘correct’ mapping from new information to price changes) may be misguided, futile and, at times, even harmful.
3. It is instead more fruitful to entertain the possibility of the coexistence of a plurality of plausible interpretative models of reality, ranging from the fully fledged, rigorously articulated, mathematically formalized and microstructurally founded models of the neo-classical synthesis, to the glorified rules of thumb used by traders.
4. Each of these models can be, and is, adopted or abandoned by market participants and analysts in an unpredictable fashion.19
The second argument requires the validity of the first, and builds on it as follows.
 
Argument 2: Importance of Coordination
1. Given the simultaneous coexistence of a variety of interpretative models of reality, and given the inability by traders to maintain a ‘conviction’ position for very long - the more so, the more turbulent the market conditions - market participants often find it advantageous to engage in a game of coordination.
2. In this game of coordination traders often adjust their positions not so much on the basis of their independent analysis of fundamentals, but by taking into account what other market players will do.
3. The coordinated actions of many traders can give rise to positive-feedback mechanisms, which can cause ‘wild’ price moves.
4. These coordination and feedback mechanisms may be active at several levels in the market and in the economy at large: from private investors to proprietary traders, from institutional investors to central banks, etc.

3.3 Defence of the First Thesis (Centrality of Models)

I now address, in order, the points into which I subdivided Argument 1.

3.3.1 Models as Indispensable Interpretative Tools

Quantitative risk management in general, and stress testing in particular, are nowadays typically approached as an exercise in statistical analysis of return distributions. These, in turn, are derived from historical records (time series) of risk factors. The assumption behind this approach is that the answers to all our quantitative risk management questions can be found just by ‘looking at the data’. Non-parametric approaches to Extreme Value Theory are the logical conclusion of this approach, and constitute its natural extension to the domain of stress testing.
Now, much too much has been written about whether the normal distribution provides an adequate description of financial returns (for many applications, it clearly does not), and about which alternative statistical distributions may be up to the task. Let us leave aside the fact that, given a finite amount of relevant data, choosing among various distributional alternatives is a statistically very difficult task.20 My claim is that, in itself, knowledge that our existing data are better fitted by, say, a Stretched Exponential or a Power Law than by a Gaussian distribution is of little help, especially when dealing with tail behaviour. When it comes to the real ‘black swans’, i.e., to the events that can create havoc of systemic dimension, I maintain that this type of analysis is of very little use.
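To see how much rides on the choice of distributional family once one extrapolates into the tail, consider the following sketch (an illustration of mine, with arbitrary parameters, not an example from the book). It compares the exceedance probabilities of a unit Gaussian and of a Pareto (power-law) tail:

```python
from math import erf, sqrt

def gaussian_tail(x, sigma=1.0):
    """P(X > x) for a zero-mean Gaussian with standard deviation sigma."""
    return 0.5 * (1 - erf(x / (sigma * sqrt(2))))

def power_law_tail(x, alpha=3.0, x_min=1.0):
    """P(X > x) for a Pareto tail with exponent alpha, valid for x >= x_min."""
    return (x_min / x) ** alpha

# Near the body of the data the two are of comparable size; far out
# they differ by many orders of magnitude (arbitrary illustrative points).
for x in (2.0, 4.0, 6.0):
    print(x, gaussian_tail(x), power_law_tail(x))
```

Two distributions that are hard to tell apart on the body of a finite data set diverge enormously in the far tail - which is precisely why a good in-sample fit, by itself, settles very little about the systemic events that matter for stress testing.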
This is a bold claim, because these more exotic distributions are often invoked as the solution to the quantitative analysis of the extremely rare event. To see how I can justify my assertion I must explain the difference between reduced-form and micro-structural models.
In general, reduced-form models abstract from the detailed mechanisms that generate the phenomenon at hand, and rely on observed regularities to fit to the data the model ‘free parameters’. In physics, for instance, the very complex quantum interactions among the electrons and between the electrons and the nuclei in a solid can be ‘reduced’ to a simple model of balls linked by harmonic springs. The strength of the spring is then fitted to some observable properties of the material (for instance, its elastic properties, or the slope at the origin of the phonon dispersion curve).
The fitting part of this approach is crucial to our discussion: physicists know that the springs do not really give a true micro-description of the solid, but force a link with physical reality by imposing the correspondence between a certain property of the idealized ‘toy model’ and the real solid under study. To go beyond this level of model reduction physicists then attempt to explain the strength of the spring on the basis of a theory of electron interactions. This is where ab initio