Chapter 1
Models for Discontinuous Markets
The broadening and deepening of markets for risk transfer has marked the development of financial services perhaps more than any other trend. The past 30 years have witnessed the development of secondary markets for a wide variety of financial assets and the explosion of derivative instruments made possible by financial engineering. The expansion of risk transfer markets has liquefied and transformed the business of traditional financial firms such as banks, asset managers, and insurance companies. At the same time, markets for risk transfer have enabled nontraditional players to enter financial services businesses, invigorating competition, driving down prices, and confounding the efforts of regulators. Such specialist risk transfer firms occupy a number of niches in which they can outperform their more diversified counterparts in the regulated financial system by virtue of their specialized knowledge, transactional advantages, and superior risk management.
For all firms operating in risk transfer markets, traditional and nontraditional alike, the ability to create, calibrate, deploy, and refine risk models is a core competency. No firm, however specialized, can afford to do without models that extract information from market prices, measure the sensitivity of asset values to any number of risk factors, or forecast the range of adverse outcomes that might impact the firm's financial position.
The risk that a firm's models may fail to capture shifts in market pricing, risk sensitivities, or the mix of the firm's risk exposures is thus a central operational risk for any financial services business. Yet many, if not most, financial services firms lack insight into the probabilistic structure of risk models and the corresponding risk of model failures. My thesis is that most firms lack insight into model risk because of the way they practice statistical modeling. Because generally accepted statistical practice provides thin means for assessing model risk, alternative methods are needed to take model risk seriously. Bayesian methods allow firms to take model risk seriously; hence a book on Bayesian risk management.
Risk Models and Model Risk
Throughout this book, when I discuss risk models, I will be talking about parametric risk models. Parametric risk models are attempts to reduce the complexity inherent in large datasets to specific functional forms defined completely by a relatively low-dimensional set of numbers known as parameters. Nonparametric risk models, by contrast, rely exclusively on the resampling of empirical data, so no reduction of the data is attempted or accomplished. Such models ask: Given the risk exposures I have today, what is the distribution of outcomes I can expect if the future looks like a random draw from some history of market data? Nonparametric risk models lack model specification in the way we would normally understand it, so that there is no risk of misspecification or estimation error by construction. Are such models therefore superior? Not at all. A nonparametric risk model cannot represent any outcome different from what has happened, including any outcomes more extreme than what has already happened. Nor can it furnish any insight into the ultimate drivers of adverse risk outcomes. As a result, nonparametric risk models have limited use in forecasting, though they can be useful as a robustness check for a parametric risk model.
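To make the resampling idea concrete, here is a minimal sketch of a historical-simulation value-at-risk calculation; the portfolio weights, the simulated return history, and the 99 percent confidence level are illustrative assumptions rather than anything prescribed in the text.

```python
import numpy as np

def historical_var(returns, weights, alpha=0.99, n_resamples=10_000, seed=0):
    """Nonparametric (resampled) risk model: draw hypothetical portfolio
    returns from the empirical history and read off a loss quantile.
    No distributional form is assumed, and no scenario outside the
    observed history can ever be generated."""
    rng = np.random.default_rng(seed)
    portfolio_returns = returns @ weights           # historical portfolio returns
    draws = rng.choice(portfolio_returns, size=n_resamples, replace=True)
    return -np.quantile(draws, 1 - alpha)           # VaR reported as a positive loss

# Illustrative inputs: 1,000 days of returns on three assets
rng = np.random.default_rng(1)
history = rng.normal(0.0, 0.01, size=(1000, 3))
weights = np.array([0.5, 0.3, 0.2])
print(historical_var(history, weights))
```

Note that the resampled losses can never exceed the worst combination already present in the history, which is precisely the limitation described above.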
Parametric risk models begin life as a probability distribution, which is a statement of the likelihood of seeing different values conditional only on the parameters of the distribution. Given the parameters and the form of the distribution, all possibilities are encompassed. More parameters create more flexibility: A Weibull distribution is more flexible than an exponential distribution. Many risk models rely heavily on normal and lognormal distributions, parameterized by the mean and variance, or the covariance matrix and mean vector in the multivariate case. A great deal has been written on the usefulness of heavier-tailed distributions for modeling financial data, going back to Mandelbrot (1963) and Fama (1965).
Undoubtedly, the unconditional distributions of most financial returns have heavier tails than the normal distribution. But to solve the problem of heavy tails solely through the choice of a different family of probability distributions is to seek a solution at a very low level of complexity.
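One way to see both the appeal and the limits of that choice is to fit two families to the same sample and compare their tail quantiles. The sketch below, using simulated heavy-tailed returns and scipy's maximum-likelihood fitters, is illustrative only.

```python
import numpy as np
from scipy import stats

# Illustrative heavy-tailed daily returns (Student-t with 4 degrees of freedom)
returns = stats.t.rvs(df=4, scale=0.01, size=2500, random_state=2)

# Fit a normal and a Student-t distribution to the same sample
mu, sigma = stats.norm.fit(returns)
df_t, loc_t, scale_t = stats.t.fit(returns)

# The 0.1% quantile (a severe loss) lies much further out under the t fit:
# the choice of distributional family already shapes the tail estimate
print("normal 0.1% quantile:", stats.norm.ppf(0.001, mu, sigma))
print("t      0.1% quantile:", stats.t.ppf(0.001, df_t, loc_t, scale_t))
```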
More complex risk models project a chosen risk distribution onto a linear system of covariates that helps to articulate the target risk. Regression models such as these seek to describe the distribution of the target variable conditional on other available information; the functional form of the distribution is subsumed in an error term. Familiar examples include the following:
- Linear regression with normally distributed errors, widely used in asset pricing theory and many other applications.
- Probit and logit models, which parameterize the success probability in binomial distributions.
- Proportional hazard models from insurance and credit risk modeling, which project a series of gamma or Weibull distributions onto a linear system of explanatory factors.
Parameters are added corresponding to each of the factors included in the projection. The gain in power afforded by projection raises new questions about the adequacy of the system: Are the chosen factors sufficient? Unique? Structural? What is the joint distribution of the system parameters, and can that tell us anything about the choice of factors?
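As a hedged illustration of the second bullet above, the sketch below fits a logit model to simulated default data with statsmodels; the covariates and coefficients are invented. The printed covariance matrix is the joint (asymptotic) distribution of the estimated parameters just referred to.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative binary default indicator driven by two covariates
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))                        # e.g. leverage and coverage factors
true_beta = np.array([1.0, -0.5])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta - 1.0)))     # "true" default probabilities
y = rng.binomial(1, p)

# Logit: the binomial success probability is parameterized by the factors
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)         # intercept and factor loadings
print(model.cov_params())   # joint (asymptotic) distribution of the parameters
```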
The pinnacle of financial risk modeling is arguably achieved when parameters governing several variables (a yield curve, a forward curve, a volatility surface) may be estimated from several time series simultaneously, where functional forms are worked out from primitives about stochastic processes and arbitrage restrictions. Such models pass over from the physical probability measure P to the risk-neutral probability measure Q. In terms of the discussion above, such models may be seen as (possibly nonlinear) transformations of a small number of factors (or state variables) whose distributions are defined by the nature of the underlying stochastic process posited for the factors. When the number of time series is large relative to the number of parameters in the model, the parameters are overidentified, permitting highly efficient inference from the data. Such models are the ultimate in powerful description, offering the means to capture the dynamics of dozens of interest rates or forward contracts with a limited number of factors and parameters.
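To illustrate the dimension-reduction idea without committing to any particular no-arbitrage model, the sketch below fits a static Nelson-Siegel curve to ten made-up zero-coupon yields by least squares. It is only a cross-sectional caricature of the dynamic term-structure models described above, but it shows how four parameters can summarize an entire curve, leaving the system overidentified.

```python
import numpy as np
from scipy.optimize import least_squares

def nelson_siegel(params, tau):
    """Yield at maturity tau implied by level, slope, and curvature factors."""
    beta0, beta1, beta2, lam = params
    x = tau / lam
    loading1 = (1 - np.exp(-x)) / x
    loading2 = loading1 - np.exp(-x)
    return beta0 + beta1 * loading1 + beta2 * loading2

# Illustrative maturities (years) and observed zero-coupon yields
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
observed = np.array([0.010, 0.012, 0.015, 0.019, 0.022,
                     0.026, 0.029, 0.031, 0.034, 0.035])

# Ten yields, four parameters: the curve is overidentified and is
# summarized by a handful of numbers.
fit = least_squares(lambda p: nelson_siegel(p, maturities) - observed,
                    x0=[0.03, -0.02, 0.01, 2.0])
print(fit.x)
```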
Our hierarchy of risk models thus includes as elements probability distributions, parameters, and functional forms, which may be linear or nonlinear, theoretically motivated or ad hoc. Each element of the description may not conform to reality, which is to say that each element is subject to error. An incorrect choice of distribution or functional form constitutes specification error on the part of the analyst. Errors in parameters arise from estimation error, but also collaterally from specification errors. The collection of all such opportunities for error in risk modeling is what I will call model risk.
Time-Invariant Models and Crisis
The characteristics enumerated above do not exhaust all dimensions of model risk, however. Even if a model is correctly specified and parameterized inasmuch as it produces reliable forecasts for currently observed data, the possibility remains that the model may fail to produce reliable forecasts in the future.
Two assumptions are regularly made about time series as a point of departure for their statistical modeling:
- To assume that the joint distribution of observations in a time series depends not on their absolute position in the series but only on their relative positions is to assume that the time series is stationary.
- If sample moments (time averages) taken from a time series converge in probability to the moments of the data-generating process, then the time series is ergodic.
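In symbols, the two assumptions may be restated compactly as follows, with X_t denoting the observation at time t (a standard restatement, not a quotation from any particular source):

```latex
% Strict stationarity: joint distributions are invariant to shifts in time
(X_{t_1}, \dots, X_{t_k}) \overset{d}{=} (X_{t_1 + h}, \dots, X_{t_k + h})
\quad \text{for all } k, \; t_1, \dots, t_k, \; h .

% Ergodicity (for the mean): time averages converge to the ensemble moment
\frac{1}{T} \sum_{t=1}^{T} X_t \;\xrightarrow{\;p\;}\; \mathbb{E}[X_t]
\quad \text{as } T \to \infty .
```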