An Excellent Primer on Risk Management (and Its Shortcomings)

Writing in the Financial Times, John Kay gives the best layman’s explanation I have seen of the most widely used risk management approach in financial institutions, value at risk (VAR), and tells us what’s wrong with it.

In a nutshell, VAR assumes a normal distribution of events (aka a bell curve). The problem is that prices of financial instruments aren’t normally distributed. They exhibit what is known as excess kurtosis, or fat tails (in simple terms, extreme events are more likely than a normal distribution predicts). Another problem, one that Kay does not discuss (no doubt due to space constraints), is that the distribution around the mean isn’t symmetrical (stocks and bonds show negative skewness, while commodities show positive skewness). Finally, any model is only as good as the data loaded into it. Many instruments are new, and there isn’t enough history to be certain how they perform over the long haul.
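To make the fat-tails point concrete, here is a minimal sketch in Python (using numpy and scipy; the simulated returns are purely illustrative stand-ins, not real market data) comparing the skewness and excess kurtosis of a normal sample against a fat-tailed Student-t sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 250_000

# Illustrative only: simulated "daily returns", not real market data.
normal_r = rng.normal(0.0, 0.01, size=n)
# Student-t with 5 degrees of freedom, rescaled to ~1% volatility.
t_r = stats.t.rvs(df=5, size=n, random_state=rng) * 0.01 / np.sqrt(5 / 3)

for name, r in [("normal", normal_r), ("fat-tailed (Student-t)", t_r)]:
    print(f"{name:>22}: skew = {stats.skew(r):+.2f}, "
          f"excess kurtosis = {stats.kurtosis(r):+.2f}")
# The normal sample's excess kurtosis is ~0; the Student-t sample's is ~6,
# meaning extreme moves are far more common than the bell curve allows.
```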

What is more than a bit nervous-making is that these limitations of VAR are widely known, yet for many organizations it remains their primary risk management tool. VAR has been in wide use for over ten years; one would think someone would have come up with a better mousetrap by now.

From Kay:

Financial institutions today have sophisticated risk management systems. Their senior executives, shareholders and regulators take comfort that these mechanisms protect them in a complex world. But are they right to think that?

Portfolio theory, first set out 50 years ago in the doctoral dissertation that would win Harry Markowitz the Nobel Prize for economics, provides the basis of the most widely used template, which was developed by JPMorgan. Remarkably, the bank published the details in the 1990s and subsequently hived off a business, Riskmetrics, which promotes it still.


Suppose you invest in two asset classes – stocks and bonds. Then the overall risk on your portfolio depends on the riskiness of stocks, the riskiness of bonds and – crucially – on the relationship between the two. If different risks are inversely related – an umbrella shop makes money if it rains and an ice-cream stand makes money if it shines – then individually risky assets can be combined to create a portfolio with low overall risk. This textbook example is too good to be true. But as long as different risks are less than perfectly correlated, the process of aggregation reduces risk overall. These risk models attempt to quantify the benefits of this diversification magic.
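Kay’s umbrella/ice-cream example can be put in numbers. Here is a minimal sketch of the textbook two-asset portfolio volatility formula; the 20 per cent volatilities and the 50/50 weights are assumed for illustration, not taken from the article:

```python
import numpy as np

def portfolio_vol(w1, sigma1, sigma2, rho):
    """Volatility of a two-asset portfolio with weights (w1, 1 - w1)."""
    w2 = 1.0 - w1
    var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * rho * sigma1 * sigma2
    return np.sqrt(var)

# Two equally risky assets (20% volatility each), held 50/50.
for rho in (1.0, 0.5, 0.0, -1.0):
    print(f"correlation {rho:+.1f}: portfolio vol = "
          f"{portfolio_vol(0.5, 0.20, 0.20, rho):.1%}")
# At rho = +1 diversification does nothing (vol stays 20%); at rho = -1,
# the umbrella/ice-cream case, the risks cancel entirely (vol is 0%).
```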

When returns follow the pattern of classical statistical theory – they follow the normal distribution, the bell curve, which characterises so many natural and social phenomena – the whole problem can be summarised in what is called the variance/co-variance matrix. Fed with such data, a computer can assess any asset distribution and calculate, day by day, the distribution of expected overall gains and losses.
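For the curious, that calculation in miniature: the volatilities, correlation and weights below are assumed purely for illustration, and the point is only that portfolio variance is the quadratic form w′Σw over the variance/co-variance matrix:

```python
import numpy as np

# Assumed, illustrative annualised volatilities and correlation for a
# stock/bond pair (not estimates from any real data set).
vols = np.array([0.20, 0.07])
corr = np.array([[1.0, 0.2],
                 [0.2, 1.0]])
cov = np.outer(vols, vols) * corr   # the variance/co-variance matrix

w = np.array([0.6, 0.4])            # portfolio weights
port_var = w @ cov @ w              # sigma_p^2 = w' Sigma w
print(f"portfolio volatility: {np.sqrt(port_var):.1%}")
```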

The commonest way of describing this distribution is the value at risk – the size of loss that will be exceeded only with very low probability, such as one in 1,000. A chief executive can sleep soundly at night knowing that this value at risk is the largest likely loss in a tenure of 1,000 working days. If he is risk-averse, he can ask his people to tweak the model by setting an even higher hurdle, for the probability of unacceptable loss, say one in 10,000.
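Under the normality assumption, that one-in-1,000 value at risk is nothing more than a quantile of the bell curve. A minimal sketch follows; the $100m portfolio size and 1 per cent daily volatility are assumed figures for illustration, not from Kay’s piece:

```python
from scipy.stats import norm

# Assumed, illustrative inputs: $100m portfolio, 1% daily vol, zero mean.
portfolio = 100e6
daily_vol = 0.01
p = 1 / 1000                       # the "one in 1,000" exceedance probability

z = norm.ppf(p)                    # ~ -3.09 standard deviations
print(f"1-in-1,000 one-day VAR:  ${-z * daily_vol * portfolio:,.0f}")
# Tightening the hurdle to one in 10,000 moves the quantile to ~ -3.72 sigma:
z10k = norm.ppf(1 / 10_000)
print(f"1-in-10,000 one-day VAR: ${-z10k * daily_vol * portfolio:,.0f}")
```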

But models are only as good as the correspondence between the model and the world, and this is where problems begin. The assumption of normal distribution of returns seems to work well in times that are – well, normal. But what of abnormal times? Returns in financial markets show evidence of “fat tails” – there are more extreme events than the normal distribution allows. The crash of 1987, when markets dropped 20 per cent in a single day, just could not happen in standard statistical theory.
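It is worth seeing just how impossible. With a daily volatility of roughly 1 per cent (an assumed, ballpark figure), a 20 per cent one-day drop is a 20-standard-deviation event, and the bell curve assigns it a probability so small it should never occur in the life of the universe:

```python
from scipy.stats import norm

daily_vol = 0.01          # assumed ~1% daily volatility, for illustration
crash = -0.20             # a 1987-style one-day drop
sigmas = crash / daily_vol

p = norm.cdf(sigmas)      # probability of a move this bad or worse
print(f"a {crash:.0%} day is a {abs(sigmas):.0f}-sigma event; "
      f"normal-theory probability ~ {p:.1e}")
# Prints ~ 2.8e-89: under the normal distribution, such a day
# effectively cannot happen.
```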

The variance/co-variance matrices are based on historical experience, most of it recent. While there have been blips – the 1987 crash, or the boom and bust associated with the “new economy” bubble – that experience has mostly been favourable. What is true in placid waters may not hold true in storms. When the Asian crisis hit in 1997-98, risks that had seemed to be uncorrelated with each other suddenly materialised together.

Every sophisticated institution has its own models back-tested against the experience of that institution. But this illustrates that the analytical problem is fundamental. The data used to back-test is, of necessity, drawn from a period when the institution did not experience the problems the risk models are designed to anticipate. The one thing we know with certainty about the banks, insurance companies and hedge funds that compete for our funds is that they did not go bust in the period from which their historic data are drawn.

That, unfortunately, is the only thing we know with certainty. The risk models financial institutions use ensure that it is very unlikely that these institutions will fail for the reasons that are incorporated into these models. That does not mean they will not fail, only that if they fail it will be for different reasons. Round the corner are what Nassim Nicholas Taleb, the trader and author, calls “black swans”: events that no one predicted, or could have predicted. They are not in the models because they are not, and cannot be, in the data. That does not mean they are not going to happen.
