Regulatory Implications of the Failure of Quantitative Risk Management Approaches

A Bloomberg story today points out that the snowballing credit market crisis is an indictment of the use of quantitative measures of risk, particularly one of the longer established and still widely used approaches, value at risk. VAR uses historical trading patterns to estimate, at a given level of statistical confidence, the most a position should lose over a set horizon. Firms set risk thresholds for particular types of exposure, say 95% or 99% confidence that losses will not exceed a given amount over a certain time frame.
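To make the mechanics concrete, here is a minimal sketch of a one-day historical-simulation VaR. The confidence level, portfolio value, and simulated returns are purely illustrative assumptions, not any firm's actual parameters.

```python
import numpy as np

def historical_var(returns, confidence=0.99, portfolio_value=1_000_000):
    """One-day historical VaR: the loss threshold that past daily returns
    breached only (1 - confidence) of the time."""
    # Take the percentile of past returns corresponding to the worst
    # (1 - confidence) tail, e.g. the 1st percentile for 99% VaR.
    cutoff = np.percentile(returns, (1 - confidence) * 100)
    return -cutoff * portfolio_value

# Illustrative example: 1,000 days of simulated daily returns
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0, scale=0.01, size=1000)
print(f"99% one-day VaR: ${historical_var(daily_returns):,.0f}")
```

The number that comes out is only as good as the history that goes in, which is exactly the point of the criticisms below.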

While VAR is popular, no firm relies solely on VAR. But a big problem is that it is the measure of risk that regulators understand best and thus tends to dominate conversations between regulators and their charges. Some have even claimed that securities firms look to VAR more than they would otherwise due to the role it plays in industry oversight.

What is wrong with VAR? There are three big shortcomings. First, it relies on historical norms. When you have new instruments with limited trading history, like mezzanine CDOs that have never been tested in either a down market or a weak economy, the past is often not a reliable guide to future performance. Second, even instruments that appear to be the same over time, such as subprime loans, may not in fact be the same instrument in terms of economic performance and therefore trading risk. Subprime mortgages nominally have a ten-year history, but the product in its early years was issued in small volumes and consisted primarily of loans on manufactured housing. And as we now know all too well, vintage 2004 subprimes were vastly better credit risks than the 2007 edition.

The third problem with VAR is that the underlying pricing models assume that securities prices have a normal (as in bell curve) distribution. But that just isn’t true. Securities prices exhibit fat tails (extreme moves are more probable than the models assume) and are not symmetrically distributed (stock prices, for example, exhibit negative skewness, meaning the negative portion of the distribution extends further from the mean than the positive end). This says that VAR needs to be taken with a handful of salt at those extreme 95% to 99% confidence levels, since that is where the model is most likely to break down (the way of compensating presumably would be to set your risk parameters considerably higher than you would otherwise). But from what I can tell from my remote vantage point, most (all?) players have tended to assume that a very high degree of certainty is good enough.
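A quick sketch of the fat-tail point: if returns are actually drawn from a fat-tailed distribution (here a Student's t, purely as an illustrative assumption), a VaR fitted under the normal assumption understates the 99% loss threshold observed in the same data. The t distribution is symmetric, so this only illustrates fat tails, not the negative skewness mentioned above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate fat-tailed daily returns: Student's t with 5 degrees of freedom,
# rescaled to roughly 1% daily volatility (an illustrative assumption).
df = 5
returns = stats.t.rvs(df, size=100_000, random_state=rng) * 0.01 / np.sqrt(df / (df - 2))

# 99% one-day VaR under the normal assumption, fitted to the same data.
mu, sigma = returns.mean(), returns.std()
normal_var = -(mu + sigma * stats.norm.ppf(0.01))

# 99% one-day VaR read directly off the simulated fat-tailed distribution.
empirical_var = -np.percentile(returns, 1)

print(f"Normal-assumption 99% VaR: {normal_var:.2%}")   # smaller: tail risk understated
print(f"Empirical 99% VaR:         {empirical_var:.2%}")
print(f"Sample excess kurtosis: {stats.kurtosis(returns):.1f} (normal = 0)")
```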

Now from a practical perspective, since VAR is not the only risk metric in use, firms may compensate for the shortcomings listed above by other means (although recent results would say whatever tools they used were also flawed). But the most serious implication of this failing is that it shows the regulators were emperors with no clothes. And it appears that none of them plans to hire a tailor.

From Bloomberg:

The risk-taking model that emboldened Wall Street to trade with impunity is broken and everyone from Merrill Lynch & Co. Chief Executive Officer John Thain to Morgan Stanley Chief Financial Officer Colm Kelleher is coming to the realization that no algorithm or triple-A rating can substitute for old-fashioned due diligence.

Value at risk, the measure banks use to calculate the maximum their trades can lose each day, failed to detect the scope of the U.S. subprime mortgage market’s collapse as it triggered more than $130 billion of losses since June for the biggest securities firms led by Citigroup Inc., Merrill, Morgan Stanley and UBS AG.

The past six months have exposed the flaws of a financial measure based on historical prices that securities firms use idiosyncratically and that doesn’t anticipate every potential disaster, such as the mistaken credit ratings on defaulted subprime debt…

Executives at Merrill, Morgan Stanley and UBS took steps in the past six weeks to overhaul their risk-management groups after internal models failed to foresee the first annual decline in house prices since the Great Depression that eroded five years of trading gains.

Goldman Sachs Group Inc., the firm with the highest nominal VaR, was the sole investment bank to report record earnings in the fourth quarter, while New York-based Merrill, which had the second-lowest nominal VaR of the five biggest U.S. securities firms, posted a $9.8 billion loss for the last three months of 2007, the biggest in its 94-year history…

Hiring risk managers and giving them more power won’t alter the mistake that led to last year’s slump and that was Wall Street’s dependence on statistics to quantify risks, [Nassim] Taleb (a research professor at London Business School and former options trader) said.

“We have had dismal failures in quantitative finance in measuring these risks, yet people hire quants and hire risk managers simply to back up their desire to take these risks,” he said. “There are some probabilities that you cannot compute.”…

All the New York-based firms base their calculations at a confidence level of 95 percent, meaning they don’t expect one-day drops to exceed the reported amount more than 5 percent of the time.

The amounts differ in part because every firm uses their own methodology and data. For instance, Lehman uses four years of historical data to calculate VaR, with a higher weighting given to more recent time periods, while Morgan Stanley provides VaR calculations using both four years and one year of market data.
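(As an aside, here is a minimal sketch of what “a higher weighting given to more recent time periods” might look like in a historical-simulation VaR. The decay factor below is a common textbook choice, not a parameter disclosed by Lehman or Morgan Stanley, and the returns are simulated purely for illustration.)

```python
import numpy as np

def weighted_historical_var(returns, confidence=0.95, decay=0.94):
    """Historical VaR with exponentially decaying observation weights,
    so the most recent days count the most (decay factor is illustrative)."""
    returns = np.asarray(returns)                 # assumed ordered oldest to newest
    n = len(returns)
    weights = decay ** np.arange(n - 1, -1, -1)   # newest observation gets weight 1
    weights /= weights.sum()
    # Sort returns from worst to best, carry the weights along, and find
    # where the cumulative weight first reaches the (1 - confidence) tail.
    order = np.argsort(returns)
    cum_weight = np.cumsum(weights[order])
    idx = np.searchsorted(cum_weight, 1 - confidence)
    return -returns[order][idx]

rng = np.random.default_rng(2)
sample = rng.normal(0.0, 0.01, size=4 * 252)   # roughly four years of daily returns
print(f"95% weighted historical VaR: {weighted_historical_var(sample):.2%}")
```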

“If you compare what peoples’ values at risk are versus what their losses were in the third quarter or fourth quarter, the numbers are astounding,” said David Einhorn, president and co-founder of hedge fund Greenlight Capital LLC in New York. “There are a lot of things that probably the value-at-risk model said would have trivial losses 95 percent of the time or 99 percent of the time but are now having a huge loss.”…

All of the risk-measurement tools failed to prepare Merrill for the unforeseen declines on triple-A rated securities backed by subprime mortgages, according to the company’s third-quarter filing with the U.S. Securities and Exchange Commission. The firm’s writedowns related to the highest-rated portions of CDOs backed by pools of home loans, which plunged in value as defaults on the underlying mortgages soared.

“VaR, stress tests and other risk measures significantly underestimated the magnitude of actual loss from the unprecedented credit market environment,” Merrill’s filing said. “In the past, these AAA ABS CDO securities had never experienced a significant loss in value.”

Securities firms developed statistical models during the early 1990s to better quantify risks as the trading of bonds, stocks, currencies and derivatives increased. J.P. Morgan & Co., now part of JPMorgan Chase & Co., helped popularize the use of value at risk as the primary measurement tool in 1994 when it published its so-called RiskMetrics system.

Four years later, two events helped demonstrate the drawbacks in using statistical analysis based on historical market movements to measure risk. Russia’s bond default sent fixed-income markets into a tailspin and Long-Term Capital Management LP, the Greenwich, Connecticut-based hedge fund run by former Salomon Brothers trader John W. Meriwether, had to be bailed out after $4 billion of trading declines.

Russia’s default risk was underestimated because value-at-risk computations used by investment banks depended on market events of the preceding two to three years, when nothing similar had occurred, according to Wilson Ervin, who’s now chief risk officer at Zurich-based Credit Suisse Group, Switzerland’s second-biggest bank after UBS.

Long-Term Capital Management, which amplified its risk by relying on borrowed money for most of its trading bets, blew up in part because it didn’t anticipate that investor panic after the Russian default would cut the value of any risky debt, whether it was issued by a country, sold by a company, or backed by mortgages.

The riskiest Russian and Brazilian bonds owned by the fund plunged far more than the safer Russian and Brazilian bonds that it had bet against as a hedge, according to “When Genius Failed,” the book written by Roger Lowenstein.

“In a market stress event, some individual sectors that previously appeared unrelated do move together, and as a result, the organization could take losses on both of them or even on positions that were previously deemed to be a hedge,” said Ed Hida, the partner who runs the risk strategy and analytics services group at Deloitte & Touche LLP in New York.

The other risk tool commonly used by securities firms, known as stress testing or scenario analysis, also failed to prepare the industry for the plummeting value of AAA-rated securities that had previously been deemed the most creditworthy, he said.

“Stress tests are only as good or as predictive as the scenarios used and in many cases the scenarios that played out were much more severe than people anticipated,” Hida said. “One lesson learned is that these stress tests should be broader, should consider more scenarios.”
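(Another aside: in its simplest form, a stress test is just a revaluation of the book under a handful of hand-specified shocks, so the output can never be worse than the worst scenario someone thought to write down. The positions and scenario shocks below are invented for illustration only.)

```python
# Minimal scenario-analysis sketch: revalue a hypothetical book under
# hand-specified shocks. All numbers are invented for illustration.
positions = {                # market value by asset class, $ millions
    "AAA ABS CDO": 500,
    "Equities": 300,
    "Corporate bonds": 200,
}

scenarios = {                # assumed percentage price moves per asset class
    "Mild recession":    {"AAA ABS CDO": -0.02, "Equities": -0.10, "Corporate bonds": -0.03},
    "Housing downturn":  {"AAA ABS CDO": -0.10, "Equities": -0.20, "Corporate bonds": -0.08},
    "2007-style crisis": {"AAA ABS CDO": -0.50, "Equities": -0.25, "Corporate bonds": -0.15},
}

for name, shocks in scenarios.items():
    loss = -sum(positions[asset] * shocks[asset] for asset in positions)
    print(f"{name:>18}: estimated loss ${loss:,.0f}mm")
```

If nobody writes down the “2007-style crisis” row, the stress test never reports it, which is Hida's point about scenarios needing to be broader.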

Kelleher, who became Morgan Stanley’s CFO in October, explained the flaw in the firm’s stress testing in a Dec. 19 interview, the day the company reported its first unprofitable quarter.

“Our assumptions included what at the time was deemed to be a worst-case scenario,” he said. “History has proven that the worst-case scenario was not the worst case.”

At Credit Suisse, one of the firms that have so far skirted the worst subprime declines, Ervin said value at risk played no role in helping him navigate the market turmoil.

“Once you go into a crisis like this, I think risk is much more about sitting down with traders, and talking about very specific issues and scenarios,” he said. “VaR we know will kind of lag going into a crisis so we don’t really watch that as a crisis indicator.”

Still, Ervin said VaR provides a service if used every day because it can pick up fluctuations in the risk that the firm is taking in some distant region or an arcane product that might not otherwise be noticed.

Investment banks will continue to take unsafe risks as long as traders are rewarded for making profits, leaving shareholders, bondholders and sometimes taxpayers to shoulder the consequences, Taleb said.

Wall Street traders “make an annual bonus and get an annual review based on risks that don’t show up on an annual basis,” Taleb said. “You have all the incentive in the world to take these risks.”


10 comments

  1. Paul

    This gets to the crux of what has gone wrong in the financial markets. Quantitative risk management has failed because the use of VaR-type measures gives spurious accuracy to the statistics; the models both for pricing structured notes and for measuring their risk are still fundamentally based on the assumption of a normal distribution (see every quantitative finance course and the use of Ito’s lemma), almost forty years after Mandelbrot showed that this does not hold; and bank and hedge fund traders don’t really care since they are all long a performance call option and it is in their interest to increase the riskiness of their bets. All in all, this is a recipe for disaster. If engineers designed bridges or skyscrapers using such faulty “science” they would be disbarred and put in jail.

  2. doc holiday

    Although I know zip about much today, I do ponder the theoretical possibility that around the period of 9/11, as mortgage rates were dropped to a 40+ year low, the vast majority of people in The George Bush Ownership Society refinanced or were able to move up the housing food chain, and in both cases virtually everyone in America (possibly beyond) obtained a new mortgage with new terms.

    Many of these new-era mortgages are at the core of the subprime contamination linked to underwriter mis-management which IMHO is related to software based loan applications which often included no-doc loans and various hybridized manipulated variables which resulted in our current pool of chaos.

    Furthermore, these new-era mortgages were using risk variables associated with previous generation mortgage trends/history and data that was based on defaults linked to a previous era of greater responsibility, where grandpa and gramma had 30 year conventional loans that were rock solid for 30 years — versus a society of home-flippers that have attention deficit and casino/lotto fever.

    The point being, the new era loans did not have ANY default history associated with any verifiable patterns which could be modeled. The models used by rating agencies were based on inappropriate data not relevant to these new loans, thus we have failure rates which were not projected, because they were improperly modeled, because the data used was outdated.

    IMHO, this is why the rating agencies and anyone in this housing bubble ignored the reality of unrealistic increases in value and thus risks associated with LTV; it was obvious to anyone that the market was well beyond the froth stage which Greenspan alluded to around 2004/2005. It was insane of these people not to verify more data and to backtest models with cross-referenced checks; what are they thinking about? I’ll tell you one thing they didn’t understand as a collusive group and that is CRT (credit risk transfer)!

    Here is an interesting link: http://www.ecb.int/pub/pdf/other/riskmeasurementandsystemicrisk200704en.pdf

    RISK MEASUREMENT AND SYSTEMIC RISK

  3. Anonymous

    Very misleading.

    Var was developed and aggressively promoted by the industry first, and then sold to the regulators. You paint this as a problem of regulators being biased in their preference for such models. That’s inverted to the actual history.

  4. foesskewered

    Looking at the recruitment pages, you’d probably never know that any quants or models used by banks were broken; they are looking for experienced personnel who’ve worked with quant funds and models used by banks for the last 5 years. Apparently they are not worried that these personnel mostly have experience with “seriously flawed models”, which makes you wonder how much tinkering and eventual improvement they plan to make to the current models.

    Not sure about anyone else but do think that some of these pattern aficionados would make excellent conspiracy theorists or theologians, not that either profession has much connection apart from “mystical patterns”; gosh, offending too many people today.

  5. Yves Smith

    10:37 PM,

    With all due respect, you need to widen your frame of reference. I was working with one of the top derivatives players in the early 1990s, just as the big Wall Street firms were starting to get into the industry.

    The Fed, thanks largely to Greenspan’s strong libertarian bias, took a “let a thousand flowers bloom” approach. They let the industry develop its own risk management techniques and just watched (note that the regulators have a legitimate reason to be concerned about risk, since that in turn determines how much capital a regulated institution needs to hold).

    There was no attempt to intervene in these new products and businesses, and only a limited effort to understand (the math was way beyond the examiners).

    VAR was hardly the only approach/system being marketed aggressively in those days (and remember, in those days the banking industry was even more fragmented than now, and the Fed took less interest in Wall Street; that changed in the wake of the LTCM crisis).

    The two systems being marketed most aggressively to banks were JPM’s VAR and Bankers Trust’s RAROC (Risk Adjusted Return on Capital). RAROC is a better system for banks, since it gives parameters for product pricing (i.e., it was a system that could get everyone on the same page in terms of how to design and price products from the perspective of the bank’s true risk and capital costs). But when BT went down for reasons unrelated to RAROC, bye bye RAROC.

    So the regulators quite deliberately let there be a vacuum as far as risk management of complex new instruments was concerned, and set themselves in a very passive role. Nature abhors a vacuum, and JPM exploited that situation.

    But as I said earlier, no firm relies solely on VAR; they use multiple systems. Yet the Fed and OCC are woefully ignorant of those other approaches. It is a regulator’s responsibility to ride herd on its charges. Even if the industry did aggressively market VAR to them, there is no reason for them to have accepted it passively, and to fail to understand the other techniques in use.

  6. Anonymous

    This may be what holds our economy together:

    Coherence (philosophical gambling strategy)
    http://en.wikipedia.org/wiki/Coherence_%28philosophical_gambling_strategy%29

    In a thought experiment proposed by the Italian probabilist Bruno de Finetti in order to justify Bayesian probability, an array of wagers is coherent precisely if it does not expose the wagerer to certain loss regardless of the outcomes of events on which he is wagering, provided his opponent chooses judiciously.

  7. vlade

    Yves:
    “(the math was way beyond the examiners).”
    This touches a subject close to my heart – I think that (most of) the regulators are just not up to the task at the moment.
    There’s (or should I write there was?) no incentive for the best and brightest to even consider a move to a regulator instead of starting their own HF.
    As a result, you don’t get someone who can go toe-to-toe with even a third-rate bank on quantitative issues and look even remotely sensible.
    Or someone who’s able to point out why all those quantitative models aren’t reality and should not be used as a sole substitute for reality (as to do that you need to show that you understand them in the first place).
    BTW, I speak from experience here.

    Just look at Basel 2 – it’s so far behind the curve that in the current environment it’s doing more harm than good.

  8. Anonymous

    10:37/11:06

    With all due respect, I worked in the treasury/risk management area of a large integrated bank/dealer for almost 30 years, and saw the full evolution.

    RAROC is a capital attribution system that relied on the same mathematics as Var, which is a risk measurement system. RAROC used Var as a conceptual subset. RAROC superseded Var in scope but certainly did not displace it per se. They were complementary systems, not mutually exclusive. Bankers Trust as you know was the inventor of RAROC. But RAROC did not disappear with the demise of Bankers Trust. It’s had iterations, but is still used, just as Var is.

    I agree there were other modes of risk analysis. Traders more or less came to ignore Var except for risk limits. Traders like to develop their own risk measures, like simple measures of interest rate sensitivity, more than the cumbersome macro probabilistic measures inherent in Var. It allows them to do the probabilities intuitively. Var and RAROC were systems for risk managers more than traders.

    Var was eventually complemented with scenario analysis, which was supposed to take care of wild market events. The problem is that scenario analysis was a function of imagination among other things, including seeing the next unthinkable risk, such as a general meltdown in credit derivatives or bond insurers. Scenario analysis was a poor substitute for wisdom.

    The ‘operational risk’ lay in the degree of marketing of Var by the risk area to the senior executive committees and the Boards, at the expense of more insightful qualitative analysis. This was probably a function of the quantitative DNA in the risk management executive.

    Yes, regulators are not up to the task and they can be passive, and are probably underpaid based on the ‘blame put’ that the industry enjoys when the blow ups come along. But they were also sold a bill of goods. Don’t put all the blame there.

  9. Yves Smith

    Anon of 11:06,

    I owe you a wee apology and a clarification, but we also have a difference of philosophy.

    BT sold a packaged implementation of RAROC. My impression is that the use of RAROC dropped off considerably due to the obvious fact that BT would no longer be there to support the package. Some banks and private vendors stepped into the breach, but I am not clear on how consistent the approaches were post BT.

    BT also made noise that their approach was not the same as VAR. I had understood there were some differences in the risk metrics, but per your point, they may have been so subtle as to be effectively meaningless.

    However, I still beg to differ with you on the regulatory posture, and that may be because I grew up in the securities industry (less heavily regulated than banking) in the early 1980s, when regulators were not afraid to regulate.

    I am sure you appreciate how easy it is for organizations to blow themselves up with derivatives. To have taken such a passive approach when these firms have FDIC deposits (in a downside scenario, the public will pay for losses) is inconceivable.

    Put another way: it would be OK to keep the industry on a loose leash IF the authorities understood what they were doing. They didn’t and they still don’t. As the FT’s John Dizard has pointed out, terabytes have been written about how most risk management tools, from Black-Scholes onward, assume a normal risk distribution and therefore fail to make sufficient allowance for tail risks. That is pretty basic, and the idea that the industry pushed VAR isn’t an adequate excuse for ignorance.
