Yves here. Donald MacKenzie, author of An Engine, Not a Camera, which described how the development of models like Black-Scholes affected practice in the financial services industry, has written a paper on the role of the Gaussian copula function, which a Felix Salmon cover story in Wired depicted as central to the credit crisis. This is a crude summary, but here goes. When you mix a bunch of assets together and want to know how much you’ve reduced the risk of price movements, you need to know the correlation of their price movements. Similarly, if you are putting together a structure, like a collateralized loan obligation, and want to diversify the assets to reduce the odds of default, you need to know the default correlations. The Gaussian copula model used historical CDS spreads to develop default correlations. The model was used widely in the construction of collateralized loan obligations, which are actually a type of CDO, but the popular press has taken to using that term only for “asset backed securities CDOs,” or ABS CDOs. The problem was that Felix also suggested that the model was integral to the growth of all types of CDOs. Lisa Pollack, in summarizing the key bits of the paper, explains why this is an overstatement: the Gaussian copula formula wasn’t used in creating ABS CDOs (or even the underlying mortgage securities). There is another way to know that this formula was not used in ABS CDO pricing. The template for CDS on asset backed securities was published by ISDA in June 2005, and mortgage backed securities and ABS CDOs were well established by then. The only change you saw (and it was a biggie) that resulted from the development of CDS on mortgage securities was the rise of synthetic and heavily synthetic CDOs, and their structures copied those of old-fashioned CDOs made entirely of bonds.
Cathy O’Neil, a quant, looks at the key question raised by this paper: why do flawed models become widely accepted? I’d hazard that it has to do with distaste for complexity. Decision-makers really want simple heuristics. They don’t like ambiguity or having to weigh lots of tradeoffs, even though that is what they are paid to do. Yet oversimplifying complex situations is seen as legitimate; look, for instance, at how widely the flawed model Value at Risk has been adopted.
By Cathy O’Neil, a data scientist and member of the Occupy Wall Street Alternative Banking group. Cross posted from her blog, mathbabe
Recently a paper came out written by Donald MacKenzie and Taylor Spears. It’s about the role of the Gaussian Copula model in the credit crisis, and it’s partly in reaction to Felix Salmon’s article in Wired from February 2009. Both Felix Salmon and Lisa Pollack have written responses to this paper, and they’re quite entertaining and worth a read.
Without going into too many details about the underlying models, which I might do in another post, I wanted to spend some time appreciating this paper for bringing up two issues that I believe far too few people pay attention to:
- The politics of being a quant. The pressures on a quant inside an investment bank, a ratings agency, on a trading desk, or for that matter in a risk group are real and need to be understood.
- The narrative of blame. Who gets blamed when a model fails? For that matter, who is responsible for making sure it works at all?
In the paper, they discuss the concept of a “model dope,” which is a rhetorical device helping you imagine an idiot who ‘unthinkingly believes in the output of the model’. The paper explains that, as far as they could tell, there were no such actual people, that the quants they interviewed all knew the model was and is flawed and overly simplistic.
I completely believe this, and I think it wouldn’t surprise any quant who’s worked in the industry. Quants are the guys who get metaphorically paraded out in front of the bank, with their Ph.D. hanging out as a kind of badge, but when they get back to work are put back in the mines. It’s a trader’s world, or a salesman’s world, and nobody asks the quants for their nuanced opinion on the validity of basing billions of dollars in transactions on these models if the P&L looks good.
Let me say it this way: how many places employing quants to create risk or hedging models have their quants actually in charge of stuff? Very, very few is the answer. The quants are not in charge, they rarely have real power, and as soon as they produce something semi-functional and useful, they no longer own that thing – it’s been taken away from them and is owned by the real power brokers.
Which is not to say the guys in power don’t kind of understand the stuff – they do, they’re smart, but they’re not typically wedded to the idea of intellectual integrity. They typically understand it well enough to see how it can be gamed.
So I don’t think it was the quants that were promoting the wide use of the Gaussian Copula model. In the paper, they explain that it happened for essentially political reasons:
First, it was easy to talk about, since an entire correlation matrix of default was boiled down to one number, “base correlation”:
“If traders in one bank … had to ‘talk using a model’ to traders in a different bank that used a different copula, the Gaussian copula was the most convenient Esperanto: the common denominator that made communication easy.”
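To make concrete what “boiling an entire default-correlation matrix down to one number” means, here is a minimal sketch of the standard one-factor Gaussian copula formulation. This is my own illustration, not code from the paper; the function name and parameter values are hypothetical. In this setup a single parameter rho plays the role of the one correlation number traders could quote to each other.

```python
import numpy as np
from scipy.stats import norm

def simulate_defaults(n_names, p_default, rho, n_sims, seed=0):
    """Simulate correlated defaults under a one-factor Gaussian copula.

    Name i defaults when its latent variable
        X_i = sqrt(rho) * M + sqrt(1 - rho) * Z_i
    falls below the threshold Phi^{-1}(p_default), where M is a common
    market factor shared by all names and Z_i is idiosyncratic noise.
    A full correlation matrix is replaced by the single number rho.
    """
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n_sims, 1))        # common factor, one per scenario
    Z = rng.standard_normal((n_sims, n_names))  # idiosyncratic factors
    X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z
    threshold = norm.ppf(p_default)
    return X < threshold                        # boolean default indicators

# Illustrative parameters (hypothetical): 100 names, 2% default probability.
defaults = simulate_defaults(n_names=100, p_default=0.02, rho=0.3, n_sims=10_000)
portfolio_losses = defaults.sum(axis=1)  # number of defaults per scenario
print(portfolio_losses.mean())           # close to the unconditional mean of 2
print(portfolio_losses.max())            # far above 2: defaults cluster
```

With rho = 0 defaults are independent; as rho approaches 1 the whole portfolio defaults or survives nearly in lockstep. That clustering in the tail is exactly what the single “base correlation” number was supposed to summarize, and why it was such a convenient Esperanto.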
Next, it allowed traders to book P&L on the same day they made a trade:
“The most important role of a correlation model, another quant told the first author in January 2007, is as ‘a device for being able to book P&L’.”
Next, once it was widely used, it had staying power just because it was difficult to explain something else, not to mention difficult to admit the current model’s flaws:
“Here, the fact that the Gaussian copula base correlation model was a market standard provided a considerable incentive to keep using it, because it avoided having to persuade accountants and auditors within the bank and auditors outside it of the virtues of a different approach.”
Putting that stuff together, we can see that the mere, lowly quant’s objection – if there was one – that the model sucked was the least of the considerations of the powers-that-were:
“From the viewpoint of both communication and remuneration, therefore, the Gaussian copula was hard to discard.”
As to the question of blame, that’s also all about power. Just because the objections of quants were likely ignored doesn’t mean we can’t blame them after the fact – that’s another useful thing about quants, since they even admit the models were overly simplistic. Easy fall guys.
By the way, I’m not saying that quants rebelled against the misuse of their models, that they tried their best to warn the public of the known flaws of the Gaussian Copula or any other model for that matter. In fact I don’t know of many quants who did stand up to these assholes, partly because they were paid really well not to, and partly because they were not the alpha males in the place.
I’m just trying to point out that blame can get kind of murky. If a quant comes up with a model and says up front, hey this is just a sketch of something, it’s not totally realistic, but it’s better than nothing, and then the investment bank ignored the quant’s misgivings and bets the house on the model, who is responsible for the resulting risk?
In other words, I’d love the quants to grow some balls, but it’s going to take a major revolution in the power structure for that to be enough.
The two issues of politics and blame raise for me a larger question in reference to modeling. Namely, why and how do models develop?
[This is a cultural question, and separate from the standard (and interesting) questions you usually hear people ask of a model:
- What does the model claim to do?
- How well does it work with real data? and
- If it is widely employed, how does the model affect the market itself?]
- To simplify a businessman’s day. Instead of reading out results from 5 trading desks, we want to dumb it down to one single number, so we employ a modeler to come in and do their best to summarize with one P&L number and one risk number. In other words, it’s the modeler’s job to turn a report into a sound bite. Of course the problem with that model genesis is that it doesn’t necessarily make sense to combine a bunch of numbers into one number. Sometimes the world is actually complex and needs to be understood with a nuanced view. Sometimes a sound bite isn’t enough.
- To sound incredibly smart – in other words, pure spin doctoring. I encounter this more in tech than in finance, where there are enough model-savvy people that you can’t be quite as blithe about hiding bullshit in a model. But this is real in finance too, and I think is used to confuse regulators all the time.
- To dissect, or attempt to dissect, various kinds of ‘unintentional risk’ from ‘intentional positions’. This is the single most dangerous kind of model, because on paper it can look so good, and can seem to work for so long. Credit default swaps can be thought of as a manifestation of this goal – an attempt to separate default risk from holding-a-bond risk. The problem we face is that our models are never really that good, or even testable, and there are unintended consequences of these new-fangled contracts that sometimes cause catastrophic events.
- Of course, in quant shops like D.E. Shaw or RenTech or Citadel, there are also quants who try to predict the market or trade superfast on currencies, which is different from the stuff I’ve been talking about, which mostly deals with hedging and risk, with different kinds of corresponding risks.