I hate beating up on Gillian Tett, because even a writer as clever as she is is ultimately no better than her sources, and she seems to be spending too much time with the wrong sort of technocrats.
Her latest piece correctly decries the fact that no one has the foggiest idea of what might have happened if Greece had defaulted (note that we are likely to revisit this issue in the not-too-distant future). But she makes the mistake of assuming the problem could have been solved (in the mathematical sense, that the outcome could have been predicted with some precision) by having better data. That is a considerable and unwarranted logical leap:
Today banks and other financial institutions are filing far more detailed reports on those repo and credit derivatives trades, and regulators are exchanging that information between themselves. Meanwhile, in Washington a new body – the Office of Financial Research (OFR) – has been established to monitor those data flows and in July US regulators will take another important step forward when they start receiving detailed, timely trading data from hedge funds, for the first time.
But there is a catch: although these reports are now flooding in, what is still critically unclear is whether the regulators – or anybody else – has the resources and incentives to use that data properly. The bitter irony is that this information tsunami is hitting just as institutions such as the Securities and Exchange Commission are seeing their resources squeezed; getting the all-important brain power – or the computers – to crunch those numbers is getting harder by the day.
That means that important data – on Greece, or anything else – could end up languishing in dark corners of cyberspace. That is a profound pity in every sense. After all, if the data could be properly deployed, it might do wonders to show how the modern global financial system really works (or not, in the eurozone.) Conversely, if data ends up partly unused, that not just creates a pointless cost for banks and asset managers – but could also expose government agencies to future political and legal risk, if it ever emerges in a future crisis that data had been ignored.
Since important information about the last crisis has been given short shrift, it’s a given that more data won’t necessarily yield a commensurate increase in understanding. We’ve lamented how, for instance, a critically important BIS paper debunking the role of the saving glut in the crisis and the use of the “natural” rate of interest in economic models has largely been ignored. Similarly, from what we can tell, there is perilously little understanding of how hybrid and synthetic CDOs turned a US housing bubble that would have died a natural death in 2005 into a global financial crisis.
And Tett’s focus on “data”, no doubt reflecting the preoccupation of the officialdom, is a big tell. Economists routinely exhibit “drunk under the streetlight” syndrome: they prize analyzing big datasets, aren’t good at developing them (this was a huge beef of Nobel Prize winner Wassily Leontief), and are pretty bad at doing qualitative research (they’d rather do thought experiments, and even when they undertake survey research, the resulting studies show strong hallmarks of a failure to properly develop and validate the survey instrument).
Now, to the prospects for performing diagnostics and preventing the next crisis. On the one hand, it is a disgrace that the authorities didn’t have a good grip on who was on the wrong side of Greek credit default swaps. The US banks were thought to be meaningfully exposed; that’s one reason Treasury Secretary Geithner was unduly interested in this situation (recall how Geithner intervened, in what was seen as a decisive fashion, against an Irish effort to haircut €30 billion of unguaranteed bonds?). This is inexcusable, particularly in the wake of the financial crisis. We’ve harped on the fact that the likely reason Bear was bailed out was its credit default swap exposures. At the time of the Bear failure, Lehman, UBS, and Merrill were seen as next. The authorities went into Mission Accomplished mode rather than mounting a full-bore, international effort to get to the bottom of CDS exposures. And the Greek affair suggests they’ve continued to sit on their hands.
This matters because, as Lisa Pollack illustrated in a neat little post, supposedly hedged positions across counterparties can quickly become unhedged if one counterparty fails. So a basic data gathering exercise would at least help in identifying who is particularly active and has high exposures to specific counterparties and products.
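Pollack’s point can be sketched with a toy example (entirely hypothetical numbers, not actual CDS data): a dealer whose book nets to zero across counterparties acquires a large open position the moment one counterparty fails.

```python
# Toy illustration (hypothetical numbers): a CDS book that is "hedged"
# on a net basis becomes a large open position when one counterparty
# fails and its trades become worthless.

# Positive = protection bought, negative = protection sold (notional, $mm)
book = {
    "Counterparty A": +500,   # bought protection from A
    "Counterparty B": -300,   # sold protection to B
    "Counterparty C": -200,   # sold protection to C
}

net_before = sum(book.values())
print(f"Net exposure before any failure: {net_before}")  # 0, looks hedged

# Counterparty A fails: the protection bought from it evaporates,
# but the protection sold to B and C must still be honored.
failed = "Counterparty A"
net_after = sum(v for name, v in book.items() if name != failed)
print(f"Net exposure after {failed} fails: {net_after}")  # -500, unhedged
```

The arithmetic is trivial, which is the point: even a basic who-owes-whom data gathering exercise would reveal which firms are one counterparty failure away from a large unhedged position.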
But this is of less help with big financial firms than you might think. While Lehman was correctly seen as undercapitalized well in advance, pretty much no one foresaw Bear’s failure; it went down in a mere ten days. Confidence is a fragile thing. Similarly, while some positions are not very liquid or all that easy to hedge (think of our favorite bete noire, second liens on US homes), in general big financial firms have dynamic balance sheets. With more extensive reporting, could regulators have seen and intervened in MF Global’s betting the farm on short-dated Italian government debt? Even if they had perceived the risk, Corzine would have argued that the trade would work out (and it did, even though the firm failed because it levered the position too much).
And think what would have happened if the regulators had gone in. In our current overly permissive regime, intervening to shut down MF Global would have been seen as the impairment or destruction of a profitable business. No one would know the counterfactual: that the firm would not only die but also lose boatloads of customer money. Or a swarm of regulators could have precipitated a customer run, again taking the firm down. In the current environment, where executives have good access to the media and highly paid PR professionals to present their aggrieved messaging, it’s going to take some pretty tenacious and articulate regulators to swat back their arguments.
While it is hard to object to having better data, and we desperately need better information in some key policy areas (the lack of good information in the housing/mortgage arena and in student debt is appalling), more data is unlikely to get us as far in the financial markets sphere as Tett hopes. The problem, as we and others have discussed before, is that the financial system is tightly coupled. That means that processes move rapidly from one step to the next, faster than people can intervene. The flash crash is a recent example.
There are many reasons why tightly coupled systems are really difficult to model. They tend to undergo state changes rapidly, from ordered to chaotic, and you can’t see the thresholds in advance. And financial systems also have the nasty tendency for products that were uncorrelated or not strongly correlated to move together as investors dump risky positions and flee for the safest havens. So exposures that might not seem all that problematic can become so when the system comes under stress (who in the fall of 2007 would have thought that auction rate securities would blow up, for instance, or, more important, that in the heat of the crisis even Treasuries would at times not be accepted as repo collateral?).
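The correlation point can be illustrated with a toy simulation (made-up parameters, not market data): two assets driven mostly by idiosyncratic noise in calm markets, but by a shared flight-to-safety factor under stress.

```python
# Toy sketch (hypothetical parameters): assets that look nearly
# uncorrelated in calm markets become highly correlated in a stress
# regime, when a shared "dump everything risky" factor dominates.
import math
import random

random.seed(0)

def correlation(xs, ys):
    """Sample Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

def simulate(common_loading, n=20000):
    """Two assets = shared factor * loading + idiosyncratic noise.
    Theoretical correlation is L^2 / (L^2 + 1) for loading L."""
    a, b = [], []
    for _ in range(n):
        shared = random.gauss(0, 1)
        a.append(common_loading * shared + random.gauss(0, 1))
        b.append(common_loading * shared + random.gauss(0, 1))
    return correlation(a, b)

# Calm regime: loading 0.3 -> theory ~0.08. Stress: loading 3.0 -> ~0.90.
print("calm regime correlation:  ", round(simulate(0.3), 2))
print("stress regime correlation:", round(simulate(3.0), 2))
```

A risk model calibrated on the calm-regime sample would treat the two positions as near-independent diversifiers, which is exactly when the stress regime does the most damage.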
And this problem is made worse by the fact that economists have long been allergic to the sorts of mathematics and modeling approaches best suited to this type of analysis, namely systems dynamics and chaos theory. I discussed both of these aesthetic biases at length in ECONNED, but the very short version is that, following Paul Samuelson, economists have wanted to put the discipline on a “scientific” footing, and that meant embracing the “ergodic” axiom. The ergodic assumption means no path dependence: if you get a good enough sample of past behavior, you can predict future behavior. Warning: a lot of natural systems aren’t ergodic. If you think that is a good foundation for modeling financial markets, I have a bridge I’d like to sell you.
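A standard toy illustration of non-ergodicity (not from the original post; the bet parameters are made up) is a multiplicative gamble whose ensemble average looks attractive while almost every individual path is ruined — sample statistics across outcomes badly mispredict any one path’s future.

```python
# Toy non-ergodicity sketch (hypothetical bet): each round, wealth
# either grows 50% or shrinks 40%, with equal probability.
#   Ensemble average per round: 0.5*1.5 + 0.5*0.6 = 1.05 (looks good).
#   Time-average growth factor: sqrt(1.5 * 0.6) ~= 0.949 < 1,
# so almost every individual path decays to zero. Averaging over past
# outcomes tells you nothing useful about the path you are actually on.
import random

random.seed(1)

def run_path(rounds=1000):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

paths = [run_path() for _ in range(1000)]
ruined = sum(1 for w in paths if w < 0.01)
print(f"paths effectively ruined: {ruined}/1000")
```

In an ergodic world the 5% ensemble edge would be bankable; in this multiplicative (path-dependent) world it is a near-certain route to ruin, which is the flavor of dynamics the ergodic axiom assumes away.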
If we want to reduce the frequency and severity of financial crises, it isn’t a data problem. It’s a systems design problem. As Richard Bookstaber wrote in his book A Demon of Our Own Design, published before the crisis, the most important thing to do to reduce risk in a tightly coupled system is to reduce the tight coupling. Measures that merely try to manage risk within a tightly coupled system often wind up increasing it, because the tight coupling means intervention is likely to be destabilizing. And we all know what the big culprits are. It does not take better data capture to figure this out. Tett flagged two in her piece. The obvious one is credit default swaps. They serve no social value and are inherently underpriced insurance (adequate CDS premiums for jump-to-default risk would render the product uneconomic to buyers). Underpriced insurance, given enough time, blows up guarantors who take on too much exposure (AIG, the monolines, and Eurobanks like UBS, which had near-death experiences as de facto guarantors through holding synthetic/hybrid AAA CDO tranches, are all proof). But has anyone in the officialdom taken the remotest interest in addressing a blindingly obvious problem? No.
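A back-of-the-envelope sketch (hypothetical numbers, not a pricing model) of the underpricing claim: a premium that only covers the actuarial expected loss ignores the capital a guarantor must hold against a sudden jump-to-default loss, and charging for that capital makes the all-in premium a multiple of the actuarial cost.

```python
# Hypothetical arithmetic: why adequately priced jump-to-default
# protection would be uneconomic for buyers.
notional = 100.0
p_jump = 0.02        # assumed annual probability of a sudden default
recovery = 0.40      # assumed recovery on the reference credit

loss_given_default = (1 - recovery) * notional        # 60.0
actuarial_premium = p_jump * loss_given_default       # 1.2 per year

# A default arrives all at once rather than as a gradual spread widening
# the seller can hedge, so a prudent guarantor must hold capital against
# the full jump loss and charge its cost of capital on that buffer.
capital_held = loss_given_default                     # full jump loss
cost_of_capital = 0.15                                # assumed hurdle rate
full_premium = actuarial_premium + cost_of_capital * capital_held

print(f"actuarial premium:      {actuarial_premium:.1f} per year")
print(f"capital-inclusive cost: {full_premium:.1f} per year")
```

Under these assumptions the honest premium is roughly 10% of notional per year, far above what buyers of protection on most reference credits would pay — which is the sense in which the product, fairly priced, stops being a product.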
Similarly, Tett mentions a Fitch study that found banks were back to their old habit of using structured credit products as repo collateral. We’ve also discussed how problematic that is; the BIS flagged it more than a decade ago. And despite the fact that it should be bloomin’ obvious that using anything other than pristine collateral for repo is a source of systemic risk, since it was a cause of trouble before, the officialdom is loath to intervene. They’ve bought the “scarcity of good collateral” meme pushed by the banks. While narrowly that is correct, they haven’t sought to question why so much collateral is really necessary. The big driver pre-crisis, as we pointed out, was the explosion in derivatives (as values fluctuate, counterparties have to post collateral or have their positions closed out). The growth in those dubious CDS was a major contributor. Moreover (and this comes from someone who has worked with derivatives firms), many, if not most, over-the-counter derivatives (and certainly the most profitable) are used to manipulate accounting and for regulatory arbitrage. The overwhelming majority of socially valuable uses of derivatives can be accomplished via products that can be traded on exchanges, but regulators have been unwilling to push back on the industry’s imperial right to profit, no matter how much cost it might wind up creating for the rest of us. (We admittedly have additional drivers post-crisis, such as QE eating up Treasuries, but there is perilously little critical examination of the demand side of the equation.)
So the answer does not lie in better data. It lies in the willingness of the authorities to stare down the financial services industry. And the next financial crisis is likely to be a necessary, but perhaps even then not sufficient, condition for that change in attitude.