As it turns out, information is not perfect, volatility does not define risk, markets are not efficient, the individual is adaptable.
– Dr. Michael Burry, UCLA Economics Commencement Speech, June 20, 2012
As markets seized up in 2008, when counterparty risk became apparent and bankers stopped lending to one another, Treasury knew it had a problem. But when it tried to find the source of the problem – rather than the symptom – it found a black hole, a dearth of data. Former Assistant Treasury Secretary Phillip Swagel describes this unwelcome surprise in a 2009 paper, The Financial Crisis: An Inside View:
Two main policy proposals aimed at calming the financial markets emerged from the August episode: the so-called Master Liquidity Enhancement Conduit (MLEC), or “Super SIV,” a common vehicle in which banks would hold their illiquid assets, and a mortgage information database that would provide individual loan-level information on the quality of underwriting and subsequent performance of mortgages, and thereby facilitate analysis of complex MBSs and their derivatives. Neither of these efforts came to fruition, although the American Securitization Forum (ASF) independently began to work on a mortgage database under the rubric of their “Project Restart.”
Though components of the more expensive and higher-risk Super SIV arguably manifested in the various TALF programs – something not widely known when the paper was published – the much less risky and less expensive Project Restart never even got its first start; we continue to fly blind. Later, Swagel concludes of the idea: “What was surprising was that this database did not exist already – that investors in MBSs had not demanded the information from the beginning.”
I’ll add that what’s even more surprising is that neither government nor investors are willing to enforce even the basic disclosure laws – a failure that perpetuates the lack of transparency even now, years later, while these issues continue to decimate the underbelly of the economy.
Take data from the National Association of Realtors (OK, I can’t resist quoting the old Borscht Belt shtick: take it, please). This data, used regularly by government and pundits, consists of summary data – not primary data – retrieved from a loose collection of entirely independent real-estate listing services scattered throughout the country. Treating data from a conflicted group, data that is continually revised downward, as dispositive is akin to scanning Craigslist hook-up ads to determine upcoming household formation and project the birth rate.
Nevertheless, websites like Calculated Risk faithfully plot it in simplistic red and blue lines, then “prove” it by showing a correlation to other real-estate data sets, like those that scan online real-estate want-ads. Since the latter is often electronically generated from the former, it’s not surprising that they closely correlate; rather, it’s surprising that they don’t match precisely. Unless a person is trying to gauge the completeness of online real-estate ads against MLS data, the correlation is entirely meaningless. As Einstein noted when revising Occam’s Razor, “Everything should be kept as simple as possible, but no simpler.”
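To make the point concrete, here is a toy illustration with made-up numbers (not real NAR or want-ad figures): when one series is largely generated from another, the two correlate almost perfectly by construction, so the correlation says nothing about whether either series measures reality.

```python
# Toy illustration: a series and a near-copy of it correlate almost
# perfectly, regardless of whether either one is accurate.
# (Hypothetical numbers -- not real NAR or want-ad data.)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

mls_listings = [410, 395, 430, 388, 402, 376]        # the "primary" series
online_ads   = [x * 0.97 + 5 for x in mls_listings]  # mostly derived from it

r = pearson(mls_listings, online_ads)
print(round(r, 4))  # essentially 1.0 -- correlation by construction
```

Since the second series is just a linear transform of the first, the correlation is exactly 1 no matter what either series has to do with actual home sales.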
I’ve been working on a database of loan-level information aggregated from investor reports for a long time now. It’s a massive, complicated, expensive, and tedious project. To put it into perspective, so far I have 11.74 million mainly bubble-era loans covering 378.3 million payment records. It covers every state, though it focuses more heavily on states that had heavier Alt-A and subprime exposure, and it is loaded onto what is essentially a supercomputer that would be unaffordable except that Amazon rents it. When finished, I’ll cross-reference it against property records to see whether the information relayed to investors on delinquencies, losses, and the generalized state of the trusts matches what is being filed in courthouse records.
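A minimal sketch of the kind of cross-reference I have in mind, assuming hypothetical table names, columns, and loan IDs (the real data set is obviously far larger and messier):

```python
import sqlite3

# Sketch: compare losses reported to investors against courthouse filings.
# Table names, columns, and loan IDs are all hypothetical stand-ins.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE investor_reports  (loan_id TEXT, reported_loss REAL);
CREATE TABLE courthouse_records(loan_id TEXT, recorded_loss REAL);
INSERT INTO investor_reports   VALUES ('L-001', 52000), ('L-002', 0),     ('L-003', 87500);
INSERT INTO courthouse_records VALUES ('L-001', 52000), ('L-002', 31000), ('L-003', 87500);
""")

# Loans where what investors were told does not match the courthouse record.
mismatches = con.execute("""
    SELECT i.loan_id, i.reported_loss, c.recorded_loss
    FROM investor_reports i
    JOIN courthouse_records c USING (loan_id)
    WHERE i.reported_loss != c.recorded_loss
""").fetchall()

print(mismatches)  # [('L-002', 0.0, 31000.0)]
```

The interesting rows are exactly the mismatches: trusts whose reported condition diverges from what the courthouse shows.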
Substantive surprises pop out when one studies detailed primary data. For example, while trying to figure out how long until the banks start to pound people to the pavement again, I found an interesting tidbit: JP Morgan and Bank of America have been writing off their subprime loans at a furious clip lately. In March and April, using ZIP codes that begin with 334 – my own beloved and severely impaired West Palm Beach, FL backyard – I found that the top MBS issuers writing off loans were, in this order, Bear Stearns, J.P. Morgan Mortgage Acceptance Corp., Merrill Lynch, First Franklin, and Lehman LXS. The first two are, of course, JP Morgan, and the next two are, of course, Bank of America. JPM hides its Washington Mutual loans; otherwise I’d expect Lehman’s spot to be taken by WaMu.
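The query itself is conceptually simple. A sketch of it, with made-up records standing in for the real payment tables:

```python
from collections import Counter

# Sketch of the ZIP-prefix query: count written-off loans by MBS issuer
# within ZIP codes starting with '334'. These records are made-up
# stand-ins; the real tables hold millions of payment rows.
records = [
    {"zip": "33401", "issuer": "Bear Stearns",   "written_off": True},
    {"zip": "33407", "issuer": "Lehman LXS",     "written_off": True},
    {"zip": "33411", "issuer": "Bear Stearns",   "written_off": True},
    {"zip": "90210", "issuer": "Bear Stearns",   "written_off": True},   # outside 334xx
    {"zip": "33405", "issuer": "First Franklin", "written_off": False},  # still active
]

writeoffs = Counter(
    r["issuer"]
    for r in records
    if r["zip"].startswith("334") and r["written_off"]
)

print(writeoffs.most_common())  # [('Bear Stearns', 2), ('Lehman LXS', 1)]
```

Ranking issuers by count, as `most_common()` does here, is all it takes to surface the ordering described above once the underlying loan-level data exists.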
Writing off bad debt is usually triggered by short sales, principal reductions, or finished foreclosures. Since the pace of the write-offs exceeds the number of REOs – which, as the want-ads show, is relatively low – it’s clear that the banks have been dumping debt, which is a positive step toward reaching a housing floor. I was so impressed – almost shocked at the prospect of writing something positive about either bank (something I’ll admit has never happened) – that I called JPM to ask what they were doing. After bouncing me around among a few people, they failed to return my calls; maybe their PR people have forgotten how to handle a positive inquiry after all these years.
I’ve been asked to publish my data – for free, of course – but I have a family and employees to feed, plus a tech powerhouse to maintain; if it were affordable, I would. At least I’ll continue writing about it, albeit with the caveat that deriving patterns from massive and complex data sets can sometimes lead to substantive findings but at other times is no more useful for divining economic insights than a Rorschach test.
I was able to cull my data because of rules and practices promulgated during the Bush era. Few people thought that business and government could get much worse, but – as we watch JPM openly flout requirements to open the WaMu data depository to the public, and leave the legacy Fannie Mae and Freddie Mac data private – this seems like the latest Obama letdown. We can argue until we’re red and blue in the face about the meaning of various so-called economic indicators, and we can transfer those tea leaves to digital paper in easy-to-read graphs and charts. But the fact is that even my own enormous database is still missing an enormous number of loans, and the companies that do have access to the information are highly conflicted by industry capture.
Maybe we’ve reached a bottom, like the cheerleaders relentlessly proclaim; or maybe we’ll reach one in 2013, like the economists at Fannie Mae predict; or maybe we’ve reached a place where prices will remain stagnant – neither going up nor down much – as Prof. Robert Shiller predicts. Or maybe there are regional bubbles driven by microeconomic trends, like the rise in Miami condos driven by South American investors or the bump in Arizona real estate driven by cash-rich, cold Canadians.
I try to be modest but believe I’ve reached a place where I can call myself a leading expert in housing data, yet all I can say definitively is that if the banks go into a tailspin again, the current people at Treasury won’t be substantively better off than their predecessors – and something is profoundly wrong with that. It’s time to restart Project Restart: to standardize this information and make it widely available. I’m unfortunately not worried about that putting me out of business, because I know it will never happen.