"The myth of the riskometer"

Jon Danielsson has provided a great post at VoxEU (hat tip Independent Accountant) that calls into question the very premise of modern risk management, namely, that risk can be accurately quantified.

This notion goes far further than the discussion of Value at Risk last weekend, keying off an article in the New York Times by Joe Nocera. That piece, and much of the discussion around risk management, often assumes that the choice is between relying on models and relying on judgement.

But the point implicit in Danielsson’s discussion is fundamental. If we really cannot measure risk with any precision, that renders the judgement versus models question moot. (“Judgement” is experience-based, usually some form of pattern recognition, and the person making the decision may not always recognize how he came to his conclusion. Danielsson also explains that historical comparisons have limited value.)

But where does that leave us, a reader might wail. Well, is it better to have undue faith in bad diagnostics, or to go forth knowing you really don’t and cannot know very much? Unfortunately, the popularity of psychics suggests that people are desperate for some, any method for reducing the uncertainty over what the future may hold (a financial journalist told me that psychics are very popular among Wall Street professionals).

False confidence can and does lead to undue risk taking. Danielsson advocates moving away from regulatory approaches that seek to calibrate risk levels, and instead rely on simpler methods, like leverage.

From VoxEU:

Much of today’s financial regulation assumes that risk can be accurately measured – that financial engineers, like civil engineers, can design safe products with sophisticated maths informed by historical estimates. But, as the crisis has shown, the laws of finance react to financial engineers’ creations, rendering risk calculations invalid. Regulators should rely on simpler methods.

There is a widely held belief that financial risk is easily measured – that we can stick some sort of riskometer deep into the bowels of the financial system and get an accurate measurement of the risk of complex financial instruments. Such misguided belief in this riskometer played a key role in getting the financial system into the mess it is in.
Unfortunately, the lessons have not been learned. Risk sensitivity is expected to play a key role both in the future regulatory system and new areas such as executive compensation.

Yves here. Danielsson exaggerates a bit. I am not sure anyone would say risk can be measured “easily” but there is a widespread premise that it can be estimated with reasonable accuracy. Back to the piece:

Origins of the myth

Where does this belief come from? Perhaps the riskometer is incredibly clever – after all, it is designed by some of the smartest people around, using incredibly sophisticated mathematics.

Perhaps this belief also comes from what we know about physics. By understanding the laws of nature, engineers are able to create the most amazing things. If we can leverage the laws of nature into an Airbus 380, we surely must be able to leverage the laws of finance into a CDO.

This is false. The laws of finance are not the same as the laws of nature… nature does not generally react to the engineer.

The problem is endogenous risk

In physics, complexity is a virtue. It enables us to create supercomputers and iPods. In finance, complexity used to be a virtue. The more complex the instruments are, the more opaque they are, and the more money you make. So long as the underlying risk assumptions are correct, the complex product is sound. In finance, complexity has become a vice.

We can create the most sophisticated financial models, but immediately when they are put to use, the financial system changes. Outcomes in the financial system aggregate intelligent human behaviour. Therefore attempting to forecast prices or risk using past observations is generally impossible. This is what Hyun Song Shin and I called endogenous risk (Danielsson and Shin 2003).

Because of endogenous risk, financial risk forecasting is one of the hardest things we do. In Danielsson (2008), I tried what is perhaps the easiest risk modelling exercise there is – forecasting value-at-risk for IBM stock. The resulting number was about +/- 30% accurate, depending on the model and assumptions. And this is the best case scenario. Trying to model the risk in more complicated assets is much more inaccurate. +/- 30% accurate is the best we can do.
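The model-dependence Danielsson describes is easy to reproduce in miniature: two textbook VaR estimators applied to the same return series can give materially different readings. A sketch, using invented fat-tailed data rather than Danielsson’s actual IBM exercise:

```python
import random
import statistics

random.seed(7)

# Hypothetical return series: mostly calm Gaussian days plus occasional
# large shocks -- a crude stand-in for fat-tailed equity returns.
returns = []
for day in range(1000):
    r = random.gauss(0.0005, 0.015)
    if random.random() < 0.05:          # 5% of days carry an extra shock
        r += random.gauss(0, 0.05)
    returns.append(r)

def var_historical(rets, level=0.99):
    """99% one-day VaR read straight off the empirical distribution."""
    return -sorted(rets)[int(round((1 - level) * len(rets)))]

def var_normal(rets, level=0.99):
    """Variance-covariance VaR assuming normally distributed returns."""
    z = 2.326  # 99th-percentile z-score of the standard normal
    return z * statistics.pstdev(rets) - statistics.mean(rets)

hist_var = var_historical(returns)
norm_var = var_normal(returns)
gap = abs(hist_var - norm_var) / norm_var   # the two "riskometers" disagree
```

On data like this the historical and normal-approximation numbers will generally differ; which one gets reported is a modelling choice, not a measurement.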

Applying the riskometer

The inaccuracy of risk modelling does not prevent us from trying to measure risk, and when we have such a measurement, we can create the most amazing structures – CDOs, SIVs, CDSs,…Unfortunately, if the underlying foundation is based on sand, the whole structure becomes unstable. What the quants missed was that the underlying assumptions were false.

We don’t seem to be learning the lesson, as argued by Taleb and Triana (2008), that “risk methods that failed dramatically in the real world continue to be taught to students”, adding “a method heavily grounded on those same quantitative and theoretical principles, called Value at Risk, continued to be widely used. It was this that was to blame for the crisis.”
When complicated models are used to create financial products, the designer looks at historical prices for guidance. If in history prices are generally increasing and risk is apparently low, that will become the prediction for the future. Thus a bubble is created. Increasing prices feed into the models, inflating valuations, inflating prices more. This is how most models work, and this is why models are often so wrong. We cannot stick a riskometer into a CDO and get an accurate reading.
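The feedback loop described above can be caricatured in a few lines: a trailing-volatility “riskometer” governs demand, demand lifts prices, and the resulting calm data makes the riskometer read ever lower. A deliberately crude toy, not Danielsson and Shin’s actual model:

```python
import statistics

price = 100.0
returns = [0.010, -0.008, 0.012, -0.010, 0.009]   # seed history with real volatility
prices, risk_readings = [], []

for day in range(100):
    vol = statistics.stdev(returns[-60:])             # the trailing "riskometer"
    demand = min(0.001 / vol, 0.02)                   # lower measured risk -> bigger positions
    r = demand + (0.002 if day % 2 == 0 else -0.002)  # buying pressure plus small noise
    price *= 1 + r
    returns.append(r)
    prices.append(price)
    risk_readings.append(vol)
```

By the end of the run, measured risk has fallen even though prices have been driven far above their starting point: exactly the endogeneity the article describes.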

Risk sensitivity and financial regulations

One of the biggest problems leading up to the crisis was the twin belief that risk could be modelled and that complexity was good. Certainly the regulators who made risk sensitivity the centrepiece of the Basel 2 Accord believed this.

Yves here. I am not certain that, had you asked regulators in 2006 whether complexity was good, they would have said yes. But they were in fact fully on board with new methods that allowed for the slicing, dicing, and widespread distribution of risk. The belief was that this led to wider diversification, and that diversification was entirely a good thing. Back to the article:

Under Basel 2, bank capital is risk-sensitive. What that means is that a financial institution is required to measure the riskiness of its assets, and the riskier the assets the more capital it has to hold. At first glance, this is a sensible idea; after all, why should we not want capital to reflect riskiness? But there are at least three main problems: the measurement of risk, procyclicality (see Danielsson et al. 2001), and the determination of capital.

To have risk-sensitive capital we need to measure risk, i.e. apply the riskometer. In the absence of accurate risk measurements, risk-sensitive bank capital is at best meaningless and at worst dangerous.

Risk-sensitive capital can be dangerous because it gives a false sense of security. Just as it is hard to measure risk, it is easy to manipulate risk measurements. It is a straightforward exercise to manipulate risk measurements to give vastly different outcomes in an entirely plausible and justifiable manner, without affecting the real underlying risk. A financial institution can easily report low risk levels whilst, deliberately or otherwise, assuming much higher risk. This of course means that risk calculations used for the calculation of capital are inevitably suspect.
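One concrete way a reported risk number can be steered without touching the portfolio is the choice of estimation window, either choice being “entirely plausible and justifiable”. A hypothetical illustration:

```python
import statistics

# Hypothetical daily returns: a turbulent year followed by a calm year.
turbulent = [0.03 * (-1) ** i for i in range(250)]   # +/-3% daily swings
calm      = [0.004 * (-1) ** i for i in range(250)]  # +/-0.4% daily swings
history   = turbulent + calm

def var_99(rets):
    """Normal-approximation 99% one-day VaR (fraction of portfolio value)."""
    return 2.326 * statistics.pstdev(rets)

reported_var = var_99(history[-250:])   # one-year window: sees only the calm
prudent_var  = var_99(history)          # two-year window: includes the storm
```

Same positions, same method; only the lookback period differs, and the reported VaR changes several-fold.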

The financial engineering premium

Related to this is the problem of determining what exactly is capital. The standards for determining capital are not set in stone; they vary between countries and even between institutions. Indeed, a vast industry of capital structure experts exists explicitly to manipulate capital, making capital appear as high as possible while making it in reality as low as possible.

The unreliability of capital calculations becomes especially visible when we compare standard capital calculations under international standards with the American leverage ratio. The leverage ratio limits the capital to assets ratio of banks and is therefore a much more conservative measure of capital than the risk-based capital of Basel 2. Because it is more conservative, it is much harder to manipulate.
One thing we have learned in the crisis is that banks that were thought to have adequate capital have been found lacking. A number of recent studies have looked at the various calculations of bank capital and found that some of the most highly capitalised banks under Basel 2 are the lowest capitalised under the leverage ratio, an effect we could call the financial engineering premium.
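The “financial engineering premium” shows up in a back-of-the-envelope balance sheet (all numbers invented): load up on assets carrying low model-derived risk weights and the Basel-style ratio looks robust while simple leverage is extreme.

```python
# Stylized balance sheet: strong risk-weighted capital, weak simple leverage.
capital = 20.0                  # $bn of equity
holdings = [                    # (asset value in $bn, illustrative Basel-style risk weight)
    (700.0, 0.07),   # highly rated tranches: tiny model-derived weight
    (250.0, 0.20),   # interbank claims
    (50.0,  1.00),   # ordinary corporate loans
]

risk_weighted_assets = sum(value * weight for value, weight in holdings)
basel_ratio    = capital / risk_weighted_assets          # ~13%: looks very safe
leverage_ratio = capital / sum(v for v, _ in holdings)   # 2%: 50x leverage
```

The same $20bn of equity supports a comfortable-looking risk-weighted ratio and, simultaneously, a leverage ratio that would count as among the weakest: the gap is created entirely by the risk weights.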

As Philipp Hildebrand (2008) of the Swiss National Bank recently observed, “Looking at risk-based capital measures, the two large Swiss banks were among the best-capitalised large international banks in the world. Looking at simple leverage, however, these institutions were among the worst-capitalised banks.”

The riskometer and bonuses

We are now seeing risk sensitivity applied to new areas such as executive compensation. A recent example is a report from UBS (2008) on their future model for compensation, where it is stated that “variable compensation will be based on clear performance criteria which are linked to risk-adjusted value creation.” The idea seems laudable – of course we want the compensation of UBS executives to be increasingly risk sensitive.

The problem is that whilst such risk sensitivity may be intuitively and theoretically attractive, it is difficult or impossible to achieve in practice. One thing we have learned in the crisis is that executives have been able to assume much more risk than desired by the bank. A key reason why they were able to do so was that they understood the models and the risk in their own positions much better than other parts of the bank. It is hard to see why more risk-sensitive compensation would solve that problem. After all, the individual who has the deepest understanding of positions and the models is in the best place to manipulate the risk models. Increasing the risk sensitivity of executive compensation seems to be the lazy way out.

This problem might not be too bad because UBS will not pay out all the bonuses in one go, instead, “Even if an executive leaves the company, the balance (i.e. remaining bonuses) will be kept at-risk for a period of three years in order to capture any tail risk events.” Unfortunately, the fact that a tail event is realised does not by itself imply that tail risk was high, and conversely, the absence of such an event does not imply risk was low. If UBS denies bonus payments when losses occur in the future and pays them out when no losses occur, all it has accomplished is rewarding the lucky and inviting lawsuits from the unlucky. The underlying problem is not really solved.

Conclusion

The myth of the riskometer is alive and kicking. In spite of a large body of empirical evidence identifying the difficulties in measuring financial risk, policymakers and financial institutions alike continue to promote risk sensitivity.
The reasons may have to do with the fact that risk sensitivity is intuitively attractive, and the counter arguments complex. The crisis, however, shows us the folly of the riskometer. Let us hope that decision makers will rely on other methods.


36 comments

  1. bg

    Yves,

    I will play devil’s advocate here. I am a hard sciences type, and have a natural inclination toward “everything can be measured” or even “everything can be contained.” This is a weakness I probably share with others, and I believed that Fannie/Freddie had locked in unassailable hedges on all 26 letters in the Greek alphabet years ago.

    But just because I – and many of my cohort – were naive, and deadly wrong, does not mean that it is okay to switch poles automatically.

    Risk is measurable, and statistics are valuable. The problem is that they can be easily abused, and (in my example above) the scientists can overly tunnel at the expense of common sense.

    Our vocabulary is filled with new idioms (black swan, moral hazard, regulatory capture, correlation goes to 1, Minsky moment) that are allowing us to put more variables into the equation. Managing risk is incredibly important, and I doubt we will stop trying to improve alpha over time with improved techniques. But I know we have to recover from artificial alpha first.

  2. Anonymous

    We’re hearing more and more of this burbling about leverage ratios (i.e., NO risk adjustment) from both inside and outside central banks recently.

    But really, do they seriously think banks should hold as much capital against some project finance to a dubious company, or against a CDO, as they do against a loan to a prime (non-US) mortgage borrower with a LTV of 60%? Or against a US Treasury bond? Give me a break. Asset quality matters.

  3. Anonymous

    Take a biological analogy.

    A monoculture can offer the opportunity to develop exceptional crop yields at low input costs.

    However, it also exposes the crop to extremely high risk of failure.

    Because the failure modes are inherently not all known and not quantifiable (microorganisms, pests, etc. adapt and evolve), the risks taken can only be understood when a catastrophe happens.

    Yet, looking around agriculture, there are far more examples of monocultures than deliberate attempts at diversification.

    D

  4. Anonymous

    Ahem, as I said, you cannot model humans, full stop. First we need AI/artificial intelligence programs just to collect and input data/parameters.

    @bg, Science shows us every day how stupid we really are. I spend a lot of time talking to friends at TRW, NOAA@Boulder and JPL. I find they overwhelmingly conclude that we are born with a dunce hat on and most never take it off, by choice, with arrogance the key factor. Risk is measurable to a gross point and that’s all. Try military risk assessment and have that go wrong, with all the best planning, and it’s a whole new ballgame in the blink of an eye. I think we did it just as a bit of CYA (cover your ass) for the brass.

  5. Blissex

    «the very premise of modern risk management, namely, that risk can be accurately quantified. »

    The whole discussion is a waste of time from this point. The guy hasn’t read his Keynes, and the vital distinction he makes between risk and uncertainty.

    If it can be quantified (in an actuarial sense), it is risk; if imponderable it is uncertainty.

    So for example the percentage of people that will die at 80yo involves a risk for an insurer, because the probabilities can be quantified pretty easily; that previous statistics on mortality will apply in 40 years time instead is uncertain.

    When doing “risk” management banks should model risk actuarially, and then guesstimate uncertainty, stating their assumptions.

  6. M

    bg,
    I would qualify your statement that “risk is measurable”. Risk is certainly measurable, but with some degree of error. Any model is by definition a simplification of the real world, and any results from that model are therefore only as good as the model is as an approximation.

    The problem with risk measures is not, in and of itself, that they are prone to errors. If the numbers were used while acknowledging their limitations, that would be fine. The real problem is that people seem to have a knack for ignoring the fact that risk estimates are just that, estimates, and tend to take the numbers (such as VaR figures) as gospel.

    And the biological analogy is a good one – Wall Street and the City of London have become monocultures. It remains to be seen how they can adapt and evolve.

  7. ketzerisch

    For all its complexity, math is basically simple: you make a few assumptions and, following strict logic, come to a conclusion; in the case of risk management, a model.

    So if you apply a model you always have to question the assumptions. This is the moment judgment enters the game. There is no meaning in questioning the model once you agree to its assumptions.

    Now, from a financial stability point of view, the problem arises if too many market participants trade under the same assumptions, e.g. the efficient market hypothesis. If that assumption turns out to be wrong, you’re left with a financial crisis.

    So, this makes a case for NOT prescribing a risk model like Basel 2 does. Because this implicitly leaves all market participants exposed to the same assumptions. This is the seed for the next financial crisis.

  8. Ginger Yellow

    It’s worth noting that the Basel Committee is looking at introducing an absolute leverage requirement into the capital accords. Most bankers I speak to think it will be introduced within the next five years.

  9. russell1200

    Yves:

    With regard to the assumption of simple measurement, from a friend of mine’s (anti-)insurance blog:

    http://slabbed.wordpress.com/2008/12/02/the-slabberator-causes-some-commotion/

    The screen shot of Allstate’s dashboard is the end result of their VaR computations. You will note that reinsurance has all of two variables, and that this is not the type of tool used by lower or mid level management.

    My personal favorite is the “one variable fits all” pandemic control.

  10. Anonymous

    I dare post a comment here although I am far from being an economist. Need I say that I have learned a lot from your posts, Yves.
    However, on the topic of risk, I venture the following: most of the calculations done by quants fall into the category of “perturbation theory”, which works reasonably well (depending on the assumptions, of course) when what you are working on is negligible compared to the whole. But when everyone plays the same game with billions of dollars that are close to the whole quantity of REAL money in circulation, then all perturbation theory breaks down and either chaos or collapse occurs. A good example is the CDS. When only a few actors enter into an insurance agreement to insure against a bond one holds defaulting, no great harm is done, even if the risk is not well estimated by the insurer. But when huge numbers use CDSs on bonds they do not hold and the total notional amount goes into the trillions, then havoc will occur with a probability terribly close to one.

  11. Anonymous

    Hasn’t one of Buffett’s arguments always been that he’d rather be roughly right than precisely wrong – i.e. arguing against the artificial precision so prevalent on Wall Street – and he has of course made his boners as well, despite writing a limit into every insurance exposure in the last three decades or so after watching a single small Geico policy cost tens of millions. There are stability/instability arguments that make sense as well, such as that guy who wrote about power laws and Taleb’s insights into the difficulty of assessing risk. All in all, the arguments seem to boil down to this: some level of *liquidity* is generally desirable despite the foregone opportunity cost, in that the liquidity sharply reduces risk (and of course, the inverse should be true as well – any degree of leverage also sharply increases risk).

    Thanks for an intelligent discourse….

  12. Anonymous

    Methinks risk can only be measured after the fact. You can’t measure Nessie the Loch Ness Monster until you find her. JustinTheSkeptic

  13. Anonymous

    Economist Nassim Nicholas Taleb, author of the Black Swan, and his mentor, mathematician Benoit Mandelbrot, speak with Paul Solman about chain reactions and predicting the financial crisis:

    http://bigpicture.typepad.com/comments/2008/10/pbs-video-taleb.html

    I think there is more to Complexity and its problems than just the rather benign collapse of the Financial System. If the same Risk Analysis and Hubris have been applied to other Systems than we are done.
    Who are the Experts, like Financial Experts, who manage the Cern Collider? (Everything ok here, boss.)
    Or the unregulated (sound familiar to DERIVATIVES?) emerging Nanotechnology?
    Or the experimentation being performed in thousands of BioResearch Labs?
    As just one example of Complexity and the Human Incapacity to manage it:
    Did the designers of the Airliners which crashed into the World Trade Towers ever consider the fact that these could be turned into Guided Missiles?

    Technology expands as our capacity to manage it and its Complexity diminishes. I would feel better if Taleb’s Fat Tony were managing some of these matters.

  14. Richard Smith

    Ketzerisch,

    Your bond holder point was a good one (I had been blethering all over that thread yesterday so didn’t respond then).

    But ThatLeftTurnInABQ is also right – there is ultimately a social cost. The pool of capital that goes into bond investment is finite, and destroying great chunks of it the way Lehman et al did *hurts*. If bank bondholders know that they are going to get systematically stuffed by risk mismanagement, they may at least demand higher returns for their risk; at worst banks will have to look elsewhere for their capital (taxpayer, other mugs if they can be located). That pushes up the cost of capital for banks and *everyone* ends up paying for that in a variety of ways.

    John Hempton at Bronte Capital made a related point in respect of the inconsistency of various state interventions in financial institutions that we saw in ’08 – inconsistency which also scares off bond investors.

    It all comes down to trustworthiness; not much of that about at the moment.

  15. Anonymous

    As just one example of Complexity and the Human Incapacity to manage it:
    Did the designers of the Airliners which crashed into the World Trade Towers ever consider the fact that these could be turned into Guided Missiles?

    anon,
    I don’t think that they did.

    On the other hand the architects of the WTC did in fact design the buildings to withstand a direct collision from a Boeing 707.

    It was in the specs. :)

    JohnC

  16. River

    Anon at 7:07, I agree with you, Taleb and Mandelbrot. The risk models created by the math geeks using the bell curve are the problem, and the universities are still teaching this baloney.

    Psychics are every bit as good at future predictions as professionals in any field, according to Taleb. IOW, none of them are worth a plug. ‘The Black Swan’ makes convincing arguments that risk assessment is practically impossible because we do not understand that lots of black swans occur. Problem is, humans using metaphor in retrospect rationalize these black swans into ‘foreseeable events’, when they were in fact not foreseeable.

    Of course, most economists, brain surgeons, etc., will never admit that they are no better at foreseeing events than a psychic…it would break their rice bowl!

    The exception I have seen is Willem Buiter. He admitted the truth and I doubt he will ever be forgiven for it.

  17. Gentlemutt

    Yves, here is another anecdote for you, brought to mind by the UBS bonus/malus tactic mentioned above: I met with the now-famous ex-head of Risk at a now-defunct investment bank a few years ago, and he worried aloud about how to control the risks taken by the mortgage/CDO/derivatives traders. He politely restrained a grimace in disbelief at my naivete when I suggested he hold back payouts until portfolios matured or were liquidated (at which point there was no doubt about realized p&l).

    Sounds like UBS is getting partway there. Traders can be exceptionally rational economic actors, almost like those described in some economic textbooks; if they know their income is strictly a function of "realized p&l" they will maximize that, and those who cannot wait will depart the seat. Those who can appreciate the value of building a kind of 'rolling annuity' will occupy the trading seat and happily run a long-term business. Not complicated stuff….Cheers!

  18. DownSouth

    Economists and finance industry “experts” really have to be some of the most benighted people on the face of the earth. Nassim Nicholas Taleb really gets it right when, in The Black Swan, he places them (along with their flat-earth, gravity-defying models) in the same group with the practitioners of astrology, superstitions and tarot-card readers.

    But the real blind spot of finance industry professionals is what has happened to their standing in society. Perhaps the most important insight Daniel Yankelovich makes in his book Coming to Public Judgment is this:

    The best criterion for judging the quality of expert opinion is whether it proves to be right or wrong.

    Most economists and finance industry experts are totally oblivious to this simple truth. They have operated in their ivory towers and risk-free environments (thanks to 30+ years of U.S. Government protection and bailouts) for so long that they believe their immunity to accountability is a fact of nature.

    99.999% of economists and finance industry practitioners can be classified in the “baffle-’em-with-bullshit” category. But what they don’t understand is that the public doesn’t really care about their bullshit–their arcane formulas, insider lingo and shop talk. When someone flips the light switch, chances are they don’t know anything about electricity (or even want to know anything about electricity). All they want is for the lights to work.

    And economists and finance industry practitioners have been miserable failures at making the lights work.

    These same sentiments were expressed by Frederick Lewis Allen in Since Yesterday:

    For the rich and powerful could maintain their prestige only by giving the general public what it wanted. It wanted prosperity, economic expansion. It had always been ready to forgive all manner of deficiencies in the Henry Fords who actually produced the goods, whether or not they made millions in the process. But it was not disposed to sympathize unduly with people who failed to produce the goods.

    Some, like Jon Danielsson, Taleb and Yves, realize they need to get out ahead of this thing before they lose their heads to the guillotine or get shipped off to the Gulag. But most people in the finance industry are clueless.

    Charles Dickens’ in A Tale of Two Cities captured the type perfectly:

    Military officers destitute of military knowledge; naval officers with no idea of a ship; civil officers without a notion of affairs; brazen ecclesiastics, of the worst world worldly, with sensual eyes, loose tongues, and looser lives; all totally unfit for their several callings, all lying horribly in pretending to belong to them, but all nearly or remotely of the order of Monseigneur, and therefore foisted on all public employments from which anything was to be got… People not immediately connected with Monseigneur or the State, yet equally unconnected with anything that was real, or with lives passed in travelling by any straight road to any true earthly end, were no less abundant. Doctors who made great fortunes out of dainty remedies for imaginary disorders that never existed, smiled upon their courtly patients in the ante-chambers of Monseigneur. Projectors who had discovered every kind of remedy for the little evils with which the State was touched, except the remedy of setting to work in earnest to root out a single sin, poured their distracting babble into any ears they could lay hold of, at the reception of Monseigneur. Unbelieving Philosophers who were remodelling the world with words; and making card-towers of Babel to scale the skies with, talked with Unbelieving Chemists who had an eye on the transmutation of metals, at the wonderful gathering accumulated by Monseigneur. Exquisite gentlemen of the finest breeding, which was at that remarkable time–and has been since–to be known by its fruits of indifference to every natural subject of human interest, were in the most exemplary state of exhaustion, at the hotel of Monseigneur.

  19. John Rosevear

    Maps never perfectly capture all of the nuances (or dangers) of the territories they represent. Maps are still useful. Models are likewise still useful, as long as they’re understood as maps and not mistaken for territory.

  20. Anonymous

    There are no good or bad decisions, only decisions made on expectations. The decisions are then rendered good or bad by subsequent events.
    Those events cannot be known at the time the decision is made.

  21. Anonymous

    Great post, as always.

    I keep wincing when I read the term “financial engineering”. Why must the world of finance borrow the mantle of the profession of engineering? Can finance not establish its own credibility without needing to misappropriate it from another field?

    The simple truth is that the models used in finance would get you fired if attempted in any field of engineering. Engineers are obsessed with validating models, and much of a good engineering education is based on the insistence on understanding the limits of validity of each model. Any practitioner of engineering who advocates using models well outside their domain of applicability risks being shunned (e.g., having their professional license revoked).

    We don’t see buildings standing up for only a couple of years and then suddenly the entire inventory collapsing or requiring a trillion dollars’ worth of propping up from the government. We don’t see airplanes suddenly falling out of the sky all at once, or cell phones suddenly deciding not to work anywhere (or at least, we don’t see our engineered machinery fail in such a brittle manner except in science fiction, e.g., The Day the Earth Stood Still).

    We don’t see these kinds of failures because engineers respect that their models are merely models, and the profession constantly tests the boundaries of the models before deploying them in the real world. In engineering, intelligent mavericks like Taleb are respected, not dismissed.

    When the finance profession starts growing up into something exhibiting such respect for the limits of their models, then they get to use the term “engineering”. Until then, practitioners of finance are merely borrowing credibility from elsewhere, and we can now clearly see that such borrowing has exceeded finance’s credit limit, both literally and figuratively.

  22. Roger Bigod

    I recall a house staff conference in a teaching hospital at which we were discussing a case of diabetic acidosis. This is a life-threatening emergency in which the patient has a metabolic disturbance that decreases response to insulin, so the blood glucose rises to sky-high levels. Insulin has to be given in doses that would be very dangerous in ordinary circumstances, then tapered off as the situation corrects and the blood glucose falls. There was a simple formula for calculating the insulin dosage in terms of the blood glucose, known as “sliding scale”.

    The conference was conducted by the chairman of the department of internal medicine, the best administrator of anything I have ever encountered. In his spare time he edited one of the two major textbooks of the subject. One of the residents mentioned the sliding scale formula and he reacted by saying “I’ve asked you not to use sliding scale on this service. It makes you focus in on one number and ignore all the other information you get from the patient’s general condition and the other blood chemistries.” Obviously, he was saying a lot more than a specific point about one disease.

    Obviously, a bank CEO has to rely on numbers more than a medical intern. Still, the anecdote raises the question whether a figure like VAR leads executives to ignore large amounts of side data that should be taken into account.

    I can make a devil’s-advocate argument that the sliding-scale formula is superior in many real-life situations where the intern is too busy, distracted, or inexperienced. Doing it the chairman’s way requires more cognitive effort, time, and training. Out in the real world, it may have its uses. But CEOs of major banks don’t have the excuses of a sleep-deprived intern in an overcrowded ER.

  23. doc holiday

    Re: "the laws of finance react to financial engineers’ creations, rendering risk calculations invalid."

    >> My take on this is that most risk models used historic norms as a basis for calibrating or characterizing default, or some improbable deviation from the norm — e.g., most models defaulted to arbitrary values picked out of The Great Depression Hat. The trouble with that type of model bias has been plainly demonstrated by the latest systemic collapse. The reason for the correlated inaccuracy is fairly obvious, i.e., in The Great Depression our financial system did not have $65 trillion in bullshit derivatives connected to a global casino of chaos — thus, as bubbles grow while relying on outdated risk mechanics, variables, and conditions, the probability of understanding risk is greatly diminished.
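    The window-calibration problem described above can be sketched with a toy historical-VaR calculation. Everything below (the return distributions, sample sizes, and confidence level) is an illustrative assumption, not market data:

```python
import random

def historical_var(returns, confidence=0.99):
    """Historical VaR: the loss at the given quantile of past returns."""
    losses = sorted(-r for r in returns)  # losses as positive numbers
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

random.seed(0)
# A calm decade of small daily moves (illustrative, not market data).
calm = [random.gauss(0.0005, 0.01) for _ in range(2500)]
# The same decade plus one crisis year of much wilder moves.
crisis = calm + [random.gauss(-0.002, 0.04) for _ in range(250)]

var_calm = historical_var(calm)    # risk as seen from the calm window
var_full = historical_var(crisis)  # risk once the crisis is in sample
print(f"99% VaR, calm window only: {var_calm:.1%}")
print(f"99% VaR, crisis included:  {var_full:.1%}")
```

    The same estimator reports sharply different risk depending on whether the lookback window happens to contain a crisis, which is exactly the calibration bias the comment describes.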

    The lack of realistic, modern, updated risk inputs has left rating agencies and regulators looking like idiots who are clueless about what they are doing, because they pretend to understand risk factors that are 50 years old. The rating agencies essentially provided false and misleading information, no one in the government had a clue as to what was going on, and … this whole thing is beyond retarded and at this point boring, because Obama and the fools left standing will never demand realistic risk models, and all these fools will just go back to business as usual….

    Barf!

  24. William Mitchell

    There is a lot of mileage in just two common-sense distinctions: risk mostly means exposure, and accuracy does not require precision.

    Consider a risk measurement system with just three levels: “safe,” “sketchy,” and “insane,” indicating the rough likelihood of a significant, permanent impairment of investor capital.

    In a borrow-short-lend-long business like banking, a leverage ratio of 20 to 1 is clearly sketchy, and 60 to 1 is clearly insane. You don’t need high precision — the standard deviation of returns, Gaussian vs Cauchy distributions, asymmetry, etc. — to see this. It’s enough to know that a panic ensues every few decades, and that if you want to survive one, you need a low leverage ratio.
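    The three-level scheme above can be written down in a few lines. The 20:1 and 60:1 cutoffs are the comment's illustrative thresholds, not regulatory numbers:

```python
def risk_level(leverage_ratio: float) -> str:
    """Coarse three-level risk label based on leverage alone.

    Thresholds follow the comment's illustration for a
    borrow-short-lend-long business: 20:1 is sketchy, 60:1 is insane.
    """
    if leverage_ratio < 20:
        return "safe"
    if leverage_ratio < 60:
        return "sketchy"
    return "insane"

# At 60:1, a loss of about 1/60, i.e. roughly 1.7% of assets,
# wipes out all equity.
print(risk_level(12))  # safe
print(risk_level(33))  # sketchy
print(risk_level(60))  # insane
```

    The point of the coarseness is the point of the comment: accuracy at this granularity does not require precision about return distributions.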

  25. Carlosjii

    `It’s a poor sort of memory that only works backwards,’ the Queen remarked.

    from January 6, 2009 8:21 AM
    The best criterion for judging the quality of expet [sic] opinion is whether it proves to be right or wrong.

    Mine from May 2005 – The horoscope of the US is subject to significant inharmonious energy during the period 6 July 2005 through 26 October 2005, with a peak negative period occurring during the first three weeks of September. Additionally, these energies are confluent with an aspect of strong, unexpected change to the US’s financial structures. This aspect was last operative in 1907.

    And from NOW – As both a close follower of the financial markets and a world-class astrologer, I make the following predictions for the US financial system:
    1. The Federal Reserve will be dismantled or substantially reorganized in 2009, with peak periods of transformation occurring 9 Jan 2009, 10 July 2009, and 10 November 2009.
    2. Extreme problems with housing AND the US’s perennial good luck and excesses associated with food will occur in the period March 2009 through November 2011.
    3. A cataclysmic US market bottom, in real terms, will occur about 2010.

    Just remember – the ‘Creature from Jekyll Island’ is the world’s largest Ponzi scheme!

  26. Been there

    Danielsson’s conclusions are dead on –“In spite of a large body of empirical evidence identifying the difficulties in measuring financial risk, policymakers and financial institutions alike continue to promote risk sensitivity…”

    IMHO his most revealing insight is where he identifies the underlying problem as endogenous risk (i.e. inherent within the organism itself) – “We can create the most sophisticated financial models, but immediately when they are put to use, the financial system changes. Outcomes in the financial system aggregate intelligent human behaviour. Therefore attempting to forecast prices or risk using past observations is generally impossible…”

    In this light, financial models that are applied to markets can be seen as tools attempting to enhance performance of a particular portfolio or fund in the same way that certain synthetic and natural substances can be used to enhance the performance of the human body. Certain substances are deemed appropriate (fish, fruits, vegetables, etc.) while some enhancers are known to be toxic (alcohol, tobacco and drugs). That’s why we have risk assessment systems in place (FDA and others) to protect society from the unlimited availability of known toxic substances.

    As an aside, it seems in the current case, especially with regard to CDSs, we relied too much upon individual organizations’ self-applied risk-assessment processes to determine CDS toxicity. We allowed individual firms to convince us that they could predict, with high degrees of certainty, financial outcomes from transactions they initiated, even though there was an infinitely large set of variables over which they also needed predictive powers. Further, as Danielsson says, the very act of initiating those transactions was exponentially increasing the number of variables with which they had to deal.

    Choose your own drug metaphor, but it was the equivalent of allowing amphetamine-like substances to be freely distributed over an extended period of time without any appropriate supervision. It’s clearly apparent that no one really understood the multitude of side-effects that would arise, nor the other repercussions arising from the long term abuse of these so called financial innovations. We let witch-doctor-like financial high priests intimidate us with jargon, while they misrepresented the risk of what they were doing. Now all that is left is a poisoned financial system.

    Continuing to rely on these discredited financial models will only yield the same results that our ancestors got eons ago while they danced around campfires.

  27. Francois

    Any financial model MUST hold one paramount virtue: robustness.

    Meaning…if I change the parameters of the equation, does the model still perform well? Should house prices drop by 1% in years 2 and 3, then rise again by 1.2% in years 4 through 6, does the underlying model still perform?

    If I backtest the model over a period of 40 years instead of 15 or 20 years, do I still get good performance?

    That is, the accuracy or precision or whatever you like about those fascinating calculations of risk is worthless if the model isn’t robust.
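    The robustness check Francois describes can be sketched as a parameter-perturbation loop: rerun a backtest with the key input nudged and see whether performance collapses. The strategy, price path, and window sizes below are all hypothetical illustrations, not anything the trend followers named actually use:

```python
import random

def backtest(prices, ma_window):
    """Toy trend-following backtest: long when price is above its
    moving average, flat otherwise; returns total strategy return."""
    total = 0.0
    for t in range(ma_window, len(prices) - 1):
        ma = sum(prices[t - ma_window:t]) / ma_window
        if prices[t] > ma:  # in the market only on an uptrend signal
            total += (prices[t + 1] - prices[t]) / prices[t]
    return total

random.seed(1)
# A synthetic price path with mild upward drift (illustrative only).
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] * (1 + random.gauss(0.0004, 0.01)))

# Robustness check: rerun with the parameter nudged. A robust model's
# results should sit in the same ballpark across windows; a large
# spread means the "performance" was an artifact of one parameter.
results = {w: backtest(prices, w) for w in (40, 50, 60)}
for w, r in sorted(results.items()):
    print(f"MA window {w}: total return {r:.1%}")
```

    The same idea extends to Francois's other test: lengthen the backtest period itself and ask whether the numbers hold up.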

    That is one important reason why Trend Followers like Bill Dunn, John Henry, Salem Abraham, Jerry Parker and Ed Seykota are so successful. Their goal is not to find the 30th decimal of the risk number, but to make sure that whatever the actual market risk, the model can withstand normal fluctuations in risk AND REACT to excess risk appropriately.

  28. Anonymous

    I will add:

    Amount of leverage is key.

    Historic norms are faulty.
    Models are too easily manipulated to rake in fees and commissions.
    Bonuses must be tied to long-term results.

    Unbridled greed and avarice did not serve the country well.
    There are natural laws that have consequences when violated.

    independent

  29. Eric L. Prentis

    Modeling financial markets assumes the primary premise that “stock prices are independent random variables.” Many academic studies report that daily stock price movements are independent; however, my colleagues are testing the wrong data to answer the question, “are markets predictably cyclical?” Answering this fundamental question requires stripping out the individual and industry factors moving daily stock prices, accomplished by using only the systematic prices of diversified market portfolios. In addition, the chatter of daily price movements is smoothed using monthly data and moving-average trend lines. Analyzing systemic stock data shows that markets are “predictably cyclical”; ergo, stock prices aren’t independent random variables, and VaR models are fatally flawed.
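    The test described above (monthly data, a moving-average trend line, then a check for serial dependence) can be sketched roughly as follows, using synthetic data in place of a real market portfolio:

```python
import math
import random

def autocorr(xs, lag=1):
    """Lag-k sample autocorrelation of a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t + lag] - mean) for t in range(n - lag))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(2)
# Synthetic monthly index: a slow cycle plus noise, standing in for a
# diversified market portfolio (an assumption, not real index data).
index = [100 + 20 * math.sin(t / 12) + random.gauss(0, 2) for t in range(360)]

# 12-month moving-average trend line, then month-over-month changes.
trend = [sum(index[t - 12:t]) / 12 for t in range(12, len(index))]
changes = [b - a for a, b in zip(trend, trend[1:])]

# If index levels were independent random draws, this would be near zero;
# a cyclical series leaves strong serial dependence in the smoothed moves.
print(f"lag-1 autocorrelation of smoothed changes: {autocorr(changes):.2f}")
```

    A real version of the test would of course use an actual index and a formal significance statistic; the sketch only shows the mechanics of the smoothing-then-dependence argument.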

  30. Anonymous

    agree with the comments regarding leverage…too much of it and one is left exposed in a downturn…individuals, corporations, governments…frugal or responsible spending will not stop a business cycle…however, it will avoid the mess the world is in at the moment…it’s all about having some pennies left in the piggy bank for those rainy days..

  31. m donner

    all this talk about risk seems to miss what seems obvious to me as a complete layman…that folks always try to push the edge of the envelope with more and more leverage. can we agree that putting more money aside in good times is better than not? do businesses fund pension plans and put money aside for a rainy day…without the risk of a hostile takeover because there is too much cash in the bank?
    imagine…too much cash in the bank and not enough leverage and risk…how did it come to this unassailable principle??
    the bottom line is the forever need to ‘grow’, and when we don’t produce real useable stuff, not-real stuff gets created and counted toward the GNP to make growth seem like it’s happening, when it’s really the same money being used over and over again as debt and leverage pile up.
    safer risk formulas will demand more money in reserve (not working) and not doing double and 40x duty to create the illusion of the almighty ‘growth’.
    god help us if we had to make real things, not litigate and/or create new and improved financially innovative products. when did financial services move to financial production??
    and who can we trust to check the risk models being used by companies too big to fail…corporations that at one time had to get a renewable license to operate for the public good….except in Delaware of course….

  32. Michael S

    Real (civil) engineers do not believe risk can be measured. They use rules of thumb. The intent is to make whatever is being designed as safe as possible, yet still remain within reasonable limits in terms of cost and ability to meet deadlines.

    Financial engineering is different. Cutting corners to make more money is encouraged. You couldn’t last long as a civil/mechanical engineer if you treated risk that way.

  33. Anonymous

    But the fundamental problem is still that many in the industry implicitly believed or hoped that the models were actually reality, when in fact the models are not even very good models of models. This is also true of mathematics – mathematics, too, is a model created by humans; sometimes math models reality very accurately, but that does not mean that reality is mathematics.

    Financial instruments are models of models of models etc, very far removed from any ‘reality’ most of us are familiar with.

    The assumptions in the models cannot have any legitimacy because each assumption is based either on old science or no science. We have no idea how humans really make decisions, for example.

  34. Anonymous

    I may be financially unsophisticated, but that enables me to see that when banks think they can dilute risk by selling it to somebody else, that means that the whole system is injected with risk many many many times over and the whole becomes a rotting corpse of risk, a teetering Jenga tower in a hurricane.

    I suppose if only one bank had diluted its risk by securitizing its loans, the system would still be basically sound a year on, though it would get riskier by the decade. Unfortunately, securitization of mortgages (and other stuff) was widespread, so contamination happened very quickly.

    One common-sense question that should be asked in risk management is “Is this the kind of thing I’d want everybody to do?” We can bet that if one company or bank or city or household does it [whatever risky action is being considered] and succeeds, news will get around, and if a benefit or profit can be gained with less effort through it, the practice will spread like wildfire. And as the practice spread, the pool of good borrowers who could repay their loans was quickly exhausted, and to continue doing business, borrowing standards had to drop, which also increased risk.

    “Risk means exposure.” I seem to remember Benjamin Graham advocating in “The Intelligent Investor” that investments be made with an eye toward preserving principal before just about everything else, and that this should be done by maintaining a margin of safety: buying at a discount.

    For a buyer trying to buy a house right now as prices are falling, only steeply discounted houses offer any kind of margin of safety. Any drop in value effectively adds to the cost of holding the house that year. So what if you pay 4.8% interest? If your house drops in value by 20% over the year, your effective cost that year is closer to 24.8%.
    The only way I can see for Americans to get out of this crisis as quickly as possible is to stop depending on government bailouts and start cutting costs by combining households. If two small families were to live in the same house, sharing financial responsibility for a single mortgage paid by one of them, then they could get out of debt faster and accumulate a down payment or a rainy day fund faster. It’s going to be small-scale cooperation such as this that is going to get us out of this mess the fastest, not government manipulations of monetary policy or fleeting stimulus checks.
