Risk Management Sanity Check

To read Nassim Nicholas Taleb, you’d think that the entire world of finance was in thrall to evil Gaussian models and their cousins, like Black-Scholes. The occasional howls from quants last year of 15-sigma and worse events would seem to confirm that view.

Yet I have also seen some references here and there to allowances made for fat tails.

Hopefully readers will be as interested as I am in any light informed commenters can shed on this topic.

In particular, I am curious as to:

To what extent is allowance made on trading desks and at quant hedge funds for extreme event risk? How is it made? Is it via adjustments to Gaussian risk management models, use of other risk management techniques, or less formal (model based) considerations? What role does VaR play versus other approaches? How prevalent are improved versions of VaR, or is that an oxymoron?

How up to speed (at bigger banks) is senior management? Do they get the more granular/risk savvy management information reports, or seriously dumbed down versions?

Is there any sign that financial regulators are moving beyond VaR and leverage ratios as their main risk assessment approaches (no, those stress tests do not count)?

How much (if at all) are institutional salesmen aware of the risks embedded in more complex products (or is this moot since that sort of stuff isn’t trading much)?

Ex quant hedge funds, has there been any marked change in risk management approaches on the buy side? If so, where (what types of players) and what new measures?

That’s a long list, and I would imagine that even informed readers would see only a piece of the equation, but I wanted to be specific so as to avoid people talking past each other (as in not agreeing because they were addressing different aspects of current practice).

Also, does anyone know to what extent CFA training addresses this issue (besides giving it lip service)?

Thanks!


  1. Kurt Osis

    The first question is: what is risk? People can’t even agree on the answer to that; most don’t even seem to think about it.

  2. Yves Smith

    Kurt,

    I am looking for comments from risk management professionals, and trust me, there is a strong consensus on what that means, namely, volatility of returns (as far as I can tell, the notion of defining risk more in accordance with commonsensical notions like avoidance of loss or negative outcomes has not penetrated into practice, as represented by the content of risk management tools).

  3. Tim

    How up to speed (at bigger banks) is senior management? Do they get the more granular/risk savvy management information reports, or seriously dumbed down versions?

    Short answer: not very. Longer version: it is unrealistic to expect the management of institutions of the size and complexity of, e.g., Citi ever to get up to speed with the risks run throughout the business. The fact that many of them demand some kind of one-page, or even one-number, summary of the risk being run is not a good start, but it is debatable whether much of what passes for risk management will ever be able to quantify extreme risk, or, even if it did, provide anything useful by way of remedies for dealing with it.

  4. Doc Holiday

    I know this doesn’t help anyone and just adds noise, but it has been sitting in storage waiting to be seen:

    When Long-Term Capital Management, which had Scholes and Merton on its board, collapsed in 1998, the failure of risk management was traced back to the use of Gaussian distributions in the Black-Scholes model. Levy distributions can produce the heavier tails needed to model the risks in financial markets more correctly. Since there is no general formula for the densities of Levy distributions, many statistical methods, e.g. maximum likelihood, cannot be easily applied. Using the Wilson-Polchinski renormalization group and generalized Feynman graphs, B. Smii, H. Thaler and I have been able to provide a Borel summable asymptotic expansion for general Levy distributions with a diffusive component and a jump distribution that has all moments. First numerical checks are promising.

    In a simple equilibrium model designed to make market external risk tradable, agents exposed to the risk measure their preferences according to exponential utility functions. The equilibrium price of external risk can be described by backward stochastic differential equations (BSDE) with non-Lipschitz generators. In joint work with A. Popier (Ecole Polytechnique, Palaiseau) we investigate a new concept of solutions for BSDE of this type, which we call measure solutions and which corresponds to the concept of risk-neutral measures in arbitrage theory. We show that strong solutions of BSDE induce measure solutions, and present an algorithm by which measure solutions can be constructed without reference to strong ones.

    * Full Disclosure: The author has spaced out where this stuff came from, but it was from the internet and seems related to something that is on topic or someday may be on topic; I have no position in anything and really care less and less every day about most everything.

  5. Doc Holiday

    I used to love this crap:

    “From a previous version of that paper, Drees (2000,2003) derived a weighted approximation for this tail empirical process under absolute regularity and discussed statistical applications, like the analysis of estimators of the extreme value index or extreme quantiles. However, the tail empirical process does not describe the extreme value dependence structure of the times series. By and large, results on the asymptotic behavior of estimators of the extremal dependence structure under suitable mixing conditions are restricted to estimators of the extremal index and, more general, the distribution of the size of clusters of extreme observations. Unfortunately, these estimators are of very limited value in quantitative risk management. For instance, the distribution of the total sum of losses exceeding a high threshold in a period of given length cannot be described in terms of the cluster size distribution.”

    The discussions showed that multivariate extreme value theory comes close to its boundaries of applicability and techniques. Rare event simulation using importance sampling can be useful, but may break down when heavy-tailed risks are involved… uhhmmm yea, that’s the shit!

    From: The Mathematics and Statistics of Quantitative Risk Management, Report No. 15/2008

    http://www.mfo.de/programme/schedule/2008/12/OWR_2008_15.pdf

    All that glitters is not gold;
    Often have you heard that told:
    Many a man his life hath sold
    But my outside to behold:
    Gilded tombs do worms enfold.
    Had you been as wise as bold,
    Young in limbs, in judgment old,
    Your answer had not been inscroll’d:
    Fare you well; your suit is cold.
    Cold, indeed; and labour lost:
    Then, farewell, heat, and welcome, frost!
    Portia, adieu. I have too grieved a heart
    To take a tedious leave: thus losers part.

  6. lewy14

    DocH,

    Thanks for that. I’m reading Mandelbrot’s book The (Mis)Behavior of Markets and I’ve been wondering if his statistical approaches are being taken up.

    I think the answer hinted at from the citations you provide is “yes, but it’s friggin’ hard stuff.” I’d be very interested to know if folks knew of gentle but formal introductions to this “new financial math”.

    And thanks also Yves for this thread; I’ve been thinking that the fashionable calumny heaped on risk managers for their alleged ignorance of non-Gaussian tail risk has been a bit overdone – could they possibly be so ignorant, with their “twenty-five sigma” utterances, when even a rank dilettante such as myself is familiar with heavy tails?

    It seems… well… unlikely

  7. Milosz

    The whole point of Taleb’s book (well, one of the primary points) is that you can’t predict the future regardless of how advanced your math is. You can make the math comply with _any_ prior fat-tail events. But due to the complexity of the world, you just can’t get much information about the future anyway.

    The easiest way to look at it in my book is simply to think of the world of finance as the weather. Meteorologists can predict the weather fairly accurately for the next few days or so, but we are never going to see a forecast for more than 5-6 days with measurably good accuracy. You can quote me on that.

    Not only that, the best meteorologist in the world would not be able to predict that meteor that may hit us tomorrow…

    Unless you accept the above – which it seems most bankers haven’t – you will continuously make silly mistakes when it comes to risk judgments.

  8. Steve

    An alternative to NT is Riccardo Rebonato. His book Plight of the Fortune Tellers is no more comforting than NT, but comes without NT’s m’as-tu vu attitude. It’s a practitioner’s view of the false promises of VaR. Scholes wrote a brief and prescient critique of VaR in 2000, “Crisis and Risk Management” (Amer. Econ. Review, v. 90, no. 2), that is well worth reading–particularly regarding unmodeled market liquidity risk and non-stationary correlation assumptions.

    I don’t believe I’ve ever met a commercial banker who went beyond statistics 101, if even that.

  9. russell1200

    You can see a screen shot of Allstate’s risk analysis “dashboard” here.

    http://slabbed.wordpress.com/legal-qui-tam/the-scheme/61-the-slabberator/

    They call it the Slabberator in reference to Allstate and its post Katrina policies. It is not what Allstate would call it.

    One quick look will show you that the comment about one page summaries made above is pretty much on the mark. I am sure that there are people working in and around Allstate who know more than the dashboard indicates, but they are not calling the shots.

    Allstate is also a bank, and has been one for a while. So they fit pretty well within the mainstream of the industry.

  10. tom

    Many, many moons ago I was a mortgage underwriter and very briefly worked in a risk management department for a European bank assessing the trading books and specific trades for swaps, MBS and so on. We relied heavily on CAPM (by basing our decisions on ‘scientific’ methods) and pure common sense. The traders had a book limit, and we made sure they didn’t surpass their limits and that the various books in aggregate didn’t open up gaps in income/cost flows. Very quickly I learnt a career in Risk Management was for fools. An RM takes all the risks and isn’t listened to until something goes arse over tip. That was back in 1990 when I left banking altogether.

    Generally top management are aware of the risks they take, even if they don’t know the specifics, but if the bucks are rolling in, their risk decision-making apparatus quickly malfunctions.

  11. Ati

    I was an option market maker for a brief spell at a top US derivatives house. We used to check the usual stuff daily: delta, gamma, vega, theta, etc. But we also did occasional scenario testing (the “what if the market dropped 20% and vol exploded” type of scenario). So at least on my desk, we did have an idea of that kind of fat tail risk. We also had a sense it was consistently underpriced by some players who confused their models with the actual market. The problem is that holding too much downside insurance is expensive to carry and you bleed to death. People like to sell this stuff even when they are aware of this risk because it is regular income.
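    For readers who have never run one, a minimal sketch of that kind of scenario test — a single Black-Scholes call repriced under “spot down 20%, vol up 15 points”; every number below is made up for illustration:

    # Minimal scenario-test sketch (illustrative numbers only, not any particular desk's setup):
    # reprice a Black-Scholes call under "market drops 20%, vol explodes" and look at the P&L.
    from math import exp, log, sqrt
    from scipy.stats import norm

    def bs_call(spot, strike, vol, rate, t):
        """Black-Scholes price of a European call."""
        d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * sqrt(t))
        d2 = d1 - vol * sqrt(t)
        return spot * norm.cdf(d1) - strike * exp(-rate * t) * norm.cdf(d2)

    spot, strike, vol, rate, t = 100.0, 100.0, 0.25, 0.03, 0.5
    base = bs_call(spot, strike, vol, rate, t)
    # Scenario: spot -20%, implied vol +15 vol points
    shocked = bs_call(spot * 0.80, strike, vol + 0.15, rate, t)
    print(f"base value {base:.2f}, scenario value {shocked:.2f}, P&L {shocked - base:.2f}")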

  12. trelsco

    The people trading the complex products generally know the risks are non-normal. Arguably the issue is that it is easier to arbitrage your bank than the market. So unless you have an appropriate incentive structure and finance and risk management functions that are politically strong and on the ball, you will have problems.

  13. Thomas

    I agree with Tim and Russell: Senior management (as in: the really senior top dogs) doesn’t normally receive the details, and very few of them would be able to understand a detailed assessment if they did receive it. They want a one-pager, preferably with a “traffic light indicator” flashing red or green.

  14. Thomas

    On a side note:

    After getting my Ph.D. 10 years ago, I also applied for a job with the risk management department of a major European bank. Two days after submitting the application, I received a phone call from their head of risk management, who was all excited because he had seen on my CV that I had also published an article on VaR used for performance measurement in banks.

    He explained to me that top management had just decided to introduce a scientific and objective way of measuring risk by VaR, to allow top management to make the right decisions. Now they wanted to hire the people to do it. When I cautiously tried to point out the potential problems and pitfalls, and the serious limits of “scientific and objective ways” to quantify risk, they didn’t want to hear it. They wanted somebody to develop a tool to give top management what it wanted. (This particular bank, which shall not be identified, got severely damaged during the crisis and is now on government life-support)

    A separate application went to a large reinsurance company, which also invited me for an interview and told me that their CEO had decided to develop a “scientific tool for pricing of all risks, including financial risks”, for the company to use itself, but also to sell it to their insurance clients, so that they can also price their products according to state-of-the-art techniques. Again, I pointed out that whoever uses such “tools” would need an in-depth understanding of what he was doing, an ability to factor in qualitative aspects, as well as a feel for the limits of any quantitative analysis. I was told that I needn’t worry – they only needed somebody to develop a tool, because the CEO thought it was a good idea. (This particular company is still alive, but struggling, and has needed quite a bit of additional funds over the last few years in order to survive)

    What struck me in both cases is: Nobody wanted to talk about limitations and caveats. People wanted a scientific, objective, quantitative method. And spare us the details, because we don’t understand them anyway. With that kind of approach, things just had to go wrong!

    (Back then, I got both job offers, but turned down both of them, because it didn’t sound like an attractive proposition to build a model that people would use without wanting to know about the small print.)

  15. MrM

    Limiting risk management to the Gaussian distribution and the Gaussian copula is a straw man. Every practitioner and every academic knows the limitations of the assumption that the world is normal. To compensate for that, some use more complex distributions, others apply judgment on top of the normal distribution, and most do a bit of both:
    – People fresh out of school/academia, including academics on the regulatory staff, tend to focus on applying more complex math.
    – Senior management does not have the skill for that, but also is cynical/experienced enough to understand the limitations of any math.
    – Examining/supervising regulators have never been on the cutting edge of the math, which is the right attitude for them. They now put heavy emphasis on stress tests (I know, I know) and scenario analysis.

    The reality is that there has always been plenty of judgment involved, but it got trumped by the following two related factors:
    – Reliance on market efficiency. Most people position themselves relative to the market, a bit more aggressively or a bit more conservatively. Very few can afford to take the view that the market is way off in its assessment of risk.
    – Market pressure and incentives. When Countrywide is making money hand over fist quarter after quarter, it is very difficult for other mortgage originators to ignore this performance. Every company is expected to show consistent growth in earnings, or its stock price will suffer. However, there are natural limitations to growth in banks’ profits (banks are already too big a chunk of the US economy). Hence taking on more risk and leveraging up (justified by the model-driven belief that the level of risk was relatively low) were among the few remaining ways to continue pumping out growth.

  16. fisheryc

    Yves, thanks to the other qualified comments, I think you have one answer to your underlying question: do senior management, etc., understand the models? Answer: absolutely not. Bigger question: are they predictive? Answer: absolutely not.

    The serious quantitative models rely almost entirely on the “Efficient Market Theory” (in some form) for their intellectual existence. The EMT, as we should all know by now, is the luminiferous ether of market movements. It’s voodoo, no matter how many IBM supercomputers they use.

    It matters only because some people think it works and therefore make decisions based on the results.

  17. economicdarwinism

    First some background. Risk models are often (as in always) based on a bunch of "risk factors", e.g. interest rates, stock market volatility, sector volatility, various indices, etc. You could easily have 100s or 1000s of these risk factors. The risk factors themselves are often (as in always) assumed to be multivariate normal. This helps with Monte Carlo simulation.

    However, no risk model itself assumes P&L is normally distributed (except for some horrible third party vendors *cough* *Barra* who make a killing from unwitting fund managers). The "non-normality" is often (as in always) introduced via pricing models that take movements in the risk factors and convert them into P&L movements. Since these pricing models are nonlinear in the risk factor inputs, the output will not be multivariate normal.

    It is roughly like this:

    Multivariate Normal → Nonlinear Pricing Model → Non-Normal P&L

    But, you can see that although the P&L is non-normal, the inputs are normal, hence they have "thin tails". With thin tail inputs, it is hard to generate fat tail outputs even if the resulting P&L is not normally distributed.
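    To make that structure concrete, here is a toy version of the pipeline; the factor count, correlation, and the "pricing model" below are invented for illustration, not taken from any real system:

    # Toy pipeline: Gaussian risk factors -> nonlinear pricing -> non-normal P&L.
    # All numbers are made up; the "pricing model" is a stand-in for a real valuation library.
    import numpy as np
    from scipy.stats import kurtosis, skew

    rng = np.random.default_rng(0)
    n_factors, n_sims = 50, 100_000

    # Multivariate normal risk-factor moves (thin-tailed inputs)
    corr = 0.3 * np.ones((n_factors, n_factors)) + 0.7 * np.eye(n_factors)
    chol = np.linalg.cholesky(corr)
    factor_moves = rng.standard_normal((n_sims, n_factors)) @ chol.T * 0.02  # ~2% daily vol

    # Nonlinear "pricing model": e.g. a book that is short out-of-the-money option-like exposures
    pnl = (-np.maximum(np.abs(factor_moves) - 0.03, 0.0).sum(axis=1)
           + 0.001 * factor_moves.sum(axis=1))

    # Non-zero skew / excess kurtosis => non-normal output, even though every input was Gaussian
    print("P&L skew:", skew(pnl), "excess kurtosis:", kurtosis(pnl))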

    To what extent is allowance made on trading desks and at quant hedge funds for extreme event risk?

    The emphasis is less on tail events and more on generating some fairly realistic looking non-Gaussian P&L distribution via nonlinear pricing models, but the input is still Gaussian so you can easily imagine the tails not being captured appropriately.

    How is it made? Is it via adjustments to Gaussian risk management models, use of other risk management techniques, or less formal (model based) considerations?

    The model is the model. Any half-baked attempts at stress testing are done by shocking the inputs to the risk factors. This is done more MBA scenario style, i.e. 10% shift up, 10% shift down. Stress testing is an art form, not a science.

    What role does VaR play versus other approaches?

    VaR is used primarily to allocate capital. If a desk sees an increase in VaR, it may end up having less capital allocated to it. This can become a political battle of wills. I’ve seen situations where the risk manager comes back to the quant and tells them the model must be wrong because the trader says so. So the quant is forced to go back and recompute correlations until the desk gets allocated what it wants to be allocated.

    Now, imagine what a gold mine it would be if someone could create a security that represented essentially a blind spot to risk management systems. That desk would be allocated nearly infinite capital because it had essentially zero risk. What type of security would go under the radar of a VaR system? Something where all the risk is in the tails. VaR simply measures the “boundary” of what the bank considers a tail event to be. It doesn’t tell you what lies beyond. If a security had all its risk in the tails and almost no risk not in the tails, its VaR would be practically zero. Can you guess which security satisfies these desirable characteristics? Bingo! A CDO.

    How prevalent are improved versions of VaR, or is that an oxymoron?

    There are improved versions of VaR. Expected shortfall looks at the “average tail event”. So it uses VaR to specify the boundary of tail events, but goes one step further and looks at the average loss given that a tail event has occurred. This is the future of risk management. All banks will eventually switch from VaR to ES, a.k.a. expected tail loss (ETL). It is not a magic bullet, but CDOs will look a lot less attractive under ES than they did under VaR.
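    A small numerical illustration of why ES catches what a 99% VaR misses — the payoff below is a caricature of a “senior tranche”, with invented numbers, not a real CDO model:

    # An asset that earns a small spread 99.5% of the time and loses 80% of notional the other 0.5%.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    pnl = np.where(rng.random(n) < 0.005, -0.80, 0.002)  # rare but severe loss events
    losses = -pnl

    var_99 = np.quantile(losses, 0.99)                 # 99% VaR: the loss threshold
    worst_1pct = np.sort(losses)[-int(0.01 * n):]
    es_99 = worst_1pct.mean()                          # 99% ES: average loss over the worst 1% of outcomes
    print(f"99% VaR = {var_99:.4f}, 99% ES = {es_99:.4f}")
    # VaR comes out as (minus) the small spread, i.e. "essentially no risk";
    # ES is dominated by the 80% loss and lands around 0.40 of notional.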

    How up to speed (at bigger banks) is senior management?

    How up to speed is any management when it comes to highly technical stuff? Most senior risk managers are quite clueless. There are some rare exceptions, e.g. GS.

    Do they get the more granular/risk savvy management information reports, or seriously dumbed down versions?

    Risk reporting, whether it is granular or dumbed down, is pretty useless, so the question is kind of moot. The one saving grace of risk management is that it gives you a pretty good window into what you own. I wouldn’t trust the risk numbers at all.

    Is there any sign that financial regulators are moving beyond VaR and leverage ratios as their main risk assessment approaches (no, those stress tests do not count)?

    Not that I’ve seen, but this is inevitable.

    How much (if at all) are institutional salesmen aware of the risks embedded in more complex products (or is this moot since that sort of stuff isn’t trading much)?

    The best quants at the best shops do not understand the risk. This becomes obvious if you attend any credit derivatives conference. If the people designing this stuff do not understand the risks, how can a salesman?

    Ex quant hedge funds, has there been any marked change in risk management approaches on the buy side? If so, where (what types of players) and what new measures?

    I abandoned VaR years ago in favor of expected shortfall. I also abandoned multivariate normal input in favor of multivariate Levy skew stable. I doubt I’m the first, but I also doubt it can be considered mainstream among quants.

  18. gpp

    As I see it, there are at least the following problems with risk management in banks:

    (I) Banks have no genuine interest in controlling risk.

    The risk control department is the minimal operation that will pass a regulatory audit. It exists only because otherwise the activities of the bank would be severely curtailed by existing regulations (at least in Europe). It has no prestige within the bank (after all, it makes no money), people are paid less and often have to work longer. They have no clout within the bank and dare not interfere with "profitable" trading. The quant working in risk control hopes that he will be "promoted" to work with the traders.

    Suppose, for example, a group of quants develops an incorrect mathematical model for a financial instrument that consistently misprices the instrument and leads to incorrect hedges.

    The bank now has a difference of opinion with most other banks about what the instrument is worth. It believes that it knows something others don't know and thus has the opportunity for large profits. It finds that there is no shortage of counterparties willing to take the other side of a trade.

    The positions are hedged and profit and loss computed by marking to the faulty model. The model shows large profits. The more the position size is increased, the larger the profit. Quants, traders and management all get generous bonuses.

    Such "profits" cannot be put at risk by the risc control department.
    A risc manager raising objections will soon work elsewhere.

    This can go on for a couple of years. The bank becomes a large player.
    When the situation gets out of hand, some participants are fired but since they are not personally liable
    they don't really care. They may have made a lot of money.
    If possible the incident will have been hushed up so they will find employment elsewhere.

    I have second hand knowledge of one such case at a medium sized bank in Europe.

    (II) Formidable mathematical problems.

    There is no reason to believe that the past can predict the future. But even under the optimistic assumption that available data contain all the relevant information about the future, insurmountable mathematical difficulties remain.

    The problems go considerably further than "fat tails".
    There is also the problem of high dimension — "the curse of dimensionality".

    The book of a medium sized bank will contain in excess of ten thousand positions. The first step in making this a little more manageable is a reduction in dimension. Instead of modelling each instrument you identify a smaller number of "risk factors" that drive the returns of the individual instruments.

    Many of these risk factors have no economic interpretation (they are abstract mathematical quantities derived from an eigenvector decomposition of the return covariance matrix) and are thus impossible to understand intuitively.

    There are still hundreds of risk factors X_1,…,X_D left. Assemble the individual risk factors into a vector X=(X_1,…,X_D). This is a quantity that lives in a very high dimensional space. The dimension is the number D of risk factors.

    You must now find the probability distribution of X in this very high dimensional space.

    This is where the Gaussian distribution comes in: it is completely determined by the mean and covariance matrix and is easy to implement and simulate.

    It is VERY HARD to go to other distributions in high dimensional spaces.

    You have to collect data and choose a high dimensional distribution consistent with these data.
    This is a nontrivial problem.

    The simplest (some would say only) reasonable distribution to use is the maximum entropy distribution consistent with the data that have been collected. This is the distribution that makes the fewest assumptions other than the constraints imposed by the data (see "maximum entropy principle").

    The Gaussian distribution is the maximum entropy distribution consistent with the means and covariances, equivalently, the first and second order moments:

    E(X_i), E(X_iX_j), 0 < i <= j < D+1.

    These are more than D(D+1)/2 constraints (tens of thousands to hundreds of thousands). Here E denotes the expected value and "constraint" means: the expected values E(X_i), E(X_iX_j) equal the sample means.

    If you want to capture the phenomenon of "fat tails" in a high dimensional space you must go further and impose constraints on all moments up to order 3:

    E(X_i), E(X_iX_j), E(X_iX_jX_k), 0 < i <= j <= k < D+1. (*)

    These are millions to hundreds of millions of constraints.
    But this is only a minimal approach to the "fat tails" in a high dimensional space.

    If you go up to moments of order 4 you have billions of constraints.
    Let us list three problems with which we are now faced:

    (A) Computation is beyond the ability of current computers.

    The maximum entropy distribution satisfying these constraints (*) has a known form but cannot realistically be computed explicitly. Even if I were to give it to you explicitly, the computational load is too high for simulating from this distribution.

    The same will be true for all other distributions satisfying the constraints: a distribution satisfying hundreds of millions of constraints will have hundreds of millions of intricate details (as many terms as constraints), each necessitating computation, so that you will be overwhelmed by the computational load.

    Recall that you want to generate at least tens of thousands of samples from this distribution, leading to hundreds of billions to trillions of floating point operations.

    (B) Insufficient data to support the constraints

    You will typically only have thousands of data points for the risk factors X. You cannot meaningfully compute hundreds of millions of constraints on the basis of thousands of data points.
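    To put numbers on that, here is a quick count of moment constraints by order; D = 500 risk factors and 2,500 observations (roughly ten years of daily data) are assumptions chosen to match the discussion above:

    # Counting moment constraints versus available data points.
    from math import comb

    D, n_obs = 500, 2500
    for order in (2, 3, 4):
        # number of distinct moments E(X_i1 ... X_ik) of each order k, summed up to the given order
        n_constraints = sum(comb(D + k - 1, k) for k in range(1, order + 1))
        print(f"moments up to order {order}: {n_constraints:,} constraints vs {n_obs:,} data points")
    # up to order 2: ~126 thousand; order 3: ~21 million; order 4: ~2.7 billion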

    For example: there will be lots of polynomials P of degree 3 in the variables X_1,…,X_D such that the equation P(X) = 0 is satisfied at each data point X.

    For each data point X the condition

    0 = P(X) = \sum_{1<=i<=j<=k<=D} a_ijk X_iX_jX_k (the X_i given)

    is a linear equation in the coefficients a_ijk, and there are a lot more coefficients than there are data points; that is, the a_ijk are the solutions of a system of linear equations with many more variables than equations — thus there are many such solutions.

    Fix such a polynomial P, add all the equations P(X)=0, one for each data point X, and divide by the number of data points to obtain the relation:

    \sum_{1<=i<=j<=k<=D} a_ijk E(X_iX_jX_k) = 0.

    In other words, the sample means E(X_iX_jX_k) computed from an insufficiently large number of data points (fewer data than constraints) satisfy lots of linear relations which the actual expected values of the moments of the distribution will not in general satisfy — they are simply artifacts of the data paucity.

    (C) Impossibility of simulating adequately many scenarios.

    But let us disregard all this and assume you can sample from the distribution of the risk factors X.

    You now have to compute the distribution of the portfolio returns R=R(X). Given a realization of the risk factors X, your valuation models compute the portfolio return R as a function of X. This function is not given by any explicit formula but rather by a horribly complicated algorithm (input X, output R(X)) applying all your pricing models to the respective positions in your book.

    It is safe to say the mathematical properties of the function R=R(X) are not at all well understood. Often the function R is not continuous (default events, …). It is a function of hundreds of variables:

    R=R(X)=R(X_1,X_2,…,X_D).

    After a standard reduction we can assume that 0 < X_j < 1, that is, X lives in the hypercube C of dimension D. We must now evaluate R at enough points X in the hypercube C to convince ourselves that we have seen everything that the portfolio can do to us, with the correct frequencies.

    Since we have no idea how the function R varies over C, we need to evaluate R at a very dense subset of points of C. An idea might be to choose 10 equidistant points on each edge of the cube, combine these into a grid of points in C and evaluate R at each grid point.

    This grid is not particularly dense. It contains 10^D points (note D is on the order of 500), that is, by many orders of magnitude more points than there are elementary particles in the universe.

    Since it is infeasible to evaluate R(X) at so many points, we choose the points X in C "randomly". Each evaluation of R(X) evaluates the entire portfolio and so is a considerable computational expense. We can realistically try for 10000 evaluations.

    This is literally nothing in the vastness of the hypercube C. To make this clear, suppose D=500 and you have computed R(X) not at ten thousand but rather at 10 trillion points X in the hypercube.

    To fix a unit of length, let's say the hypercube has edges which are one light year long. I now drop you off in the hypercube C with a telescope that allows you to see everything within 1 light year of yourself. In particular you can see all the way to each wall of the cube (but you cannot see the entire wall).

    What is the probability q that you will see at least one of the points X? The answer is: q is virtually zero (smaller than 1/10^100)! With probability practically equal to one you won't see a single one of the ten trillion points X.

    By contrast, in the three dimensional cube you are very likely to see at least 90% of the points X.
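    That claim about the 500-dimensional case is easy to check with a few lines; this is a sketch, and using the full unit-radius ball slightly overstates the visible region (it gets clipped by the cube walls), which only makes the probability smaller:

    # Checking the "telescope in a 500-dimensional light-year cube" claim, working in logs.
    # Volume of a unit-radius ball in dimension D is pi^(D/2) / Gamma(D/2 + 1); the cube has volume 1.
    import numpy as np
    from scipy.special import gammaln

    D = 500
    n_points = 1e13  # ten trillion sampled points
    log_ball_vol = (D / 2) * np.log(np.pi) - gammaln(D / 2 + 1)   # natural log of the ball volume
    log10_p_one = log_ball_vol / np.log(10)                       # chance a given uniform point is visible
    log10_p_any = log10_p_one + np.log10(n_points)                # union bound over all points
    print(f"log10 P(a given point is visible) ~ {log10_p_one:.0f}")        # about -368
    print(f"log10 P(any of 10^13 points is visible) <= {log10_p_any:.0f}")  # about -355, far below -100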

  19. Leo Kolivakis

    I will chime in here to talk to you about iceberg risk.

    When I was working at public pension funds, I was amazed to see how little discussion, if any, took place on systemic risk. The real estate group did their thing, the private equity group did their thing, the hedge fund group did their thing, and so on in all asset classes. And risk management was good at measuring risk but totally useless at managing risk.

    Let me explain that last point. You can have the best risk measurement system in the world, but it’s useless if you do not know when to step in and cut the allocations to a portfolio manager.

    I once considered allocating money to a hedge fund manager who used to work for Soros. When we talked about risk management, he told me, “I come from the Soros school of risk management, so I do not really believe in hard VaR rules for managing risk.”

    I did not allocate to this manager. I felt like telling him if you were so good, why did Soros fire you?

    Another large global macro hedge fund was heavily short bonds in late 2003, convinced that a new bear market in bonds had begun. I told them that deflation was on the horizon and that I would be long bonds.

    The difference with them is that they had tight risk management rules that averted catastrophic losses. They still lost a whack of dough, but it was not catastrophic.

    Now, getting back to iceberg risk. I was amazed at how, in asset mix meetings and board of directors meetings at pension funds, they almost never talked about systemic risk. What if all asset classes become correlated to one and all get clobbered at the same time?

    You don’t need to be a sophisticated quant to think about the linkages between public markets, private markets and hedge funds.

    Here was a simple scenario: Highly leveraged illiquid strategies in hedge funds explode bringing about forced deleveraging in public equities. Private equity funds then get clobbered with a lag. As the financial industry contracts, commercial real estate in New York and London gets clobbered as funds close up shop.

    Do you see where I am heading with this? A little good old-fashioned qualitative analysis, where you do scenarios focusing on the linkages between asset classes, is very important.

    Surprisingly, very few pension funds have risk management groups staffed with economists who can review the qualitative risks of the entire portfolio. CPPIB does it but very few others do it.

    I hope this helps,

    Leo

  20. Doc Holiday

    Re1: "Nobody wanted to talk about limitations and caveats. People wanted a scientific, objective, quantatitive method. And spare us the details, because we anyway don't understand them. With that kind of approach, things just had to go wrong! "

    > That's it in a nutshell and that comment could be directly shot at Moody's, S&P and Fitch, because they have models built on science which amounts to bullshit voodoo. Governments of the world which are stuffed to the gills with nepotism and inefficiencies then suggest they will regulate things like derivatives and risk, and even medicine and food interactions which they don't understand.

    We have the SEC, FTC, FASB, FOMC, Treasury, Congress and thousands of highly educated monkeys that interact in a dance of chaos, pretending that they understand the risks associated with a few hundred trillion bananas — this global society we have today simply is out of control, because the pirates that abuse power are way ahead of the curve and able to frontrun "intended" regulations related to risks. The same mentality that provides consumers with toxic shit like melamine, in dog food, baby food, candy, etc, are the same producers that provide us with toxic waste shit in the form of derivatives that are based on models that are designed to value shit like gold!

    Full disclosure: The author is undergoing stress and has just had cereal with almonds, sunflower seeds, oatmeal, bran flakes and milk. However, this opinion has not been edited and the author has no responsibility for anything and really doesn't give a crap about anything. Furthermore, the author is about to listen to this: http://www.youtube.com/watch?v=jfO1dzkD-E8

  21. maika13

    If you want to learn more about the practical implementation of heavy tails in risk management, here is a link below to a provider of risk solutions for funds of funds and various risk desks that explicitly account for “fat tails”. This is done through simulation-based VaR, which uses stable distributions instead of a Gaussian one.

  22. Luke Lea

    “Using the Wilson-Polchinski renormalization group and generalized Feynman graphs, B. Smii, H. Thaler and I have been able to provide a Borel summable asymptotic expansion for general Levy distributions with a diffusive component and a jump distribution that has all moments.”

    Ah, just what we need!

  23. vlade

    I’d agree that the normal distribution is a straw man. There is a different problem, though, and it is not finance-specific.
    Management in general (like all humans) wants certainty. Therefore, any numbers produced are deemed to be precise, and the more significant digits you can produce, the better.

    The failure to understand that a lot of the numbers produced (especially in finance) are just a rough baroque map, not a GPS navigation system, is unfortunately the most dangerous thing.

  24. economicdarwinism

    maika13,

    Finanalytica uses simulation, but you could actually compute VaR and ES in closed form once you have the stable parameters ;)

    But yeah, stable distributions are much better for risk management and I’m glad to see vendors like Finanalytica moving in that direction.

    Barra recently introduced some new risk tools based on extreme value theory. I suggested they use stable distributions instead, but they were concerned about “infinite variance”, which is a red herring in my opinion.

    Plus, stable distributions are beautiful mathematically AND extremely practical. The best of both worlds for a quant.
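    For anyone curious what swapping Gaussian for stable inputs does to tail measures, here is a univariate sketch using scipy’s levy_stable; the alpha/beta/scale parameters are illustrative, not fitted to any data:

    # Same rough scale, Gaussian vs. alpha-stable returns, and the effect on 99% VaR / ES.
    import numpy as np
    from scipy.stats import levy_stable, norm

    n = 100_000
    gauss = norm.rvs(loc=0.0, scale=0.01, size=n, random_state=42)
    stable = levy_stable.rvs(alpha=1.7, beta=-0.2, loc=0.0, scale=0.006, size=n, random_state=42)

    def var_es(returns, level=0.99):
        losses = np.sort(-returns)
        tail = losses[-int((1 - level) * len(losses)):]  # worst (1 - level) fraction of outcomes
        return tail[0], tail.mean()                      # (VaR, ES)

    for name, r in (("gaussian", gauss), ("stable", stable)):
        v, e = var_es(r)
        print(f"{name:8s}  99% VaR {v:.4f}   99% ES {e:.4f}")
    # The heavy stable tail makes ES pull away from VaR far more than in the Gaussian case.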

  25. Doc Holiday

    Garbage analysis input, CDOs and various derivatives output… and rubes are hired every day, and retarded investors will buy things if the package has attractive packaging and sales reps and those little finger sandwiches with smoked salmon — and don't forget the booze and music; short skirts help too. And time: don't forget that you have to rush into things that don't make sense!

    STATISTICS IN MEDICINE
    Statist. Med. 2007; 26:2919–2936

    Re: SUMMARY
    When a likelihood ratio is used to measure the strength of evidence for one hypothesis over another, its reliability (i.e. how often it produces misleading evidence) depends on the specification of the working model. When the working model happens to be the ‘true’ or ‘correct’ model, the probability of observing strong misleading evidence is low and controllable. But this is not necessarily the case when the working model is misspecified.

    Royall and Tsou (J. R. Stat. Soc., Ser. B 2003; 65:391–404) show how to adjust working models to make them robust to misspecification. Likelihood ratios derived from their ‘robust
    adjusted likelihood’ are just as reliable (asymptotically) as if the working model were correctly specified in the first place. In this paper, we apply and extend these ideas to the generalized linear model (GLM) regression setting.

    Re: The exponential adjustment factor A/B is simply the ratio of the model-based variance estimate to the robust variance estimate (sandwich estimator) using the ‘corrected’ data.

    To accommodate this type of model failure, we need to only slightly modify how the adjustment factor B is calculated. We do this in the same way that the ‘meat’ of the sandwich estimator is adjusted for clustering.

    Also see: The canonical correlation (don't go forgetting those little fuc-ers) is obtained as follows: the data are mapped to empirical uniforms and transformed with the inverse function of the Gaussian distribution. The correlation is then computed for the transformed data. The advantage of the canonical measure is that no distribution is assumed for individual risks. Indeed, it can be shown that a misspecification about the marginal distributions (for example to assume Gaussian margins if they are not) leads to a biased estimator of the correlation matrix.
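    That “empirical uniforms, then Gaussian inverse CDF” recipe is only a few lines; here is a sketch on synthetic data, where the heavy-tailed t margins are just an example:

    # Normal-scores ("canonical") correlation: rank-transform each margin to empirical uniforms,
    # map through the Gaussian inverse CDF, then correlate. No marginal distribution is assumed.
    import numpy as np
    from scipy.stats import norm, rankdata, t

    rng = np.random.default_rng(3)
    n = 5000
    # Synthetic dependent data with heavy-tailed t margins (Gaussian copula correlation 0.6)
    z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
    x, y = t.ppf(norm.cdf(z[:, 0]), df=3), t.ppf(norm.cdf(z[:, 1]), df=3)

    def canonical_corr(a, b):
        ua, ub = rankdata(a) / (len(a) + 1), rankdata(b) / (len(b) + 1)  # empirical uniforms
        return np.corrcoef(norm.ppf(ua), norm.ppf(ub))[0, 1]

    print("naive Pearson correlation:", np.corrcoef(x, y)[0, 1])
    print("canonical (normal-scores) correlation:", canonical_corr(x, y))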

    > I have no clue why I have this stuff, but maybe I should start a ponzi scheme, or take over the world?

  26. Doc Holiday

    You vant tails, here yah go dude:

    A complete and user-friendly directory of tails of Archimedean copulas is presented which can be used in the selection and construction of appropriate models with desired properties. The results can be synthesized in the form of a decision tree. Given the values of some readily computable characteristics of the Archimedean generator, the upper and lower tail of the copula can be classified into one of three categories each, one corresponding to asymptotic dependence and the other two to asymptotic independence. For a long list of one-parameter families, the relevant tail quantities have been computed so that the corresponding classes in the decision tree can easily be determined.

    http://alfred.galichon.googlepages.com/riskworkshop-abstracts

    Please, for God’s sake, don’t friggin’ forget this crap here:

    Copula Calibration
    The Statistics Toolbox software includes functionality that calibrates and simulates Gaussian and t copulas.

    Using the daily index returns, estimate the parameters of the Gaussian and t copulas using the function copulafit. Since a t copula becomes a Gaussian copula as the scalar degrees of freedom parameter (DoF) becomes infinitely large, the two copulas are really of the same family, and therefore share a linear correlation matrix as a fundamental parameter.

    Although calibration of the linear correlation matrix of a Gaussian copula is straightforward, the calibration of a t copula is not. For this reason, the Statistics Toolbox software offers two techniques to calibrate a t copula:

    http://www.mathworks.com/products/econometrics/demos.html?file=/products/demos/shipping/econ/americanbasketdemo.html

    Have a nice day!

  27. Doc Holiday

    I thought I’d make an observation on this and see if anyone has an opinion. It seems to me that it’s interesting that there are ways to calculate and model a Treasury yield curve, and thus the associated underlying debts and obligations of Treasury, yet there seems to be no realistic way to model the gaming component of equities, which are now highly linked and correlated to Treasury instruments; i.e., it would seem that all these models are broken and manipulative, with falsified inputs and chaotic outputs.

    I have to point a finger once again at FASB and the accounting fraud they helped engineer, along with rating agency models that were built with false and misleading data, and Treasury for backing this up, along with FTC, DOJ, Congress and the full collusion support of all Wall Street entities that thrive on illusions.

    The stress tests that have just been superficially biased will do nothing for anyone or any model, so in this half-assed attempt to buy time, where is the relationship between Volcker and Obama in regard to truth, honesty and change? If we allow a group of crooks to continue to falsify data and produce falsified models, we will undermine our society to a point where it will collapse sooner than later.

    The concept of nationalizing the banks has to begin ASAP and there must be a massive effort to build an army of financial engineers to examine this systemic collapse. This period of financial terrorism by wall street is a challenge that is essentially an economic war which our president and congress are failing to address; this is Pearl Harbor, this is an attack by a group of people that for all practical purposes is not unlike The Nazis!

    See: The Manhattan Project was the project, conducted during World War II primarily by the United States, to develop the first atomic bomb. Formally designated as the Manhattan Engineer District (MED), it refers specifically to the period of the project from 1942–1946 under the control of the U.S. Army Corps of Engineers, under the administration of General Leslie R. Groves. The scientific research was directed by American physicist J. Robert Oppenheimer.

    Where is our leadership and why are we not seeing this corruption for what it really is??? We need an army of financial engineers to clean house ASAP!

  28. redst8r

    Yves: this post and especially the comments are why I read this blog. Awesome stuff!

    I do not have any expertise to add to the comments. But I did note a sub-theme of sorts: namely, that top management is clueless and doesn’t really care about, or can’t understand, risk management regardless of the veracity of the risk metric.

    Q.#1: Why is top management so clueless? Adjunct query: how did so many clueless people get to be top managers?

    There are so many brilliant people around, as this blog, its comments and links demonstrate, so how did so many clueless people get into such high levels of management? (wink)

    Q.#2: The wink. Isn’t it just possible that the top management isn’t clueless but that their objectives have diverged from the nominal objective of creating and managing a well run institution?

    As at least one comment noted, when Countrywide was raking in the dough quarter after quarter, their competitors could not resist the lure and dropped risk management practices as fast and as far as they could.

    Thus, while I enjoyed and read the blog, all the comments and some of the links, I now wonder why. Getting better educated on risk metrics and management seemed useful and interesting. Especially so since, as a small (tiny really) investor, it is required for my survival. After all, I am small enough to fail.

    But for the large institutions that this topic is likely oriented to a better risk metric seems ever so much like a better navigation system for the Titanic II when what is actually needed is a more responsible captain and owner.

    Thanks again for a great post.

  29. maika13

    economicdarwinism,

    I’m not quite sure about the possibility of deriving a closed-form expression for the stable distribution VaR or ETL (Expected Tail Loss, aka CVaR), but it could be that you’re right, at least for the univariate case.

    Still, I doubt it’s possible to come up with a general closed-form solution – and thus to avoid the Monte Carlo simulations – for the cumulative stable VaR or ETL of a portfolio which is made up of multiple positions (risk factors); especially if you impose more complicated dependence structures on the positions (risk factors), like a stable t-copula or dependent subordinators, where the latter are meant to capture the dependences in the tails more accurately.

  30. dudly

    Just WOW, so many things, so well stated by all. gpp, your emphasis on distribution and dimension is at the heart of my hobby models (spending a small fortune on a new computer just to increase dimension — love SSDs).

    Thanks also to Doc H, you always bust me up with your empirical chicanery. I put you up there with three madmen I used to know at TRW — try walking around a corner and finding them having an astrophysics argument in Three Stooges prose. Hahahaha, good times.

    So to boil it down: people (uninformed citizens) are getting the wool pulled over their heads, by people wearing nylon over theirs, via obfuscation (no PhD, no comprehension) — hell, even most of the Masters (CEOs, CFOs, etc.) are blind or choose this position as it gives them an out.

    skippy…go MIT with your amino acid quantum computer…it can count to 4, one atom…exponential growth starts small, but big leaps are down the road.

  31. trelsco

    For a typical investment bank the array of products and valuation techniques is very broad and it is not reasonable to expect senior management to be across them all in any depth. Instead the responsibility of the boards and senior management is to make sure people are managed and incentivised to do the right thing for the firm and its shareholders.

    I believe redst8r gets to the point. If you as a senior manager are getting very well rewarded in the short term for taking excessive risk, and everyone else on the street is doing the same thing, what is your motivation to dampen things down?

    In my view, there is a large portion of blame for senior management that did not show the leadership and independence to set the structure and incentives of their firms to build long-run firm value. Following the herd was easier, particularly if you are well paid, even if your shareholders lose their shirts.

  32. dzaebst

    Hey, thanks all. Good posts.

    economicdarwinism notes that VaR blind spot as a reason CDOs became so popular.

    Does anyone have an idea of how common it is for large firms to allocate capital for investments by low VaR?
    Is there any way to get data on VaR and capital allocation?

    It’s a very interesting idea that a VaR blindspot caused the problem. I’d like to look into it. However, I’m guessing that the data is only accessible inside the firms and not in public records.

  33. t

    Q: Does CFA Training address these Issues?
    A: No.

    I sat the CFA in ’02 – ’04. I don’t know how the syllabus or reading list (which no-one actually reads) has changed since then, but at the time, it barely mentioned the themes in your post at all.

    There was some mention of VaR, a few basic calculations, and a list of disadvantages thereof, like correlations changing over time, but nothing in great detail.

    Far more of the CFA syllabus is taken up with CAPM as the foundation of financial markets, with all that belief in rationality. I’d argue that’s the opposite of what should be taught.

  34. JTM

    Regarding “Do sales guys understand the risk in complex instruments?” My short answer is that after scanning all the comments I see only one answer addressing this question. Sales guys don’t read blogs like this. They read restaurant reviews.

    A background on my perspective: I recently resigned from a structured products sales role at a top five institution in such things by volume. I’m currently starting my own firm in the dubious field of risk management, specifically for agricultural businesses. My thoughts on risk:

    1. Yes, risk is commonly defined as volatility. Look at where that got us. Remember that vol is a risk metric, but not risk itself.

    2. After sitting next to a correlation desk for some time, I’m fully convinced that there is very little concept of the difference between risk and uncertainty. Risk increases with uncertainty, but they are not the same thing. Uncertainty is, by definition, unquantifiable, though I doubt it is immeasurable.

    3. Statistical tools such as VaR are very helpful. I wish I had the time or a quant to build a standard VaR model for me to use. The problem with such statistical measures is that they instill a false confidence in those who don’t understand the concepts, and it turns out that is most financial professionals.

    4. A prior poster summed it up well when he stated that while a weatherman can be accurate on a 5-6 day forecast, he’ll never get it right further out. Exposure to uncertainty increases with time, and the only way to appropriately measure risk is to keep your finger on the pulse every day and try to catch inconceivable events as they cross the horizon. There are market forces that we cannot yet quantify, and until then they can only be measured by the type of gut feeling that to me seems highly correlated with individuals who have the strength to wake up every morning and ask themselves why everything they believed yesterday is wrong today.
