Hoisted From Comments: Greater Liquidity Produces Instability


Below is a provocative line of thought from an anonymous reader. It supports a gut feeling that I have been unable to prove, namely, that the lowering of boundaries between markets (evident in everything from the proliferation of global macro hedge funds to the large number of retail currency speculators in Japan) is destabilizing. I’ve found the occasional supporting bit of empirical evidence (for instance, Kenneth Rogoff’s and Carmen Reinhart’s recent paper on financial crises, which found that greater financial integration was correlated with crises) but no theories. Conventional economic wisdom would tell you arbitrage is always and ever good (it supposedly improves price formation, which leads to better allocation of capital) and inefficiencies are bad. However, complex systems theory provides a very different perspective:

Perhaps a lesson to be learned here is that liquidity acts as an efficient conductor of risk. It doesn’t make risk go away, but moves it more quickly from one investment sector to another.

From a complex systems theory standpoint, this is exactly what you would do if you wanted to take a stable system and destabilize it.

One of the things that helps to enable non-linear behavior in a complex system is promiscuity of information (i.e., feedback loops but in a more generalized sense) across a wide scope of the system.

One way you can attempt to stabilize a complex system by suppressing its non-linear behavior is to divide it up into little boxes and use them to compartmentalize information, so signals cannot propagate quickly across the entire system.

This principle has been recognized in the design of software systems for several decades now, and it is also a design principle recognizable in many other systems, both natural and artificial (cf. biology, architecture), which are very robust with regard to exogenous shocks. Stable systems tend to be built from structural hierarchies which do not share much information across structural boundaries, either laterally or vertically. That is why you don’t die from a heart attack when you stub your toe, your house doesn’t collapse when you break a window, and if your computer crashes it doesn’t take down the entire internet with it.

Glass-Steagall is a good example of this idea put into practice. If you use regulatory firewalls to define distinct investment sectors and impose significant transaction costs at their boundaries, that will help to reduce the speed and amplitude of the signals which propagate from one sector to another, so a collapse in one of them will be less likely to cause severe problems in the others.
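To make the compartmentalization idea in the last two paragraphs concrete, here is a minimal sketch (not from the reader’s comment; the sector count, coupling strength, and firewall attenuation are invented numbers) of a shock spreading through a fully connected set of sectors versus the same shock in a system carved into firewalled boxes:

```python
# Illustrative toy model only: how compartment boundaries that damp cross-boundary
# signals ("regulatory firewalls") limit the spread of a shock. All parameters
# (sector count, coupling, attenuation) are made up for the sketch.

def average_stress(shock=1.0, n_sectors=12, coupling=0.6,
                   n_compartments=1, firewall=1.0, steps=10):
    """Propagate a shock; links crossing a compartment boundary are scaled by `firewall`."""
    stress = [0.0] * n_sectors
    stress[0] = shock                                   # the shock starts in sector 0
    box = [i * n_compartments // n_sectors for i in range(n_sectors)]
    for _ in range(steps):
        new = stress[:]
        for i in range(n_sectors):
            for j in range(n_sectors):
                if i == j:
                    continue
                damp = 1.0 if box[i] == box[j] else firewall
                new[j] = max(new[j], stress[i] * coupling * damp)   # worst incoming signal wins
        stress = new
    return sum(stress) / n_sectors

if __name__ == "__main__":
    print("no compartments:          ", round(average_stress(), 3))
    print("4 firewalled compartments:", round(average_stress(n_compartments=4, firewall=0.2), 3))
```

In the fully connected case the shock raises stress across the whole system; with a few firewalled boxes, sectors outside the shocked compartment see only a heavily attenuated signal.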

It worries me that we’ve torn down most of these barriers in the last several decades in the name of arbitrage, forgetting that the price we paid for them in inefficiency was a form of insurance against the risk of systemic collapse. This is exactly what I would do if I wanted to take a more or less stable, semi-complex system and drive it in the direction of greater non-linearity.

I think this was to some degree inevitable – it is a symptom of the decay and loss of trans-generational memories from our last great systemic shock in the 1930s. I suspect that something like this is bound to happen every 3-4 generations as we unlearn the lessons our grandparents and great-grandparents learned to their cost.


22 comments

  1. CrocodileChuck

    Great post by Anonymous. The study of natural systems would seem to be essential for our regulators and elected representatives alike.

    Re: his point on causation and inter-generational memory loss: see ‘Generations’ (1991) by William Strauss and Neil Howe on the rhythms of history. By the authors’ own framework, we appear to be headed into another phase of ‘secular crises’. Worth a read, if only for a different perspective on American history from 1584 to the end of the eighties.

    CrocodileChuck

  2. Anonymous

    Agree with everything Anonymous says – it’s just good systems analysis. Maybe another way to look at it is that Liquidity is NOT equal to Commitment. If it is easy to sell/trade, then you have no commitment to it. If an originator of an instrument had to keep it and live with it, you could depend on most financial instruments being very different.

  3. reason

    As a software engineer, I have often felt this is the great problem with using outside contractors to produce software – they write it, then walk away from it. If they were responsible for maintaining and answering for what they produce, I’m sure the outcome would be different.

  4. alan greenspend

    Wonderful post. My sentiment as well, especially concerning trans-generational memory for the masses. For old money, savvy investors and those wishing to loot, it has always been a way to profit from those less historically studious.

  5. Anonymous

    I like the line of reasoning in the post – the thinking of a true engineer. Just to play devil’s advocate for a moment though, don’t we also think that all great things, all novelty and progress, come precisely from those complex systems where information is not tightly controlled and is allowed to experience unexpected ‘promiscuity’? For example, would we even be having this conversation if the computer had remained a stable, tightly controlled system run by the mainframe geeks at IBM? A book I came across recently brings up this question:
    http://yupnet.org/zittrain/

  6. Peripheral Visionary

    One specific reason that greater liquidity could produce greater instability is regulatory arbitrage. While price arbitrage can be an effective mechanism for price discovery, and therefore for an efficient market, regulatory arbitrage is a very different animal, and is a veritable fountainhead of unintended consequences.

    Specific case in point: the gasoline subsidies many oil-producing nations provide their people (Nigeria being one example). “Arbitrageurs” (a.k.a. black marketeers) load up fuel trucks with subsidized gas and drive them across nearby borders, selling gas at below-market prices in neighboring countries. Subsidy expenses soar, and gasoline retailers in neighboring nations have to compete with gasoline priced at below cost.

    The fundamental issue is the mentality that free markets, and therefore open borders, are always and everywhere a good thing. That is too simple a conclusion to be correct, and it’s taken time to start identifying situations where the opposite is true. Of course, anyone who questions the prevailing policies of low barriers to trade will be labeled a protectionist, but as a wise man once said, “good fences make good neighbors.”

  7. HBL

    Wow, the complexity theory and stability perspective is fascinating.

    The loss of trans-generational memory idea aligns well with the Kondratiev Cycle theory and its seasons… I have been wondering for a while about whether Kondratiev’s theories could see a similar leap to increased prominence in the way that Minsky’s have.

    Anon 9:27am – Computers still are tightly controlled systems, but from a glance at that link it appears the book’s focus is more on network effects in how we use computers. And it could certainly be argued that our networked society *IS* destabilizing in terms of rapid changes to our societies and our planet, whether or not you believe the positives outweigh the negatives.

  8. Nicholas Mycroft

    It is quite clear from reading Charles Kindleberger’s -Manias, Panics and Crashes- that financial globalization is correlated with financial crises. Lots of them in the globalizing 19th century, none during the de-globalizing World War II and Cold War. They began to occur again as soon as the Cold War ended.

  9. a

    I guess I’m in a bad mood, but I don’t see what the complex-system theory has to do with liquidity. Sure, breaking up a complex system into smaller systems as the post describes makes things more stable (you don’t want 9/11s? don’t put people into cities…), but what does that have to do with liquidity?

  10. Anonymous

    The post seems to be a restatement of George Soros’s theory of reflexivity. Soros stated that in good times credit and regulation get looser, amplifying booms. Then when the bust happens, credit tightens along with regulation. In other words, the credit/regulatory cycle exaggerates the cycles seen in the real economy and in the equity markets.

    Sane regulation applied consistently correlates with having the punchbowl taken away before booms get out of control.

  11. ramster

    Anon 9:27am: I think it’s an unavoidable trade-off. A system that enables high efficiency and/or innovation comes with inherent systemic risk. Consider a few examples that don’t quite fit into the control systems-ish example of the post but are still germane:

    1) just-in-time manufacturing: Avoid the cost of large inventories by getting raw materials and components into your manufacturing plant just as you need them. This is great for efficiency but exposes you to all sorts of exogenous events (road closures or traffic, strikes at component manufacturers, bad weather, etc.) – see the toy sketch at the end of this comment.

    2) reducing hospital beds because most of the time they sit idle. That’s great until the 1/10 or 1/100 probability event occurs (e.g. SARS or a flu pandemic) and then you’re screwed.

    In general, we need to move away from a mindset that extols efficiency and innovation without acknowledging their associated risk. And for mission critical systems (i.e. the financial system, which is basically the circulatory system of capitalist economies), avoiding risk must be prioritized above all.
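    A toy sketch of the just-in-time tradeoff from example 1 (illustrative only – the disruption probability, buffer size, and run length below are invented, not from the comment):

```python
# Toy model of the just-in-time tradeoff: a small inventory buffer has a carrying
# cost, but it absorbs supply disruptions. All numbers here are invented.

import random

def uptime(buffer_size, days=1000, disruption_prob=0.05, seed=42):
    """Fraction of days the line runs, given random one-day supply outages."""
    rng = random.Random(seed)
    inventory = buffer_size                             # start with a full buffer
    produced = 0
    for _ in range(days):
        if rng.random() >= disruption_prob:             # normal day: one unit delivered
            inventory = min(buffer_size + 1, inventory + 1)
        if inventory >= 1:                              # run the line if material is on hand
            inventory -= 1
            produced += 1
    return produced / days

if __name__ == "__main__":
    print("just-in-time (no buffer):", uptime(buffer_size=0))   # roughly tracks delivery uptime
    print("three-day buffer:        ", uptime(buffer_size=3))   # rides out isolated disruptions
```

The bufferless line stops every time a delivery fails; even a small buffer absorbs isolated disruptions, at the cost of carrying idle inventory.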

  12. ThatLeftTurnInABQ

    a said: “I guess I’m in a bad mood, but I don’t see what the complex-system theory has to do with liquidity.”

    Speaking as the commenter quoted by Yves, I’m not an econ/finance person (my background is in the earth sciences and software engineering), so I probably have it wrong; please correct me. My thinking was that the problem was with how the liquidity was used: to make fungible things which previously were not (like houses and MBS), which was part of what I read into Yves’ top-level post yesterday.

    IMHO making things fungible creates a stronger dependency between them, which is where systems theory enters the picture (see below for clarification of the terms). I would say that excess liquidity was an enabling factor in the breakdown of structural barriers (e.g. the securitization of mortgages) which previously gave our economy a compartmentalized character (and hence systemic stability). In this sense liquidity acted like a solvent. This breakdown of barriers might have occurred anyway, but the high levels of liquidity we recently enjoyed eased and accelerated the process.

    Please deconstruct this argument as I’m speculating well outside my areas of expertise here.

    Yves,
    Wow – I’m humbled and honored to be quoted like this (the anon comment from the Liquidity thread was mine and I suppose I should have used a handle).

    Anon at 9:27 raises a good point with: “all novelty and progress, come precisely from those complex systems where information is not tightly controlled”.

    I didn’t use very precise terminology in my original comment because I didn’t want to weigh it down with a bunch of object oriented programming jargon, so let me clarify something here:

    Tight control over information isn’t really the goal when designing a stable system; what you aim for is controlling the unwarranted and counterproductive proliferation of dependencies between different parts of the system. This is because when one part of a system depends on another part, and that part in turn depends on yet another (and so on through the system), this chain of dependencies (or “tight coupling”) creates a path for a failure which starts in only one part to cascade across the system, causing greater harm than it would have if the system had been more compartmentalized.

    It is possible to pass information between different parts of a system in a way that minimizes their dependencies – in the programming literature this is called “loose coupling” – and when combined with strong cohesion (all the stuff grouped together in the same part of the system is strongly related), this is a good thing: it still allows a free flow of information, but in a more stable manner. The architecture of the internet is a good example of this principle at work.

    An example of tight coupling from our current credit crisis would be the move by the monoline insurers into the mortgage backed securities business. They created a dependency between two somewhat unrelated financial sectors (muni bonds and residential mortgages) which were weakly coupled to each other before, such that now a city in one part of the country may have problems issuing bonds on account of declining home values in a geographically remote and economically unrelated part of the country, when from a systems standpoint there was little reason why these things needed to be connected. This is an example of tight coupling with weak cohesion, which is how you create an unstable system.
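    A minimal, hypothetical code sketch of that distinction (the class and method names below loosely echo the muni/mortgage example but are invented for illustration, not drawn from any real system): tight coupling lets a failure in one component propagate directly into another, while a loosely coupled design talks through a narrow interface and can degrade gracefully.

```python
# Hypothetical illustration of "tight" vs. "loose" coupling; all class and
# method names are invented for the sketch.

class MortgageBook:
    def current_losses(self) -> float:
        raise RuntimeError("mortgage book is imploding")    # the failing component

# Tight coupling: the muni desk reaches directly into the mortgage book,
# so the mortgage failure becomes a muni failure too.
class TightlyCoupledMuniDesk:
    def __init__(self, mortgage_book: MortgageBook):
        self.mortgage_book = mortgage_book

    def price_bond(self) -> float:
        spread = self.mortgage_book.current_losses()         # exception cascades to here
        return 100.0 - spread

# Loose coupling: the muni desk depends only on a narrow "risk signal" callable
# and falls back to a conservative default when that signal fails.
class LooselyCoupledMuniDesk:
    def __init__(self, risk_signal):
        self.risk_signal = risk_signal                       # any callable returning a spread

    def price_bond(self) -> float:
        try:
            spread = self.risk_signal()
        except Exception:
            spread = 5.0                                      # degrade gracefully, no cascade
        return 100.0 - spread

if __name__ == "__main__":
    book = MortgageBook()
    print(LooselyCoupledMuniDesk(book.current_losses).price_bond())   # prints 95.0
    try:
        TightlyCoupledMuniDesk(book).price_bond()
    except RuntimeError as err:
        print("tightly coupled desk failed as well:", err)
```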

    Another example would be our globalized system of agriculture, which is now putting people in 3rd world countries on the brink of starvation in part because Iowa corn farmers are paying more for oil based inputs.

    Tight coupling plus weak cohesion = avoidable problems.

    The problem that I see with the very high levels of liquidity and the reduction of arbitrage opportunities we’ve enjoyed recently (and really globalization more generally) is that they were used to create all sorts of new dependencies between different sectors of our economy which were weakly coupled before and are now tightly coupled. Much of the slack (i.e., the arbitrage opportunities) which existed before has been taken out of our system, and now shocks can propagate freely from one part of the system to the others with few barriers to stop or attenuate them. A perhaps bad analogy would be that it is as if somebody made money removing all of the shock absorbers from our cars because “they weren’t adding any value” – something which appears to be true right up to the point where you hit a big pothole in the road, at which point it would be nice to have them back. Too late.

    CrocodileChuck said:
    The study of natural systems would seem to be essential

    I can’t echo this strongly enough. Natural systems have much longer timeframes than economists are used to looking at, so they provide an intellectual framework for thinking about grey swan and black swan events (which in some natural systems are the norm, and perform most of the “work” of the system) that is sorely missing right now.

    For example, anyone familiar with the dilemma the US Forest Service is currently grappling with regarding its legacy of excessively zealous fire suppression since 1910 (in the absence of the frequent small ground fires which would normally have occurred, the net fuel load in our forests has built up to unnatural and dangerous levels, making very large and destructive crown fires much more likely) could have told you that the various recession and credit crisis suppression efforts conducted by the Fed under Greenspan were going to add up to a much larger crisis down the road.

    IMHO small recessions and small credit crises function in our economy much like small, frequent ground fires function in a forest ecology – by clearing out things that need to be removed periodically (like bad business practices) they act to maintain the health (i.e. long-term stability) of the system, as long as they occur frequently and have small amplitudes. Greenspan tried to suppress their frequency (much like the US Forest Service spent the middle part of the 20th century putting out every fire it could get to), and the net effect is that we’ve converted a large number of small-amplitude events into a single large-amplitude event doing the same amount of work. IMHO this was not a good trade, and it might have been avoided if someone whose intellectual background included thinking about longer time horizons (and the low frequency-high amplitude events that they imply) had been in charge.
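    A back-of-the-envelope sketch of that fuel-load arithmetic (illustrative only; the buildup rate and thresholds are invented numbers, not anything from the comment): a regime that allows frequent small corrections versus one that suppresses them until a single large release.

```python
# Invented toy model of the fire-suppression analogy: stress ("fuel") accumulates
# every period, and a correction releases it once it crosses a threshold. A low
# threshold means many small events; a high threshold (suppression) means one big one.

def simulate(release_threshold, periods=100, buildup_per_period=1.0):
    """Return the sizes of all releases ('corrections') over the run."""
    fuel, releases = 0.0, []
    for _ in range(periods):
        fuel += buildup_per_period
        if fuel >= release_threshold:
            releases.append(fuel)
            fuel = 0.0
    return releases

if __name__ == "__main__":
    frequent = simulate(release_threshold=5)     # small corrections allowed to happen
    suppressed = simulate(release_threshold=80)  # corrections suppressed for a long time
    print("frequent regime:  ", len(frequent), "events, largest =", max(frequent))
    print("suppressed regime:", len(suppressed), "events, largest =", max(suppressed))
    # Roughly the same total "work" is done either way; the suppressed regime just
    # does it all at once (plus whatever stress is still accumulating at the end).
    print("total released:   ", sum(frequent), "vs", sum(suppressed))
```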

  13. Ingolf

    Your fire suppression analogy seems to me exactly right.

    As you point out, thinking about the financial system (and indeed the broader economy) in this fashion helps us to realise that risks and difficulties can’t be done away with and that attempts to do so will eventually bring far greater catastrophes.

    Complex, self organising systems tend, I think, to be highly stable and creative. To endure, adapt and prosper, however, they must have constant real world feedback. In the case of our financial architecture, that loop has for decades been clogged and distorted. Implicit or explicit central bank guarantees and various government support schemes have over time fundamentally altered the perception of risk. This, in turn, together of course with central bank accommodation, has enabled credit growth to far exceed all historical parameters.

    Put simply, it seems to me that if the financial system is to be deregulated, then participants must not be saved from their own errors or foolishness. If the political decision is made that all manner of protection is to be provided, then fairly stringent regulation is essential. What we have seems to me the worst of both worlds.

    As for “liquidity”, I wonder if it isn’t a bit of a red herring in this discussion. The word can mean so many things to different people and in different contexts that it may confuse rather than illuminate. If it is to be of any use at all, I suspect each of us who trots it out as either a prop or a target should first define the sense in which we’re using it.

  14. Jennifer

    Thinking back to my control systems classes, an underdamped system (too much feedback, or as thatleftturn says interdependencies) is unstable, but an overdamped system is inefficient.

    Perhaps damping is akin to regulation, and dropping Glass-Steagall removed the last little bit of damping, which led to a big squealing feedback event. :-)
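    For readers without the control-systems background, here is a small numerical sketch (not from Jennifer’s comment; the natural frequency and time step are arbitrary choices) of what under- and overdamping look like in a standard second-order system responding to a step input:

```python
# Illustrative step response of the standard second-order system
#   x'' + 2*zeta*wn*x' + wn^2 * x = wn^2 * u,   u = 1 (unit step).
# zeta < 1: underdamped (overshoots and rings); zeta > 1: overdamped (sluggish).
# The natural frequency wn and the time step are arbitrary values for the sketch.

def step_response(zeta, wn=1.0, dt=0.01, t_end=20.0):
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = wn * wn * (1.0 - x) - 2.0 * zeta * wn * v   # acceleration toward the setpoint
        v += a * dt                                     # simple semi-implicit Euler step
        x += v * dt
        peak = max(peak, x)
    return peak, x                                      # max overshoot, value at t_end

if __name__ == "__main__":
    for zeta in (0.2, 1.0, 3.0):
        peak, final = step_response(zeta)
        print(f"zeta = {zeta}: peak = {peak:.2f}, value at t = 20 is {final:.2f}")
```

The underdamped case overshoots well past the setpoint before ringing down, while the heavily overdamped case never overshoots but is still creeping toward the target at the end of the run.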

  15. RN

    A fascinating and enlightening conversation.

    It occurs to me that when you couple more closely, as Anonymous describes, in a system rife with feedback paths, you also get a de facto increase in leverage which can serve as a multiplier.

    Scary stuff.

  16. YK

    Liquidity, in both physical and monetary terms, refers of course to the ability to flow easily. So if you take the analogy with the physical world, the larger the body of liquid, the larger the waves that can be generated in it. If you think about instability as waves (e.g. asset price inflation waves), the observation becomes sort of obvious. You can take the observation further by thinking about wave breakers, and what can serve as wave breakers in the financial markets: regulation, market deglobalization, etc.

  17. ThatLeftTurnInABQ

    Thinking back to my control systems classes, an underdamped system (too much feedback, or as thatleftturn says interdependencies) is unstable, but an overdamped system is inefficient.

    Jennifer,

    This is a good way of putting it. I don’t want to go too far overboard in advocating for an overdamped system. It seems obvious to me that one can go too far in either direction, and engineering a system (financial or otherwise) for stability alone without regard to efficiency is a bad idea. There are tradeoffs to be made between the two in search of a useful balance and therein lies the real art of it.

    I think the most difficult part of this problem is that in seeking to balance the two we often are judging the merits of things that play out on significantly different time scales. Efficiency arguments tend to address shorter term concerns, whereas systemic stability arguments are really addressing the impact of low frequency-high amplitude events (or black swans), and having enough expertise and wisdom to judge well in both domains can be a stretch in terms of background and outlook. I would say this is something of an intellectual (rather than moral) hazard problem.

    I expect that the relative importance which each generation assigns to stability vs. efficiency arguments is colored by their personal and collective experience, which is where trans-generational memory comes into play. As a consequence it seems reasonable to me to expect a sort of rough-and-ready periodicity in the historical patterns we see in how these factors are weighed and tradeoffs made.

    It makes sense to me that the current credit crisis is causing some people to rethink the relative value of financial stability vs. efficiency, perhaps to a degree we haven’t seen since the 1950s. If we can construct a more stabilizing regulatory environment for our economic system without descending into something as painful as the Great Depression of the 1930s, that would be a very good outcome IMHO – an example (to paraphrase Bismarck) of benefiting by learning from other people’s mistakes. I hope our current situation plays out this way. When reading about the current crisis on this blog (and others), I keep thinking of the question that Niels Bohr posed to Oppenheimer upon his arrival at Los Alamos in 1943 (as quoted by Richard Rhodes in the epilogue of The Making of the Atomic Bomb, p. 778): “Is it really big enough?” (i.e. to change our way of thinking).

  18. Anonymous

    Yves:
    There is a general review article on complex systems in Science, 19 October 2007, 318: 410-412, which expands on the general discussion. It suggests that systems of sufficient complexity have layers with emergent properties.
    I thought to cite it in October but couldn’t tie it to anything concrete. But the understanding of emergent properties goes further than the programming example given above. It suggests that you cannot just “scale up” living systems. A trivial example: one can closely study mouth bacteria floating around in a liquid, but you cannot from that predict that, given the correct conditions, they will come together and form the protective scale we can only remove by physical abrasion with brushing.

    The social sciences seem more loath than the biological ones to accept this understanding. I am not knowledgeable enough in economics to know whether it respects emergent properties.
    But it seems to me likely that the complex economic models we have seen fail recently are in fact victims of a reductionist fallacy. They cannot be fixed by “tweaking” the current model. The current economic crisis can be seen as easily the most expensive scientific experiment ever performed.
    plschwartz

  19. Anonymous

    Yves:
    Regarding the last post.
    I realize that Donald Rumsfeld had an experiment of similar cost when he sought to test his theory on “the reaction of the Iraqi people to an imposed democratization”
    plschwartz

  20. Anonymous

    Sorry for the tardiness of this comment, but I just got a chance to read this post today. In his book A Demon of Our Own Design, Richard Bookstaber examines a similar issue. He too looks to the physical sciences and engineering, and draws on the concept of “tight coupling”, which is interesting in the context of this discussion. The Santa Fe Institute has a large body of work on complex systems.

    I also think there are some very strong parallels that can be drawn between the current state of finance and physics a half century or so ago. Finance theory is currently dominated by large, comprehensive theories that appear to work well “on average”, under the “right assumptions”, and in the “long run” (read: the U Chicago school of thought). These ideas are somewhat similar to Einstein’s theory of relativity. The concepts of quantum physics showed that this grand theory did not work at the micro level. Likewise, we seem to continually rediscover that these grand financial theories do not work in any specific instance. It would seem that finance would benefit from the introduction of the equivalent of quantum physics, or any theory inherently based on uncertainty, to explain individuals’ behavior in aggregate.

  21. binaryoptions

    To ThatLeftTurnInABQ, the commenter quoted by Yves: you said:

    …an example of tight coupling with weak cohesion, which is how you create an unstable system.

    then a couple lines down, you stated the following:

    Tight coupling plus weak cohesion = avoidable problems

    How then does an unstable system equal avoidable problems? I don’t understand this relationship.
