Guest Post: Does Connectivity in the Financial System Produce Instability?

With the financial system on the exam table, it has been more than a bit troubling that certain questions are neglected in serious academic/policy debates.

The discussion of possible remedies focuses on regulatory solutions: everything from requiring mortgage brokers to be licensed to increasing financial institution capital requirements and having much greater harmonisation, as the Brits like to put it, of banking and brokerage firm oversight.

While these measures individually and collectively could be salutary, no one seems willing to consider the fundamental question: did the push to facilitate the free flow of capital, both domestically and across borders, play a role in this crisis? For the last 15+ years, the push in policy has been towards increased efficiency, which has meant lower transaction costs, less supervision, and little interest in considering whether so-called innovations benefit anyone beyond their purveyors. (Martin Mayer observed that “A lot of what is called innovative is simply a way to find new technology to do what has been forbidden with the old technology.”)

It’s important to examine this question, because many in this society have come to believe that regulation is bad and ever to be avoided. Yet markets like the equities markets, where participants trade an ambiguous promise anonymously, depend on regulation. Thus, the question should be, “What level of regulation is optimal?” rather than “How much regulation can we eliminate?” The problem with the latter approach is that it can take years for problems to develop, and when they do show up, since the tools to stop them have been thrown away, a full-blown crisis has to develop before corrective measures are implemented.

Some evidence suggests that free capital flows in and of themselves produce instability and crises. A recent paper by Kenneth Rogoff and Carmen Reinhart found that

Periods of high international capital mobility have repeatedly produced international banking crises, not only famously as they did in the 1990s, but historically.

Yet the focus of policy has been to increase the cross-border flow of funds. Indeed, when the post-mortems of this era are written, I suspect the carry trade will be found to have been a major culprit.

Another indicator: as the financial services industry has become increasingly deregulated and boundaries between businesses have become blurred or meaningless (fund managers versus brokerage accounts, hedge funds versus proprietary trading desks, investment bank versus commercial bank), bank profitability has fallen and the industry has pushed into higher-risk activities to try to compensate. Indeed, not only have overall risk measures risen, but the top banks also appear to be following common strategies. Both behaviors increase systemic risk.

Reader Richard Kline has been pondering this issue in a series of posts (see here, here and here) from a complex systems perspective rather than the traditional finance/markets vantage point. The discussion below summarizes his argument; a fuller treatment can be found here.

As always, your comments very much appreciated.

From Richard Kline:

Is high connectivity in the financial system desirable from the standpoint of stability? Conventional wisdom would largely say, yes; highly connected capital and exchange markets should ‘reduce inefficiencies,’ bring liquid capital to where ‘it is needed,’ and ‘level the playing field.’ Theoretical simulations of high connectivity systems together with related experience from systems design suggest the reverse: raising connectivity or undampening propagation in a system beyond modest levels in either case leads to high systemic asymmetry at best and pervasive systemic instability at worst. Those wishing an extended discussion of the underlying concepts will find it here. The basic concerns follow below.

CONNECTIVITY

Self-modulation occurs in systems with throughput, nodes, and connectivity between those nodes. Considered in idealized form, the financial system in general, and market behaviors in particular, can be evaluated in these terms. Capital, debt contracts, futures, and the like function as throughput, with both velocity V and volume L. Participants function as nodes: highly dissimilar if they are large organizations negotiating specific contracts and deals, highly similar if they are bidders on regulated exchanges. Nodes thus vary both in size S and in the degree D to which they behave differently from each other. Connectivity K is simply the number of links any given node has to other nodes engaged in similar behavior. Connectivity and differentiation affect throughput flows in opposite ways. The more nodes are connected, the more easily capital or information or loss exposure can flow; this is how connectivity is generally conceived in capital markets, and the reasoning behind open exchanges. The more that nodes, i.e. participants, are similar in form or behavior, that is, the less differentiated they are, the more easily throughput flows. Again, this is the reasoning behind common regulatory regimes, accounting rules, a common currency, etc.
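
A minimal sketch, in Python, of the idealized vocabulary just described (the class and function names are illustrative choices of this write-up, not terms from the underlying work): nodes carry a size S, a behavior whose spread across nodes stands in for differentiation D, and K links; throughput carries a volume L and a velocity V.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """An idealized market participant."""
    size: float                                  # S: scale of the participant
    behavior: float                              # scalar summary of its strategy
    links: list = field(default_factory=list)    # K = len(links)

@dataclass
class Throughput:
    """Capital, contracts, futures, etc. moving across the nodes."""
    volume: float     # L: the 'headline' quantity
    velocity: float   # V: how fast it turns over

def differentiation(nodes):
    """D: a crude measure of how much node behaviors differ (0 = identical)."""
    mean = sum(n.behavior for n in nodes) / len(nodes)
    return sum(abs(n.behavior - mean) for n in nodes) / len(nodes)

def random_network(n_nodes, k, seed=0):
    """Wire each node to K randomly chosen peers."""
    rng = random.Random(seed)
    nodes = [Node(size=1.0, behavior=rng.random()) for _ in range(n_nodes)]
    for node in nodes:
        node.links = rng.sample([m for m in nodes if m is not node], k)
    return nodes
```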

Correlation across a system tends to involve shifts in differentiation D and connectivity K. That is, correlation largely concerns overall similarity of behavior amongst nodes; institutions move the same products, firms compete on price, market participants act in different directions at somewhat different times—or the same direction all at once, and so on. By contrast, modulation across a system tends to involve shifts in throughput, both in velocity V and volume L, but also regarding self-correlation of throughput. That is, modulation largely concerns similarity of form or movement of throughput; bonds are offered at regular intervals near known prices, varying product risks are ‘factored out’ by insurance or hedges, futures contracts channel price movements, and so on. From this perspective, several generalizations follow:

Organization in a system will self-generalize: order ‘flows’ across the nodes in a system inherently as differentiation D per node and connectivity K per node shift. Even if these changes are linear at the level of individual nodes, they are typically nonlinear at the level of the system, and may involve complete state changes with very short thresholds of transformation. Simulations show that even at very low levels of connectivity, K=2 [yes, two connections per highly similar node], systems will constantly, if mostly gradually, change their overall alignment. At high levels of connectivity, though, systems are prone to frequent, global transformations. Highly connected systems have inherently transient stability, unless otherwise buffered or dampened.
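
The flavor of those simulations can be reproduced with a toy random Boolean network in the spirit of Kauffman’s NK models (a sketch only; the network size, update rule, and the flip-count used as a volatility proxy are assumptions of this illustration, not the simulations referenced above). With these settings one should see far fewer nodes changing per step at K=2 than at K=8, mirroring the contrast drawn above.

```python
import random

def run_boolean_network(n=100, k=2, steps=200, seed=0):
    """Each node's next state is a fixed random Boolean function of K inputs.
    Returns the average fraction of nodes that flip per step: a crude proxy
    for how often the system's overall alignment changes."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    state = [rng.randint(0, 1) for _ in range(n)]
    flips = 0
    for _ in range(steps):
        new_state = []
        for i in range(n):
            idx = 0
            for j in inputs[i]:
                idx = (idx << 1) | state[j]        # pack input bits into an index
            new_state.append(tables[i][idx])
        flips += sum(a != b for a, b in zip(state, new_state))
        state = new_state
    return flips / (steps * n)

if __name__ == "__main__":
    for k in (2, 8):
        print(f"K={k}: average fraction of nodes changing per step "
              f"= {run_boolean_network(k=k):.3f}")
```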

Connectivity K between nodes allows both order and throughput to circulate widely in a system. However, K is agnostic as to the influences it allows to propagate: changes in K may yield the outcomes intended, but they can, and often do, yield pervasive and unintended ones as well.

Differences between parts of a system impede flows across a system, whether flows of order, of throughput, or both. Differences create ‘inefficiencies,’ but they also buffer propagation in a system. Specifically, differentiation D—the extent to which nodes in a system vary in size, composition, and function—buffers node to node flow. Thus, increasing the similarity of participant behavior in markets (lowering differentiation) has the effect of lowering impedance for the same level of connectivity and/or ‘liquidity’ of throughput. If, for example, everyone carries a large balance on plastic and a low balance on passbook, more throughput moves through one part of the financial system, faster, and more easily.
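
One way to see that buffering in miniature (an illustrative sketch; the network, the similarity threshold, and the rule that a shock only crosses links between similar nodes are assumptions made here for the illustration):

```python
import random

def cascade_reach(n=200, k=4, spread=0.0, threshold=0.2, trials=50, seed=1):
    """Average fraction of nodes eventually hit by a shock started at node 0.
    'spread' stands in for differentiation D (0 = identical behaviors); a link
    transmits the shock only if the two nodes' behaviors differ by less than
    'threshold'."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        behavior = [rng.uniform(-spread, spread) for _ in range(n)]
        links = [rng.sample(range(n), k) for _ in range(n)]
        hit, frontier = {0}, [0]
        while frontier:
            i = frontier.pop()
            for j in links[i]:
                if j not in hit and abs(behavior[i] - behavior[j]) < threshold:
                    hit.add(j)
                    frontier.append(j)
        total += len(hit) / n
    return total / trials

if __name__ == "__main__":
    print("low D :", round(cascade_reach(spread=0.0), 2))   # shock sweeps nearly the whole system
    print("high D:", round(cascade_reach(spread=1.0), 2))   # heterogeneity damps the cascade
```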

Lowering nodal differentiation D in a system increases ‘efficiency’ in that it lowers buffering of throughput and allows connectivity to propagate order changes in a system. However, this may come at the expense of system stability, as the effect may be the same as increasing connectivity K to the degree where system organization becomes chaotic. Residential property owners, developers, property assessors, mortgage originators, the capital markets, and the bond raters once had diverse profit strategies, but gradually they converged toward the fee-for-service, flip-the-product model of the capital markets. Connectivity increased, and throughput soared. Um, yes . . . .

Background correlation (the mapping of a system to its supporting context) often also serves as a buffer to propagation in a system, since the background order is independent of, and often resistant to modification by, the order of a coordinated system. If the background order is itself highly correlated, though, it may function as a catalyst rather than as a buffer. Program trading in the late ’80s, where selling out of many portfolios was correlated to a few common background indices, is an example. This is an endemic issue in financial markets: despite being ostensibly buffered by high differentiation of participant behavior, they nonetheless become globally correlated to a few background variables.
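
A toy version of that catalytic effect (the portfolio count, trigger levels, and return distributions are invented for the illustration): portfolios with different holdings but a common background index sell together on one bad index day, while portfolios keyed to their own idiosyncratic signals rarely do.

```python
import random

def worst_day_sellers(shared_index, n=50, days=250, seed=5):
    """Each portfolio sells when its reference return breaches its own threshold
    (drawn near -3%). With a shared background index the thresholds tend to be
    crossed all at once; with independent signals selling rarely synchronizes."""
    rng = random.Random(seed)
    thresholds = [rng.uniform(-0.04, -0.02) for _ in range(n)]
    worst = 0
    for _ in range(days):
        index = rng.gauss(0.0, 0.02)
        returns = [index if shared_index else rng.gauss(0.0, 0.02) for _ in range(n)]
        sellers = sum(r < t for r, t in zip(returns, thresholds))
        worst = max(worst, sellers)
    return worst

if __name__ == "__main__":
    print("correlated to one background index, most sellers in a day:",
          worst_day_sellers(True))
    print("keyed to their own idiosyncratic signals, most sellers in a day:",
          worst_day_sellers(False))
```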

Systems with pervasive connectivity K amongst nodes have the advantage of being significantly adaptive to external changes. Raising connectivity for a system increases its overall adaptivity. This has been a purpose of just in time ordering, for example. However, such adaptivity is achieved at the expense of stable internal organization since high-K systems are very prone to system-wide changes: they are globally rather than locally adaptive. The auction rate market for municipal bonds was highly adaptive to very small changes in rates and extremely flexible for participants; it adapted globally to the shift in a single parameter, re-sale probability: look at it now.

‘Liquidity’ in a system is a composite behavior (a multi-variate derived state). Not only does throughput vary in velocity V as well as in the ‘headline number’ of volume L, it may modify itself through self-correlation, as will be discussed below. Additionally, as per the above description, changes in both differentiation D and connectivity K in a system greatly change how throughput behaves. Where D and K remain generally the same, ‘liquidity’ can be influenced by varying volume or modulating velocity. As we see with the failure of ‘liquidity’ in US capital markets in ’08, volume and velocity alone are insufficient: the banks have low D (most are functionally insolvent) and decreased K (they barely lend to each other). Studies of connectivity imply that in such conditions many minor local optima develop with low overall systemic flow; just so. Despite large-volume capital injections, the financial system remains ‘illiquid.’
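
A sketch of that last point (the network sizes and link counts are arbitrary illustrative values): however much volume is injected, capital can only reach the nodes that connectivity lets it reach, and with low K it pools locally.

```python
import random

def average_reach(n=100, k=1, trials=30, seed=2):
    """Fraction of the system that capital injected at a random node can reach
    when each institution passes funds on to only K counterparties. Low K
    leaves injections pooled in small neighborhoods, regardless of their size."""
    rng = random.Random(seed)
    out_links = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    total = 0.0
    for _ in range(trials):
        start = rng.randrange(n)
        seen, frontier = {start}, [start]
        while frontier:
            i = frontier.pop()
            for j in out_links[i]:
                if j not in seen:
                    seen.add(j)
                    frontier.append(j)
        total += len(seen) / n
    return total / trials

if __name__ == "__main__":
    for k in (1, 3):
        print(f"K={k}: average reachable fraction of the system = {average_reach(k=k):.2f}")
```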

MODULATION

Capital of similar form or moving in similar ways across a system of nodes-participants needn’t be reshaped drastically transaction by transaction; rather, it can be disproportionately influenced by small changes to the system which raise or lower impedance to its movement. This is what is meant by modulation. Central bank interest rate setting is substantially a modulation effect. A central bank ‘signal’ is small in relation to overall capital throughput, but even in the absence of legal compulsion that signal forms a value range around which transaction throughput is abundant and moves freely, while defining outlier value ranges where throughput moves poorly and accordingly is scarce.
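
A toy of that modulation effect (the clearing rule and its width parameter are invented for the illustration): deals are proposed across a wide range of rates, but the probability that a deal clears falls off sharply with distance from the policy signal, so a small move in the signal drags the whole mass of completed transactions with it.

```python
import math
import random

def mean_transacted_rate(policy_rate, n=10000, width=0.25, seed=3):
    """Candidate deals are proposed uniformly between 0% and 10%, but clear with
    probability exp(-((proposed - policy) / width)**2): abundant throughput near
    the signal, scarce throughput in the outlier ranges."""
    rng = random.Random(seed)
    done = []
    for _ in range(n):
        proposed = rng.uniform(0.0, 10.0)
        if rng.random() < math.exp(-((proposed - policy_rate) / width) ** 2):
            done.append(proposed)
    return sum(done) / len(done)

if __name__ == "__main__":
    print("mean transacted rate with a 2.00% signal:", round(mean_transacted_rate(2.00), 2))
    print("mean transacted rate with a 2.25% signal:", round(mean_transacted_rate(2.25), 2))
```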

Independent of connectivity, nodes within a system are not necessarily correlated amongst themselves, or at least not highly correlated. Despite this, throughput in a system (capital, principally, in the context of the financial system) can become more or less self-correlated. For example, if many different forms or terms of throughput move across nodes, any form which has lower resistance will move over more nodes. If its volume can scale, a larger share of throughput over a larger share of nodes takes the same form. Other forms or terms of throughput most nearly similar may see their velocity, volume, and distribution increase as well. In particular, if and as nodal differentiation decreases, the action of throughput is increasingly similar regardless of where it passes through the system: the throughput in effect self-correlates even if nodes and local connectivities retain significant diversity. Auto-correlative changes do not require overt external intervention, although in financial markets such throughput convergences are highly profitable to those who spot or amplify them, so external intervention in throughput flows is frequent and likely. If throughput flows and node differentiation and connectivity influence each other progressively, the process becomes self-modulating.
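
A stripped-down sketch of that convergence (the replicator-style update and the resistance numbers are assumptions of this illustration): forms of throughput gain share in proportion to their current share and the ease with which they move, so the lowest-resistance form progressively dominates.

```python
def share_dynamics(resistance, steps=50):
    """Each period a form's share grows in proportion to (current share) / (its
    resistance), then shares are renormalized. The lowest-resistance form ends
    up carrying nearly all of the throughput."""
    shares = [1.0 / len(resistance)] * len(resistance)
    for _ in range(steps):
        fitness = [s / r for s, r in zip(shares, resistance)]
        total = sum(fitness)
        shares = [f / total for f in fitness]
    return shares

if __name__ == "__main__":
    # e.g. a standardized contract, a semi-standard one, and a bespoke one
    print([round(s, 3) for s in share_dynamics([1.0, 1.2, 2.0])])
```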

It is possible that as throughput across such a system becomes increasingly modulated, it can yield a field effect. Field phenomena have low overall resistance to point-source propagation; that is, they can globally reference their order state on a continuous basis. Marginal pricing in markets with good transparency strongly suggests a field order. Individual nodes may wish to diverge from a price point, but resistance from the rest of the market will be high; over any near duration, the field order will pull the price discontinuity back into line. Field effects can be modeled by tensors, but their ‘statistical logic,’ to use a broad term, is distinct from that which follows from the kinds of statistical tools typically used for economic activity. In principle, throughput in a system is likely a tensor field, while the system it is mapped to, if nodal, may well itself be a scalar field. The concept of capital flows as field phenomena is one that I cannot prove but which should be studied more closely.

Finally, fields induce flow. Set a price or a volume gradient for capital, and said capital will flow across a system, typically towards high capital density regions. A gradient may thus ‘induce’ illiquid or capital-like assets to shift toward liquid forms, or otherwise to shift their state. Moreover, if throughput is sufficiently high in volume, velocity, or both, it can override, even mask, differences in nodes in the system. In part for this reason, system performance with high throughput will consistently give a misleading view of system stability. Correlation of throughput can by field induction carry flow behavior beyond the structural capacity of existing nodes. Again, large-large reasoning does not hold, especially for field phenomena: small value signals can reference larger flows of throughput which furthermore they do not necessarily transact directly. From this perspective, several generalizations follow:

Self-modulation of throughput in a system in effect simulates increasing connectivity K or decreasing differentiation D because throughput increasingly ‘acts the same’ regardless of its velocity or volume. Viewed from the perspective of connectivity above, systems with self-modulating throughput are less stable than their D and K values would suggest, and can become self-destabilizing: they overshoot.

While self-correlation of throughput is not necessarily bad or good, it can mask relevant distinctions. Consider MBSs, inherently of different quality and risk. However, this throughput had to act the same way to pass down-channel readily, so increasingly it had to ‘look the same.’ Hedges were bundled in, tranches were selectively sliced, and even the ratings models themselves were progressively tweaked to yield uniform AAA ratings. Self-correlation improved flow, even ‘induced liquidity,’ but this was no virtue from the standpoint of risk assessment or risk concentration. Moreover, MBSs of correlated appearance easily correlated their price declines, regardless of underlying differences in performance.
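
A deliberately extreme toy of that masking (the default rates and the single shared ‘model spread’ are invented): securities with very different underlying loss rates are all quoted off one common factor, so their prices move in lockstep while their fundamentals quietly diverge.

```python
import random

def homogenized_quotes(true_default_rates, months=24, seed=4):
    """Quoted prices follow a single market-wide spread; realized losses follow
    each pool's own default rate. Appearance correlates, performance does not."""
    rng = random.Random(seed)
    common_spread = 0.0
    quotes = {r: 100.0 for r in true_default_rates}
    losses = {r: 0.0 for r in true_default_rates}
    for _ in range(months):
        common_spread += rng.gauss(0.0, 0.5)      # market-wide repricing shock
        for r in true_default_rates:
            losses[r] += r / 12                   # fundamentals diverge
            quotes[r] = 100.0 - common_spread     # quotes do not
    return quotes, losses

if __name__ == "__main__":
    quotes, losses = homogenized_quotes([0.01, 0.05, 0.15])
    print("final quoted prices :", {r: round(q, 2) for r, q in quotes.items()})
    print("cumulative losses   :", {r: round(l, 3) for r, l in losses.items()})
```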

Dense concentrations of capital may drive other nodes to act increasingly the same way, as high connectivity allows their ‘order to flow’ across a system. To the extent that they also modulate capital flows, the impact is not only increased but may be self-enhancing by field functions; in effect, such concentrations inherently propagate their own organizational order. Such order flows are not necessarily complete, linear, or stable; however, their net effect may be to drive differentiation D down, and increasingly to correlate it. Again, in view of the summary above, this not only creates system asymmetry (i.e. the rich get richer) but lowers system stability: dense concentrations of capital are likely inherently destabilizing. For example, it has recently been observed that globally most central bank rates are negative or very low in real terms regardless of their nominal levels. One interpretation of the driver behind this result may be that the ‘gravity well’ of US and Japanese real rates, which have both been low or negative for most of the period since the early ’90s, pulled rates into line first in closely integrated countries, then in others, because smaller currencies with higher rates were more expensive to borrow and re-loan. From that perspective, financial preference steadily shifted to low-rate, high-liquidity currencies, the opposite of what monetary policies anticipate: rates could either be higher than local conditions warranted or lower, but the ‘gravity’ of very low US and Japanese rates wouldn’t tolerate a middle position.
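
A crude sketch of that self-enhancing concentration (the superlinear attraction rule and all parameters are assumptions made purely for illustration): mobile capital is drawn toward nodes in proportion to the square of what they already hold, and the largest concentration ends up pulling in nearly everything.

```python
def attract_to_density(capital, rounds=200, mobility=0.1):
    """Each period a slice of every node's capital becomes mobile, and mobile
    capital flows to nodes weighted by the square of their current holdings.
    Total capital is conserved; its distribution concentrates."""
    capital = list(capital)
    for _ in range(rounds):
        mobile = [c * mobility for c in capital]
        pool = sum(mobile)
        weights = [c ** 2 for c in capital]
        wsum = sum(weights)
        capital = [c - m + pool * w / wsum
                   for c, m, w in zip(capital, mobile, weights)]
    return capital

if __name__ == "__main__":
    print([round(c, 1) for c in attract_to_density([30.0, 25.0, 25.0, 20.0])])
```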

Statistical reasoning appropriate for field functions is seldom used in assessing throughput organization in the financial system, leading to misunderstanding of systemic conditions by observers. Stability and instability are not Cartesian plots but matrix distributions. The invisible hand is the beck and grasp of a tensor field; current analysis sees the fingernails on the hand, at most.

Stuart Kauffman. 1993. The Origins of Order: Self-Organization and Selection in Evolution.
Christopher Chase-Dunn and Thomas D. Hall. 1997. Rise and Demise: Comparing World-Systems.

[Kauffman’s text is dense but seminal in discussions of systemic connectivity, in this case amongst genes. Chase-Dunn and Hall consider core-periphery relations, a concept from political economy which has different implications from the perspective of systemic connectivity.]


16 comments

  1. John

    The speculative element in the market is destabilizing, not to mention that it diverts human capital toward financial engineering when it could be better used in other realms.

    Market regulators should consider imposing punitive transaction costs on short-term investments. I can imagine a fee (much like the SEC fee) that declines as a percentage of the transaction as the holding period lengthens. There is very little reason why holding periods should be shorter than 1 month, let alone 1 year.
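
    A minimal sketch of one such declining fee schedule (the exponential decay and the parameters are illustrative assumptions, not a concrete proposal):

    ```python
    def short_term_fee(notional, holding_days, base_rate=0.005, half_life_days=30.0):
        """Illustrative only: a fee that starts at base_rate of the transaction and
        halves with every additional half_life_days held, so rapid round-trips pay
        the most and long-term holders pay next to nothing."""
        return notional * base_rate * 0.5 ** (holding_days / half_life_days)

    if __name__ == "__main__":
        for days in (1, 30, 365):
            print(days, "days held ->", round(short_term_fee(100_000, days), 2))
    ```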

  2. Richard Kline

    So John, I’m not in a strong position to comment with regard to the specific fee you propose. However, I am much in favor of _time-variable_ constraints of the form that you mention. I’ll raise that particular issue in the subsequent final post in this series kindly hosted by Yves.

  3. Anonymous

    As the number of nodes in a transaction chain increases, every node will usually pass on filtered (if not distorted) information to the next node. In addition, every node will add its weakness or strength to the transaction. For example, when mortgage-backed securities were rated by the rating agencies, the rating of the securities was the result of filtered information (and we now know that the information was biased and distorted by conflicts of interest). Another example of added weakness is that when insurance is needed from monolines in order to sell CDOs or mortgage-backed securities, the weakness of the monolines is added to the transaction. However, it is also possible that a node will add its strength to the transaction. A good example of how nodes add weakness or strength to the system, and thus decrease or increase its stability, is to examine the difference between the traditional (say, through FHA, FNM or FRC) and the current mortgage loan securitization process.

    On the other hand, I do not feel that it makes much sense to discuss whether or not connectivity will increase or decrease system stability, since there is no practical way to restrict or increase connectivity through regulation. What is wrong with the current system, connectivity or not, is that everyone is playing with “other people’s money” (from government to money managers). When one is playing with other people’s money, the gains are rewarded and the losses are subsidized by the innocents.

  4. Anonymous

    Could you apply your thoughts on financial connectivity to a particular issue: convergence of International Financial Reporting Standards (IFRS)? On the one hand, it seems that when FASB, IASB and other groups honestly focus upon how to converge to the best financial reporting methodology for a specific issue (e.g. how to report leasing transactions), it would be difficult for anyone to argue that this is not a good thing. However, the arguments that you presented make me wonder whether a big-picture, formal 100% convergence of US GAAP to IFRS could create significant systemic risks for which there has been no anticipation. (e.g. How can US GAAP be converged with IFRS without converging the legal repercussions that audit firms face worldwide?) Any thoughts?

  5. etc

    anon 9:24am,

    It is easy to limit the ability of people to end-run a toll charge on trades with some low holding period, or on traders with some low average holding period (maybe we don’t like traders with an average holding period of less than 2 days, or 2 months, or whatever). You just toll the holding period, restart the holding period if someone hedges out risk, and require reporting by intermediaries executing the trades.

    That said, who is to say the quants doing automated trading or the value investors investing in special situations do or don’t add value? Or maybe both do sometimes?

    Regardless, most of the people I hear complaining that talented people should be going into things outside business or finance, are either rich people that are complete hypocrites or people with no shot at making money because they’re lazy (eg, rich politicians)or lack lucrative talents (eg, left-wing teachers).

  6. etc

    In prior post, last paragraph should read as follows:

    Regardless, most of the people I hear complaining that talented people should be going into things outside business or finance, are either rich people that are complete hypocrites (eg, rich politicians) or people with no shot at making money because they’re lazy or lack lucrative talents (eg, left-wing teachers).

  7. Yves Smith

    etc,

    Perhaps it’s sample bias, but the people I know who complain about that issue are people in service professions (finance, consulting, law) who are making good money ($200k+) and work very hard. Some of them were trained as engineers. They all feel that what they are doing adds very little value to society and think something is wrong with their line of work being so well paid. But they have kids in private school, or spouses who won’t accept a big drop in lifestyle, so they aren’t about to act on their impulses.

    And not all of them are particularly left wing, either. One voted for Bush twice, another agonized over his vote both times and voted for Bush the last election (not sure re 2000)

  8. Richard Kline

    To Anon I 9:24, this discussion is predicated upon nodes performing in a closely similar fashion, that is they are idealized. Your point is important, that _information_ in particular is increasingly distorted as it moves across nodes. To the degree that this increases differentiation, perversely it may buffer loss cascades: the liars lose less. It is, it has been said, an imperfect world. The best response there might be to make throughput as similar and transparent as possible.

    Consider this: If someone offered you a private placement real estate bond investment in Sichuan with a well-above market rate yield, would you buy it from them? Mighty opaque; distant market; different legal system; fat return in relation to the local conditions. Smells like sucker bait to me, unless I’m a seasoned real estate investor who will have limited exposure. So what were country cousin state banks in Deutschland doing buying the equivalent? The best safeguard to corrupted information is to slow down throughput, and limit concentration. The information is out there; for example, the terms and problems with option ARMs and Alt-A loans have been known since before these were even securitized, so if those buying tranches of them didn’t do their homework it doesn’t play for them to say ‘I wuz lyyyyyed tah.’ Remember, too, that US pension funds who know the market a little prohibited themselves from gorging on this swill.

    To Anon II 9:24, as etc. says reporting transparency by intermediaries goes a long way toward making a holding constraint stick. But I am more interested in the scaling over time of costs and exposures. For instance, the more you buy of a certain exposure within a near time frame, the more you should have to cough up a concentration premium against your total exposure. I’m not sure outright bars to sale are as useful as braking and releasing movement through scaled costs and concentration caps. I don’t want to imply that solutions are simple, but I do think some are possible: most solutions involve _preventing_ speculators from moving fast and large, and that takes coordination from the regulatory framework and clearing intermediary structure, i.e. uniform systemic buffering.

  9. Richard Kline

    To Anon of 10:03, converging accounting standards worldwide would unquestionably create connectivity risks. It is a seeming paradox that manifestly efficient good-governance procedures can make systems more prone to collapse, but consider the pervasive leveeing of lowland river courses in the US Midwest and you’ll see what can happen: floods are made bigger elsewhere while fools build in the flood plain made ‘safe.’ At the very least, uniform global standards are going to require what amounts to a global regulator; is anyone proposing _that_? I doubt it. Throughput behaviors are likely to soar with uniform standards, so concentrations will be a major problem. One has to think of things like mandated limits on how large any one auditor can be, and how much of any one firm or market they can cover. One might actually get better audits if, say, a behemoth like Exxon had different auditors for different units, with all of them having access to the same firm data assessed under a common standard.

    I don’t mean to imply that there are ‘simple solutions’ for connectivity problems always lying to hand. Rather the reverse: when large systemic changes are proposed, such as auditing standard convergence, it has to be taken as a given that they can come with massive disruptions. Changes like this need systemic modeling beforehand for the kinds of questions raised here; it won’t catch everything, but at least one can look for changes in concentration, modulation, or velocity, and then see if those happen. One should have a closely observed implementation period of at least a half dozen years. Now, we typically get ‘Gee, this is going great’ announcements from those who are too fully committed to the evident success of the changes, not a situation which makes for good evaluation of risks and stability.

    To Anon I of 9:24: High connectivity in the modern financial system is endemic, and it isn’t going away. That means that we have to buffer it more extensively elsewhere. Compartmentalization on exposure and flow, whether hard prohibitions or soft brakes, is essential; not perfect, but essential. Also, increasing differentiation may look ‘less efficient,’ but it shouldn’t be underestimated. Government mortgage lenders, for example, really do have a different policy brief than pump-and-dump shops like Countrywide, and those ‘mission distinctions’ should be closely considered. Greater differentiation in node behavior is a greater good from the systemic standpoint. That’s not a complete answer to the question, but a place to start.

    Personally, I think that it’s highly questionable that any financial packager adds value. They inherently have an interest in getting throughput up and compartmentalization down, and they tend to concentrate distorting volumes of capital. Keep them smallish, dispersed, diverse, taxed, watching each other, and competing with each other. Presently, we’ve allowed the opposite set of conditions. And they’ve killed the financial system. Again.

  10. Anonymous

    RK – thanks for the response on convergence

    BTW – Great post- I actually understood most of it without having to hit the thesaurus too many times.

  11. Anonymous

    Sy Krass said…

    Isn’t the main distortion the amount of leverage certain entities can utilize when participating? Aren’t the investment banks leveraged by ungodly amounts? You only need ~7-8% margin for some of these futures contracts on commodities. A historic example: before the 1929 crash you could buy stocks with 10% down; now 50% is needed when buying on margin.

  12. etc

    yves,

    It is easy to complain that talented people aren’t being allocated optimally from a macroeconomic perspective. However, whenever I hear people complain about it, they aren’t willing to make any of the trade-offs needed to encourage talented people to make different choices.

    The elite factions of the Democrat and Republican parties have deliberately disassembled the economic moats that protect wages for engineers, scientists, and skilled tradesmen in the US, resulting in reduced wages for these people. And now that talented people choose options other than these jobs, the politicians, journalists, and business executives complain about shortages of talented workers in these areas. If the country wants talented people to do these jobs, it’s got to subsidize these jobs with government handouts or reimpose barriers to trade, immigration, cross-border investment, and so on.

  13. Richard Kline

    To Anon of 7:28, I’m relieved to hear that the post read clearly enough. It’s a bit dense, but I couldn’t boil it down much more than as it stands, so I worried that it would put off some who might find the concepts interesting nonetheless.

    To Sy Krass: Excessive leverage is unquestionably a problem, but it isn’t the only problem. Leverage can kill a node, but the overall system can be quite vulnerable with _very little leverage_ if connectivity is high, due to correlation of actions. This is something I hope readers will keep in mind. Consider: levering 30:1 is less of an issue if you are pyramiding atop a base of $10M; if you go bust, it isn’t the system’s problem. When we have Lehman’s as a vastly larger inverted pyramid, that’s a colossal regulatory failure because of the overconcentration. The perspective of this post is an attempt to look at things from the system scale rather than the nodal scale.

    Financial system participants, furthermore, understand that high leverage is a bad thing. Hedges made small and mid-size bank-like firms too comfortable, and haven’t really worked as planned, a big problem. Big Capital, though, knew that their positions were too large to hedge. Many of them wouldn’t have accepted the amount of exposure they ended up with if they had thought they were going to be stuck with it: big capital counted on moving bad, over-leveraged assets along because throughput was allowed to be excessively high. When throughput suddenly froze up, outfits like Citi didn’t just stop dancing, they stopped breathing when all that paper landed on their own chest at once. The Fed has perhaps six quarters, the equivalent of ‘six minutes,’ to get them out of respiratory arrest; so far it doesn’t look good. And negative real rates _facilitated_ excessive leverage by flooding the system with bogus liquid credit. In other words, excess leverage is a _symptom of systemic distortions already in place_ more than it is a cause of those systemic distortions. By the time one has concerns about leverage of 25 or 30:1, systemic distortions have been in place for some time already.

    To etc., you and I remember the sequence of declining wages in the US somewhat differently. To my recollection, when the working class had its wage levels ‘squeezed down’ by inflation in ’78-’82, when things like two-tier labor contracts gutted not only the factory workforce but professions such as college instructors as well, and when a certain Republican institution made union busting respectable, most of those scientists, engineers, and professionals didn’t do squat, or care. ‘Knowledge-based society,’ right? Get the poor back to work, right? The bosses ‘can’t come for us,’ right? —Until they did. I’ve watched doctors cross picket lines while nursing and janitorial unions get their legs broken. “They’ll never come for me, I’m too valuable, and I can just work elsewhere if it comes to that.” Every year I watch local school systems where I live struggle to pass the millage because the locals all think that ‘teachers make too much money.’ Yeah, right: Ever tried their job? is my response. So much for ‘an information society.’

    When the middle class decides that it’s in _their_ interests to keep wages up, they’ll organize and get a political party to carry the ball. As long as the middle class accepts the working class taking the loss in a zero-sum context, we’ll all keep on losing except for the top 1% of incomes. Trade barriers simply will _not_ restart US factory labor. We would be better served to tax or borrow from the incomes of the top 10% to maintain a world-class educational system—which we certainly don’t have right now. But we need some of those scientists, engineers, and professionals to quit siding with the bosses, and that’s tough for them to get their heads around; they keep hoping for nicer bosses. “Hope is not a plan.”

  14. mxq

    My apologies beforehand for commenting a few days after the post, but I can’t help but think of the striking parallels this has with Richard Bookstaber’s A Demon of Our Own Design, specifically regarding liquidity and systemic risk.

    Here is an interesting bit:

    “While opaqueness may have actually been beneficial in normal times, it was a different story when the firm (LTCM) was on the ropes. Short-term lenders have a stunted sense of risk-return trade-offs…It is not a matter of simply paying a higher price if lenders perceive that their capital is at risk. In fact, waving a premium rate in front of them can be counterproductive; it makes them suspicious. Since no bank knew the other side of the position they were financing (for LTCM), they treated the position as an outright trade, and required multiples more in margin than they would have required if they had known (what they were financing)…The liquidity providers that had the ability to take on the firm’s (LTCM) positions, and might have done so in a less charged environment, elected to sit out.”

    In the midst of all of this (LTCM) and today, risk management typically uses, according to Bookstaber, “simple historical analysis to assess the trading opportunities for unleashing…leverage…Predicated on their conviction that (LTCM’s postion)…had long-term stability…What they (LTCM) did not appreciate was that they had changed history: There had never been someone trading hundreds of billions of dollars in the middle of this…their models assumed they were in a “game against nature” where their decisions did not alter the playing field…Their actions did change the game, because the decisions of other traders would change depending on the actions LTCM took, or was perceived to take. LTCM looked at their risk as if they were playing a game of roulette, where the possible outcomes were unaffected by what was bet and how much was bet. The market turned out to be more like a game of poker, where the outcomes depended on the behavior of the other players.”

    Very interesting stuff. Thanks for posting this thought-provoking piece, Richard.

  15. Awake

    Liked your paper so much, Richard, that I handed the linked copy off to my department head (Dept of Finance, Miami University, Oxford, Ohio). May try to put you two in touch after he gets through it; he seemed to be quite interested.

  16. Richard Kline

    So mxq, sorry in return to be late replying, I didn’t check back for comments the last few days. I’ve heard mention of Bookstaber’s text more than once, and your comment reminds me that I should really read it, now. In my own defense, I haven’t been working in this area much at all in recent years: most of my work has been in cataloging ancient scripts and coming at some decipherments! So I’m less current than I should be.

    Awake, I’m glad you found the full discussion useful. I open more questions in this than I really answer, and I would love to have someone really put their teeth into these concepts, particularly with things like field induction, which is perhaps beyond my own ability to analyse usefully. I have saved all the parts of this series which Yves is thoughtfully sponsoring, and I may put them into a .pdf if anyone is interested. I’d be happy to talk to you or your colleagues about any of this if they have an interest. I don’t know that it’s appropriate to put my own email in a comment here. However, I think Yves would be willing to forward an email to me for any of you who wish, if you contact him via the blog’s email contact. Regards.
