Lambert found a short article by Richard Cook that I’ve embedded at the end of the post. I strongly urge you to read it in full. It discusses how complex systems are prone to catastrophic failure, how that possibility is held at bay through a combination of redundancies and ongoing vigilance, and how, due to the impractical cost of fully protecting all possible points of failure (and even of identifying them all), complex systems “always run in degraded mode”. Think of the human body. No one is in perfect health. At a minimum, people are growing cancers all the time, virtually all of which recede for reasons not well understood.
The article contends that failures are therefore not the result of single causes. As Clive points out:
This is really a profound observation – things rarely fail in an out-of-the-blue, unimaginable, catastrophic way. Very often, just as in the MIT article, the fault or faults in the system are tolerated. But if they get incrementally worse, then the ad-hoc fixes become the risk (i.e. the real risk isn’t the original fault condition, but the application of the fixes). https://en.wikipedia.org/wiki/Windscale_fire#Wigner_energy documents how a problem of core instability was a snag, but the disaster was caused by what was done to try to fix it. The plant operators kept applying the fix in ever more extreme doses until the bloody thing blew up.
But I wonder about the validity of one of the hidden assumptions of this article. There is a lack of agency in terms of who is responsible for the care and feeding of complex systems (the article eventually identifies “practitioners” but even then, that’s comfortably vague). The assumption is that the parties who have influence and responsibility want to preserve the system, and have incentives to do at least an adequate job of that.
There are reasons to doubt that now. Economics has promoted ways of looking at commercial entities that encourage “practitioners” to compromise on safety measures. Mainstream economics has as a core belief that economies have a propensity to equilibrium, and that equilibrium is at full employment. That assumption has served as a widespread justification for encouraging businesses and governments to curtail or end pro-stability measures like regulation as unnecessary costs.
To put it more simply, the drift of both economic and business thinking has been to optimize activity for efficiency. But highly efficient systems are fragile. Formula One cars are optimized for speed and can only run one race.
Highly efficient systems also are more likely to suffer from what Richard Bookstaber called “tight coupling.” A tightly coupled system is one in which events occur in a sequence that cannot be interrupted. A way to re-characterize a tightly coupled system is as a complex system that has been in part reoptimized for efficiency, maybe by accident, maybe at a local level. That strips out some of the redundancies that serve as safeties preventing positive feedback loops from spinning things out of control.
To use Bookstaber’s nomenclature, as opposed to this paper’s, in a tightly coupled system, measures to reduce risk directly make things worse. You need to reduce the tight coupling first.
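To make the coupling point concrete, here is a deliberately crude sketch (my own toy numbers, not Bookstaber’s model or the paper’s): treat each redundancy as a slack buffer between stages of a chain, and watch how far a shock travels with and without that slack.

# Toy model of tight vs. loose coupling (illustrative only): a chain of
# stages, each with some slack. A shock is absorbed once enough upstream
# slack has soaked it up; with no slack it cascades the full length.

def cascade_depth(num_stages, slack_per_stage, shock):
    """Return how many stages a shock propagates through."""
    remaining = shock
    for stage in range(num_stages):
        remaining -= slack_per_stage   # each stage absorbs what slack it has
        if remaining <= 0:
            return stage + 1           # absorbed; downstream never notices
    return num_stages                  # ran the whole chain: systemic failure

print(cascade_depth(num_stages=10, slack_per_stage=3, shock=9))  # loose: 3
print(cascade_depth(num_stages=10, slack_per_stage=0, shock=9))  # tight: 10

The same shock dies out after three stages in the loosely coupled chain but reaches all ten in the tightly coupled one; “efficiency” here is exactly the removal of the slack.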
A second way that economic thinking has arguably increased the propensity of complex systems of all sorts to fail is by encouraging people to see themselves as atomized agents operating in markets. And that’s not just an ideology; it’s reflected in low attachment to institutions of all sorts, ranging from local communities to employers (yes, employers may insist on all sorts of extreme shows of fealty, but they are ready to throw anyone in the dust bin at a moment’s notice). The reality of weak institutional attachments and the societal inculcation of selfish viewpoints means that more and more people regard complex systems as vehicles for personal advancement. And if they see those relationships as short-term or unstable, they don’t have much reason to invest in helping to preserve the soundness of that entity. Hence the attitude called “IBG/YBG” (“I’ll Be Gone, You’ll Be Gone”) appears to be becoming more widespread.
I’ve left comments open because I’d very much enjoy getting reader reactions to this article. Thanks!
So many ideas….
Mike Davis argues that in the case of Los Angeles, the key to understanding the city’s dysfunction is in the idea of sunk capital–every major investment leads to further investments (no matter how dumb or large) to protect the value of past investments.
Tainter argues that the energy cost (defined broadly) of maintaining the dysfunction eventually overwhelms the ability of the system to generate surpluses to meet the rising needs of maintenance.
Goldsworthy has argued powerfully and persuasively that the Roman Empire in the West was done in by a combination of shrinking revenue base and the subordination of all systemic needs to the needs of individual emperors to stay in power and therefore stay alive. Their answer was endlessly subdividing power and authority below them and using massive bribes to the bureaucrats and the military to try to keep them loyal.
In each case, some elite individual or grouping sees throwing good money after bad as necessary to keeping their power and their positions. Our current sclerotic system seems to fit this description nicely.
I immediately thought of Tainter’s “The Collapse of Complex Societies” when I started reading this. One point that Tainter made is that collapse is not all bad. He presents evidence that the average well-being of people in Italy was probably higher in the sixth century than in the fifth century as the Western Roman Empire died. Somewhat like death being necessary for biological evolution, collapse may be the only solution to the problem of excessive complexity.
Tainter insists culture has nothing to do with collapse, and therefore refuses to consider it, but he then acknowledges that the elites in some societies were able to pull them out of a collapse trajectory. And from the inside, culture, as in a big decay in what is considered to be acceptable conduct by our leaders, and in what interests they should be serving (historically, at least the appearance of the greater good; now unabashedly their own ends), sure as hell looks to be playing a big, and arguably the defining, role in the rapid rise of open corruption and related social and political dysfunction.
That also sounds like the EU and even Greece’s extreme actions to stay in the EU.
Then I’ll add my two cents: you’ve left out that when systems scale linearly, the complexity they contain, and with it the points of failure and therefore the instability, scales exponentially (toy arithmetic below) – that is according to the analysis of James Rickards, and supported by the work of people like Joseph Tainter and Jared Diamond.
Every complex problem that arises in a complex system is fixed with an even more complex “solution” which requires ever more energy to maintain, and eventually the inevitably growing complexity of the system causes the complex system to collapse in on itself. This process requires no malignant agency by humans, only time.
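A rough way to see the scaling claim (my own toy counting, not Rickards’ actual analysis): grow a system linearly and count the interaction channels and the possible joint failure states.

# Toy counting for the scaling claim (illustrative, not Rickards' model):
# n components give n*(n-1)/2 possible interaction channels and 2**n
# possible joint OK/failed states.

for n in (10, 20, 40, 80):                 # system growing linearly
    channels = n * (n - 1) // 2            # pairwise interactions
    states = 2 ** n                        # each component OK or failed
    print(f"{n:3d} components: {channels:5d} channels, {states:.1e} states")

Doubling the component count quadruples the interaction channels and squares the number of joint states, which is the sense in which failure points outrun linear growth.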
Sounds a lot like JMG and catabolic collapse.
Well, he got his stuff from somewhere too.
There are no linear systems. They are all non-linear because they include a random, non-linear element – people.
Long before there were people the Earth’s eco-system was highly complex and highly unstable.
The presumption that fixes increase complexity may be incorrect.
Fixes should include awareness of complexity.
That was the beauty of Freedom Club by Kaczynski, T.
Maybe call the larger entity “meta-stable?” Astro and geo inputs seem to have been big perturbers. Lots of genera were around a very long time before naked apes set off on their romp. But then folks, even these hot, increasingly dry days, brag on their ability to anticipate, and profit from, and even cause, with enough leverage, de-stability. Good thing the macrocosms of our frail, violent, kindly, destructive bodies are blessed with the mechanisms of homeostasis. Too bad our “higher” functions are not similarly gifted… But that’s what we get to chat about, here and in similar meta-spaces…
Most large systems are meta-stable despite high levels of complexity, principally because a) most components, and hence disturbances in them, are small compared to the scale of the system as a whole, while b) vide Bookstaber’s point, large systems tend NOT to be tightly coupled as a whole. Loose coupling prevents disturbances from swamping major systemic parameters and/or cascading into system-wide reordering (read: disordering of prior systemic order).
Cook’s most interesting point in my view was No. 7, that root or single causes in systemic disorder are rare, and that consequently the expectation that there be ‘a cause for every action’ is fundamentally a logical error. I found in my own work the concept of ‘proximate cause’ to be of value: small changes of state, related or not, in a system might shift the order state over an important parameter threshold without any one cause being either necessary, sufficient, or even terribly relevant. Order parameters are far more important to understand in modeling a system than any single ‘causes’ or events. Single, or strong, causes do exist of course, but they are the exception, not the rule.
With regard to complex systems, ‘causes’ are amorphous, and often trivial. Princip’s bullet wasn’t the cause of The Great War. The principal order factor was the tight coupling of mobilization plans, due to an even tighter coupling of shared and widely known rapid strategic offensive military policies. Diplomatic strategies were designed to be highly confrontational, but the European powers still had a strong incentive to compromise, given time. The first fool to push the Go Button meant that the tight coupling of mobilization plans swamped the genuine willingness and realistic expectation for and of diplomatic go-slow and compromise. The ‘cause’ was trivial to everyone except the Royal Habsburgs; the issue was the shallow systemic military threshold and the tight coupling of action, i.e. the state of the system.
A refreshingly interesting paper.
We still marching to Iran, Yves, or did I miss something along with a couple of years?
As synoia states below, consider an earthquake: we’re confident in the stability of the ground, but for reasons beyond our capacity to understand or predict, the earth exposes weaknesses in our systems (think old brick buildings)… don’t know what you’ve missed, but you’ve been missed….
Agree, positive density of ideas, thoughts and implications.
I wonder if the reason that humans don’t appreciate the failure of complex systems is that (a) complex systems are constantly trying to correct, or cure (as in your cancer example), themselves, until they can’t, at which point they collapse, and (b) things like cancer leading to death are not commonly viewed as complex system failures when in fact that is what they are. Thus we do experience complex system failure on one level on a daily basis, but because we don’t interpret it as such, and given that we are hardwired for pattern recognition, we don’t address complex systems in the right ways.
This, to my mind, has to be extended to the environment and the likely disaster we are currently trying to instigate. While the system is collapsing at one level (massive species extinctions), while we have experienced record temperatures, and while the experts keep warning us, most people to date have experienced climate change as an inconvenience – not the early stages of systemwide failure.
Civilization collapses have been regular, albeit spaced out, occurrences. We seem to think we are immune to them happening again. Yet, it isn’t hard to list the near catastrophic system failures that have occurred or are currently occurring (famines, financial markets, genocides, etc.).
And, in most systems that relate to humans, with their emphasis on short-term gain, how does one address system failures?
MikeW, + (or maybe x or ^ 1000000)..
I wonder if Yves and Lambert are going to add a new feature to the Links and tags — “End of the livable world watch.” Aggregating the observations of each new, exciting, improved feature of human-induced or -accelerated vector and feature of good-for-me-who-effing-cares-if-it’s-bad-for-you-and-everyone-else-ism… Might make for fun reading, as long as the internetofeverything maintains some open channels and one’s solar panels and nose are still above water (if there’s any left to drink).
SadomasochistoSchadenfreude… Is that a thing, yet?
Good-For-Me-Who-Effing-Cares-If-It’s-Bad-For-You-And-Everyone-Else
would be a GREAT category heading though it’s perhaps a little close to “Imperial Collapse”
To paraphrase President Bill Clinton, who I would argue was one of the major inputs that caused the catastrophic failure of our banking system (through the repeal of Glass-Steagall), it all depends on what the definition of WE is.
I know it already has a different meaning, but ‘I’ll be gone, you’ll be gone’ seems apropos.
And all that just a 21st century version of “après moi le déluge”, which sounds very likely to be the case.
JT – just go to the Archdruid site. They link it regularly, I suppose for this purpose.
Civilizational collapse is extremely common in history when one takes a long-term view. I’m not sure though that I would describe it as having that much “regularity”, and while internal factors are no doubt often important, external factors like the Mongol Onslaught are also important. It’s usually very hard to know exactly what happened, since historical documentation tends to disappear in periods of collapse. In the case of Mycenae the archaeological evidence indicates a near-total population decline of 99% in less than a hundred years, together with an enormous cultural decline, but we don’t know what caused it.
As for long term considerations, the further one tries to project into the future, the more uncertain such projections become, so that long term planning far into the future is not likely to be evolutionarily stable. Because much more information is available about present conditions than future conditions, organisms are probably selected much more to optimize for the short term rather than for the largely unpredictable long term.
…it’s not in question. Evolution is about responding to the immediate environment. Producing survivable offspring (which requires finding a niche). If the environment changes (climate?) faster than the production of survivable offspring, then extinction (for that species) ensues.
Now, Homo sapiens is supposedly “different” in some respects, but I don’t think so.
I agree. There’s nothing uniquely special about our species. Of course species can often respond to gradual change by migration. The really dangerous things are global catastrophes such as the asteroid impact at the end of the Cretaceous or whatever happened at the Permian-Triassic boundary (gamma ray burst maybe?).
Interesting that you sit there and type on a world-spanning network batting around ideas from five thousand years ago, or yesterday, and then use your fingers to type that the human species isn’t special.
Do you really think humans are unable to think about the future, like a bear hibernating, or perhaps the human mind, and its offspring, human culture and history, can’t see ahead?
Why is “Learn the past, or repeat it!” such a popular saying, then?
The Iron Law of Institutions (agents act in ways that benefit themselves in the context of the institution [system], regardless of the effect those actions have on the larger system) would seem to militate against any attempts to correct our many, quickly failing complex social and technological systems.
This would tend to imply that attempts to organize large-scale social structures are temporary at best, and largely futile. I agree. The real key is to embrace and ride the wave as it crests and collapses, so it’s possible to manage the fall – not to try to stand against it and get knocked down and drowned. Focus your efforts on something useful instead of wasting them on a hopeless, and worthless, cause.
Civilization is obviously highly unstable. However, it should be remembered that even Neolithic cultures are almost all less than 10,000 years old. So there has been little time for evolutionary adaptation to living in complex cultures (although there is evidence that the last 10,000 years have seen very rapid genetic changes in human populations). If civilization can continue indefinitely, which of course is not very clear, then evolutionary selection would be expected to produce humans much better adapted to living in complex cultures, so they might become more stable in the distant future. At present the mean time to collapse is probably a few hundred years.
But perhaps you’re not contemplating that too much individual freedom can destabilize society. Is that a part of your vast psychohistorical equation?
Well said, but something I find intriguing is that the author isn’t talking so much about civilizational collapse. The focus is more on various subsystems of civilization (transportation, energy, healthcare, etc.).
These individual components are not inherently particularly dangerous (at a systemic/civilizational level). They have been made that way by purposeful public policy choices, from allowing enormous compensation packages in healthcare to dismantling our passenger rail system to subsidizing fossil fuel energy over wind and solar to creating tax incentives that distort community development. These things are not done for efficiency. They are done to promote inequality, to allow connected insiders and technocratic gatekeepers to expropriate the productive wealth of society. Complexity isn’t a byproduct; it is the mechanism of the looting. If MDs in hospital management made similar wages as home health aides, then how would they get rich off the labor of others? And if they couldn’t get rich, what would be the point of managing the hospital in the first place? They’re not actually trying to provide quality, affordable healthcare to all Americans.
It is that cumulative concentration of wealth and power over time which is ultimately destabilizing, producing accepted social norms and customs that lead to fragility in the face of both expected and unexpected shocks. This fragility comes from all sorts of specific consequences of that inequality, from secrecy to group think to brain drain to two-tiered justice to ignoring incompetence and negligence to protecting incumbents necessary to maintain such an unnatural order.
Now that is a comprehensive thought. (Honestly!)
Seeing the big picture from looking at a few pixels is an art.
Thanks, appreciate the note!
I tend to agree with your point of view.
The problem that arises with any societal order over time is that corrosive elements, in the form of corruptive (not principle-based) behavior by decision makers, become institutionalized. I may not like Trump as a person, but the fact that he seems to unravel and shake the present arrangement, and serves as an indicator that people are beginning to realize what game is being played, makes me like him in that specific function. There may be some truth in Thomas Jefferson’s quote: “The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. It is its natural manure.” Those presently benefiting greatly from the present arrangement are fighting with all means to retain their position; whether successfully or not, we will see.
Well said, washunate. I think an argument could be run that outside economic areas, there has been a drive to de-complexify.
Non economic institutions, bodies which exist for non market/profit reasons are or have been either hollowed out, or co-opted to market purposes. Charities as vast engines of self enrichment for a chain of insiders. Community groups, defunded, or shriveled to an appendix by “market forces”. The list goes on…and on.
Reducing the “not-market” to the status of sliced white bread makes us all the more dependent on the machinated complexities of “the market”… god help us….
Joseph Tainter’s thesis, set out in “The Collapse of Complex Societies” is simple: as a civilization ages its use of energy becomes less efficient and more costly, until the Law of Diminishing Returns kicks in, generates its own momentum and the system grinds to a halt. Perhaps this article describes a late stage of that process. However, it is worth noting that, for the societies Tainter studied, the process was ineluctable. Not so for our society: we have the ability–and the opportunity–to switch energy sources.
In my grandmother’s youth, they did not burn wood for nothing. Splitting wood was hard work that required calories.
Today, we heat up our patios at night with gas heaters… The amount of economic activity based on burning energy not related to survival is astounding.
A huge percentage of our GDP is based on economies of scale and economic efficiencies but are completely disconnected from environmental efficiencies.
This total loss of control between nature and our lifestyles will be our Waterloo.
“A huge percentage of our GDP is based on economies of scale and economic efficiencies but are completely disconnected from environmental efficiencies”
I think this artificial market construct is an intrinsic driver firming up tight couplings to commodify everything – and it is now morphing because of #14: “Change introduces new forms of failure.”
An interesting article as usual, but here is another take.
Indeed, sometimes complex systems can collapse under the weight of their own complexity (Think: credit default swaps). But sometimes there is a single simple thing that is crushing the system, and the complexity is a desperate attempt to patch things up that is eventually destroyed by brute force.
Consider a forced population explosion: the people are multiplied exponentially. This reduces per-capita physical resources, tends to reduce per-capita capital, and limits the amount of time available to adapt: a rapidly growing population puts an economy on a treadmill that gets faster and faster and steeper and steeper until it takes superhuman effort just to maintain the status quo. There is a reason why, for societies without an open frontier, essentially no nation has ever become prosperous without first moderating the fertility rate.
However, you can adapt. New technologies can be developed. New regulations written to coordinate an ever more complex system. Instead of just pumping water from a reservoir, you need networks of desalinization plants – with their own vast networks of power plants and maintenance supply chains – and recycling plans, and monitors and laws governing water use, and more efficient appliances, etc.etc.
As an extreme, consider how much effort and complexity it takes to keep a single person alive in the space station.
That’s why in California cars need to be emissions tested, but in Alabama they don’t – and the air is cleaner in Alabama. More people needs more controls and more exotic technology and more rules.
Eventually the whole thing starts to fall apart. But to blame complexity itself is possibly missing the point.
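To put rough numbers on that treadmill (stylized rule-of-70 arithmetic, not a forecast):

# Rule-of-70 arithmetic for the population treadmill (stylized, not a
# forecast): at growth rate g percent, the stock of everything
# per-capita must double roughly every 70/g years just to stand still.

for g in (1, 2, 3, 4):
    print(f"{g}%/yr growth: duplicate all housing, water and power "
          f"capacity in ~{70 / g:.0f} years just to maintain the status quo")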
No system is ever ‘the’.
Two words, Steve: Soviet Union.
It’s gone now. But we’re rebuilding it, bigger and better.
If, of course, bigger is better.
Facts not in evidence.
This may just be a rationalization, on my part, for having devoted so much time to historical studies– but it seems to me that historians help civilizations prevent collapse, by preserving for them the largest possible “store of available responses.”
Yves,
Thanks for posting this very interesting piece! As you know, I am a fan of Bookstaber’s concept of tight coupling. Interestingly, Bookstaber (2007) does not reference Cook’s significant work on complex systems.
Before reading this article, I considered that the most preventable accidents involve a sequence of events uninterrupted by human intelligence. This needs to be modified by Cook’s points 8, 9, 10 and 12.
Using the aircraft landing on the Hudson in New York as an example of interrupting a sequence of events: the inevitable accident occurred, but no lives were lost. The human intervention was made possible by the unknowable probability of coupling the cause with a possible alternative landing site. A number of aircraft accidents involve failed attempts to find a possible landing site, even though Cook’s point #12 was in play.
Thanks for the post!!!!!
A possible issue with or a misunderstanding of #7. Catastrophic failure can be made up of small failures that tend to follow a critical path or multiple critical paths. While a single point of origin for catastrophic failure may rarely if ever occur in a complex system, it is possible and likely in such a system to have collections of small failures that occur or tend to occur in specific sequences of order. Population explosion (as TG points out) would be a good example of a failure in a complex social system that is part of a critical path to catastrophic failure.
Such sequences, characterized by orders of precedence, are more likely in tightly coupled systems (which, as Yves points out, can be any system pushed to the max). The point is, they can be identified and isolated, at least where a complex system is not being misused or pushed to its limits, or corrupted by humans so that such likely sequences are baked into the system (such as by propaganda -> ideology) and viewed as features and not bugs.
I agree completely that maximum efficiency comes with horrible costs. When hospitals are staffed so that people are normally busy every minute, patients routinely suffer more as often no one has time to treat them like a human being, and when things deviate from the routine, people have injuries and deaths. Same is true in other contexts.
Agreed, but that’s not caused by efficiency. That’s caused by inequality. Healthcare has huge disparities in wages and working conditions. The point of keeping things tightly staffed is to allow big bucks for the top doctors and administrators.
Yes. When one efficiency conflicts with and destroys another efficiency. E.g., your mother juggled a job and a family and ran around in turbo mode, but she dropped everything when her kids were in trouble. That is an example of an efficiency that can juggle contradictions and still not fail.
Might this nurse observe that in hospitals, there isn’t and can’t be a “routine” to deviate from, no matter how fondly “managers” wish to try to make it and how happy they may be to take advantage of the decent, empathic impulses of many nurses and/or the need to work to eat of those that are just doing a job. Hence the kindly (sic) practice of “calling nurses off” or sending them home if “the census is down,” which always runs aground against a sudden influx of billable bodies or medical crises that the residual staff is expected to just somehow cope with caring for or at least processing, until the idiot frictions in the staffing machinery add a few more person-hours of labor to the mix. The larger the institution, the greater the magnitude and impact (pain, and dead or sicker patients and staff too) of the “excursions from the norm.”
It’s all about the ruling decisions on what are deemed (as valued by where the money goes) appropriate outcomes of the micro-political economy… In the absence of an organizing principle that values decency and stability and sustainability rather than upward wealth transfer.
I’ll join the choir recommending Tainter as a critical source for anybody interested in this stuff.
IBG/YBG is a new concept for me, with at least one famous antecedent. “Après moi, le déluge.”
The author presents the best-case scenario for complex systems: one in which the practitioners involved are actually concerned with maintaining system integrity. However, as Yves points out, that is far from being the case in many of our most complex systems.
For instance, the Silvertip pipeline spill near Billings, MT a few years ago may indeed have been a case of multiple causes leading to unforeseen/unforeseeable failure of an oil pipeline as it crossed the Yellowstone river. However, the failure was made immeasurably worse due to the fact that Exxon had failed to supply that pump-station with a safety manual, so when the alarms started going off the guy in the station had to call around to a bunch of people to figure out what was going on. So while it’s possible that the failure would have occurred no matter what, the failure of the management to implement even the most basic of safety procedures made the failure much worse than it otherwise would have been.
And this is a point that the oil company apologists are all too keen to obscure. The argument gets trotted out with some regularity that because these oil/gas transmission systems are so complex, some accidents and mishaps are bound to occur. This is true–but it is also true that the incentives of the capitalist system ensure that there will be more and worse accidents than necessary, as the agents involved in maintaining the system pursue their own personal interests which often conflict with the interests of system stability and safety.
Complex systems have their own built-in instabilities, as the author points out; but we’ve added a system of un-accountability and irresponsibility on top of our complex systems which ensures that failures will occur more often and with greater fall-out than the best-case scenario imagined by the author.
As Yves pointed out, there is a lack of agency in the article. A corrupt society will tend to generate corrupt systems, just as it tends to generate corrupt technology and corrupt ideology. For instance, we get lots of little cars driving themselves about, profitable to the ideology of consumption, but also with an invisible thumb of control, rather than a useful system of public transportation. We get “abstinence only” population explosion because “groath”, rather than any rational assessment of obvious future catastrophe.
Right on. The primary issue of our time is a failure of management. Complexity is an excuse more often than an explanatory variable.
For want of a nail…..
Am I the only one hearing Nine Inch Nails, “March of the Pigs”…
Aug. 21, 2015 1:54 a.m. ET
A Carlyle Group LP hedge fund that anticipated a sudden currency-policy shift in China gained roughly $100 million in two days last week, a sign of how some bearish bets on the world’s second-largest economy are starting to pay off.
http://www.wsj.com/articles/hedge-fund-gains-100-million-in-two-days-on-bearish-china-bet-1440136499?mod=e2tw
oink oink is the sound of system fail
A very important principle:
All systems have a failure rate, including people. We don’t get to live in a world where we don’t need to lock our doors and banks don’t need vaults. (If you find it, be sure to radio back.)
The article is about how we deal with that failure rate. Pointing out that there are failures misses the point.
. . .but it is also true that the incentives of the capitalist system ensure that there will be more and worse accidents than necessary, as the agents involved in maintaining the system pursue their own personal interests which often conflict with the interests of system stability and safety.
How true. A Chinese city exploded. Talk about a black swan. I wonder what the next disaster will be?
After a skim read of the post and James’ lead-off comment re emperors (Brooklin Bridge’s comment re misuse is somewhat resonant), it seems to me that a distinguishing feature of systems is not being addressed, and is therefore being treated as though it’s irrelevant.
What about the mandate for a system to have an overarching, empowered regulatory agent, one that could presumably learn from the reflections contained in this post? In much of what is posted here at NC, writers give due emphasis to the absence/failure of a range of regulatory functions relevant to this stage of capitalism. These run from SEC corruption to the uncontrolled movement of massive amounts of questionably valuable value in off-the-books transactions between banks, hedge funds etc. This system intentionally has a deliberately weakened control/monitoring function, ideologically rationalized as freedom but practically justified as maximizing accumulation possibilities for the powerful. It is self-lobotomizing, a condition exacerbated by national economic territories (to some degree). I’m not going to now jump up with 3 cheers for socialism as capable of resolving problems posed by capitalism. But, to stay closer to the level of abstraction of the article, doesn’t the distinction between distributed opacity + unregulated concentrations of power vs. transparency + some kind of central governing authority matter? Maybe my Enlightenment hubris is riding high after the morning coffee, but this is a kind of self-awareness that assumes its range is limited, even as it posits that limit. Hegel was all over this, which isn’t to say he resolved the conundrum, but it’s not even identified here.
Think of Trump as the pimple finally coming to a head: he’s making the greed so obvious, and pissing off so many people that some useful regulation might occur.
Another thought about world social collapse: if such a thing is likely (and I’m sure the PTB know if it is, judging from the reports from the Pentagon about Global Warming being a national security concern), wouldn’t it be a good idea to have a huge ability to overpower the rest of the world?
We might be the only nation that survives as a nation, and we might actually have an Empire of the World, previously unattainable. Maybe SkyNet is really USANet. It wouldn’t require any real change in the national majority of creepy grabby people.
Government bureaucrats and politicians pursue their own interests just as businessmen do. Pollution was much worse in the non-capitalist Soviet Union, East Germany and Eastern Europe than it was in the Capitalist West. Chernobyl happened under socialism, not capitalism. The present system in China, although not exactly “socialism”, certainly involves a massively powerful government, but a glance at the current news shows that massive governmental power does not necessarily prevent accidents. The agency problem is not unique to or worse in capitalism than in other systems.
I’d throw in the theory of cognitive dissonance as an integral part of the failure of complex systems. (Example: Tavris and Aronson’s recent book, Mistakes Were Made (But Not by Me).)
We are more apt to justify bad decisions, with bizarre stories, than to accept our own errors (or mistakes of people important to us). It explains (but doesn’t make it easier to accept) the complete disconnect between accepted facts and fanciful justifications people use to support their ideas/organization/behavior.
I think this one suffers from “Metaphysical Foo Foo Syndrome” (MFFS). That means using words to reference realities that are inherently ill-defined and often unobservable, leading to untestable theories and deeply personal approaches to epistemological reasoning.
just what is a ‘complex system’? A system implies a boundary — there are things part of the system and things outside the system. That’s a hard thing to pin down — just where the system ends and something else begins. So when ‘the system’ breaks down, it’s hard to tell with any degree of testable objectivity whether the breakdown resulted from “the system” or from something outside the system, and the rest was just “an accident that could have happened to anybody”.
maybe the idea is: “if something breaks down at the worst possible time and in a way that fkks everything up, then it must have been a complex system”. But it could also have been a simple system that ran into bad luck. Consider your toilet. Maybe you put too much toilet paper in it, and it clogged. Then it overflowed and ran out into your hallway with your shit everywhere. Then you realized you had an expensive Chinese rug on the floor. oh no! That was bad. you were gonna put that rug away as soon as you had a chance to admire it unrolled. Why did you do that? Big fckk up. But it wasn’t a complex system. It was just one of those things.
thanks for that, I think…
Actually, it was a system too complex for this individual. S(he) became convinced the plumbing would work as it had previously. But due to poor maintenance, too much paper, or a stiff BM, the “system” didn’t work properly. There must have been an opportunity to notice something anomalous, but appropriate oversight wasn’t applied.
You mean the BM was too tightly coupled?
It could happen to anybody after enough pizza and red wine
people weren’t meant to be efficient. paper towels and duct tape can sometimes help
This occurred to me: The entire 1960s music revolution wouldn’t have happened if anybody had to be efficient about hanging out and jamming. You really have to lay around and do nothing if you want to achieve great things. You need many opportunities to fail and learn before the genius flies. That’s why tightly coupled systems are self-defeating. Because they wipe too many people out before they’ve had a chance to figure out the universe.
Excellent example of tight coupling: Toilet -> Floor -> Hallway -> $$$ Rug
Fix: Apply Break coupling procedure #1: Shut toilet door.
Then: Procedure #2 Jam inexpensive old towels in gap at the bottom.
As with all such measures this buys the most important thing of all – time. In this case to get the $$$Rug out of the way.
IIRC one of Bookstaber’s points was that, in the extreme, tight coupling allows problems to propagate through the system so fast and so widely that we have no chance to mitigate before they escalate to disaster.
I think that’s an interesting framework. I would say efficiency is achieving the goal in the most effective manner possible. Perhaps that’s measured in energy, perhaps labor, perhaps currency units, but whatever the unit of measure, you are minimizing that input cost.
What our economics and business thinking (and most importantly, political thinking) has primarily been doing, I would say, is not optimizing for efficiency. Rather, they are changing the goal being optimized. The will to power has replaced efficiency as the actual outcome.
Unchecked theft, looting, predation, is not efficient. Complexity and its associated secrecy is used to hide the inefficiency, to justify and promote that which would not otherwise stand scrutiny in the light of day.
What nonsense. All around us ‘complex systems’ (airliners, pipelines, coal mines, space stations, etc.) have become steadily LESS prone to failure/disaster over the decades. We are near the stage where the only remaining danger in air travel is human error. We will soon see driverless cars & trucks, and you can be sure accident rates will decline as the human element is taken out of their operation.
See Fukushima, lithium batteries spontaneously catching fire, financial engineering leading to collapse unless vast energy is invested to re-stabilize it… Driverless cars and trucks are not that soon; tech buddies say ten years, I say malarkey, based on several points made in the article. Meanwhile, as Brooklin Bridge points out, public transit languishes, and as washunate points out, trains and other more efficient means of locomotion are starved while more complex methods have more energy thrown at them, energy which could be better applied elsewhere. I think you’re missing the point: you say look at all our complex systems working fine, then ramble off a list of things with high failure potential and say look, they haven’t broken yet, while things that have broken and don’t support your view are left out. By this mechanism safety protocols are eroded (that accident you keep avoiding hasn’t happened, which means you’re being too cautious, so your efficiency can be enhanced by not worrying about it until it happens; then you can fix it, but as pointed out above, tightly coupled systems can’t react fast enough, at which point we all have to hear the whocoodanode justification…)
And the new points of failure will be what?
So here’s a question: what is the failure hierarchy? And why don’t those crucial failsafe nodes protect the system? Could it be that we don’t know what they are?
While 90% of people were producing food a few decades ago, I think a large percentage will be producing energy in a few decades… right now we are still propping up our golf courses and avoiding investing in pipelines and refineries. We are still exploiting the assets of the 50s and 60s to live our hyper material lives. Those investments are what gave us a few decades of consumerism.
Now everyone wants government to spend on infra without even knowing what needs to go and what needs to stay. Maybe half of Californians need to get out of there and forget about building more infra there… just a thought.
America still has a frontier ethos… how in the world can the right investments in infra be made with a collection of such values?
We’re going to get city after city imploding. More workers producing energy and less leisure over the next few decades. That’s what breakdown is going to look like.
yes, and it seems we still have a lot of exploitable material but at what point will we hit diminishing returns and will we be capable of realizing that state…
Flying might get safer and safer while we get more and more cities imploding.
Just like statues on Easter Island were getting increasingly elaborate as trees were disappearing.
we either don’t know what they are or underestimate, ignore, or misprice the risks? Some have pointed to a state of confidence in stability that can breed complacency…
What you say is true, but only if you have a sufficient number of failures to learn from. A lot of planes had to crash for air travel to be as safe as it is today.
I am surprised to see no reference to John Gall’s General Systemantics in this discussion, an entire study of systems and how they misbehave. I tend to read it from the standpoint of managing a complex IT infrastructure, but his work starts from human systems (organizations).
The work is organized around aphorisms — Systems tend to oppose their own proper function — The real world is what is reported to the system — but one or two from this paper should be added to that repertoire. Point 7 seems especially important. From Gall, I have come to especially appreciate the Fail-Safe Theorem: “when a Fail-Safe system fails, it fails by failing to fail safe.”
Instead of writing something long and rambling about complex systems being aggregates of smaller, discrete systems, each depending on a functioning and accurate information processing/feedback (not IT) system to maintain its coherence; and upon equally well functioning feedback systems between the parts and the whole — instead of that I’ll quote a poem.
” Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold; ”
-Yates, “The Second Coming”
erm… make that “Yeats”, as in W.B.
So, naturalists observe, a flea
Has smaller fleas that on him prey;
And these have smaller still to bite ’em,
And so proceed ad infinitum.
– Swift
IIRC in Robert A. Heinlein’s “The Puppet Masters” there’s a different version:
Big fleas have little fleas
Upon their backs to bite ’em,
And little fleas have lesser fleas
And so, ad infinitum.
Since the story is about humans being parasitized and controlled by alien “slugs” that sit on their backs, and the slugs in turn being destroyed by an epidemic disease started by the surviving humans, the verse has a macabre appropriateness.
The order Siphonaptera….
“And what rough beast, its hour come round at last,
slouches toward Bethlehem to be born?”
I can’t leave that poem without its ending – especially as it becomes ever more relevant.
Terrific post- just the sort of thing that has made me a NC fan for years.
I’m a bit surprised that the commentators (thus far) have not referred to the Financial Crisis of 2008 and the ensuing Great Recession as an excellent example of Cook’s failure analysis.
Bethany McLean and Joe Nocera’s All The Devils Are Here (www.amazon.com/All-Devils-Are-Here-Financial/dp/159184438X/) describes beautifully how the erosion of the protective mechanisms in the U.S. financial system, the loss of no single one of which would by itself have been deadly (Cook’s Point 3), combined to produce the Perfect Storm.
It brought to mind Garrett Hardin’s The Tragedy Of The Commons (https://en.wikipedia.org/wiki/Tragedy_of_the_commons). While the explosive growth of debt (and therefore risk) obviously jeopardized the entire system, it was very much within the narrow self-interest of individual players to keep the growth (and therefore the danger) increasing.
Bingo. Failure of the culture to properly train its members. Not so much a lack of morality as a failure to point out that when the temple falls, it falls on Samson.
The next big fix is to use the US military to wall off our entire country, maybe include Canada (language is important in alliances) during the Interregnum.
Why is no one mentioning the Foundation Trilogy and Hari Seldon here?
My only personal experience with the crash of a complex, tightly-coupled system was the crash of the trading floor of a very big stock exchange in the early part of this century. The developers were in the computer room, telling the operators NOT to roll back to the previous release, and the operators ignored them and did so anyway. Crash!
In Claus Jensen’s fascinating account of the Challenger disaster, NO DOWNLINK, he describes how the managers overrode the engineers’ warnings not to fly under existing weather conditions. We all know the result.
Human error was the final cause in both cases.
Now we are undergoing the terrible phenomenon of global warming, which everybody but Republicans, candidates and elected, seems to understand is real and catastrophic. The Republicans have a majority in Congress, and refuse–for ideological and monetary reasons–to admit that the problem exists. I think this is another unfolding disaster that we can ascribe to human error.
“Human error” needs unpacking here. In this discussion, it’s become a Deus ex Humanitas.
Humans do what they do because their cultural experiences impel them to do so.
Human plus culture is not the same as human. That’s why capitalism doesn’t work in a selfish society.
“capitalism doesn’t work in a selfish society”
Very true, not nearly so widely realized as it should be, and the Irony of Ironies.
Another problem with obsessing about (productive or technical) efficiency is that it usually means a narrow focus on the most measured or measurable inputs and outputs, to the detriment of less measurable but no less important aspects. Wages are easier to measure than the costs of turnover, including changes in morale, loss of knowledge and skill, and regard for the organization vs. regard for the individual. You want low cost fish? Well, it might be caught by slaves. Squeeze the measurable margins, and the hidden margins will move.
You hint at a couple of fallacies.
1) Measuring what is easy instead of what is important.
2) Believing that measuring many things and then optimizing all of them optimizes the whole.
Then, have some linear thinker try to optimize those in a complex system (like any organization involving humans) with multiple hidden and delayed feedback loops, and the result will certainly be unexpected. Whether for good or ill is going to be fairly unpredictable unless someone has actually looked for the feedback loops.
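A minimal sketch of both fallacies at once (hypothetical rates, nothing measured from any real operation): a two-stage pipeline where “optimizing” the easy-to-measure first stage past the second stage’s capacity improves nothing and quietly manufactures waste.

# Toy two-stage pipeline (hypothetical numbers): stage 1 feeds a finite
# buffer, stage 2 drains it. Pushing stage 1's measurable rate past
# stage 2's capacity leaves completions flat and converts input to scrap.

def run(rate1, rate2=5, cap=10, ticks=100):
    queue = done = scrap = 0
    for _ in range(ticks):
        queue += rate1
        if queue > cap:              # buffer overflow: excess is scrapped
            scrap += queue - cap
            queue = cap
        pulled = min(queue, rate2)   # stage 2 drains what it can
        queue -= pulled
        done += pulled
    return done, scrap

for rate1 in (4, 5, 8):              # stage 1 "optimized" ever upward
    done, scrap = run(rate1)
    print(f"stage-1 rate {rate1}: {done} completed, {scrap} scrapped")

The stage-1 metric looks 60% better at rate 8 than at rate 5, while completed work is unchanged and the hidden margin (scrap) explodes.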
Very good.
It’s nice to see well spelled out a couple of intuitions I’ve had for a long time. For example, that we are going in the wrong direction when we try to streamline instead of following the path of biology: redundancies, “dirtiness” and, of course, the king of mechanisms, negative feedback (am I wrong in thinking that the main failure of finance, as opposed to economy, is that it has inbuilt positive feedback instead of negative?). And yes, my professional experience has taught me that when things go really wrong it was never just one mistake, it is a cluster of those.
Yes, as you hint here, and I would make forcefully explicit: COMPLEX vs NOT-COMPLEX is a false dichotomy that is misleading from the start.
We ourselves, and all the organisms we must interact with in order to stay alive, are individually among the most complex systems that we know of. And the interactions of all of us that add up to Gaia are yet more complex. And still it moves.
Natural selection built the necessary stability features into our bodily complexity. We even have a word for it: homeostasis. Based on negative feedback loops that can keep the balancing act going. And our bodies are vastly more complex than our societies.
Society’s problem right now is not complexity per se, but the exploitation of complexity by system components that want to hog the resources and to hell with the whole, quite exactly parallel to the behavior of cancer cells in our bodies when regulatory systems fail.
In our society’s case, it is the intelligent teamwork of the stupidly selfish that has destroyed the regulatory systems. Instead of negative feedback keeping deviations from optimum within tolerable limits, we now have positive feedback so obvious it is trite: the rich get richer.
We not only don’t need to de-complexify, we don’t dare to. We really need to foster the intelligent teamwork that our society is capable of, or we will fail to survive challenges like climate change and the need to sensibly control the population. The alternative is to let natural selection do the job for us, using the old reliable four horsemen.
We are unlikely to change our own evolved selfishness, and probably shouldn’t. But we need to control the monsters that we have created within our society. These monsters have all the selfishness of a human at his worst, plus several natural large advantages, including size, longevity, and the ability to metamorphose and regenerate. And as powerful as they already were, they have recently been granted all the legal rights of human citizens, without appropriate negative feedback controls. Everyone here will already know what I’m talking about, so I’ll stop.
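The homeostasis vs. rich-get-richer contrast is visible in a two-line toy model (illustrative gain values, not a claim about any measured system):

# Stylized feedback demo: negative feedback (gain < 0) pulls a deviation
# back toward the set point each step, homeostasis-style; positive
# feedback (gain > 0) compounds it. Gains are illustrative only.

def trajectory(gain, deviation=1.0, steps=10):
    path = []
    for _ in range(steps):
        deviation += gain * deviation   # feedback proportional to deviation
        path.append(round(deviation, 3))
    return path

print("negative feedback:", trajectory(-0.5))  # decays toward zero
print("positive feedback:", trajectory(+0.5))  # compounds without limit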
Actually I believe F1 has rules regarding the number of changes that can be made to a car during the season. This is typically four or five changes (replacements or rebuilds), so an F1 car has to be able to run more than one race or otherwise face penalties.
Yes, F-1 allows four power plants per season; it has lately been updated to 5. There isn’t anything in the air or on the ground as complex as an F-1 car power plant. The cars feed 30 or more engineers at the track and back home (normally in England) millions of bits of info per second, and no, Microsoft is not used, but very complex programs watch every system in the car. A pit stop in F-1 is 2.7 seconds; anything above 3.5 and you’re not trying hard enough.
Honda, who pride themselves on engineering, have struggled in power plant design this year, and admit it, but have put more engineers on the case. At the beginning of this tech engine design era the big teams hired over 100 more engineers to solve the problems. Ferrari threw out the first design and did a total rebuild, and it’s working.
This is how the world of F-1 has moved into other designs; a long but fun read.
http://www.wired.com/2015/08/mclaren-applied-technologies-f1/
I’m sure those in F-1 system design would look at stories like this and come to the conclusion that these nice people are the gatekeepers and not the future. Yes, I’m a long-time fan of F-1. Then again, what do I know.
The sad thing in F-1 is that the gatekeepers are the owners, CVC.
Interesting comment! One has to wonder why every complex system can’t be treated as the be-all. Damn the torpedos. Spare no expense! Maybe if we just admitted we are all doing absolutely nothing but going around in a big circle at an ever increasing speed, we could get a near perfect complex system to help us along.
If the human race were as important as auto racing, maybe. But we know that’s not true ;->
In the link it’s the humans of McLaren that make all the decisions on the car and the race on hand. The link is about humans working together either in real race time or designing out problems created by others.
Globalization factors maximizing the impact of Murphy’s Law:
1. Meltdown potential of a globalized ‘too big to fail’ financial system associated with trade imbalances and international capital flows, and boom and bust impact of volatile “hot money”.
2. Environmental damage associated with the inefficiency of excessively long supply chains seeking cheap commodities and dirty, polluting manufacturing zones.
3. Military vulnerability of the same long, tightly coupled “just in time” supply chains across vast oceans, war zones, and choke points that are very easy to attack and nearly impossible to defend.
4. Consumer product safety threat of manufacturing somewhere offshore, out of sight, out of mind, outside the jurisdiction of the domestic regulatory system.
5. Geographic concentration and contagion of risk of all kinds – fragile pattern of horizontal integration: manufacturing in China, finance in New York and London, industrialized monoculture agriculture lacking biodiversity (Iowa feeds the world). If all the bulbs on the Christmas tree are wired in series, it takes only one to fail and they all go out (quick reliability arithmetic below).
Globalization is not a weather event, not a thermodynamic process of atoms and molecules, not a principle of Newtonian physics, not water running downhill, but a hyper-aggressive, top-down policy agenda by power-hungry politicians and reckless bean-counter economists. An agenda hell-bent on creating a tightly coupled, globally integrated, unstable house of cards with a proven capacity for catastrophic (trade) imbalance, global financial meltdown, contagion of bad debt, and susceptibility to physical threats of all kinds.
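The series-wired Christmas tree point is ordinary reliability arithmetic (illustrative numbers): a series system needs every link to work, a redundant one needs any link to work.

# Series vs. redundant reliability (illustrative): with n links each 99%
# reliable, a series chain works only if ALL do; a redundant bank works
# if ANY one does.

p = 0.99                              # reliability of a single link
for n in (1, 10, 50, 100):
    series = p ** n                   # all n must work
    redundant = 1 - (1 - p) ** n      # at least one must work
    print(f"n={n:3d}: series {series:.3f}, redundant {redundant:.6f}")

At a hundred links the series chain works barely a third of the time, which is the globalized just-in-time supply chain in miniature.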
Any complex system contains non-linear feedback.
Management presumes it is their skill that keeps the system working over some limited range, where the behavior is approximately linear. Outside those limits, the system can fail catastrophically.
What is perceived as operating or management skill exists either because the system is kept within “safe” limits, or through sheer happenstance. See chaos theory.
Operators or engineers controlling or modifying the system are providing feedback. Feedback can push the system past “safe” limits. Once past safe limits, the system can fail catastrophically. Such failures happen very quickly, and are always “a surprise”.
All complex systems contain non-linear feedback, and all appear manageable over a small range of operation, under specific conditions.
These are the systems’ safe working limits; sometimes the limits are known, but in many cases the safe working limits are unknown (see Stock Markets).
All systems with non-linear feedback can and will fail, catastrophically.
All predicted by Chaos Theory. Best mathematical field applicable to the real world of systems.
So I’ll repeat: all complex systems will fail when operating outside safe limits. Change in the system, management-induced or stimulus-induced, can and will redefine those limits, with spectacular results.
We hope and pray the system will remain within safe limits, but greed and complacency lead us humans to test those limits (loosen the controls), or enable greater levels of feedback (increase volumes of transactions). See the Crash of 2007, following the repeal of Glass-Steagall, etc.
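Synoia’s “safe limits” point has a textbook illustration in the logistic map, the standard toy system of chaos theory (the map itself is standard; reading it as a managed system is my gloss): nudge one parameter and the long-run behavior goes from a calm fixed point, to oscillation, to never settling at all.

# Logistic map x -> r*x*(1-x), the classic chaos-theory toy: at r=2.8 it
# settles to a fixed point ("manageable"); at r=3.2 it oscillates; at
# r=3.9 it is chaotic and never settles ("outside safe limits").

def long_run(r, x=0.5, warmup=500, samples=4):
    for _ in range(warmup):          # discard the transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(samples):         # sample the long-run behaviour
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r in (2.8, 3.2, 3.9):
    print(f"r={r}: long-run values {long_run(r)}")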
It’s Ronnie Ray Gun. He redefined it as, “Safe for me but not for thee.” Who says you can’t isolate the root?
Ronnie Ray Gun was the classic example of a Manager.
Where one can only say:
“Forgive them Father, for they know not what they do”
Three quite different thoughts:
First, I don’t think the use of “practitioner” is an evasion of agency. Instead, it reflects the very high level of generality inherent in systems theory. The pitfall is that generality is very close to vagueness. However, the piece does contain an argument against the importance of agency; it argues that the system is more important than the individual practitioners, that since catastrophic failures have multiple causes, individual agency is unimportant. That might not apply to practitioners with overall responsibility or who intentionally wrecked the system; there’s a naive assumption that everyone’s doing their best. I think the author would argue that control fraud is also a system failure, that there are supposed to be safeguards against malicious operators. Bill Black would probably agree. (Note that I dropped off the high level of generality to a particular example.)
Second, this appears to defy the truism from ecology that more complex systems are more stable. I think that’s because ecologies generally are not tightly coupled. There are not only many parts but many pathways (and no “practitioners”). So “coupling” is a key concept not much dealt with in the article. It’s about HUMAN systems, even though the concept should apply more widely than that.
Third, Yves mentioned the economists’ use of “equilibrium.” This keeps coming up; the way the word is used seems to me to badly need definition. It comes from chemistry, where it’s used to calculate the production from a reaction. The ideal case is a closed system: for instance, the production of ammonia from nitrogen and hydrogen in a closed pressure chamber. You can calculate the proportion of ammonia produced from the temperature and pressure of the vessel. It’s a fairly fast reaction, so time isn’t a big factor.
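For concreteness, the closed-system calculation described above can be written down (standard chemistry, added here as a gloss): at a given temperature and pressure, the equilibrium constant pins down the composition.

```latex
% Closed-system example: ammonia synthesis at fixed T and P.
% K_p fixes the equilibrium composition; van 't Hoff gives its T-dependence.
\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3},
\qquad
K_p(T) = \frac{p_{\mathrm{NH_3}}^{2}}{p_{\mathrm{N_2}}\, p_{\mathrm{H_2}}^{3}},
\qquad
\frac{d \ln K_p}{dT} = \frac{\Delta H^{\circ}}{R\,T^{2}}
```

No such fixed target exists for an open, driven system, which is the point of the next paragraph.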
The Earth is not a closed system, nor are economies. Life is driven by the flow of energy from the Sun (and various other factors, like the steady rain of material from space). In open systems, “equilibrium” is a constantly moving target. In principle, you could calculate the results at any given condition, given long enough for the many reactions to finish. It’s as if the potential equilibrium drives the process (actually, the inputs do).
Not only is the target moving, but the whole system is chaotic in the sense that it’s highly dependent on variables we can’t really measure, like people, so the outcomes aren’t actually predictable. That doesn’t really mean you can’t use the concept of equilibrium, but it has to be used very carefully. Unfortunately, most economists are pretty ignorant of physical science, so ignorant they insistently defy the laws of thermodynamics (“groaf”), so there’s a lot of magical thinking going on. It’s really ideology, so the misuse of “equilibrium” is just one aspect of the system failure.
Really?
“equilibrium…from chemistry, where it’s used to calculate the production from a reaction”
That is certainly a definition in one scientific field.
There is another definition from physics.
When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium.
However, objects on a table are considered to be in equilibrium, until one considers an earthquake.
The conditions for an equilibrium need to be carefully defined, and there are few cases, if any, of equilibrium “under all conditions.”
Equilibrium ceases when Chemistry breaks out, dear Physicist.
Equilibrium ceases when Chemistry breaks out
This is only a subset.
I avoided physics, being not so very mathematical, so learned the chemistry version – but I do think it’s the one the economists are thinking of.
What I neglected to say: it’s an analogy, hence potentially useful but never literally true – especially since there’s no actual stopping point, like your table.
There is a much simpler way to look at it, in terms of natural cycles, because the alternative at the other extreme, a happy medium, is also a flatline on the big heart monitor. So the bigger it builds, the more tension and pressure accumulate. The issue then becomes how to leverage the consequences. As they say, a crisis should never be wasted. At its heart, there are two issues: economic overuse of resources, and a financial medium in which rent extraction has overwhelmed its benefits. These actually serve as some sort of balance, in that we are in the process of an economic heart attack, due to the clogging of this monetary circulation system, that will seriously slow economic momentum.
The need then is to reformulate how these relationships function, in order to direct and locate our economic activities within the planetary resources. One idea to take into consideration is that money functions as a social contract, though we treat it as a commodity. Recognizing that it is not property to be collected, but contracts exchanged, there would be no logic in basing the entire economy around the creation and accumulation of notional value, to the detriment of actual value. Treating money as a public utility seems like socialism, but it is just an understanding of how it functions. Like a voucher system, simply creating excess notes to keep everyone happy is really, really stupid, big-picture-wise.
Obviously some parts of the system need more than others, but not simply for ego gratification. Like a truck needs more road than a car, but an expensive car only needs as much road as an economy car. The brain needs more blood than the feet, but it doesn’t want the feet rotting off due to poor circulation either.
So basically, yes, complex systems are finite, but we need to recognize and address the particular issues of the system in question.
Perhaps in a too-quick scan of the comments, I overlooked any mention of Nassim Nicholas Taleb’s book, Antifragile. If so, my apologies. If not, it’s a serious omission from this discussion.
Thank you for this.
I first wondered about something related to this theme when I heard about just-in-time sourcing of inventory. (Now also staff.) I wondered then whether this was possible because we (middle- and upper-class US citizens) had been shielded from war and other catastrophic events. We can plan based on everything going right because most of us don’t know in our gut that things can always go wrong.
I’m Gen X, but 3 out of 4 of my grandparents were born during or just after WWI. Their generation built for redundancy, safety, stability. Our generation, well. We take risks, and I’m not sure the decision makers have a clue that any of it can bite them.
The just-in-time supply of components for manufacturing was described in Barry Lynn’s book “Cornered” and identified as creating extreme fragility in the American production system. There have already been natural disasters that shut down American automobile production in our recent past.
Everything going right wasn’t part of the thinking that went into just-in-time parts. Everything going right — long enough — to steal away market share on price-point was the thinking. Decision makers don’t worry about any of this biting them. Passing the blame down and golden parachutes assure that.
This is a good analysis of the dynamics of short-termism; the models don’t necessarily capture the dimension of time. Still pondering your comment below and, like washunate, wondering what others think about it, as I have only basic systems knowledge…
This is really a very good paper. My direct comments are:
Point 2: yes, provided the safety shields are not discarded for bad reasons like expedience or ignorance or avarice. See the Glass-Steagall Act, for example.
Point 4: yes, true of all dynamic systems.
Point 7: ‘root cause’ is not the same as ‘key factors’. (And here the doctor’s sensitivity to malpractice suits may be guiding his language.) It is important to determine key factors in order to devise better safety shields for the system. Think airplane black boxes, and the 1932 Pecora Commission after the 1929 stock market crash.
It’s easy: complexity became too complex. And I can’t read the small print. We are devolving into a world of happy people with gardens full of flowers that they live in on their cell phones.
There are a number of counter-examples: engineered and natural systems with a high degree of complexity that are nonetheless inherently stable and fault-tolerant.
1. Subsumption architecture is a method of controlling robots, invented by Rodney Brooks in the 1980s. This scheme is modeled on the way the nervous systems of animals work. In particular, the parts of the robot exist in a hierarchy of subsystems, e.g., foot, leg, torso, etc. Each of these subsystems is autonomously controlled. Each of the subsystems can override the autonomous control of its constituent subsystems. So, the leg controller can directly control the leg muscle, and can override the foot subsystem. This method of control was remarkably successful at producing walking robots which were not sensitive to unevenness of the surface. In other words, they were not brittle in the sense of Dr. Cook. Of course, subsumption architecture is not a panacea. But it is a demonstrated way to produce very complex engineered systems, consisting of many interacting parts, that are very stable. (A minimal sketch of the layering idea follows this list.)
2. The inverted pendulum. Suppose you wanted to build a device to balance a pencil on its point. You could imagine a sensor to detect the angle of the pencil, an actuator to move the balance point, and a controller to link the two in a feedback loop. Indeed, this is, very roughly, how a Segway remains upright. However, there is a simpler way to do it, without a sensor or a feedback controller. It turns out that if your device just moves the balance point sinusoidally (e.g., in a small circle), and if the size of the circle and the rate are within certain ranges, then the pencil will be stable. This is a well-known consequence of the Mathieu equation (written out after this list). The lesson here is that stability (i.e., safety) can be inherent in systems for subtle reasons that defy a straightforward fault/response feedback.
3. Emergent behavior of swarms. Large numbers of very simple agents interacting with one another can sometimes exhibit complex, even “intelligent” behavior. Ants are a good example. Each ant has only simple behaviors. However, the entire ant colony can act in complex and effective ways that would be hard to predict from the individual ant behaviors. A typical ant colony is highly resistant to disturbances in spite of the primitiveness of its constituent ants. (A toy simulation of this follows the list.)
4. Another example is the mammalian immune system, which uses negative selection as one mechanism to avoid attacking the organism itself. Immature B cells are generated in large numbers at random, each one with receptors for specifically configured antigens. During maturation, if they encounter a matching antigen (likely a protein of the organism), the B cell either dies or is inactivated. At maturity, what is left is a highly redundant cohort of B cells that only recognize (and neutralize) foreign antigens. (The same trick is sketched as an algorithm after this list.)
Well, these are just a few examples of systems that exhibit stability (or fault-tolerance) that defies the kind of Cartesian analysis in Dr. Cook’s article.
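On item 1 above, a minimal sketch of the subsumption layering idea, in Python with invented behaviors (not Brooks’s actual code): each layer acts autonomously, and a higher-priority layer can suppress the one below it.

```python
# Minimal subsumption-style controller: higher layers may override lower ones.
# Behavior and sensor names here are illustrative, not from Brooks's robots.

class Behavior:
    def act(self, sensors):
        """Return a motor command, or None to defer to lower layers."""
        raise NotImplementedError

class KeepWalking(Behavior):          # lowest layer: the default gait
    def act(self, sensors):
        return "swing-leg-forward"

class AvoidObstacle(Behavior):        # higher layer: subsumes the gait
    def act(self, sensors):
        if sensors.get("obstacle_ahead"):
            return "step-sideways"
        return None                   # nothing to do; lower layer keeps control

def control(layers, sensors):
    # Layers ordered highest-priority first; the first non-None command wins.
    for layer in layers:
        command = layer.act(sensors)
        if command is not None:
            return command

stack = [AvoidObstacle(), KeepWalking()]
print(control(stack, {"obstacle_ahead": False}))  # -> swing-leg-forward
print(control(stack, {"obstacle_ahead": True}))   # -> step-sideways
```

Each layer stays simple and testable on its own, which is where the fault tolerance comes from: no single master model of the world has to be right.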
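For reference on item 2, the equation the comment invokes (standard textbook material, not from the article): linearized about the upright position, the vibrated pendulum obeys a Mathieu equation, and the inverted state is stable when the drive parameters land in one of its stability regions.

```latex
% Mathieu equation for the driven ("Kapitza") inverted pendulum,
% linearized about the upright position; tau is rescaled time.
\ddot{\theta} + \left( a - 2q \cos 2\tau \right)\theta = 0
```

Here a and q are set by gravity, pendulum length, and the drive amplitude and frequency; for suitable (a, q) the upright solution is stable even though it is unstable with the drive switched off (q = 0).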
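On item 3, a toy version of the classic double-bridge experiment from the ant literature (all parameters invented for illustration): individual ants know nothing about path lengths, yet the colony converges on the shorter path through pheromone feedback alone.

```python
import random

# Toy "double bridge": two paths to food, ants pick by pheromone alone,
# deposit is inversely proportional to path length, and trails evaporate.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}   # start undifferentiated
EVAPORATION = 0.02

random.seed(0)
for _ in range(2000):                       # one ant per iteration
    total = sum(pheromone.values())
    path = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[path] += 1.0 / lengths[path]  # shorter trip => stronger marking
    for p in pheromone:
        pheromone[p] *= 1 - EVAPORATION     # evaporation keeps the colony adaptive

total = sum(pheromone.values())
print({p: round(v / total, 3) for p, v in pheromone.items()})
# Pheromone ends up overwhelmingly on the short path: a colony-level
# "decision" made by agents with no global knowledge.
```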
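And on item 4, the same negative-selection trick is used in “artificial immune system” anomaly detectors; here is a minimal sketch, with made-up bit-string “self” patterns and a crude matching rule: random detectors are generated, any that react to self are culled, and the survivors flag only the foreign.

```python
import random

# Negative selection, immune-system style: generate random detectors,
# cull any that react to "self" during maturation, keep the rest.
# Bit strings and the matching rule are invented for illustration.
ALPHABET = "01"
LENGTH = 8
THRESHOLD = 6   # positions in common that count as a "match"

def matches(detector, sample):
    return sum(d == s for d, s in zip(detector, sample)) >= THRESHOLD

self_set = {"00000000", "00001111", "11110000"}   # the organism's own patterns

random.seed(1)
detectors = []
while len(detectors) < 50:
    candidate = "".join(random.choice(ALPHABET) for _ in range(LENGTH))
    if not any(matches(candidate, s) for s in self_set):   # maturation cull
        detectors.append(candidate)

def is_foreign(sample):
    return any(matches(d, sample) for d in detectors)

print(is_foreign("00000010"))   # close to self, so unlikely to be flagged
print(is_foreign("10110101"))   # unlike anything in self, so likely flagged
```

Note the redundancy: no single detector matters, which is exactly the property the comment is pointing at.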
This is a very interesting comment.
Well said.
Glass-Steagall Act: interactions between unrelated functions are something to be avoided. Auto recall: honking the horn could stall the engine by shorting out the ignition system. The simple fix is a bit of insulation.
The Ada programming language: former DoD standard for large-scale, safety-critical software development: encapsulation, data hiding, strong typing of data, minimization of dependencies between parts to minimize the impact of fixes and changes. Has safety-critical software gone the way of the Glass-Steagall Act? Now it is buffer overflows, security holes, and internet protocols in the hardware controlling “critical infrastructure” that can blow things up. (A small illustration of the strong-typing point follows.)
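The comment names Ada’s compile-time strong typing; the same discipline can be approximated elsewhere. A minimal sketch in Python (types and names invented for illustration): a static checker such as mypy rejects the unit mix-up, while the explicit conversion documents itself.

```python
from typing import NewType

# Strong typing in the Ada spirit, approximated with Python type hints.
# Names and the 4.448 lbf->N factor are mine, for illustration only.
PoundsThrust = NewType("PoundsThrust", float)
NewtonsThrust = NewType("NewtonsThrust", float)

def set_engine_thrust(thrust: NewtonsThrust) -> None:
    """The actuator accepts newtons only; the type makes the unit explicit."""
    print(f"commanding {thrust} N")

reading = PoundsThrust(4500.0)

# set_engine_thrust(reading)   # rejected by a static checker (e.g. mypy):
#                              # PoundsThrust is not NewtonsThrust
set_engine_thrust(NewtonsThrust(reading * 4.448))  # explicit, auditable conversion
```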
The Deacon’s Masterpiece
or, the Wonderful “One-hoss Shay”:
A Logical Story
http://holyjoe.org/poetry/holmes1.htm
Wow! I hate to rain on this parade — I hope no one seriously considers Dr. Richard I. Cook an authority on systems design. The 18 rules or aphorisms or whatever they are supposed to be only somewhat fit complex systems within the context of medical systems design. Apply his rules at your own risk within the universe of medical systems design and apply his rules to other systems at your folly.
From the top:
1) Not all complex systems are intrinsically hazardous; counter-examples: a Rube Goldberg invention, your washing machine or dishwasher, or the process/system designed for putting profits into the pockets of dishonest financial types (where any hazard is not to the human tool applying the system).
2) Should state: “We HOPE complex systems are heavily and successfully defended.”
3) Like #2 — We HOPE all the single point failure modes for a complex system have been repaired, though that is all too seldom the case in fact.
4) Sounds like a good rule for complex systems slapped together and shoved into the market for their beta-testing.
5) We hope this is true — this is one aspect of system robustness.
6) Good rule for any large complex and poorly tested system, like many of the medical systems pushed into the market.
7) This “rule” seems directly contrary to a long established Navy practice of looking for the root cause of failures.
8) Huh? I’ll have to read this guy’s article to find out exactly what he’s getting at here.
9) Human operators are usually a component of a complex system. As such, they can be both producers of failures and defenders against them. This assertion approaches tautology.
10) I hope this is not the case! How about a little training for the operators?
11) Not sure what #11 asserts — WAG — Poor design and bad training are often blamed for operator error.
12) Not sure what import to give to this assertion. Human operators usually want to do a good job — just because — and typically do whatever they can think of to make sure a complex system doesn’t fail on their watch. Only designers and/or managers of great poverty of skill or imagination would rely on #12 as part of a design or operations process.
13) Huh!!!!!? This principle has little or no content, and I cannot imagine how it might be applied to systems design.
14) This principle is “motherhood” to every designer.
15) This principle is not true. Identifying a “cause” does not intrinsically limit how a good designer responds to a problem.
16) This principle is nonsense. The reliability of a system depends on the reliability of the components and reliability is an important aspect of safety for safety-critical systems such as medical systems.
17) People do NOT create safety. As a rule people try to do a good job. The SOP for operations adapts to faults, failures, and omissions. Human sources of faults, failures and omissions improve with training and frequent operational exercises.
18) This principle especially applies to any poorly tested system, complex or otherwise. Based on the author’s description for this principle, it serves as a corollary to principle 17 restated in terms of operator training and operational exercise. Taken on its face, this principle is absurd. I am glad we didn’t require failure (as in scrambling the bombers and firing our missiles) of the NORAD system to fix the problem of misinterpreting RADAR bounce off the moon. One failure nearly closed down NASA. In the world of medical systems the case of the Therac-25 [https://en.wikipedia.org/wiki/Therac-25] should suggest some caution in applying principle #18 as a desirable design practice.
So — apply any of these principles to design practice and/or evaluation with great care and skepticism. Within their context of medical systems these principles are highly problematic and optimistic — at best. I might go so far as to suggest some of these principles could be positively dangerous as guides for any kind of safety-critical design of simple or complex systems. I hope all designers of safety-critical systems would go check out NASA’s process for doing safety-critical design. The process isn’t easy or pretty and it certainly cannot be codified into this author’s 18 rules/principles.
As for how the 18 rules aid the analysis of economic, social, ecological, political … you-name-it systems … it might be most wise to remember the old principle of GIGO.
To #1 add the other half — “Not all hazardous systems are complex — a handgun for example.”
I’m curious to see if someone responds to this. I share your reaction, so I just assumed the point was more to spark thought and discussion than actually take the specific systems analysis seriously. But since you put in the time to go point by point, it’s an interesting critique.
Another expression of the idea that highly efficient systems are fragile: corporate raiders and restructurers are praised for “trimming the fat” from companies they take over, and creating “lean operations.”
But “fat” is what enables organizations to survive hard times and adapt to changing conditions.
That’s also another good example where the drive for efficiency is not what is going on. Corporate raiding is the act of levering up a company’s assets for debt issuance and other specific financial activities that can create tax beneficial income for the raiders. The complexity is not inherent; it is created to siphon off the wealth in a way recognized and encouraged by our systems of law, banking, higher education, and taxation.
‘Second, this appears to defy the truism from ecology that more complex systems are more stable. I think that’s because ecologies generally are not tightly coupled.’
Also from ecology is the truism that monocultures are inherently unstable. Arguably, the globalisation of complex financial devices has created a monoculture rather than a diversity of ecosystems of loosely connected or independent systems. A locust landing on cultivated land meets little to check its progress (negative feedback) or reproduction. Failure in one part of the financial system tends to spread because there is little insulation from the other parts. The analogy would be with the failure of one of an individual’s highly complex organs within the monoculture of a single human body.