Yves here. Perhaps I am too much of a cynic, but so rarely do groups manage to overcome high-school level social dynamics that I despair of ideas like “collective intelligence” in human affairs.
By David Sloan Wilson, SUNY Distinguished Professor of Biology and Anthropology at Binghamton University and Arne Næss Chair in Global Justice and the Environment at the University of Oslo. His most recent book is Does Altruism Exist? Twitter: @David_S_Wilson. Originally published at This View of Life; cross posted from Evonomics
David Sloan Wilson interviews Geoff Mulgan
Say the word “mind” and most people immediately think about the workings of an individual brain. The idea that something larger than an individual might have a mind seems like science fiction—but modern evolutionary theory says otherwise.
It is now widely accepted that eusocial insect colonies—ants, bees, wasps, and termites—have collective minds, with members of the colony acting more like neurons than decision-making units in their own right. For example, a critical stage in the life of a honeybee colony is when it fissions and the swarm that leaves must find a new nest cavity. Exquisite research by Thomas Seeley and his associates shows that the swarm behaves like a discerning human house hunter, scouting the available options and evaluating them according to multiple criteria. Yet, most scouts visit only one cavity and have no basis for comparison. Instead, the comparison is made by a social process that takes place on the surface of the swarm, which is remarkably similar to the interactions among neurons that take place when we make decisions. After all, what is a multi-cellular organism but an elaborately organized society of cells?
The reason that multi-cellular organisms and eusocial insect colonies both have minds is because they are both units of selection. Lower-level interactions that result in collective survival and reproduction are retained, while lower-level interactions that result in dysfunctional outcomes pass out of existence. What we call “mind” focuses on the lower-level interactions that result in the gathering and processing of information, leading to adaptive collective action.
As soon as we associate “mind” with “unit of selection”, then the possibility of human group minds leaps into view. It is becoming widely accepted that our distant ancestors found ways to suppress disruptive self-serving behaviors within their groups, so that cooperating as a group became the primary evolutionary force. Cooperation takes familiar physical forms such as hunting, gathering, childcare, predator defense, and offense and defense against other human groups. Cooperation also takes mental forms, such as perception, memory, maintaining an inventory of symbols with shared meaning, and transmitting large amounts of learned information across generations. In fact, most cognitive abilities that are distinctively human are forms of mental cooperation that frequently take place beneath conscious awareness. It is not an exaggeration to say that small human groups are the primate equivalent of eusocial insect colonies, complete with their own group minds. As the great 19th century social theorist Alexis de Tocqueville observed, “The village or township is the only association which is so perfectly natural that, wherever a number of men are collected, it seems to constitute itself.”
The adjective “small” is needed because all human groups were small prior to ten thousand years ago, although a tribal scale of social organization needs to be recognized as important in addition to the fission-fusion bands within each tribe where most of the social interactions occurred. In addition, cultural evolution is a multi-level process, no less than genetic evolution. As Peter Turchin shows in his book Ultrasociety, the societies that replaced other societies during the last 10,000 years did so in part because of their ability to gather and process information, leading to effective collective action at ever larger scales, such as the nations of France and America which were the main objects of Tocqueville’s attention. Some elements of culturally evolved group minds are consciously designed, but many others are the result of unplanned cultural evolution, taking place beneath conscious awareness. They work without anyone knowing how they work.
Not only do units of selection tell us where group minds are likely to exist, but also where they are unlikely to exist. In many animal societies, within-group selection is the primary evolutionary force, leading to behaviors that would be regarded as selfish and despotic in human terms. If these societies have group minds at all, they are highly impaired, unlike eusocial insect colonies. By the same token, despotic human societies have group minds that are highly impaired, unlike more cooperatively organized human societies.
Knowing all of this has tremendous potential for recognizing collective intelligence in human life where it currently exists and socially constructing it where it is needed. However, most of what I have recounted is new, emerging only within the last two or three decades, and is often not reflected in the thinking of otherwise smart people on the subject of collective intelligence. In particular, there is a tendency to naively assume that collective intelligence emerges spontaneously from complex interactions, without requiring a process of selection at the level of the collective unit.
It was therefore with trepidation that I began reading Big Mind: How Collective Intelligence Can Change Our World, by Geoff Mulgan—founder of the think tank Demos, director of the UK Prime Minister’s Strategy Unit and head of policy under Tony Blair, and current chief executive of Nesta, the UK’s National Endowment for Science, Technology and the Arts. That made him smart—but was he smart about collective intelligence from a modern evolutionary perspective?
To my delight, I found him very well informed, clearly recognizing that collective intelligence only exists under very special conditions, which makes it both present and absent in human life. In addition to his conceptual understanding, his book is filled with examples from his extensive policy experience that were previously unknown to me, along with practical advice about how to enhance collective intelligence where it does not already exist. I therefore lost no time inviting him to have an email conversation, which he generously accepted. An excerpt of his book is featured on the online magazine Evonomics.com.
DSW: Welcome, Geoff, to TVOL and congratulations on your superb book. In our correspondence leading up to this conversation, you called my attention to a 1996 issue of Demos Quarterly devoted to evolutionary thinking. Tell us about your background and how you came to appreciate the relevance of evolutionary theory in relation to human affairs. Bear in mind that while you are already well known in some quarters, you will be new to many of our readers.
GM: My intellectual background is a combination of economics, philosophy, social science and telecommunications, the subject of my PhD. By the time I started becoming interested in public policy there was already widespread dissatisfaction with the overly mechanistic, equilibrium models of economics which failed adequately to explain patterns of change: how technologies arise and spread; how economies grow. Many of us looked to evolutionary thinking as a useful tool. It could provide metaphorical frames – understanding social change in terms of the generation of new possibilities, selection and then replication (which has subsequently helped feed a very dynamic field of social innovation); it gave some new insights into how we were formed as human beings, and new psychological insights into policy. The Demos Quarterly you mentioned was a good showcase of the state of the field at the time. But it had little immediate influence.
One interesting spin-off was what is now called behavioural economics, which adapted many insights from evolutionary biology into the language of economics. The next issue of Demos Quarterly in 1996 focused on that, and I later commissioned quite a bit of work in the UK government (including a big 2002 study on the implications of behavioural psychology for public policy). A few years later Nudge was published by Richard Thaler and Cass Sunstein and introduced these ideas to the mainstream, helping the creation of a behavioural insights team in the Prime Minister’s office in the UK.
Another result, which I write about quite a bit in the book, is to see large scale cognition, like evolution more generally, in terms of trade-offs. I call it cognitive economics: what selection or survival advantages are provided by certain kinds of cognition, and at what cost? A great deal of work has been done on this at the individual organism level in terms of the advantages of a larger, but very energy hungry, brain. I’m interested in the parallels for groups or organisations: if they spend scarce resources on abilities to observe, analyse, create or remember, does that confer advantages? Can they overshoot – like the clan that spends so much time remembering its ancestors that it fails to protect itself from threats, or a company that spends so much time trying to create the new that it fails to attend to the present? My hunch is that a new discipline is possible that draws on evolutionary thinking to analyse these kinds of trade-offs in more precise ways.
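Mulgan's overshoot point can be made concrete with a deliberately simple toy model (all numbers invented for illustration): suppose investment in cognition buys survival benefit with diminishing returns, while its upkeep cost grows linearly. The trade-off then has an interior optimum.

```python
import math

def net_fitness(cognition, benefit_scale=10.0, cost_rate=2.0):
    """Toy cognitive-economics trade-off: diminishing returns on the
    ability to observe, analyse and remember, minus a linear energy cost."""
    return benefit_scale * math.log(1 + cognition) - cost_rate * cognition

# grid search over investment levels 0..20 finds the interior optimum
best = max(range(21), key=net_fitness)
```

Here the grid search lands on an intermediate value: the group that invests nothing in cognition is outcompeted, but so is the one that "remembers its ancestors" at the expense of everything else.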
DSW: That’s very helpful background. I don’t want to assume that we agree upon everything, so please comment on my rather lengthy introduction. Is there anything that you would like to add or amend, to set the conceptual stage broadly for our conversation?
GM: Your introduction makes a great deal of sense to me, and coming from a social science background it’s obvious that the group is a unit of selection. The question that animated me was a version of this: why do some nations, cities, organisations manage to thrive and adapt while others don’t, even though they appear to be endowed with superior intellectual resources or technologies? Why did some of the organizations that had invested the most in intelligence of all kinds – from firms like Lehmann Brothers to the USSR in the 1980s – fail to spot big facts in the world around them and so stumble? I was looking for a theory that could explain some of these patterns and understand how and when some groups are able to optimize for a particular environment and then adapt to a rapidly changing one.
DSW: Indeed! Now I’d like to focus on two conventional wisdoms that obscure clear thinking about collective intelligence. The first is the conventional wisdom that axiomatically takes the individual as the unit of analysis, including methodological individualism in the social sciences and, as one form of it, the rational actor model in economics. This axiomatic view makes it difficult to conceive of mind above the level of the individual. What are your views about methodological individualism?
GM: Western intellectual life is dominated by traditions that reject any notion that a collective mind could be more than the sum of its parts. There were some good reasons to be suspicious of vague and mystical invocations of community, god, or national spirit. But I think it’s wrong to conclude that collective intelligence is nothing more than the aggregation of individual intelligences. We recognize that in any serious account of history, and to an extent in the law, which can hold groups guilty of crimes. There is little doubt in my mind that groups can think, and can have true or false beliefs. But the ways groups do these things are not precisely analogous to the ways individuals work. I try to provide a way of thinking about the degrees of ‘we-ness’ of groups that relates this to the extent of integration of cognition in a group. Here I extend recent work on individual consciousness which relates it to the degree of integration of the brain while awake. This, I hope, more nuanced position sees individuals as shaped by groups and groups as made up of individuals, and is helped by the ways in which psychology and neuroscience have revealed that the individual mind is better understood not as a monolithic hierarchy, with a single will, but rather as a network of semiautonomous cells that sometimes collaborate and sometimes compete. If you accept that view, then it becomes more reasonable to see groups in a similar way, even if you differentiate the highly integrated individual mind from the less integrated group mind (in other words, not-altogether-integrated individual minds within not-altogether-integrated larger groups).
DSW: Thanks. The second conventional wisdom regards collective intelligence as an emergent property of complex interactions, without paying careful attention to the special conditions that are required. Here is how you put it on p 5 of your book: “To get at the right answers, we’ll have to reject appealing conventional wisdoms. One is the idea that a more networked world automatically becomes more intelligent through processes of organic self-organization. Although this view contains important grains of truth, it has been deeply misleading.” Please say more about this view, which in my experience is held by some people who are otherwise very smart, such as complex systems theorists who don’t have a strong background in evolutionary science.
GM: I offer several different challenges to the glib, but very common, view that the universe has some inner dynamic towards benign self-organisation. The first recognizes organization as costly, the lens of cognitive economics. When we study self-organisation more closely in any real situation – from markets to online collaborations like Wikipedia – they turn out to rely on a great deal of labour, provided by some people who choose to devote scarce time and money to the work of making things happen, rather than just having fun or sitting on the couch. Where there are insufficient motivations, incentives or habits for doing this, self-organisation tends to disappoint. The second lens recognizes conflict, and a constant struggle between forces for cooperation and forces that aim to disrupt or misinform. Contemporary social media are an obvious example. The third, more sociological lens recognizes that most real complex human societies combine multiple cultures, some hierarchical, some individualistic, and some more egalitarian and cooperative. These complement each other in all sorts of ways. Purely flat, self-organising egalitarian structures tend to fall apart, as do structures which are only hierarchical or only individualistic. I believe this is a fundamental insight of some modern social science (which I attribute particularly to the anthropologist Mary Douglas) which helps make sense of everything from grappling with climate change to the everyday life of business. But many well-informed people are unaware of it.
DSW: I agree! Now that we have cleared the deck of misleading conventional wisdoms, could you please provide one of the best examples of human collective intelligence? Then we can discuss how it works mechanistically and how it came into existence historically.
GM: The global science system is probably the best single example, and nicely illustrates how real, living intelligence depends on each of the organizing principles described above. It has hierarchy within disciplines and within universities; strong individualist incentives; and a strong egalitarianism (the sociologist Robert Merton spoke of the communism of science, the assumption that knowledge is there to be shared). It depends on some common infrastructures; it orchestrates millions of minds and millions of machines; and it has to fight constant battles against its own internal and external enemies. The internal ones include the strong incentives for fraud or burying uncomfortable evidence (just last week a newly appointed professor of computational biology described being told by a superior that one should repeat experiments as many times as necessary to get the right result!).
Seen from afar the science system looks like a wonderful emergent system; seen up close it depends on many individuals devoting their lives to the hard work of building up a community, and establishing its norms, and persuading others to give it money, status and other resources.
DSW: This is indeed an excellent example to single out. Mechanistically, we can show that scientific inquiry requires a complex “social physiology” with regulatory processes enforced by norms. Historically, we can rely on books such as Steven Shapin’s A Social History of Truth and Robert McCauley’s Why Religion Is Natural and Science Is Not. As the title of McCauley’s book implies, we can consult our deep genetic evolutionary past to show why we are not natural born truth seekers and require a socially constructed process to create a body of objective knowledge. We can also see how scientific and scholarly cultures have been torn apart in the past and should not be taken for granted in the present.
To continue, I’d like to focus on an example of collective intelligence that is a work in progress—the smart cities movement. To say that cities need to be made smart implies that they do not become smart on their own, affirming the point we have already made that collective intelligence does not spontaneously emerge. Yet, efforts to make cities smart often fail to reach their goals. What is your opinion of the smart cities movement?
GM: The smart city movement is simultaneously exciting and maddening. Its promise is that many of the everyday processes of city life can become more intelligent. Data flows can greatly improve the efficiency of energy management and transport flows. Central command centres like the one in Rio can spot emergencies and respond quickly. Sensors on lampposts can assess air quality, and families like mine can control heating in our home remotely, or check on our security. That’s the promise. Unfortunately far too much of what has been labelled as smart cities is either facile or irrelevant, often meeting needs that don’t exist (like refrigerators that tell you when you need to buy more milk). Too many plans failed to attend to the human element, or focused only on smart hardware, not on helping people to be smart. These are just some of the reasons why so much investment in smart city technology has been wasted, and why most of the prominent examples essentially failed – from Dongtan in China, which was never built, to Masdar in Abu Dhabi – or left us with rather soulless places like New Songdo in Korea. Few attended to the most pressing needs of cities – for health or jobs. And few learned basic lessons from evolution. The best cities have been given the space to evolve, to learn from experience and to reconfigure themselves. The smart city plans tend to be conceived as blueprints that simply need to be implemented, with little engagement from the people who live in the city.
DSW: I like your point about the need for smart cities to be a collaboration with the residents. One of my former PhD students, Dan O’Brien, is involved in the smart cities movement in Boston. His research on 311 illustrates an important distinction between designing social systems and participating in the social systems that we design. As you know, 311 is a three-digit number that can be called to report problems such as a fallen tree or a pothole. It originated as a “cultural mutation” in Baltimore to handle calls that were inappropriately being made to 911, which is for emergency situations. Then people began to realize that 311 could serve as the “eyes and ears” of a city by having residents provide information about problems that the city could process and address. Today it is being used in hundreds of cities. With such a system in place, a city begins to resemble a single organism with a nervous system that receives and processes information in a way that leads to effective action. A lot of work is required to design and implement the system, but once it is in place, using it is as simple as punching three numbers into your smart phone when you see a problem. It seems to me that this distinction between designing social systems and participating in the social systems that we design is very general. Do you agree and can you provide some other examples?
GM: Yes I do. I was quite closely involved in variants of this in the 1990s when 111 services were designed, and then more recently when Nesta supported various tools that could show in real time the topics being called about, providing a real time map of the city’s concerns and problems. All of these examples have worked best with some space for iteration and learning. The UK introduced something very similar in the health service in the 1990s – called NHS Direct – to deal with more everyday issues and reduce pressures on hospitals. But this technically strong idea ran into challenges: resistance from some doctors who didn’t like diagnoses to be done by nurses helped by algorithms; concerns from lawyers about the risks of error, which together meant that too often the service recommended going to a doctor or hospital anyway; and the challenges of serving a large population without good English. In the same way my mentor Michael Young created a precursor in the 1980s, a call centre service of doctors and nurses for people with sensitive or embarrassing health conditions who didn’t want to interact face to face with a doctor; in this case he co-designed it with patients themselves. All of these are, at their best, examples of co-evolution – systems learn best by trying, fixing and following leads rather than by abstract design.
DSW: Again, your emphasis on the need for policymaking to be iterative is important. In addition, any systems engineer will tell you that a complex system cannot be optimized by separately optimizing the parts. The parts and their interactions must be organized with the performance of the whole system in mind. If so, then collective intelligence at the global scale requires policies that are formulated with the welfare of the whole earth in mind. Nothing else will do and the concept that each nation can pursue a “my nation first” policy is collective idiocy. Do you agree with this assessment?
GM: The insight that optimization of parts can be suboptimal for the whole is one of the great contributions of systems engineering, and one that is often forgotten both in business and government, where the relentless pursuit of narrow targets can often have disastrous consequences. This is in part a technical challenge. The Intergovernmental Panel on Climate Change is a good example of a first attempt to create something more like a global collective intelligence that can ‘inform policies formulated with the welfare of the whole earth in mind’, as you put it. But it’s only a first stab. The huge number of variables involved in something like climate change, let alone its interaction with economies or social life, makes it next to impossible to model. The task is so far beyond the capability of either our brains or computers that we have to rely on rough and ready heuristics.
So to complement our imperfect tools we also need to cultivate a parallel ethical stance. In my work on public leadership I developed the idea that leaders should think of three concentric circles of accountability. The first is to the immediate task, or organization. The second is to the wider community they are part of. The third is accountability to humanity and the planet. Ideally we want leaders who can align all three. But if they try to sacrifice the first to the third then they are not likely to survive very long. Conversely the world would be disastrous if everyone focused only on interests of their immediate organization. We have to learn to strike a balance. To be compassionate only to those close at hand is narrow and mean. But to be compassionate only for the world as a whole and to ignore those closest to you can be just as bad.
DSW: I agree entirely. The parts must be organized in relation to the whole but the whole must sustain the parts. There is a lot to be pessimistic about, but can you leave our readers with some things to be optimistic about?
GM: We are in an odd phase of history which is simultaneously bringing extraordinary breakthroughs in artificial intelligence and unusually foolish or malign leaderships. Seen in the long view, I tend to be confident. There is some evidence of the Flynn effect, a long-term rise in individual intelligence; many of our institutions are becoming more capable, certainly if seen at a global level; we have greatly more awareness of global public goods, and greatly enhanced capacities to observe, analyse and predict; and we are in a period when millions more are becoming makers and shapers of technologies and of their world. Of course it’s not at all certain that our capacity to think and act is advancing sufficiently fast. But I see strong reasons to err on the side of hope. One comes from family history. My cousin, John Mulgan, who was a New Zealand novelist and soldier, killed himself in 1945 because he was so depressed at what he saw happening in the world (amongst other things the British were reinstalling the Greek King in the country where he had been fighting for several years). He thought the outlook was hopeless. Yet in retrospect 1945 was one of the great years of new opportunity for the world, a reminder of how hard it is for us to judge our times accurately.
We shouldn’t rely on hope, however, or fall into the trap of believing that grand historical forces will make things turn out either for good or for ill. Systems – markets, humanity, science – don’t automatically generate solutions. History tells us that again and again, and the space for agency – sparked by both imagination and fear – is where the most important work is done. I’m lucky to spend much of my time with practical innovators in the social innovation movement worldwide, in business, science and government; if you do so, you are inevitably infected by optimism.
They point to a final insight: over the years I’ve learned that the more detached people are from practical work, the more they risk slipping into negativity and fatalism. Action breeds hope, not the other way around. Goethe said that in the beginning was the deed, not the word, and engagement with practice is probably the best way for us to keep hopeful and useful, and to make our contribution to the next stage of human evolution.
DSW: That’s great and describes my own experience as an activist. I’m also optimistic. The problems are wicked, but we are beginning to develop the tools for becoming “wise managers of evolutionary processes”.
I have many misgivings after reading this piece. I can see what they are getting at, but I can also see a self-selection which will happen that will defeat the purpose of things like so-called smart cities and group minds. I would also suspect that the haves would subconsciously separate themselves from the have-nots for reasons of social comfort. If you want an example of self-selection at work, consider the fact that in previous societies women were mostly shunted aside in the running of society. Straight away that society would lose approximately half of the potential talents, skills and abilities that it could have had. Thus these human group minds would be only a skewed aspect of the host society after this self-selection (or own goal!).
And sorry, people don’t think with their minds like insects and are not like the Borg. These units of selection are actually composed of individuals with different talents, skills and abilities that they bring to a coordinated whole. After all, if you have a tool-box, you have a variety of tools within, not multiple copies of the same tool. The idea of a Big Mind sounds nice and all that, but the truth of the matter is that efficiency drops off the larger the number of individuals in that group. Humans can increase the “intelligence” with which they operate, but I would guess that would mostly be by reducing the hindrances and friction that keep people from making their own contributions.
The dichotomy painted by Geoff Mulgan that juxtaposes those that spend precious Time and Resources towards proactive aims and goals, and those that “have fun and sit on the couch,” is comically simplistic and fails to factor in the multitude of extra-personal factors that direct individual and small group action. In that respect it sounds more like a “matter of fact” Conservative/Reactionary talking point that is glibly thrown out into the dialogue and simply accepted as Truth, rather than scrutinized, perhaps Scientifically, in terms of the more complex reality surrounding it. We live in a World where the vast majority of people lack the free time to properly educate themselves, and those who could create Time still find themselves in the same spot, since most Institutions of Knowledge and the cultivation of Intelligence bar those that lack the proper funds to participate. In the entire conversation, not one sentence is dedicated to exploring the viability of Mulgan’s idea within the context of the current society, which is one that from nearly all observation appears dominated by short-sighted individualism that is allowed nearly full agency because of the Influence of Wealth. How then are these problems to be rectified? Science as it exists now is internally egalitarian, yet facts encompassing the externality demonstrate that not everyone deserving of entrance into the Scientific fields will be able to join, for such crude and base reasons as lack of funds or time in which to be educated. If left un-rectified, then Mulgan’s society is a sham in which the internal hierarchy is one comprised of Elects produced by Wealth standards rather than Academic and Practical ones.
This when paired with the troubling and already dubious philosophical view of harmonious hierarchy and Instrumentality, leaves me questioning what is to become of the “low” tiers of Mulgan’s society should it be properly implemented or the systems of complex technology and data movement simply filched and reoriented to serve existing power structures. I suspect the answer to be quite horrifying.
The ideas presented by this post provide much food for thought. That said, to cross the chasm from seductive to mainstream they would have to roll back decades of cultural programming that places the individual, and not the collective/community, at the centre of the success (in broad terms that is, not just economic success) narrative. Everywhere we turn the triumph of individualism as the preeminent social organizing principle is reinforced ad nauseam, in the media, in popular culture, in politics, in academia, even in our own homes the incentive structure is skewed towards encouraging individual rather than collective achievement.
Even in those societies where individual pursuits were traditionally framed within the context of their impact on the collective, the individualistic ethic is taking hold and supplanting cultural norms that elevated the community above the individual. The pendulum of cultural programming would have to swing back towards collectivism for these ideas to germinate, diffuse and blossom. Not impossible of course, but not easy either.
There is only one Mind.
There is only one spirit.
Awareness, consciousness is available to all material and energy beings. Material beings find it difficult, because of the denseness of the energy in that matter, to discern this. That is what is meant by Original Sin: the problem of being in matter means a loss of awareness of One/All.
There is no need for priests or tutors. They are yet another obstacle. The elimination of the Cathar heresy is an example of what happens when authority structures feel challenged.
“Strong” people found out that they can amplify their power by capturing the attention of others, making themselves an authority. This is then imposed on others, who are organized to work in return for some reward system.
The followers will always be here. The “strong” are hurt and afraid, but have the self discipline to protect themselves with a structure. To have a collective, these strong persons have to be educated.
This is just crap. First, lousy application of group selection. The insects he notes have a queen, who is the sole unit of reproduction. That ain’t us.
He’s talking about structures, not humans. Which makes sense, given his background. “…the communism of science, the assumption that knowledge is there to be shared.” Counter with the Manhattan Project. He is making a fundamentally flawed gloss-over of within-group vs. across-group evaluations.
“I’m lucky to spend much of my time with practical innovators in the social innovation movement worldwide, in business, science and government; if you do so, you are inevitably infected by optimism.” At least he knows he’s lucky, and let’s add ‘social innovation’ to this list of red flags: “SMART INNOVATIVE DISRUPTIVE STUPID NARRATIVE” — whoa, ‘innovati*’ is already there!
I apologize for my testiness, but systems science is a discipline, and crossing over the terminology to movement goals is akin to bringing ‘quantum’ into conversations about social groups. It undercuts the validity of both in the service of agents of optimism. Which usually reduces to ka-ching.
Us humans love to categorize, sometimes inappropriately. There are three tests for eusociality (https://www.quora.com/Are-humans-eusocial) and we do have two of the traits: cooperative brood care, and overlapping generations. The third, division of labor into reproductive and non-reproductive groups, is not so straightforward: after menopause a woman could live for several decades as a non-reproductive member of the tribe.
I share your disgust with the magic quantum dust sprinkled everywhere, but any theory backed by E.O. Wilson is worth listening to. YMMV.
Wilson is brilliant, and has pulled group selection back into the discussion. He is also rigorous.
I agree that humans are pro-social even if they don’t meet the strict definition of eusocial. But that doesn’t mean we’re going to get along. Turchin: “Cooperation and coercion are enjoined in a very special way: cooperation takes place among lower-level units (but is supplemented with punishment of free-riders), while conflict takes place between higher-level collectivities.” In other words, within-group cohesion (Becker’s ‘Insiders’) allows greater economies of between-group warfare. Mulgan’s statement, ‘But if they try to sacrifice the first to the third then they are not likely to survive very long’ is demonstrably wrong.
The Turchin link is pretty academic, while his book ‘Ultrasociety’ is worthy and accessible. To sow seeds of peace, or cooperation, to create learned altruism, we need to understand the soil we’re planting in. The article is misguided not in its ends, but in presenting an inadequate model which regards ‘extraordinary breakthroughs in artificial intelligence’ as a stabilizing force. Perverse outcomes are likely when action comes from biased models.
Unity of purpose can be found in the Big Idea.
To prevent this, all power structures try to hide certain truths. Teaching those truths is a revolutionary act.
The birth of Venus was a terrible event. The ejection of the planet was one thing, but the massive power surges that devastated the next planet along, Earth, meant that the likelihood that such an event would recur, has had to be hidden.
This may be about to be introduced or touched on by Operation BlueBeam, or whatever it is called? Whenever the PTB revile something or someone, that is because they are afraid of the truth within it, hence the attacks on Velikovsky.
What a wonderful word, “eusocial!” Makes me recall the wonders of living in a large condominium building (120 “units”) and a gated, deed-restricted community, where everyone pulled on the same end of the rope. Houses all of a small palette of colors, no divergence to e.g. Chinese red doors. Monoculture lawns to be mowed to uniform 3” height. Cars must be parked inside the garage with door closed. Threatening notes or penalties and litigation for divergence from the collectively defined (well, sort of) social forms and “built environment.” Oh, wait… “One Thing to Rule Them All, One Thing to Bind Them. One Thing to Bring Them All, and In The Mindless Bind Them…”
Hey, is the whole MISecurity Complex an expression of that “group mind” phenom? Army ants? http://www.dailymail.co.uk/sciencetech/article-4092772/Shhh-washing-machine-overhear-you.html
And Zuckerberg and Bezos and the Kochs and that ____er Gates and Carlos “the Slim man” all want there to be a “group mind,” of course — they just want to be the Superneurons, to say what all the elements and perceptions and behaviors of said group mind will be. And those Superneurons have no trouble coopting significant cadres of lesser neurons, like Rove and Carville and the prototype, Bernays, and the Dulleses and so on, to their Prime Directive.
Sure looks to the skeptic like “this is not going to end well.”
This simply reminds me of the Red Guards in China in the 1960s, waving that little Red Book. Self organization is like “hidden hand” … there is a hidden hand, but it is from the Elite aka Chairman Mao et al.
Part of the “group mind” is “group think” … so I see a big market for similar unisex clothing aka Mao jackets and trousers.
Finally found the time, and as I’m reading this, I can hear the ghosts of B.F. Skinner and Huxley arguing in the Ether.
I also, like others above, hear the whisper of the Totalitarian Phase of Neoliberal Global Empire (which is really just the latest, most well-appointed version of Feudalism — pick yer term, all the way back to Sumer).
Where’s P.K. Dick when you need him?
Keep these folks locked in the tower, ivory or otherwise.
Reaction here seems to be generally skeptical, as it should be. One needn’t spend too much time analyzing the details, such as comparing and contrasting the ways in which humans are alike and different from ants. One need only recognize that these flights of fancy are founded on undefined terms: ‘mind’, ‘consciousness’ and ‘self-organizing’. In fact, for the latter, Mulgan pretty much defines ‘self-organizing’ as organized by “people who choose to devote scarce time and money to the work of making things happen”. What is “self-organizing” about being organized by other people? ‘Self-organizing’ is being used here solely for its buzzword potential.
In the intro Wilson quotes de Tocqueville, “The village or township is the only association which is so perfectly natural…” basically claiming that the cultural form he is most familiar with is the “natural” form, a particular weakness of the Victorian mindset. And there are lots of other particular, dubious statements being made, but rather than turning each one over and examining it, let me reiterate… ‘mind’ is an undefined term, so what is it they are talking about, exactly?
So, not just 10-per-center, but 10-per-center to Tony “Fixing the intelligence around the policy” Blair. Seems legit.
It seems to me the corporation can justifiably be thought of as an artificial intelligence. And if this is the case, then AI has been around for centuries. After all, those who serve the corporation serve its interests, chiefly making money, whether via selling a widget or a service. Of course it’s not as fast as our conception of an AI as we think of it today (think The Matrix), but nevertheless, a corporation has a will and a mind and can potentially live in perpetuity. Perhaps not a mind in actual fact, but in results — and a rose by any other name… just saying.
There’s something to this… although I don’t see it mentioned anywhere: the mediation of social interaction is critical. Arranging our cities so everyone must drive everywhere is *anti*social. The actual experience of society that arises when pedestrian humans meet on the sidewalk is something most city planners seem determined to design out of the civic environment.
Jane Jacobs says something like: Modern planning is positively neurotic in its willingness to embrace what doesn’t work and ignore what does. … It’s a form of advanced superstition, like 19th century medicine that thought bleeding patients cured them.
What’s the hallmark of bad civic arrangements? Use-based planning: specifying, often decades in advance, whether a particular parcel will be residential, commercial, offices, etc. The alternative is called form-based planning, which specifies big or little buildings and leaves it up to market conditions at the time of building to determine what’s built.