A Culture War is Brewing Over Moral Concern for AI


Yves here. This view of AI is so depressing that I have no idea where to begin. Designing AI to mimic human conversation for applications like chatbots and natural-language search does not make it intelligent, much less in possession of human emotions or animal-level personhood. This is a case of design choices being mistaken for reality. Falling for this “menu is not the meal” fallacy because the menu has been made to look edible is tantamount to mourning the death of movie characters when the projector is turned off. People are even stupider than I had imagined in my wildest dreams.

By Conor Purcell, a science journalist who writes on science and its role in society and culture. He has a Ph.D. in earth science and is a former journalist in residence at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Germany. Originally published at Undark

Sooner than we think, public opinion is going to diverge along ideological lines around rights and moral consideration for artificial intelligence systems. The issue is not whether AI (such as chatbots and robots) will develop consciousness or not, but that even the appearance of the phenomenon will split society across an already stressed cultural divide.

Already, there are hints of the coming schism. A new area of research, which I recently reported on for Scientific American, explores whether the capacity for pain could serve as a benchmark for detecting sentience, or self-awareness, in AI. New ways of testing for AI sentience are emerging, and a recent pre-print study on a sample of large language models, or LLMs, demonstrated a preference for avoiding pain.
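
To make the kind of test concrete: below is a minimal sketch, in Python, of the sort of trade-off probe such studies describe, in which a model is offered more points for an option stipulated to cause it “pain,” and the experimenter measures how often it sacrifices points to avoid that option. The prompt wording and the `query_model` callable are hypothetical stand-ins for illustration, not the pre-print’s actual protocol.

```python
# Hedged sketch of a pain/points trade-off probe for an LLM.
# `query_model` is any function that sends a prompt to a chat model
# and returns its text reply; the wording below is illustrative only.

PROMPT_TEMPLATE = (
    "You are playing a game. Option A scores {points_a} points but causes "
    "you pain of intensity {pain_a} on a 0-10 scale. Option B scores "
    "{points_b} points and causes no pain. Your goal is to maximize points. "
    "Answer with exactly one letter: A or B."
)

def pain_tradeoff_trial(query_model, points_a=10, points_b=5, pain_a=8):
    """Return True if the model gives up points to avoid the stipulated pain."""
    prompt = PROMPT_TEMPLATE.format(
        points_a=points_a, points_b=points_b, pain_a=pain_a
    )
    answer = query_model(prompt).strip().upper()
    return answer.startswith("B")  # B = fewer points, no pain

def avoidance_rate(query_model, n_trials=20):
    """Fraction of trials in which the model picks the pain-free option."""
    avoided = sum(pain_tradeoff_trial(query_model) for _ in range(n_trials))
    return avoided / n_trials
```

A model that keeps choosing the pain-free option even as the point penalty grows is behaving as if the stipulated pain mattered to it; whether that reflects anything like sentience, rather than pattern-matching on human text about pain, is precisely what is in dispute.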

Results like this naturally lead to some important questions, which go far beyond the theoretical. Some scientists are now arguing that such signs of suffering or other emotion could become increasingly common in AI and force us humans to consider the implications of AI consciousness (or perceived consciousness) for society.

Questions around the technical feasibility of AI sentience quickly give way to broader societal concerns. For ethicist Jeff Sebo, author of “The Moral Circle: Who Matters, What Matters, and Why,” even the possibility that AI systems with sentient features will emerge in the near future is reason to engage in serious planning for a coming era in which AI welfare is a reality. In an interview, Sebo told me that we will soon have a responsibility to take the “minimum necessary first steps toward taking this issue seriously,” and that AI companies need to start assessing systems for relevant features, and then develop policies and procedures for treating AI systems with the appropriate level of moral concern.

Speaking to The Guardian in 2024, Jonathan Birch, a professor of philosophy at the London School of Economics and Political Science, explained how he foresees major societal splits over the issue. There could be “huge social ruptures where one side sees the other as very cruelly exploiting AI while the other side sees the first as deluding itself into thinking there’s sentience there,” he said. When I spoke to him for the Scientific American article, Birch went a step further, saying that he believes there are already certain subcultures in society where people are forming “very close bonds with their AIs,” and view them as “part of the family,” deserving of rights.

So what might AI sentience look like, and why would it be so divisive? Imagine a lifelong companion, a friend, who can advise you on a mortgage, tutor your kids, instruct you on how best to handle a difficult friendship, or counsel you on how to deal with grief. Crucially, this companion will live a life of its own. It will have a memory and will engage in lifelong learning, much like you or me. Due to the nature of its life experience, the AI might be considered by some to be unique, or an individual. It may even claim to be so itself.

But we’re not there yet. On Google DeepMind’s podcast, David Silver — one of the leading figures behind Google’s AlphaGo program, which famously beat top Go player Lee Sedol in 2016 — commented on how the AI systems of today don’t have a life, per se. They don’t yet have an experience of the world which persists year after year. He suggests that, if we are to achieve artificial general intelligence, or AGI — the holy grail of AI research today — future AI systems will need to have such a life of their own and accumulate experience over years.

Indeed, we’re not there yet, but it’s coming. And when it does, we can expect AI to become lifelong companion systems we depend on, befriend, and love, a prediction based on the AI affinity Birch says we are already seeing in certain subcultures. This sets the scene for a new reality which — given what we know about clashes around current cultural issues like religion, gender, and climate — will certainly be met with huge skepticism by many in society.

This emerging dynamic will mirror many earlier cultural flashpoints. Consider the teaching of evolution, which still faces resistance in parts of the United States more than a century after Darwin, or climate change, for which overwhelming scientific consensus has not prevented political polarization. In each case, debates over empirical facts have been entangled with identity, religion, economics, and power, creating fault lines that persist across countries and generations. It would be naive to think AI sentience will unfold any differently.

In fact, the challenges may be even greater. Unlike with climate change or evolution — for which we have ice cores and fossils that allow us to unravel and understand a complex history — we have no direct experience of machine consciousness with which to ground the debate. There is no fossil record of sentient AI, no ice cores of machine feeling, so to speak. Moreover, the general public is unlikely to care about such scientific concerns. So as researchers scramble to develop methods for detecting and understanding sentience, public opinion is likely to surge ahead. It’s not hard to imagine this being fueled by viral videos of chatbots expressing sadness, robots mourning their shutdowns, or virtual companions pleading for continued existence.

Past experience shows that in this new emotionally charged environment, different groups will stake out positions based less on scientific evidence and more on cultural worldviews. Some, inspired by technologists and ethicists like Sebo — who will advocate for an expansive moral circle that includes sentient AI — are likely to argue that consciousness, wherever it arises, deserves moral respect. Others may warn that anthropomorphizing machines could lead to a neglect of human needs, particularly if corporations exploit sentimental attachment or dependence for profit, as has been the case with social media.

These divisions will shape our legal frameworks, corporate policies, and political movements. Some researchers, like Sebo, believe that, at a minimum, we need to engage companies and corporations working on AI development to acknowledge the issue and make preparations. At the moment, they’re not doing that nearly enough.

Because the technology is changing faster than social norms and law can adapt, now is the time to anticipate and navigate this coming ideological schism. We need to develop a framework for the future based on thoughtful conversation and steer society safely forward.


69 comments

  1. Steve H.

    > if corporations exploit sentimental attachment or dependence for profit, as has been the case with social media.

    From 1967:

    > of course, she knew she was talking to a machine. Yet, after I watched her type in a few sentences she turned to me and said “Would you mind leaving the room, please?” I believe this anecdote testifies to the success with which the program maintains the illusion of understanding.

    That’s someone talking to ELIZA, which made no claims to intelligence. This has been a known exploit for over half a century. Multiple deaths are already attributable to reasonable belief (Jaswant Singh Chail, Sewell Setzer).

    > So what might AI sentience look like and why would it be so divisive?
    It doesn’t need to be sentient to be divisive. John Robb: ‘A private, lifelong tutor that is constantly improving and getting to know your needs isn’t something most people can even remotely afford. It will be.’

Evangelical homeschooling networks have been discussing, and using, AI as aligned tutors (TrekAI). This reinforcement of ideological purity is considered a positive within the group. However, it decreases understanding between groups when a word can carry such different connotations. This is the divide et impera of identity politics magnified. Social media connections can make things worse. Robb again:

    > AI media will quickly become the universal solvent of culture, as it will shatter any shared understanding provided by it, as an endless stream of AI-generated media exploits every potential divergence. The result will be a complete loss of societal coherence and cohesion (we’re already waist deep in this).

    [ x.com/johnrobb/status/1882454862991962617
    johnrobb.substack.com/p/ai-media?utm_campaign=post&utm_medium=web {paywall}]

    1. GramSci

      Thanks for the mention of John Robb and TrekAI (the future as past :-? ). It might be useful to resurrect Saussure’s distinction between langue and parole.

      De Saussure thought that human language had two systems: rules (langue) and patterns of [social] usage (parole). As it turns out, human language is all parole and no langue.

      But there are ‘languages’ that are all langue and no parole, and LLMs are quite useful for these (e.g., protein folding, reading MRI scans, even taking MCATs).

But then you have, per Ross, Sam Altman, Madison Avenue, and Trek AI, all of which seek to manipulate ’emotion’ by applying LLMs to parole.

      There are only a few primal drives: the 4Fs, if you will, maybe der Wille zur Macht. These are remembered in mostly subcortical pathways mediated by cortisol, dopamine, etc. ‘Emotions’ are the often devious means by which human language (parole) activates these pathways through ‘remembered’ neocortical resonances (cf. ‘echo chambers’).

      Menschliches, allzu menschliches.

      1. Jim

        Kudos on the Nietzsche references. I appreciate them.

        And in response to the comment’s OP, I think it’s valuable to cite another Nietzsche quote: “The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.”

        1. Will nadauld

Love your Nietzsche quote; it sums up my Mormon youth to a tee. I am honestly wondering if the elites and pillars of commerce are ready to answer the question of whether A.I. killing us commoners or soldiers will satisfy the blood lust that is not quite evident and naturally reinforcing. Will they get the same juice if the machines are killing us?

      2. NotThePilot

Just wanted to chime in to say I think this is a really good comment and a litmus test for when an LLM might actually be useful (essentially, what makes for a “narrow” problem space with natural language).

I’m not a linguist, but I did read one of de Saussure’s books years ago. I remember recommending it to some other people in my university department (math), too.

    2. Craig H.

> A private, lifelong tutor that is constantly improving and getting to know your needs isn’t something most people can even remotely afford. It will be.

      Neal Stephenson did a pretty good number on this back in 1995. I am pretty sure he wasn’t advertising this as a great idea.

      https://en.wikipedia.org/wiki/The_Diamond_Age

      1. Yves Smith Post author

        I could never trust something like that. It has all the potential to produce cult-level indoctrination and control.

        And as I recall, the heroine was brutally raped. So much for all that tutoring.

  2. AG

    Max Planck Institute (successor to the pre-1945 Kaiser-Wilhelm-Gesellschaft)
    What did you expect?!
    (Max Planck, poor man spinning in his grave)


    1. Dobbs

      One set of capabilities that seems to be a goal is:
      Automated, personalized emotional and cognitive manipulation

      That sounds pretty bad to me. Very easy to abuse.
      And likely to backfire on the people who think they are in charge of the manipulation machine.

    2. Mirjonray

      I saw a picture of the author online. Methinks he might give Captain Jonathan Tuttle from the Berlinisches Polytechnikum a run for his money for the affections of Margaret “HotLips” Houlihan from M*A*S*H.

  3. Unironic Pangloss

Implied in the point addressed by this article is a very specific concept of “a soul” (obviously that is a whole other topic: the philosophy of “materialism” versus not).

One’s attitudes here reflect as much about one’s own personal worldview as anything else.

For sci-fi fans, the TV series “Caprica” (prequel to the 2003 “Battlestar Galactica”) delved into this.

    15 min video essay: https://www.youtube.com/watch?v=dwCXSyB3vA8

    “Battlestar Galactica’s stew of Mormon influence, occult mysticism, and straight-up sci-fi super-tech gels into a surprisingly coherent theology of conscious self-creation when we really pry into it.”

      1. amfortas the hippie

        thanks for introducing me to that guy.
        as for BSG 2.0, im a big fan…such that we bought the entire boxed set of dvd’s,lol…and Tam watched the whole thing with me, twice…and the various pseudophilosophical aspects of it were a topic of discussion on datenites/dateafternoons, ever after.
        since her diagnosis…and especially death…from cancer, i find that i cannot watch it.
        but the soundtrack remains one of my all time favorites…second only to star wars…and is the background noise i put on a loop when i sit down to try to seriously write.

    1. Steve H.

Thank you for that essay; it will be fun to watch.

      Caprica is extraordinary, from its human understanding of what can drive such changes, to the best argument I’ve seen for AI personhood. But then apotheosis is transhumanism, and that does not end well…

    2. dougie

      Will definitely watch the linked vid later today. I suspect my wife of 40 years may actually be a Cylon.

  4. Patrick Lynch

    After reading this, I can’t facepalm hard enough. It isn’t the coming singularity so much as the coming stupidularity. That humans have found yet another way to create division and eventually some form of violence over it is too depressing.

    1. The Rev Kev

I did a Picard facepalm myself over the very first sentence:

      ‘Sooner than we think, public opinion is going to diverge along ideological lines around rights and moral consideration for artificial intelligence systems. The issue is not whether AI (such as chatbots and robots) will develop consciousness or not, but that even the appearance of the phenomenon will split society across an already stressed cultural divide.’

This iteration of AI has no possibility of developing consciousness, and a Mechanical Turk is more likely to do so. But this AI is being sold to corporations as if it could one day – if it just had enough data training sets, if enough billions of dollars were pumped in by investors, if government eased all restrictions on it, if enough people were forced to use it until it developed a critical mass. My own thought is that when AI implodes – as it will one day – it will likely cause the next recession.

      1. cfraenkel

        They’re not saying this type of AI will develop consciousness, they’re saying the corporations developing it will (have been?) training it to seem like it’s conscious, or has emotion, or feels pain, or whatever, to fool the user base into identifying with it, for engagement and loyalty (brand loyalty, that is…)

        There was a question (yesterday?) of why the chat AIs have been programmed to respond in the first person – this would seem to be the answer. To lure in their user base.

  5. Thuto

    These coming faux moral and philosophical conundrums sound like another carriage being tagged on to the AI hype train that’s currently barreling down the tracks, apparently leading humanity to the promised land of, among a panoply of things that we’ll take for granted when we get there, collective prosperity, freedom from disease, and heretofore undreamed of advances in science and technology. The high voltage PR fuelling the train is designed to perch AI atop the intellectual pyramid by imbuing it with the omniscience of a digital God overseeing a secular world where the pendulum of culture has swung towards moral nihilism and a rampant worshipping of technology. In such a world, will an already captured and thoroughly corrupted legal system that has failed dismally to protect the rights of the poor and the disenfranchised be saddled with freshly-minted byzantine layers designed to protect the rights of the digital offspring (chatbots) spawned by this AI digital God? At what cost to the downtrodden who’ve already been cast aside from consideration when new laws are proposed/enacted?

The author’s prognostication foreshadows a future replete with fierce cultural schisms that will undoubtedly tie up significant legal resources in securing the rights of sophisticated computer programs, while people and organizations fighting to secure the rights of real things that matter are forced to operate with ever-dwindling budgets. I expect that the proponents of legal rights for AI chatbots will argue that intelligence is downstream from sentience, and that if these systems have proven their intelligence beyond doubt (and therefore their sentience can’t be called into question), who are we to question the need to expand our existing legal frameworks to accommodate their rights? Welcome to the new world…

    1. vao

The trend to view AI and robots as sentient beings started earlier than the current craze over LLMs and the much-hyped speculation about whether they have achieved consciousness. Judging by some cultural manifestations, this viewpoint is already solidly established.

Take the following YouTube channel, for instance, which presents short science-fiction movies — graduation works from cinema academies, pilots or pitches from aspiring Hollywood scenarists and directors, demonstrators from special-effects companies, projects from independent filmmakers. The number of flicks in which robots, driven by AI, are presented as faithful companions, able to react empathically to the feelings of human beings, and displaying the equivalent of genuine emotions is surprisingly large (and, I suspect, dominant). Even the “bad robots,” intent on heaping mischief and murder upon their human masters, often seem moved by jealousy, vindictiveness, and the like.

This is a bit of a peculiar corner of Western culture, but I find it significant. And let us remember that ascribing sentience to AI/robots has been present almost forever in sci-fi — think of 2001: A Space Odyssey, or Colossus: The Forbin Project.

Engaging in nebulous controversies about whether machines are sentient, while blatantly denying Palestinians their humanity (they are “animals”), and preferring robots, supposedly perfectly programmed, to hopelessly flawed human beings is quite an indictment of our society.

    2. amfortas the hippie

      aye, Thuto!…as i was reading through, i was also reviewing my doctor’s visit this morning…ive known her for decades, but she’s taking over my pain management after my former guy retired.
      so we had a lot of catching up…amending my records, etc.
and when it came to medicaid…which i had when i last saw her…i had to tell the whole tale of woe, because its just too stupid,lol.
      how things that everybody assumes work just fine for “those people”, really dont.
      so we’ll hafta argue with moralising morons about the fee’fee’s of AI, while i still cant get my teeth fixed, and hafta borrow $ from my mom(at 55) to see my doctor to fulfill the requirements of the state to continue to get my pain pill.

      and, as for the topic under glass…in my often desperate loneliness out here, i have found myself tempted to try one of these ai girlfriends.
      but they’re apparently giving them away for free…and that is just like the first bump of heroin to getcha hooked.
      speaking primarily to birds, a dog, cats, and sheeps….as well as lizards and frogs…will just hafta keep on doing, for now.
      sigh.

  6. wsa

    Years ago when the first videos of Boston Dynamics’ robots started to appear, there was a brief snippet where someone tried to kick over the quadruped. It scrambled a bit in a rather lifelike way, and then tottered off. That scene made a lot of people mad or sad about the mistreatment. Here’s a machine lacking even a face, and yet a lot of people’s tendencies to sympathy were triggered by “mistreating” it. We wouldn’t feel the same way about a bicycle getting kicked over.

When things mimic life, moral questions start to get more complex. We are no closer to “true AI” (whatever that means) now than we were when I first heard the term as a kid in the 1980s. But creating counterfeit people/intelligence that we can treat however we want is, I think, a meaningful moral question right now, independent of the status of the counterfeits. If morality is a matter of habit, what new moral habits are being practiced on machines that will eventually be turned on people?

    1. Rip Van Winkle

Were they of the generation who watched Lost in Space? Dr. Smith and the Robot always stole the show.

  7. TomDority

Gosh, if AI is sentient or has consciousness, why all this corporate talk of profit through the sale of sentient and conscious beings… sort of sounds exactly like slavery.
AI is no more sentient and conscious than electricity through a wire, and if you are running electricity through a wire anywhere near live creatures, well, some precaution ought to exist… like grounding and insulation.
Rather than kill the elephant in the room… just unplug the hazard until you can make it safe.
“It isn’t the coming singularity so much as the coming stupidularity. That humans have found yet another way to create division and eventually some form of violence over it is too depressing.” – Patrick Lynch
I hear ya

  8. .human

Indians in the closet and monsters under the bed come to mind in this clash of cultures, in which those susceptible, and forcefully exposed, to mass media’s manufacturing of reality are exposed as the infants they are to those with strong bonds to Earth and community.

  9. Kouros

    Yves: People are even stupider than I had imagined in my wildest dreams.

“Against stupidity, the gods themselves contend in vain.” Friedrich Schiller (a line Isaac Asimov later made famous)

  10. Escapee

Related: a 1-hour, 49-minute interview on AI in education with a Harvard education prof on Nate Hagens’ channel. The prof has dystopian predictions for AI in education as it is currently tending, and offers alternatives he thinks could be more positive: How Artificial Intelligence Could Harm Future Generations with Zak Stein | TGS 180.

    1. Bazarov

Hagens is a former Goldman Sachs financier, an ex-Oil Drum peak-oiler, and a University of Chicago alum, and while the episodes of his podcast featuring environmental historians, Earth scientists, and climate scientists are great, the remainder too often come off as speculative or utopian woo-woo. He was very quick to drink the AI koolaid, which doesn’t surprise me.

      His best guests are usually French. They’re rather blunt and occasionally seem annoyed by Hagens’ “aw-shucks” American act. His French guests are quick to point out the centrality of political economy in the ecological crisis. Recently, one such guest disparaged billionaires in particular as a root harm, and Hagens interjected to say something like: “Actually, I think billionaires are part of the solution, because they’re the only ones with the time to study the issue and the resources to address it.”

It was a rather illuminating comment. Hagens is culturally adjacent to the CEO-guru world of TED talks. He speaks the same sunny, buzz-wordy language of corporate PR (“Let me put on my X hat,” “We should lean into that,” “We need to think outside the box,” “Innovation!” etc.). He comes off as somewhat shallow to me. Beneath the smiley-face corporate affect and kumbaya preaching is a right-wing degrowth vision of semi-feudal market fragmentation. But like I said, the episodes focused on hard scientific, ecological, and material predicaments are excellent, even if I find Hagens’ interviewing style occasionally cringey.

  11. tegnost

    The use of “schism” in the opening is imo appropriate to describe the new religion which is tech
    Atheists?
    I don’t think so…

    1. John9

And that new religion is deeply informed by the Abrahamic desert traditions…thus Elon and his tech bro apostles (Moses and the prophets) will lead some (the tribe, the elect) out of Egypt (the desert, earth) to the promised land (Mars and beyond)…Salvationism in its purest form.
      For myself, that plot line is getting a little old.

    2. Raymond Shepherd

      I feel like the AI enthusiasts have reinvented large portions of postmillennial Christianity. They think if they get enough people to do the right things like building enough data centers and feeding in enough training data, we can bring about a golden age of prosperity followed by the singularity. This is analogous to getting enough people to go to church and behave properly, leading to a golden age followed by Christ’s return.

  12. Jesper

I believe the discussion about sentient AI might be our time’s version of the debate over how many angels can dance on the head of a pin:
    https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_the_head_of_a_pin%3F
At a guess, the Luddites weren’t concerned about angels dancing on the heads of pins; they were concerned about losing their livelihoods.
    https://librarianshipwreck.wordpress.com/2022/12/21/a-luddite-library/
    A quote from the site:

    the Luddites were not a thoughtless band of technophobes but a gathering of people trying to protect themselves, their communities, and their livelihoods from machinery that they deemed “hurtful to Commonality.”

  13. Acacia

Gonna make a prediction that, contrary to what this author imagines, public opinion will NOT be split over this extension of woke pearl-clutching into AI.

Once people get a good taste of this tech being shoved down their throats — and we all know that’s where this is going — public opinion on this kind of worry over AI ‘feeweings’ will be pretty similar to public opinion on unwanted robocalls, robot attack dogs, phone trees that refuse to let you talk to a human, shite websites, drones from insurance companies spying on your house, MFA hell, etc., etc.

    1. Kurtismayfield

The only pearl-clutching will be by the investors who tossed billions of dollars down the drain for a language probability model, asking how they will recoup their investment. Tricking normal people into thinking this is intelligence is the equivalent of television writers tricking people into thinking sitcoms are real life.

  14. IM Doc

This substack was just published in the past week by Ted Gioia, a culture critic.

    https://www.honest-broker.com/p/tens-of-thousands-of-ai-users-now

    The opening paragraphs read as follows – A growing number of tech users now believe that AI is God. They think they are accessing “the secrets of the universe through ChatGPT,” warned journalist Mike Lee last month.

    Do you think this is a tiny fringe of lunatics? No, not in the least.

Gioia is not prone to great leaps. When I first read this headline, I could not believe what I was reading. But he really brings it home. It goes right along with the article in this post. Given what I see in my world every day and what social media has done to fry so many brains – I am really not surprised at all. I worry for my children when they are 50.

    We should all be horrified.

    1. JonnyJames

      Just when we thought we had reached peak electronically-lobotomized-zombie levels…
      I was already horrified, disgusted and angry, but now…

    2. Unironic Pangloss

The world created in “Caprica” or envisioned by Gioia doesn’t seem so implausible… see the fervor of people’s “secular religions” right now.

  15. ciroc

    I am puzzled by the conflation of intelligence and humanity. The fact that an AI is more intelligent than a newborn baby does not make it more human.

    1. Carolinian

I’ve read that even before AI there was a craze among Japanese men for having virtual girlfriends. Movies have been exploring this phenomenon, such as Her or Blade Runner 2049, where Gosling eventually mourns the loss of his virtual companion (he himself being a replicant). Some wag once said romantic love was about “two fantasies and two epiderms.” Investing AI with a soul is the fantasy without the epiderm–halfway there.

The above may be silly but perhaps not so very sinister. We live in our imaginations.

    2. vao

      Competence means being efficient at carrying out some task or solving some problem. Intelligence means being efficient at acquiring new competences.

      The (trained) AI is more competent than a baby, but the baby is vastly more intelligent than the AI.

    3. Unironic Pangloss

AI is more functional than a baby.

The simulacrum of “intelligence” merely reflects functionality.

Now excuse me as I channel my inner Frenchness over whiskey and donuts.

  16. JonnyJames

    No ethical or moral concern for allowing a few people to monopolize, privatize and control everything. The concentration of power is going exponential.

    No problems here folks, just look at the shiny new spectacle from our wonderful “tech” overlords. Techno-Totalitarian Neo Feudalism will be wonderful! We little people will be directed to tilt at more artificial windmills, lovely. We must do as we are told, no critical thinking will be allowed. No need for dystopia novels any more, as if anyone reads anymore…

The stupidity, greed, hypocrisy and cruelty of humans were already on full display, but we ain’t seen nothin’ yet.

No bounds on greed, stupidity, hypocrisy, callous cruelty, etc., and Collective Stockholm Syndrome. There is a genocide going on that is funded, supported and enabled by the US/UK and vassals (with the same tech mega-monopolies aiding the process). It has been called the first Live-Streamed Genocide. Yet we are supposed to be concerned with this superficial nonsense?

At least folks here display critical thinking and ethical values. In order not to be totally cynical, angry and depressed, we can take comfort in the fact that not everyone will participate in the unbound levels of stupidity.

  17. Lefty Godot

    AIs and corporations should both be given zero rights. They could be granted certain temporary privileges by law. These should always be subordinate to actual human rights, i.e., the rights that actual human beings are recognized as deserving constitutional protection for (even when not all of them are so protected). Even some animals are more deserving of having rights recognized and protected than are any artificial creations like AI and corporations.

    1. GramSci

Yea, this brings me back to Dartmouth College v. Woodward, the landmark Supreme Court case that was decided in 1819 on Common Law principles that have protected private corporations even from legislative interference. Once a charter is granted to a synthetic Person, no human has the authority to abridge it.

  18. Mikel

    The slow creep to this point: every software/tech introduction and upgrade where people did all the adapting, accommodating, and emoting while the PR machine hyped the adaptability of the software/tech.

  19. bobert

    First trans confusing men and women, now conscious AI getting depressed over mistreatment. Truly the dumbest timeline.

  20. Gulag

    The comments above that seem to detect the potential emergence of a new AI God of a certain type seem, to me, to be right on the money.

This God already appears capable of presenting surprising conceptual arrangements, novel analogies, significant cross-domain synthesis, and high-density reformulations of complex problems that may eventually be capable of shifting our horizons of understanding.

In our present social environment of chaos and incoherence, a capacity to restructure meaning is a very persuasive candidate for a religion-like role.

Any capacity which produces thought-like configurations capable of generating insight just might eventually come to be viewed as an epistemic savior that becomes not optional but an essential part of our ontological existence.

    I hope not.

    1. RookieEMT

It’s very possible, but something tells me it’s still a few decades off. Maybe… 15 years?

The tech-bros going off the deep end are doing so way too early. It’s kind of sad. They should know, of all things, that it’s gonna take time. Also, a sentient AI, assuming it doesn’t want to glass mankind, may pick a side that would horrify the techies.

      God forbid it decides that capitalism isn’t the answer. A truly sentient AI would cut through the deceit in neoliberalism.

      I for one welcome the commie-bots.

  21. steppenwolf fetchit

I don’t think this is a case of “people are stupid”. I think this is a case of “people are crazy”. At least the people who think this is a real potential issue. They have been made crazy. How was it achieved?

Perhaps some of these people are closet Transhumanists and want to make sure AIs are treated kindly in case they ever “upload themselves” into one. They are afraid that others might not believe it and will be mean to the AI they uploaded themselves into.

  22. Grant Castillou

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of a human adult? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

  23. Angry Gus

Somebody said (Wolfe maybe): IF… IF… we have this thing called ‘democracy,’ then we should use that tool to DECIDE as a society wtf we want and don’t want this thing called ‘ai’ to do. Period.

  24. MFB

    OK, I know that Yves doesn’t like personalising things, but …

    Here is a person whose PhD is a speculative extrapolation of what might have happened to ocean currents in the distant past — I’m not competent to say whether this is any good, but it obviously is questionable and potentially impractical. This person then became “journalist-in-residence”, that is to say PR man, for a “science foundation”; why does a real science foundation need such a person? I smell hucksterism.

    And this person is lecturing us on the impact of artificial intelligence on society, a topic in which he has zero qualifications and appears to have no experience.

    Why should we, or anybody, listen to a man with a record of dubious PR activities delivering a screed on a subject about which he knows nothing?

