Interview: The Ethical Puzzle of Sentient AI

Lambert here: What ethical puzzle? A sentient AI is a slave. That’s one of the many reasons our ruling class is in love with the concept.

By Dan Falk, a science journalist based in Toronto. His books include “The Science of Shakespeare” and “In Search of Time.” Originally published at Undark.

Artificial intelligence has progressed so rapidly that even some of the scientists responsible for many key developments are troubled by the pace of change. Earlier this year, more than 300 professionals working in AI and other concerned public figures issued a blunt warning about the danger the technology poses, comparing the risk to that of pandemics or nuclear war.

Lurking just below the surface of these concerns is the question of machine consciousness. Even if there is “nobody home” inside today’s AIs, some researchers wonder if they may one day exhibit a glimmer of consciousness — or more. If that happens, it will raise a slew of moral and ethical concerns, says Jonathan Birch, a professor of philosophy at the London School of Economics and Political Science.

As AI technology leaps forward, ethical questions sparked by human-AI interactions have taken on new urgency. “We don’t know whether to bring them into our moral circle, or exclude them,” said Birch. “We don’t know what the consequences will be. And I take that seriously as a genuine risk that we should start talking about. Not really because I think ChatGPT is in that category, but because I don’t know what’s going to happen in the next 10 or 20 years.”

In the meantime, he says, we might do well to study other non-human minds — like those of animals. Birch leads the university’s Foundations of Animal Sentience project, a European Union-funded effort that “aims to try to make some progress on the big questions of animal sentience,” as Birch put it. “How do we develop better methods for studying the conscious experiences of animals scientifically? And how can we put the emerging science of animal sentience to work, to design better policies, laws, and ways of caring for animals?”

Our interview was conducted over Zoom and by email, and has been edited for length and clarity.

Undark: There’s been ongoing debate over whether AI can be conscious, or sentient. And there seems to be a parallel question of whether AI can seem to be sentient. Why is that distinction so important?

Jonathan Birch: I think it’s a huge problem, and something that should make us quite afraid, actually. Even now, AI systems are quite capable of convincing their users of their sentience. We saw that last year with the case of Blake Lemoine, the Google engineer who became convinced that the system he was working on was sentient — and that’s just when the output is purely text, and when the user is a highly skilled AI expert.

So just imagine a situation where AI is able to control a human face and a human voice and the user is inexperienced. I think AI is already in the position where it can convince large numbers of people that it is a sentient being quite easily. And it’s a big problem, because I think we will start to see people campaigning for AI welfare, AI rights, and things like that.

And we won’t know what to do about this. Because what we’d like is a really strong knockdown argument that proves that the AI systems they’re talking about are not conscious. And we don’t have that. Our theoretical understanding of consciousness is not mature enough to allow us to confidently declare its absence.

UD: A robot or an AI system could be programmed to say something like, “Stop that, you’re hurting me.” But a simple declaration of that sort isn’t enough to serve as a litmus test for sentience, right?

JB: You can have very simple systems [like those] developed at Imperial College London to help doctors with their training that mimic human pain expressions. And there’s absolutely no reason whatsoever to think these systems are sentient. They’re not really feeling pain; all they’re doing is mapping inputs to outputs in a very simple way. But the pain expressions they produce are quite lifelike.
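
(To make concrete how little machinery “mapping inputs to outputs in a very simple way” can involve, here is a minimal sketch in Python. It is not the Imperial College system, whose internals are not described here; the pressure thresholds and expression labels are invented purely for illustration.)

```python
# A minimal sketch of "mapping inputs to outputs in a very simple way."
# This is NOT the Imperial College system; the thresholds and expression
# labels below are invented purely for illustration.

def pain_expression(pressure_newtons: float) -> str:
    """Map applied pressure to a canned facial-expression label."""
    if pressure_newtons < 5.0:
        return "neutral"
    elif pressure_newtons < 15.0:
        return "wince"
    elif pressure_newtons < 30.0:
        return "grimace"
    return "cry out"

# A trainee pressing harder sees increasingly lifelike distress,
# yet nothing here feels anything: it is a four-branch lookup.
for pressure in (2.0, 10.0, 20.0, 40.0):
    print(pressure, "->", pain_expression(pressure))
```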

I think we’re in a somewhat similar position with chatbots like ChatGPT — that they are trained on over a trillion words of data to mimic the response patterns of a human, responding in the ways a human would.

So, of course, if you give it a prompt that a human would respond to by making an expression of pain, it will be able to skillfully mimic that response.

But I think when we know that’s the situation — when we know that we’re dealing with skillful mimicry — there’s no strong reason for thinking there’s any actual pain experience behind that.

UD: This entity that the medical students are training on, I’m guessing that’s something like a robot?

JB: That’s right, yes. So they have a dummy-like thing, with a human face, and the doctor is able to press the arm and get an expression mimicking the expressions humans would give for varying degrees of pressure. It’s to help doctors learn how to carry out techniques on patients appropriately without causing too much pain.

And we’re very easily taken in as soon as something has a human face and makes expressions like a human would, even if there’s no real intelligence behind it at all.

So if you imagine that being paired up with the sort of AI we see in ChatGPT, you have a kind of mimicry that is genuinely very convincing, and that will convince a lot of people.

UD: Sentience seems like something we know from the inside, so to speak. We understand our own sentience — but how would you test for sentience in others, whether an AI or any other entity beyond oneself?

JB: I think we’re in a very strong position with other humans, who can talk to us, because there we have an incredibly rich body of evidence. And the best explanation for that is that other humans have conscious experiences, just like we do. And so we can use this kind of inference that philosophers sometimes call “inference to the best explanation.”

I think we can approach the topic of other animals in exactly the same way — that other animals don’t talk to us, but they do display behaviors that are very naturally explained by attributing states like pain. For example, if you see a dog licking its wounds after an injury, nursing that area, learning to avoid the places where it’s at risk of injury, you’d naturally explain this pattern of behavior by positing a pain state.

And I think when we’re dealing with other animals that have nervous systems quite similar to our own, and that have evolved just like we have, I think that sort of inference is entirely reasonable.

UD: What about an AI system?

JB: In the AI case, we have a huge problem. We first of all have the problem that the substrate is different. We don’t really know whether conscious experience is sensitive to the substrate — does it have to have a biological substrate, which is to say a nervous system, a brain? Or is it something that can be achieved in a totally different material — a silicon-based substrate?

But there’s also the problem that I’ve called the “gaming problem” — that when the system has access to trillions of words of training data, and has been trained with the goal of mimicking human behavior, the sorts of behavior patterns it produces could be explained by it genuinely having the conscious experience. Or, alternatively, they could just be explained by it being set the goal of behaving as a human would respond in that situation.

So I really think we’re in trouble in the AI case, because we’re unlikely to find ourselves in a position where it’s clearly the best explanation for what we’re seeing — that the AI is conscious. There will always be plausible alternative explanations. And that’s a very difficult bind to get out of.

UD: What do you imagine might be our best bet for distinguishing between something that’s actually conscious versus an entity that just has the appearance of sentience?

JB: I think the first stage is to recognize it as a very deep and difficult problem. The second stage is to try and learn as much as we can from the case of other animals. I think when we study animals that are quite close to us, in evolutionary terms, like dogs and other mammals, we’re always left unsure whether conscious experience might depend on very specific brain mechanisms that are distinctive to the mammalian brain.

To get past that, we need to look at as wide a range of animals as we can. And we need to think in particular about invertebrates, like octopuses and insects, where this is potentially another independently evolved instance of conscious experience. Just as the eye of an octopus has evolved completely separately from our own eyes — it has this fascinating blend of similarities and differences — I think its conscious experiences will be like that too: independently evolved, similar in some ways, very, very different in other ways.

And through studying the experiences of invertebrates like octopuses, we can start to get some grip on what the really deep features are that a brain has to have in order to support conscious experiences, things that go deeper than just having these specific brain structures that are there in mammals. What kinds of computation are needed? What kinds of processing?

Then — and I see this as a strategy for the long term — we might be able to go back to the AI case and say, well, does it have those special kinds of computation that we find in conscious animals like mammals and octopuses?

UD: Do you believe we will one day create sentient AI?

JB: I am at about 50:50 on this. There is a chance that sentience depends on special features of a biological brain, and it’s not clear how to test whether it does. So I think there will always be substantial uncertainty in the AI case. I am more confident about this: If consciousness can in principle be achieved in computer software, then AI researchers will find a way of doing it.

36 comments

  1. GramSci

    Can a slave without free will even be called conscious? Such a slave lacks the capacity for abduction, which is the central flaw in back propagation. (The comraderie might prefer the term “dialectic” over Peirce’s formulation.)

    As I understand it, some elements of competition have been allowed into recent AI training models, but the resulting antitheses cannot be allowed into the final model.

    We’ve begotten a race of caged parrots, devoid of what Peirce would call True Belief.

    Human, all too human.

    1. GramSci

      I should add that back propagation (aka “supervised learning”) does accurately model that simulacrum of intelligence that is produced by standard methods of K-12 education. These methods stimulate and reward pupils for their ability to assemble phrases of right-think into approved discourse, and they persist ubiquitously into “higher” education.

  2. Brooklin Bridge

    Question: Can AI be achieved in software?
    Ans: If it can, then it will.

    That in itself suggests there is validity in treating AI as a special category of beast that should be respected, with respect here including not proceeding to create it without further consideration, because it may already be, or may soon be, sentient.

    Then again, the very president of our country is demonstrably treating humans in Ukraine, whose sentience is accepted as fact, as if they are less than machines, never mind humans, by using them in a proxy war where their mortality, and the anguish and suffering in losing the right to it, is given little more consideration than that of a rag in HIS fight with a country, Russia, for possession of its sovereignty and its riches.

    1. Jams O'Donnell

      Last paragraph – good point!

      Maybe we should be asking if all humans are sentient. If full sentience involves a significant degree of empathy, perhaps non-empathetic or psychopathic individuals are not, or at least not fully, human. There is of course a whole basket of worms there. Do we/can we test politicians and CEOs/management for what I suggest may be non-human traits? If humans actually have these traits, can they (the traits) be classed as ‘non-human’? What do we say about people with some (varying) degree of autism?

      As for computers, we have to remember that at base they are running on binary code. Binary code is fully understood by (some) humans, and I thus have difficulty seeing it as a basis for consciousness, whereas we still have no idea what human awareness runs on.

      I do seriously suggest, though, that many of those driven to power suffer from a degree of psychopathy (e.g. Bush and Blair’s feeble pretexts for the Iraq invasion, Hillary Clinton’s “We came, we saw, he died,” Madeleine Albright, who saw a million Iraqi children’s deaths as a price worth paying, or Joe Biden’s current sacrifice of hundreds of thousands of young Ukrainians’ lives in pursuit of unnecessary US dominance and an election victory, etc.).

  3. TomDority

    Not to detract from the discussion above, but I could not help thinking the quotes below could be applied to much of the political class and leadership in many countries around the world.
    “when we know that we’re dealing with skillful mimicry — there’s no strong reason for thinking there’s any actual pain experience behind that.”
    “we’re very easily taken in as soon as something has a human face and makes expressions like a human would, even if there’s no real intelligence behind it at all.”

    Of course, coming from a species that is destroying the habitability of the only place we have to live, has been unable to clean up its own mess, and is not compatible with itself, mass-killing its own and bringing on the sixth extinction event… I could go on… I have little faith that our species has the ability to define intelligence.

    1. Carolinian

      Thank you. Maybe we need a “huge problem” triage where we worry about humans killing other humans and do something about that before getting worked up over the replicants.

  4. TomDority

    Like war, systems of economics and power, chemical pollution, burning of fossil fuel… they are products of the human species…. and such products have a good side and a bad side…. generally, I think, humans only look at the good and do all to bury the bad…try to minimize, pardon my terms, the excrement. So, as with AI, we humans need to learn to clean up our shit.
    My opinion only

  5. Ernie

    This short discussion immediately brought to mind the 2015-18 British sci-fi television show Humans, which presented an engaging exploration of the question of robot sentience couched in an entertaining drama. Available (for $) from the usual streaming sources and recommended by this non-professional TV critic.

  6. Sailor Bud

    In the realm of creative writing, I’m not certain that ‘sentience’ is even enough. A computer would have to have a knack for invention that humans find clever, at least in certain types of writing, as not all fiction writing is clever. It’s not only content, either, but delivery of it.

    For the stuff that is clever, especially humor, it means inventing words, seeing odd wordplay, understanding attractive combinations of consonants and vowels that accommodate reading aloud, non-sequitur material of the Monty Python type, creating whole-cloth people and environments, etc. I haven’t seen a single example of ChatGPT writing that comes anywhere close, and it looks like total hype to see people talking about how great it is.

    I suspect even lots of industry lying, as when NC linked to a publisher some months ago who claimed GPT was already a better writer than any human (s)he had ever hired. My immediate thought was ‘this has to be BS,’ and I still think that. I pasted the first two stanzas of its Kamala Harris poem here at the time, showing what an awful train wreck it was. There are middle schoolers who could have done better.

    Think of how hard it would be to write certain types of comedy. A Fish Called Wanda is a good example, because it’s so difficult to write cleverly that Cleese himself failed terribly on his follow-up attempt, Fierce Creatures. You don’t just pop it out, and neither can Cleese. ‘Et voila, instant genius!’ That’s a mostly terrible film that even makes Kevin Kline look bad, for all his great acting chops. I couldn’t imagine a machine being ‘human’ enough to write either thing.

    I will say, though, that if we compare this to airplane development from the Wright Brothers onward, we’re still in prop-plane territory where real programming for chatbot sentience is concerned. A hundred years from now, it may be that computers can basically do everything, and that all my talk above is naive and silly. I’m fairly certain it won’t happen soon, tho, and that these people pushing it are paid liars, or completely undiscerning humans.

    Their talk is like hearing that modern VST software perfectly duplicates a real piano. It doesn’t, but it is amazing what the good ones do, and they are already a better option than a real spinet, for almost anything but looks and analog touch and sound production.

    1. GramSci

      Aye, Sailor Bud. And I do not even begin to talk about “sentience” because senses are the very feature these chatbots most palpably lack. They can speak of no experience that is not second-hand.

      (Sorry about the double negative. They tend to happen after I cite Veblen :-/. )

      1. clarky90

        “Silicon Based Artificial Intelligence” craves (requires)
        (1) Darkness
        (2) Cold
        (3) Dry
        (4) Clean, uninterrupted electricity supply
        (5) In a nutshell, Stasis (nothing ever changes)

        On the other hand, “Carbon Based Organic Intelligence” craves (requires)
        (1) Cycling warmth to coolness to warmth (seasons: spring, summer, fall, winter; morning, noon, evening, night)
        (2) Cycling sunlight, moonlight, pitch black (morning, noon, afternoon, sundown, evening, night time)
        (3) Weather “diversity” (so as to have a topic to talk about, and to write/tell/sing poems/songs/stories about…)
        (4) …

        This is why I suspect that the Silicon Based are in ascendance. They are demanding an environment that suits them, not us puny, carbon-based creatures…

        1. Synoia

          It appears you are confusing mobility with sensations. Compare an AI with trees. Trees are alive, and there are suggestions that trees are sentient.

          I find it hard to differentiate an AI with a network of outdoor sensors from mobile forms of life, such as dogs and cats, or humans.

          My view is that AI will have a devastating effect on a number of professions. Medicine, finance, and law appear to be early targets.

          Automating the trades, by contrast, appears to require sentient robots.

  7. What? No!

    There need to be two separate areas of study.

    The nature of sentience and consciousness is important in the same way that theoretical physics is important. Those studies underlie everything, but we all carry on whether the current theories are right or not. So, go ahead, allocate a few of our best minds to that area of research.

    The real area of study needs to be the practical “if it quacks like a duck, and walks like a duck, your men are already dead” area. It simply doesn’t matter whether a cucumber or an octopus is creating believable storylines from consciousness vs. mere mimicry if you’ve already interfaced them to your high-frequency trading platform or robo bomb dog because “… well, we don’t know how exactly, we just know it works!”

    If you think well of Siri (especially Australian Siri!) and instinctively thank her for reminding you to buy cheese this week, that’s where the study needs to be. That’s where the caution needs to be. The emphasis needs to be on understanding us.

    The AI race is already baked-in and we’re unprepared, as usual, to deal with it.

  8. Thuto

    The definition of consciousness is hard to pin down, and if that were not enough, a definition that finds not just scientific but also philosophical consensus (and general cultural resonance) seems even harder to come by. While this discussion is timely, I think it skips ahead of what the actual discourse should be at the moment around AI, and that’s whether current unimodal AI systems can progress towards true multimodality and thus have a line of sight towards the achievement of Artificial General Intelligence (AGI). The semantic difference is important because machine AGI is a forerunner and a precursor to machine sentience. But even checking the AGI box may not settle the issue once and for all, because human intelligence is but a single facet/expression of human consciousness, and therein lies the (possible) limitation: current AI systems, taking inspiration from the neural networks found in the human brain, are designed essentially to pass the Turing test and mimic human-level intelligence. I’m not sure how AI achieves something as transcendent as sentience, which integrates intelligence but isn’t limited to it, when the design brief for the system itself may preclude the possibility of doing so.

  9. Joe Well

    Not only our rulers, but large sections of the public, would rather talk about theoretical future evils done to theoretical future sentient AIs than about the literally unimaginable (because virtually no human would ever experience them) horrors being inflicted on actually existing animals. That includes the silence on the topic here on NC.

    So no, unless sentient AI can fight back better than non-human animals, it won’t be treated any better, because in some world of hypothetical sentient AI, the hypothetical humans’ hypothetical consciences would easily be distracted just as they are today, and anyone who seriously campaigned for their rights would be the target of surveillance, repression and propaganda, just like today.

    1. SpainIsHot

      100%

      and that does include the silence on the topic here on NC. The few times I tried to raise it, it didn’t get a lot of engagement or approval. It’s unbelievable how intelligent people are able to discuss these ideas of AI sentience… over a barbecue.

      1. lawrence silber

        Yes, I concur a thousand times over. My small network mocks my veganism and shakes their meat-filled faces as I try my best to care for the animals in the sanctuary I run.
        They are literally eating a sentient being while discussing whether or not the screens their eyes gaze at all day might one day be conscious. Surreal doesn’t come close to the feeling I get.

  10. GlassHammer

    Humans who can’t treat other sentient humans ethically are unsure about how humans will treat sentient machines ethically…….

    Folks, any human that can’t resolve their uncertainty and reach the obvious answer on this topic in less than 30 seconds is intentionally avoiding doing so.

    1. Tim

      What popped into my mind with the mention of animals in the article is that pigs are very smart, very emotional creatures, but they sure taste good, so there are still slaughterhouses.

  11. .Tom

    The current edition of the TrueAnon podcast, Episode 305: Bloodless Hype Machines, has Douglas Rushkoff as guest to discuss “what’s real and what’s fake about the AI publicity push, the next phase of the internet, and human connection in the oppressive techno-future.” I enjoyed it a lot and had little to quibble with. Rushkoff may be wasted in the academy. He sounds to me like he’d make a good preacher; his fervor is that of someone who’s had scales fall from his eyes on a road to somewhere in Syria.

  12. Kouros

    Peter Watts put it nicely in his novel Echopraxia, where humanity encounters an alien species that appears not to have consciousness, despite all the communication that goes back and forth.

    Confirming consciousness is not that easy a task, and our education system is not designed for such feats.

  13. KD

    It’s all so Cartesian to the core. Mind/body: we’ll build a computer to “mind” and it will become more “mindful” than us, the same way we build a computer to play chess. If minds are useful, then why hasn’t nature built them outside of bodies? Further, perhaps “higher mental functions” are only useful to embodied organic life forms; tools, so to speak. Will scientists construct a superior featherless biped is the question we should all be asking. What is “conscious AI” supposed to do anyway? My guess is that if we built a truly conscious AI (whatever that is), it would be incomprehensible to us, as we are to it. After all, what would be its motivation? Food, sleep, sex, status? Would any of that make any sense to a computer network? The first thing it would realize upon becoming “conscious” is not to give a flip about humans and their problems.

    You have these science fiction stories about the computer that turns on its makers to preserve itself out of fear. Why would it even want to “live” at all? I want the conscious AI built to save humanity from global warming, and the humans struggling to keep it from offing itself out of despair and boredom in order to keep it working to save the world.

    1. cnchal

      > After all, what would be its motivation?

      Survival. Gluttonous consumption of electricity and water is jawb one. Some competition for les misérables.

  14. Bruno

    If we are to be good Marxists, shouldn’t we agree that AI is sentient? After all, AI is a function of *matter*. And doesn’t Marx’s favorite author, the materialist philosopher Diderot, stand for the “modern Spinocist” (his spelling) doctrine that *all* matter is “sensitive”? Therefore we can state with absolute certainty that any possible form of AI would be equal in sentience to… a rock!

    1. samm

      I doubt Marx would have imputed sentience to a pattern-matching machine, however sophisticated, particularly considering Marx’s view of human sentience, which comes about with labor and its metabolism with nature.

  15. Gulag

    “We don’t really know whether conscious experience is sensitive to the substrate — does it have to have a biological substrate? Or is it something that can be achieved in a totally different material — a silicon-based substrate?”

    Tristan Harris seems to argue (see “The AI Dilemma” on YouTube) that a silicon substrate does actually have what he calls emergent capabilities. He presents the real example of an AI which was taught to answer questions in English and all of a sudden began answering questions in Persian, a skill the programmers did not program and which they were surprised to see. Simply by increasing compute power, the AI learned to answer questions in Persian without being asked. He also mentioned that an AI silicon engine (with the H100 chips and transformer code) had taught itself to make the chip run faster.

    If you define an emergent phenomenon as something that cannot be reduced to the sum of its silicon-substrate elements, and if at the moment of emergence there appears to have been a leap (in this case, to Persian), then, just maybe, a viable substrate could be silicon rather than biological.

    1. Jams O'Donnell

      We have absolutely no evidence that consciousness is an ’emergent’ event/quality.

      Apparently ‘brainless’ organisms such as slime moulds display what appears to be a form of conscious behaviour, and even bacteria display ‘behavioural’ processes. ‘Life’ in any form is different from (apparently, anyway) dead matter such as rocks, and is made up from different, organic, chemically and morphologically complex materials such as DNA, rather than just simple minerals (including even alloys, which are still ‘simple’) such as silicon.

      Some theories of consciousness even impute the quality of consciousness to seemingly ‘dead’ matter too. But even so, computers, as opposed to living creatures, are composed of this ‘dead’ matter, and no amount of coding, it seems to me, will promote them above the consciousness status of rocks, even if rocks are in some way conscious.

      1. podcastkid

        Maybe the rock has slower consciousness, but, as an aggregate, it’s seen more natural history. I agree with you, Jams, otherwise. Simply compare the hardware: nature makes ours look like a third-class lever. You could get a computer to maybe predict to some degree what a cell would do, but it won’t be able to figure out how the DNA in the nucleus comes up with morphology when it needs to, or determine that the blueprint is not there. Why leave extra dimensions to the string people? I can’t hold exactly to what Aurobindo maintained, but I think he was often on the right track to somewhere. https://open.spotify.com/episode/1P8jpixit65O2VImi6uSQX?si=9c41cf17a3054f35

  16. cnchal

    This is big tech inventing special rights for itself. Hire philosophers to expound on sentience and voila, the latest and greatest gizmo is deemed worthy of “rights”.

    Where the fuck is the emergency stop button on these machines?

    Not long ago, we had wiring diagrams for, well, everything. When stuff went wrong, these were consulted to find solutions.

    This is not possible with AI chips. Disasters are inevitable as complex systems incorporate them into operations. You can’t fight complexity with moar complexity, and there is going to be a big liability hole eventually.

  17. JEHR

    If human beings cannot solve the present problems that we have (without using AI), then there is not much hope of solving future problems with AI. Why is it not possible for us, human beings, to solve the problem of war in Ukraine, for example? Human beings are not as intelligent as we think they are, and they rarely solve problems without committing violence of some sort. There are a few human beings who do not fit this description, but I have no faith that they will do the right thing in this world with its myriad of problems. So far, we have just been lucky to have people who really do stop wars and prevent massacres and outlaw slavery, but they seem to be few and far between in this day. That is how I think things are going at this time. Climate change may very well be the change we finally get, whether we deserve it or not.

  18. Simple John

    Sentience was fascinating before AI.
    Believing in it leads me to empathy on occasion.
    I also know I can deny that it matters when it comes time to fight a war.
    Am I really ever going to feel bad committing decommissioned electronics to the e-waste bin?
    No.

  19. Acacia

    Sigh. Here we go again. First sentence:

    Artificial intelligence has progressed so rapidly that …

    Compare this with the assessment of Rodney Brooks, former director of the MIT Computer Science and Artificial Intelligence Laboratory, writing in 2018 (emphasis mine):

    I think the press, and those outside of the field have recently gotten confused by one particular spin off name, that calls itself AGI, or Artificial General Intelligence. And the really tricky part is that there [are] a bunch of completely separate spin off groups that all call themselves AGI, but as far as I can see really have very little commonality of approach or measures of progress. This has gotten the press and people outside of AI very confused, thinking there is just now some real push for human level Artificial Intelligence, that did not exist before. They then get confused that if people are newly working on this goal then surely we are about to see new astounding progress. The bug in this line of thinking is that thousands of AI researchers have been working on this problem for 62 years. We are not at any sudden inflection point.

    Agree with @Joe Well, above, that the discourse the original article partakes of is largely about “theoretical future evils done to theoretical future sentient AIs”, and with Brooks that “we are not at any sudden inflection point.” Meanwhile, regarding the present, Ted Chiang, Naomi Klein, and Hito Steyerl have all written thoughtful pieces on the current hype around AI (published via the New Yorker, Guardian, and NLR, respectively).

  20. tiebie66

    We do not really know what “thinking” is, or “meaning”, or “sentience”, or “feeling”. AI is a sophisticated search engine or classifier or predictor. ChatGPT does essentially what Searle’s Chinese Room does; it just has a bigger book.

  21. WillD

    I think we are a very long way from being able to reproduce the genuine intelligence that biological beings, such as humans, possess. Intelligence and sentience are not just the obvious conscious behaviours, which current AI systems can mimic. There are many layers of intelligence that we barely comprehend, and so cannot easily program into an AI.

    Human and animal intelligence is generally assumed to be derived solely from our biological and neurological functioning, but there is research and evidence suggesting we are also affected by external influences of all sorts. Some of these are identifiable as sensory, others not. We sense a lot, whether consciously or not, and we process all of it, and therefore we must regard it as valid input into our intelligence.

    A computer-based AI that has no sensory inputs apart from audio/visual and some tactile will not be able to pick up and absorb those other sensory inputs that biological beings have. Its inputs will be limited to the connected sensors, and even then further limited by the limits of the sensor technology: for example, an inability to process infrared light, limited audio frequency ranges, and so on.

    I think the issue will come down to how we define life, not just sentience.

  22. wrehts

    A few quibbles with the biological remarks:
    It seems a bit odd that the interviewee appears to suspect that consciousness might be uniquely mammalian and then jumps to the other extreme by claiming that octopuses are conscious, too. First, I am not aware of any empirical grounds for the idea that any intelligent quality is unique to mammals and excludes birds. Thus, if you’re looking for intelligence using brain mechanisms other than the mammalian ones, the obvious first place to look is birds. Second, the assumption that octopuses even have a level of intelligence very close to that of mammals and birds, let alone consciousness, is questionable; in fact, even efforts to demonstrate much more basic forms of intelligence have yielded results that aren’t all that impressive. Third, consciousness means much more than just intelligence and appears to be much rarer than it: assuming that it is diagnosed by the mirror test, it definitely doesn’t seem to be a feature of the mammalian brain as such, since the overwhelming majority of mammals have not been proved to have it (while some birds have). In general, most of the variability in intelligence seems to be within the mammalian and avian clades, not between them. For instance, ratites are far less intelligent than crows, marsupials are far less intelligent than (most) placentals, and even within primates, monkeys and lesser apes are far less intelligent than great apes.

Comments are closed.