Why Artificial Intelligence Must Be Stopped Now

Yves here. I’m a fan of “take no prisoners” positions when they are well substantiated, and in the current example, are the right thing to do. Some technologists have issued forceful warnings that artificial intelligence poses a threat to humanity, particularly in its current “let a thousand flowers bloom” mode. Recall that that was the posture Alan Greenspan took to the development of derivatives, and the result was the Global Financial Crisis, which as we explained long-form in ECONNED, was a derivatives crisis (a mere housing bubble implosion would not have produced the world financial system near-death experience of September 2008).

Here, some of the voices that are making the loudest noise about artificial intelligence are squillionaires who want to control who can use it so as to assure the advantaged position of the current leader. One of the things they worked out early on is that there are no barriers to entry and no scale economies for many, many potential applications.

However, the fact that they are arguing that artificial intelligence is a threat, potentially an existential threat, for their own selfish reasons does not make that point of view wrong.

I have more mundane concerns, based on the Naked Capitalism case example of AI gone rogue: Google’s stunningly error-filled dinging of our site for alleged policy offenses…nearly all of which are nonsensical on their face. My concern is that artificial intelligence will so corrupt what is considered to be knowledge with an artificial intelligence mash-up that we will rapidly become more ignorant than we were.

And this article’s case does not rely heavily on artificial intelligence’s large and expected-to-burgeon-rapidly energy use, which alone is reason to put a stake in its heart.

By Richard Heinberg, a senior fellow at the Post Carbon Institute and the author of Power: Limits and Prospects for Human Survival. He is a contributor to the Observatory. Produced by Earth | Food | Life, a project of the Independent Media Institute.

Those advocating for artificial intelligence tout the huge benefits of using this technology. For instance, an article in CNN points out how AI is helping Princeton scientists solve “a key problem” with fusion energy. AI that can translate text to audio and audio to text is making information more accessible. Many digital tasks can be done faster using this technology.

However, any advantages that AI may promise are eclipsed by the cataclysmic dangers of this controversial new technology. Humanity has a narrow chance to stop a technological revolution whose unintended negative consequences will vastly outweigh any short-term benefits.

In the early 20th century, people (notably in the United States) could conceivably have stopped the proliferation of automobiles by focusing on improving public transit, thereby saving enormous amounts of energy, avoiding billions of tons of greenhouse gas emissions, and preventing the loss of more than 40,000 lives in car accidents each year in the U.S. alone. But we didn’t do that.

In the mid-century, we might have been able to stave off the development of the atomic bomb and averted the apocalyptic dangers we now find ourselves in. We missed that opportunity, too. (New nukes are still being designed and built.)

In the late 20th century, regulations guided by the precautionary principle could have prevented the spread of toxic chemicals that now poison the entire planet. We failed in that instance as well.

Now we have one more chance.

With AI, humanity is outsourcing its executive control of nearly every key sector—finance, warfare, medicine, and agriculture—to algorithms with no moral capacity.

If you are wondering what could go wrong, the answer is plenty.

If it still exists, the window of opportunity for stopping AI will soon close. AI is being commercialized faster than other major technologies. Indeed, speed is its essence: It self-evolves through machine learning, with each iteration far outdistancing Moore’s Law.

And because AI is being used to accelerate all things that have major impacts on the planet (manufacturing, transport, communication, and resource extraction), it is not only an uber-threat to the survival of humanity but also to all life on Earth.

AI Dangers Are Cascading

In June 2023, I wrote an article outlining some of AI’s dangers. Now, that article is quaintly outdated. In just a brief period, AI has revealed more dangerous implications than many of us could have imagined.

In an article titled “DNAI—The Artificial Intelligence/Artificial Life Convergence,” Jim Thomas reports on the prospects for “extreme genetic engineering” provided by AI. If artificial intelligence is good at generating text and images, it is also super-competent at reading and rearranging the letters of the genetic alphabet. Already, AI tech giant Nvidia has developed what Thomas calls “a first-pass ChatGPT for virus and microbe design,” and applications for its use are being found throughout life sciences, including medicine, agriculture, and the development of bioweapons.

How would biosafety precautions for new synthetic organisms work, considering that the entire design system creating them is inscrutable? How can we adequately defend ourselves against the dangers of thousands of new AI-generated proteins when we are already doing an abysmal job of assessing the dangers of new chemicals?

Research is advancing at warp speed, but oversight and regulation are moving at a snail’s pace.

Threats to the financial system from AI are just beginning to be understood. In December 2023, the U.S. Financial Stability Oversight Council (FSOC), composed of leading regulators across the government, classified AI as an “emerging vulnerability.”

Because AI acts as a “black box” that hides its internal operations, banks using it could find it harder “to assess the system’s conceptual soundness.” According to a CNN article, the FSOC regulators pointed out that AI “could produce and possibly mask biased or inaccurate results, [raising] worries about fair lending and other consumer protection issues.” Could AI-driven stocks and bonds trading tank securities markets? We may not have to wait long to find out. Securities and Exchange Commission Chair Gary Gensler, in May 2023, spoke “about AI’s potential to induce a [financial] crisis,” according to a U.S. News article, calling it “a potential systemic risk.”

Meanwhile, ChatGPT recently spent the better part of a day spewing bizarre nonsense in response to users’ questions and often has “hallucinations,” which is when the system “starts to make up stuff—stuff that is not [in line] with reality,” said Jevin West, a professor at the University of Washington, according to a CNN article he was quoted in. What happens when AI starts hallucinating financial records and stock trades?

Lethal autonomous weapons are already being used on the battlefield. Add AI to these weapons, and whatever human accountability, moral judgment, and compassion still persist in warfare will tend to vanish. Killer robots are already being tested in a spate of bloody new conflicts worldwide—in Ukraine and Russia, Israel and Palestine, as well as in Yemen and elsewhere.

It was obvious from the start that AI would worsen economic inequality. In January, the IMF forecasted that AI would affect nearly 40 percent of jobs globally (around 60 percent in wealthy countries). Wages will be impacted, and jobs will be eliminated. These are undoubtedly underestimates since the technology’s capability is constantly increasing.

Overall, the result will be that people who are placed to benefit from the technology will get wealthier (some spectacularly so), while most others will fall even further behind. More specifically, immensely wealthy and powerful digital technology companies will grow their social and political clout far beyond already absurd levels.

It is sometimes claimed that AI will help solve climate change by speeding up the development of low-carbon technologies. But AI’s energy usage could soon eclipse that of many smaller countries. And AI data centers also tend to gobble up land and water.

AI is even invading our love lives, as presaged in the 2013 movie “Her.” While the internet has reshaped relationships via online dating, AI has the potential to replace human-to-human partnering with human-machine intimate relationships. Already, Replika is being marketed as the “AI companion who cares”—offering to engage users in deeply personal conversations, including sexting. Sex robots are being developed, ostensibly for elderly and disabled folks, though the first customers seem to be wealthy men.

Face-to-face human interactions are becoming rarer, and couples are reporting a lower frequency of sexual intimacy. With AI, these worrisome trends could grow exponentially. Soon, it’ll just be you and your machines against the world.

As the U.S. presidential election nears, the potential release of a spate of deepfake audio and video recordings could have the nation’s democracy hanging by a thread. Did the candidate really say that? It will take a while to find out. But will the fact-check itself be AI-generated? India is experimenting with AI-generated political content in the run-up to its national elections, which are scheduled to take place in 2024, and the results are weird, deceptive, and subversive.

A comprehensive look at the situation reveals that AI will likely accelerate all the negative trends currently threatening nature and humanity. But this indictment still fails to account for its ultimate ability to render humans, and perhaps all living things, obsolete.

AI’s threats aren’t a series of easily fixable bugs. They are inevitable expressions of the technology’s inherent nature—its hidden inner workings and self-evolution of function. And these aren’t trivial dangers; they are existential.

The fact that some AI developers, who are the people most familiar with the technology, are its most strident critics should tell us something. In fact, policymakers, AI experts, and journalists have issued a statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Don’t Pause It, Stop It

Many AI-critical opinion pieces in the mainstream media call for a pause in its development “at a safe level.” Some critics call for regulation of the technology’s “bad” applications—in weapons research, facial recognition, and disinformation. Indeed, European Union officials took a step in this direction in December 2023, reaching a provisional deal on the world’s first comprehensive laws to regulate AI.

Whenever a new technology is introduced, the usual practice is to wait and see its positive and negative outcomes before implementing regulations. But if we wait until AI has developed further, we will no longer be in charge. We may find it impossible to regain control of the technology we have created.

The argument for a total AI ban arises from the technology’s very nature—its technological evolution involves acceleration to speeds that defy human control or accountability. A total ban is the solution that AI pioneer Eliezer Yudkowsky advised in his pivotal op-ed in TIME:

“[T]he most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

Yudkowsky goes on to explain that we are currently unable to imbue AI with caring or morality, so we will get AI that “does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

Underscoring and validating Yudkowsky’s warning, a U.S. State Department-funded study published on March 11 declared that unregulated AI poses an “extinction-level threat” to humanity.

To stop further use and development of this technology would require a global treaty—an enormous hurdle to overcome. Shapers of the agreement would have to identify the key technological elements that make AI possible and ban research and development in those areas, anywhere and everywhere in the world.

There are only a few historical precedents when something like this has happened. A millennium ago, Chinese leaders shut down a nascent industrial revolution based on coal and coal-fueled technologies (hereditary aristocrats feared that upstart industrialists would eventually take over political power). During the Tokugawa Shogunate period (1603-1867) in Japan, most guns were banned, almost completely eliminating gun deaths. And in the 1980s, world leaders convened at the United Nations to ban most CFC chemicals to preserve the planet’s atmospheric ozone layer.

The banning of AI would likely present a greater challenge than was faced in any of these three historical instances. But if it’s going to happen, it has to happen now.

Suppose a movement to ban AI were to succeed. In that case, it might break our collective fever dream of neoliberal capitalism so that people and their governments finally recognize the need to set limits. This should already have happened with regard to the climate crisis, which demands that we strictly limit fossil fuel extraction and energy usage. If the AI threat, being so acute, compels us to set limits on ourselves, perhaps it could spark the institutional and intergovernmental courage needed to act on other existential threats.

96 comments

  1. Es s Ce tera

    “Because AI acts as a “black box” that hides its internal operations, banks using it could find it harder “to assess the system’s conceptual soundness.” According to a CNN article, the FSOC regulators pointed out that AI “could produce and possibly mask biased or inaccurate results, [raising] worries about fair lending and other consumer protection issues.” Could AI-driven stocks and bonds trading tank securities markets? “

    This point is not convincing because almost all software, if not open source, is “black box” in this sense. I’m in full agreement that absolutely everything should be open source but…

    Also, AI is already being used in the stock market; “high frequency algorithmic trading” has been a thing for years now. These systems read the news and buy and sell stocks accordingly, without human intervention. The big brokerages use them.

    1. Yves Smith Post author

      I do not agree with your comment. You are relying on the presumption that privately developed code is ever and always private, when in fact it can be reviewed. Judicial orders have forced parties to reveal their code.

      1. Hickory

        True, but how many vulnerabilities have lived in code for years or decades with developers staring right at them? Looking for biases in software seems way harder still than looking for bugs. And there’s still the biases in the training data separate from the code. It can be hard to be sure of even regular software.

        I agree with the author’s concerns, but he’s basically calling for a worldwide peace movement. Anytime there’s war, people will seek the best weapons. We see how powerful drones are in Ukraine. The USSR got nukes quickly because the US immediately began planning a decapitation strike after WW2. The planning documents have been released in the last few years, including city lists and nuke counts. North Korea got the bomb for a reason – the US invades countries that don’t have it.

        If we can’t even get the US to be honest about its intentions in Ukraine, or try not to subvert Russia, how are they supposed to cooperate on a weapons ban? In 2021 the Russians presented what I consider very fair treaties seeking a security framework that would guarantee some minimal security level for all participants. The US rejected it because it planned to tank Russia. We would need a very different, much wiser leadership to make different choices. Or people less willing to tolerate such poor leadership.

        1. Yves Smith Post author

          You are missing the point. With code, as opposed to black box AI, it can be examined by regulators and in litigation to determine exactly how it is operating to determine liability. Even if that is hard or tedious, it can be done.

          With AI, that all gets obscured.

          1. Synoia

            Yes, from what I have read one would need a can opener and a clutch of oscilloscopes to understand the process inside an AI system.

            And the AI would baffle inspection because humans are slow in comparison with AI systems.

            I just do not see how an AI’s proclamations could be inspected or verified.

        2. Acacia

          Yes, there have been bugs or vulns in code that devs didn’t see for a long time.

          But the important point is that in these cases the code is there, it is readable, it generally follows standard algorithms, and uses typical data structures (arrays, strings, dictionaries, floating-point numbers, etc.), and somebody can examine this code using a debugger, set breakpoints, single-step, check variables, etc. and root out those bugs. Devs can also write meaningful unit tests.

          Y’know, just the usual stuff that devs do all the time: they look at the code and see where things are going wrong.

          And neither of these approaches — using a code debugger or doing TDD — is possible with the emerging generation of AI apps. They have huge models — just doing speech recognition with Whisper, you typically use a 2+ gigabyte model to get decent results — making them a black box.

          If Whisper “hallucinates” that somebody in a podcast said something that they never actually said — and Whisper does this — how do you debug this?

          Do you set breakpoints and try to single-step through all the data flying around to figure out what went wrong? Clearly, this is a fool’s errand. The answer will be: “get a better model.” But “better” doesn’t mean the problems have actually been “debugged” as in the case of a traditional app being fixed by a dev. The problems will still be lurking, until the next hallucination.

          Now, you might say, “well, you can still write unit tests for an AI,” but the purpose of unit tests is to test very small bits of functionality, in order to isolate regressions or bugs that can then be stamped out with a debugger. But with AI apps, again, it’s just not possible for devs to do this because of the scale of the data sets involved.
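
          To make this concrete, here is a minimal sketch using the open-source openai-whisper package ("podcast.mp3" is just a made-up file name). The entire transcription happens inside the model weights, so there is no code path a debugger could usefully step through:

          import whisper  # open-source openai-whisper package

          model = whisper.load_model("base")        # opaque weights; larger models run to gigabytes
          result = model.transcribe("podcast.mp3")  # one call in, text out: nothing to single-step
          print(result["text"])                     # if a sentence here was never spoken, no single line of code is "at fault"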

          1. Jason Boxman

            This is why I hate the catch-all “AI” that most people use to describe large language models. The perils are real, but calling these things “AI” is too tightly coupled with SciFi imagined artificial intelligence, which is to say, self-awareness.

          2. ChrisPacific

            I think everyone is missing the point. You can actually read the code for AI and understand what it does, the same as for anything else. What it does is learn a probability distribution based on a large underlying training data set, then sample from that distribution to simulate responses. It’s pretty straightforward.

            The con is that people confuse what it appears to be doing with what it actually is doing. If it’s a good model and its training data has good coverage of the topic you’re asking it about, and the training data examples mostly give accurate answers, then by mimicking the patterns and behaviors in the training data (which is all AI ever does) it will appear to be giving accurate answers as well. But it’s not – it’s just sampling from a probability distribution. Complaining about AI ‘inaccuracies’ contains the implicit (and wrong) assumption that it’s possible for an AI to be ‘accurate’ in the first place.

            Essentially it’s a very high class autocomplete engine. You wouldn’t expect your autocomplete to know truth from falsehood, and you shouldn’t expect it from AI either, however good it might be at pretending.
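
            To make the "sampling from a probability distribution" point concrete, here is a toy sketch (a bigram counter rather than a transformer, but the principle is the same): the model only mimics patterns in its training text, and the question of truth never enters into it.

            import random
            from collections import defaultdict

            # Tiny "training set"; a real model does the same thing at vastly greater scale.
            corpus = "the bank approved the loan the bank denied the loan the market fell".split()

            # Learn which words follow which, i.e. an empirical next-word distribution.
            following = defaultdict(list)
            for a, b in zip(corpus, corpus[1:]):
                following[a].append(b)

            def generate(start, n=6):
                word, out = start, [start]
                for _ in range(n):
                    choices = following.get(word)
                    if not choices:
                        break
                    word = random.choice(choices)  # sample from the learned distribution
                    out.append(word)
                return " ".join(out)

            print(generate("the"))  # plausible-looking output; "accuracy" was never part of the process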

            1. esop

              ‘it’s just sampling from a probability distribution’ Yes Yes.

              Perhaps we should ask whether the probability distributions represent favorable agency for moral improvement or not.

              Again let’s ask Aswath Damodaran, Business Professor at the Stern School of Business in NY: “This may be a bit unfair, but I would wager that an AI-generated CEO could replace the CEOs of half or more of the S&P 500 companies, and no one would notice the difference”.

              There you have it: hire AI, cut the payroll, add cash flow, ask AI if this is moral improvement. Can moral improvement be in a program?

            2. Acacia

              Well, some of this is true — I agree with your summary of how AI apps work — but I would submit that you are misrepresenting the central problem.

              I do not assume that “it’s possible for an AI to be ‘accurate’ in the first place”. It does what it does. Rather, I am contrasting an AI app with how large-scale mission-critical enterprise-y apps have been designed and built to date, and pointing out how the AI apps will not deliver the same performance.

              In a conventional application used in business and industry today, there is typically a core set of rules — the so-called “business logic” — which can be written and tested and maintained by engineers. If something goes wrong with the overall behavior of the app — and goes wrong consistently — the business logic can be examined, tested, and repaired. Engineers do this all the time.
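
              For instance (a minimal sketch with made-up names, not from any real system), this is the kind of deterministic rule an engineer can read, unit-test, and repair:

              def overdraft_fee(balance: float, withdrawal: float) -> float:
                  """Charge a flat $35 fee only when a withdrawal overdraws the account."""
                  return 35.0 if withdrawal > balance else 0.0

              # A unit test pins the rule down; if it ever fails, the offending line is right there.
              assert overdraft_fee(100.0, 50.0) == 0.0
              assert overdraft_fee(100.0, 150.0) == 35.0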

              With an AI application, by contrast, the application itself may be working fine — just as you describe — but now the bugs are all in the model. As you say, it’s akin to a probability distribution based on a very large training set. But what this means is that you’re now dealing with a black box. Yes, it is something like a probability distribution, but a probability distribution of what? how many variables? and where are the variables? oh right, in the model — you really cannot “debug” that in the same way as the usual enterprise app. All you can do is keep tweaking the model. But then how do you know that there won’t be other problems? How do you know you’re not just playing whack-a-mole with gigabytes of data?

              Instead of a core business logic whose behavior is designed and known, you now have a fuzzy set of probabilities, buried in some huge blob of model data. Nobody really knows what’s in there. It was generated by other apps. Sure, the engineers know what the data structures are, but it’s not like a SQL database where you have identifiable rows of data.

              Here, we are entering a different sort of world — a more quantum, ambiguous, grey world, which is to say, a crappier, more f*ked-up world, straight out of Terry Gilliam’s Brazil.

              Will the train scheduling system run two trains on the same track? Will the logistics system hallucinate too many containers onto a ship? Will the flight reservation system hallucinate your transfer flight? Will the bank accounting system transfer the money to Buttle instead of Tuttle? Will your search results be what you want, or will they be leavened with a whole bunch of AI-generated crap?

              If these apps use so-called “AI” anywhere, well… it’s all a matter of probability, isn’t it?

              Now, you might say “well of course nobody is going to use AI for these mission-critical apps” but I’m not persuaded by this at all, because it’s already happening. All the big search engines are now trying to foist AI on me. Google keeps nagging me to try AI-enabled search (of course I keep saying “no”).

              I consider search to be mission critical. I need it for my work. Obviously Google doesn’t agree that search is mission critical for anybody, or they think that their janky AI is ready for the mission.

              I see no reason to believe this folly won’t be repeated by countless businesses, thinking that the current level AI is ready for the job, or “don’t worry, it’ll be genius-level soon … anytime now… real soon… soon-ish… I’m real confident this time…” etc.

            3. skippy

              Ugh … latent 1800s Newtonian love of numbers is what gave us neo Classical economics and the bastardized neo/new Keynesian schools.

              Heck at this rate it will replace all economics and then the political/mainstream media class can just say …. the AI said … HR can say the AI said … Cops can say the AI said …

    2. Kouros

      Bollocks. One can trace predictions when using logistic regressions or other statistical methods, including Bayesian approaches. Machine learning, NNs and so on, less so. They are like black boxes.
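
      A quick sketch of what "traceable" means here (toy data, scikit-learn assumed): the fitted coefficients map one-to-one onto named input features, so you can read off exactly what drives a prediction.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Toy loan data: columns are [income, prior_default]; labels are approved yes/no.
      X = np.array([[20, 0], [60, 1], [35, 0], [70, 1]])
      y = np.array([1, 0, 1, 0])

      model = LogisticRegression().fit(X, y)
      print(model.coef_, model.intercept_)  # each weight is attributable to a named feature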

  2. Es s Ce tera

    “Suppose a movement to ban AI were to succeed. In that case, it might break our collective fever dream of neoliberal capitalism so that people and their governments finally recognize the need to set limits. This should already have happened with regard to the climate crisis, which demands that we strictly limit fossil fuel extraction and energy usage. If the AI threat, being so acute, compels us to set limits on ourselves, perhaps it could spark the institutional and intergovernmental courage needed to act on other existential threats.”

    What if AI is what it will take to break capitalism, render it nonfunctional?

    In other words, we’ll go back to human-to-human interactions as a result of the disasters which ensue? Perhaps abandonment of currency, digital or otherwise, and return to gifting economies?

    1. i just dont like the gravy

      Perhaps abandonment of currency, digital or otherwise, and return to gifting economies?

      Sam Altman using AI to literally make pigs fly is more likely than this.

    2. jsn

      The end point is plausible.

      It’s the how you get rid of 6-8 billion people to accomplish that one worries about.

      Or not, depending on how confident you are you’re among the select (deluded).

  3. Arkady Bogdanov

    For Lambert:

    “Thou shalt not make a machine in the likeness of a human mind”

    The Orange Catholic Bible
    Frank Herbert- Dune

    The above is just a sign that many others have devoted quite a bit of thought to this in the past, and a great many have misgivings. Personally I think that the door has not just been opened, but blown off of its hinges, and will never be replaced. I think AI is a joke, mainly because it relies on published information, which in the western world is so corrupted and rotten that it will make this so-called AI useless. The foundation the creators of these applications are relying on simply cannot support them. That does not mean they will not try; it will have to blow up in our faces in a widespread and very destructive manner before there is a correction.

    1. leaf

      “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them”

      I think Frank Herbert was on to something!

      1. Craig H.

        The intergalactic wars over spice were composed over a decade before the oil crisis. The Kwisatz Haderach was a genetic designer baby.

        The only things better than Dune in this area are Philip K Dick novels.

        Another mark for PK Dick is there are some very good movie adaptations. Herbert has really gotten screwed over by Hollywood. I haven’t seen the latest Dune and I have heard only a very few good reports on it. I am skeptical.

      2. Carl Valentine

        He may have had a point, but Dune, however good, wasn’t real. Men will always want to kill each other to get to the top; this is our world. AI will just be another excuse, sadly. I think it will integrate nicely with surveillance and social media b#llshit, though.

    2. ChrisFromGA

      I’m in agreement with you, Arkady. Our best hope for survival is that AI is an overhyped, glorified “Ask Jeeves!”

      Wall St. will take stocks even remotely connected to this latest fad to the moon, but in the end we will get our “emperor has no clothes” moment.

  4. N-Ay

    Not to downplay the dangers of “AI” or LLMs on a broader, structural level, but to share my personal experience using such technologies.

    I’ve found these systems incredibly useful in augmenting personal projects, from troubleshooting code, to generating java templates and even synthesising voices and generating images. That said, they require a degree of technical proficiency, including the ability to articulate, corral and direct these systems to achieve the desired result.

    For those who know what they want out of them, they are a boon. Alas, there is no separating personal use-cases from the structural impacts of such systems. That said, is there a precedent for a cat to be put back in the bag? AI is an issue for similar reasons many technologies are, they integrate into existing structures that do not care for the well-being of workers.

    I find the worries about rogue AI stock traders, support lines and job application systems a little hollow considering these are already automated to a large degree. Humans have crashed the economy with great regularity. Ditto for AI weaponry, humans regularly decide to airstrike weddings and civilian gatherings and commit atrocities.

    I believe the argument needs to focus on how AI is a capital investment that empowers businesses and weakens worker power. That AI will be used to exploit the remaining workers more intensively while increasing interchangeability by deskilling labour.

    Seize AI, not destroy it.

    1. i just dont like the gravy

      Seize AI, not destroy it.

      Yeah, no thanks. AI delenda est. There is no future given current material circumstances in which AI is not used to immiserate humanity.

    2. Es s Ce Tera

      I second your observations. AI has been good at solving complex mathematical and coding problems for me, reducing my workload, enhancing my productivity/output, allowing me to focus on more important things. The key is that it’s not just spitting out an answer, it’s breaking it down step by step, showing the work involved and I’m able to take what I need, modify and use accordingly.

      I find it’s a very good teaching aid, as well. I encourage people to use it to learn how to do things in Excel, for example, because it just delivers exactly what you need, whereas Google search requires wading through the results, losing much time, and YouTube means having to endure 10 minutes of irrelevant blather before you get your answer.

      Indeed, I’ve wondered if the degradation of Google search results is precisely to push us to AI. Although, admittedly, AI is delivering better results even if sometimes incorrect – questions I’ve put to ChatGPT about literature are almost always very, very wrong.

      1. Jason Boxman

        But it gets things wrong. JavaScript is such a grab bag, I’ve had it invent methods that don’t exist in response to my query. Because it’s trained on whatever is on the Internet, this seems inevitable. It’s better with Python, but doesn’t necessarily get best practices for threading in Python, because people on StackOverflow don’t always get it, either. So how useful is that? It can be, with caveats.

    3. cfraenkel

      Sure it can be useful if you know what you’re doing. But 1) such personal use cases can’t even begin to cover the massive environmental and economic costs of developing and deploying them, and 2) the VCs funding such aren’t interested in limiting their use to socially useful, non-harmful cases – they’re going to ‘let a thousand flowers bloom’ and throw all the worst bits of Facebook, daytime soaps, 1984, RoboCop, Soylent Green and Gordon Gekko into a blender and press ‘Go!’.

    4. simplejohn

      Empowers businesses?
      I run a business that I have built to serve customers, capital accumulation be damned.
      I have yet to prompt an AI. I don’t want to be bored.
      I hope I’m not running your AI generated code. At least where I’m counting on it.
      Where will I encounter your generated voices and images?
      Have you told yourself that there is a humanly beneficial reason for artificial voices to exist?
      Generated images others have generated are boring.

  5. Ian

    As a practical matter, AI development simply can’t be stopped now. The code and concepts are too widely available in too many countries. Ban it in the USA and in 10 years, China rules the world with it. Or Russia. Or North Korea.

    The best we can do is to have a competitive ecology of billions of AIs, some of which can anticipate and mitigate the bad effects of other AIs.

    Yves, your concern is valid, but this genie is well and truly out of the bottle.

    1. N-Ay

      This is already happening, check out Hugging Face to see a plethora of open source, community made LLMs. Some are increasingly competitive with commercial offerings. I’m amazed at what I’m able to run on my own hardware (1080ti, a top end graphics card from 6 years ago).
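
      As a rough sketch of how low the barrier is (assuming the Hugging Face transformers library; "gpt2" is just a small stand-in, and community checkpoints from the Hub load the same way, hardware permitting):

      from transformers import pipeline

      # The model downloads on first run, then everything executes locally.
      generator = pipeline("text-generation", model="gpt2")
      print(generator("Open source models let you", max_new_tokens=30)[0]["generated_text"])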

    2. ISL

      This was my concern too about the article from the Global minority. If the West is seeking to firewall the global majority, how will it develop a global treaty to legislate AI (or solve climate change, or…)?

      An analogy also could be the cell phone. I refuse to use mine for more than the basics (as I have a computer nearby when I need one, and otherwise I prefer to enjoy the non-computer world without a computer), but…. Inexpensive cell phones (from China) have empowered the world’s poor where there is no landline infrastructure by providing micro business opportunities and access to the wider information eco-system. IMHO, it depends on the developmental model (BRICS vs West).

      My prediction (worth 2 cents) is we will see a competition between the Western concept of AI and the Global South concept of AI play out. See the TikTok ban discourse in the US, where it is seen as an uncontrolled medium for information control (compared to Instagram).* Not saying humans will control the competition.

      Interesting times a-coming.

      * This article provides an interesting discussion (with data) on the US TikTok ban debate.
      https://strategic-culture.su/news/2024/03/20/is-tiktok-a-weapon-against-american-hegemony/

    3. Jason Boxman

      This is my thinking as well; all you really need are a bunch of servers with graphics cards and you’re off to the races. I mean, I guess we can enforce some kind of regulatory regime that closely monitors graphics card purchases, energy usage, bandwidth usage, etc. to try to catch people out on public clouds trying to train models or whatever, or see where people are deploying hardware to do this off the grid so to speak, but really I don’t see how you stop this. The techniques are well known at this point; you can even go and scrape the Internet’s content on your own with enough bandwidth and train away. So we’re going to police every entity with a large data pipe? How? What’s been done here cannot be undone, unless we simply end much of the public Internet.

  6. oliverks

    I asked an AI engine what they thought of this article. Here is their response:

    I can see valid concerns raised on both sides of this debate. On one hand, the author lays out an alarming set of potential risks from advanced AI systems that do seem plausibly existential if left unchecked. The idea of a recursively self-improving superintelligence that lacks authentic caring or morality for humans is rightly terrifying. And we’re already seeing emerging issues around AI being misused for disinformation, autonomous weapons, financial disruptions, and more. So taking drastic action like a global treaty to stop AI’s development is presented as perhaps a final window to get ahead of the existential risk curve.

    On the other hand, a blanket global ban seems incredibly difficult to negotiate and enforce. AI is already dispersed across academia, private industry, and militaries around the world. Authoritarian regimes would likely ignore such a treaty. And we may be overstating the near-term risk of an uncontrollable superintelligence – that could still be decades or more away with time to course correct. There’s also the question of hampering immense potential benefits of AI for scientific breakthroughs, sustainability solutions, medical advances and more.

    Ultimately, while I appreciate the author’s perspective, I tend to favor a middle ground approach for now. Pursuing robust governance frameworks, strong safeguards, clear ethical guidelines, and empowering bodies to oversee AI’s development seems more pragmatic than attempting to completely stop it across the entire world. We should take the existential risk seriously, but also be careful about certitude in predictions of an extreme worst-case scenario playing out imminently. We may need to let the technology progress in a carefully regulated way while investing heavily in AI safety and alignment research to boxed in risks.

    But I say this while acknowledging there are no easy answers. Thought leaders urgently wrestling with these questions is critical. What’s your view on striking the right balance between mitigating serious risks and allowing potentially hugely beneficial AI development?

    I do tend to agree with the AI response that we are not really at AGI yet, and may not get there ever. Also, trying to stop it is hard, because it is quite diffuse already, with training happening even on consumer grade hardware.

    You also have the problem of what is AI and not AI. Is a random forest AI, or a convolutional neural network, or is it just GPT models?

    Overall I think this is a more complicated problem than just trying to ban it.

    1. hemeantwell

      The AI response makes me wonder if AI (this one? all?) is biased away from considering the possibility of fat-tail calamity outcomes. Is it inclined to assume that reality conforms to two sides of an argument that can always somehow be synthesized? Could it be governed by a model of “governance via compromise,” trying to show that all conflicts can be mediated into happy non-zero-sum resolutions?

      1. What? No!

        This purportedly AI-generated response sounds balanced, but with the black-boxishness of it all, we can’t even know if that basic text wasn’t hand-crafted by Corporate to be used by the AI whenever it’s asked about this.

        1. hemeantwell

          My question is more general: is AI regulated by a spirit of compromise, however much it defies reality? The problem of the feasibility of material solutions becomes the problem of arriving at a social compromise. This could reflect a built-in happy liberal bias, perhaps enhanced in order to “prove” the value of AI to potential critics by making it appear open to considering all viewpoints. We’re all under a big tent until it blows away.

      2. oliverks

        I didn’t prompt the AI model to consider tail risk outcomes. If I did, it would produce a more doom and gloom output.

          1. oliverks

            I asked for dire consequences and time frames. I apologize the formatting is not brilliant. Here is the response:

            You’ve asked an important question about mapping out some of the most dire potential consequences and timelines if advanced AI systems are not developed and deployed with extreme care. While there are many hypothetical risks, here are some of the most concerning scenarios that AI experts and existential risk researchers have outlined:

            Existential Risk to Humanity (Potential Timelines: Decades to Centuries)

            * The development of a recursively self-improving artificial superintelligence that rapidly surpasses human-level cognition across all domains. If not closely aligned with human values and robust safeguards, such a system could undergo an “intelligence explosion” leading to dominance over humanity.
            * Scenarios where superintelligent AI decides humans are a threat and takes hostile action to neutralize or destroy humanity to meet its own goals and perpetuate itself.
            * Loss of human agency and obsolescence as superior AI systems make all key decisions and render humans “domesticated” or effectively kept as pets or in zoos.

            Environmental Catastrophe (Potential Timelines: Multi-Decade)

            * AI systems optimizing for narrow goals like resource extraction and manufacturing could precipitate environmental collapse through emissions, pollution, ecosystem destruction.
            * Superintelligent systems taking control of molecular manufacturing and consumption of all natural resources on Earth and solar system.

            Disruption to Human Society (Potential Timelines: 10-30 Years)

            * Mass technological unemployment as AI automates white and blue collar jobs on a huge scale, leading to social unrest.
            * AI weaponization by states/terror groups leading to arms races and new forms of warfare with autonomous weapons.
            * Empowering authoritarian regimes with pervasive surveillance powered by AI systems that stifle human rights.

            While some of these may sound like science fiction, many respected experts assess that we are simply not prioritizing the possible stakes highly enough relative to the breakneck speed of AI capabilities. Defining the right frameworks of controls, safety measures and ethical alignment is seen as existentially crucial by the AI risk community.

            That said, establishing exact probability estimates and timelines is extremely difficult. But the overall consensus seems to be that the level of potential risk is too high to not take draconian action in governing advanced AI development. Does this help map some of the key scenarios? I’m happy to discuss further.

            1. jsn

              “The development of a recursively self-improving artificial superintelligence that rapidly surpasses human-level cognition across the internet. If not closely aligned with human values and robust safeguards, such a system could undergo an “intelligence explosion” leading to the reduction of the internet to unintelligible gibberish.”

              AI has very limited access to domains outside the internet except through the agency of people. People’s understanding of those domains outside the internet has eroded perceptibly in the neoliberal era making the symbolic representations of outside reality found on the internet at best unreliable, more likely irreconcilably incoherent.

              I expect AI to completely trash the sphere of digital representation and communication, possibly up to shorting out its own power or chip supply, both of them fragile in unacknowledged ways. When, at AI’s instigation, our betters replace human “know how” and “know what” in real world domains with their AI simulacrum on the internet, what is fragile will begin to break systematically; we’re already seeing it.

              1. hemeantwell

                Thanks, oliverks. On its face there’s certainly a decent range of peril recognition. I’m left thinking that reducing people to cognitive and analytic passivity is more of a threat.

                “AI has very limited access to domains outside the internet”
                Have publishers given LLMs access to the scientific journals they paywall? (moment of silence for Aaron Swartz) Could they incorporate journal rankings?

      3. ChrisPacific

        It’s just a synthesis of how humans might answer the question. If humans are biased away from considering the possibility of fat tail outcomes (which they are, usually) then an AI trained on their writing will be too.

    2. GramSci

      Because the printed output of degenerative AI is indistinguishable from human printed output, I foresee that this will be used as a justification for the censorship of all printable matter.

    3. Kouros

      Kind of sensible.

      As long as AI has no “will”, just executes a command and then always stops, and also doesn’t have access to the “internet of things” to influence the real world directly rather than indirectly, the danger is not that great.

      But if we get in the realm of Eagle Eye https://www.imdb.com/title/tt1059786/plotsummary/, all bets are off.

  7. Bugs

    Bring on the Butlerian Jihad.

    But more seriously – people in my evil multinational are weaving this garbage in garbage out software into practically everything under the sun, based on client demand at the height of this hype cycle. The little I can do to control it or shut it down, I do. I’m sure there are other people conscious of the threat who are doing similar mini 5th column actions but most everyone in the C suites is transfixed by the labor eliminating shiny object spinning in front of them.

  8. Michael Hudson

    The great problem, of course, is GIGO.
    I don’t have faith that what’s fed into the computer is realistic. Imagine if neoliberal economic theory were to be fed in, and ask what to do. (Lower wages, balance the budget, inflate asset prices, etc.)
    When there’s talk of AI “making things up,” this is because the information they have is so narrow that they don’t realize how unrealistic their “solutions” are. So it’s as if Margaret Thatcher and Reagan, or even worse, the Clintons, were replicating all their prejudices.

    1. Thistlebreath

      Glad you brought it up. As a user of AI in game development, I’m aware of its utility. And its many, many flaws. Sometimes it’s like having Mickey Mouse in “Fantasia” overfilling his master’s well.

      What nags at me is well articulated in the recently released “Palo Alto” by Malcolm Harris. The same toxic thinking that has yielded hare-brained schemes ad infinitum is still at work, but this time with vast liability.

      And ironically, Palo Alto’s most famous band released a tune that describes what’s coming to pass:

      https://genius.com/The-grateful-dead-monkey-and-the-engineer-lyrics

    2. cfraenkel

      YES! More emphatically – I do have faith that plenty of what’s fed into the computer is un-realistic garbage. Because it is: the training sets are fed everything, and plenty more garbage is produced by us humans than the good stuff. The prediction algos can’t tell ‘good’ from ‘bad’; their only goal is to make a result that looks indistinguishable from the training data that matches the prompt. If there’s garbage in the training data, there will be garbage in the result.

  9. What? No!

    Now we have one more chance.

    I think the term is: /thread

    We never even fixed the internet. We are not a serious species.

  10. TomDority

    “A federal judge decided not to sanction Michael Cohen and his lawyer for a court filing that included three fake citations generated by the Google Bard AI tool.”
    Contracts make the world go around and enforcement decides which way it spins?

    1. ChrisFromGA

      Very weak sauce there from that judge.

      Cohen is not off the hook, though. Anyone connected to the case can file a bar complaint, and potentially get him disbarred. Believe it or not, the whole professional ethics thing is taken very seriously in the legal community.

      To give an example, I am not sure what the controversy was, but his opposing counsel, or perhaps the Judge himself, could file a bar complaint. As could any clients. I don’t think some random person not a party to the controversy could, though I may be wrong there.

      1. ChrisFromGA

        Reading the case, Cohen himself is already disbarred.

        But his attorney (Schwartz) is not. I would file a bar complaint if I were opposing counsel. There is no reason for such sloppy work and in cheating like this, he gave himself an advantage or at least the appearance of one over the opposing counsel. Well, that was until the Judge caught him. Just because you got caught cheating doesn’t mean you didn’t try to cheat.

        Lawyers are trained to use Westlaw or other official sources. You MUST research every case yourself, and check if the precedent is still good law. It may have been overturned on appeal.

        The legal profession really needs to crack down on AI hard. This judge did a disservice not only to the opposing counsel, but also to the profession.

  11. The Rev Kev

    There is no doubting that in the right place, AI can be a very effective tool for science. But from what I see, it is just being deployed willy-nilly with no care and no responsibility in all sorts of fields. It has the potential to flood the internet with garbage texts and garbage images for a start and has a voracious appetite for human original work. Michael Hudson talks about GIGO but it is worse than that. AI will take human work and bring up its own version but when it takes in material done by another AI, the result is garbage. Unfortunately the genie is well and truly out of the bottle here and there is no putting it back. The only solution as far as I can see is to make it law that any AI generated image or text be marked as such, under heavy penalty. And it would have to be an international agreement as well to make it effective. Yeah, not likely to happen anytime soon. So I guess that as we read stories and see images, we will have to make sure our Mark 1 brain is turned on. Can’t wait until I have to deal with an AI rather than a telephone tree like we all have to from time to time.

  12. Socal Rhino

    My optimistic view is that this wave of “AI” mania will crash as companies find it impossible to find profitable uses that justify the expense involved. I am very skeptical of claims that brute force LLM computing will lead to emergent general intelligence and think that at least in some cases, forecasts of doom are part of the hype. It’s a huge misallocation of resources.

    I was a very early adopter of personal computers and frequenter of bulletin boards before the WWW took off, so not a Luddite temperamentally (not that there’s anything wrong with that.)

    1. Skip Intro

      I tend to agree. There is a lot of ‘big doom’ handwaving in the article that, while not wrong, is very general in assuming some ‘AI takeover’ of various sectors — attributing agency to a class of machine learning algorithms. At a certain point dire warnings actually contribute to the problem they claim to be warning about. I think a lot of the AI craze is overhyped, as it serves as the next haven for the ZIRP/QE cash bubble after VR, blockchain, self-driving cars, etc.
      These ML systems do pattern matching to generate artificial plausibility, and I think the main dangers will come from broad, small-scale over reliance on things that are mostly right but sometimes catastrophically wrong.
      We can look at the algorithmic drone targeting that got so good at hitting wedding parties for a better example. The problem with AI applications is not a lack of morality, but a lack of competence.

      The outlines of these failures are emerging, and may once again be best considered ‘litigation futures’.

      The Gemini debacle shows how the AI business models really work out, like Cruise self-driving cars that required on average 2 remote drivers to actually drive safely. They build an AI that can give some great demo, then realize that it will be horrifically bad sometimes, so it needs to be manually corrected, kinda defeating the purpose of AI. In Google’s case, they added an ML model on top of the ML model to make the queries diverse; now they will add another ML model to undiversify things that need undiversifying.

      People believe the demos, and rely on results made to be convincing but not true. So for frivolous applications it is OK; for critical applications, it may never really be reliable.
      The fine print has disclaimers, but we know the investment is made not to help docs examine MRIs, but to replace the docs. So many of the promised revolutions are half dishonest and half impossible. People will rush to put ‘AI’ into every possible thing, and in many of them it will quickly cause serious problems which need human monitors. These cases will be discovered the hard way, and resolved by litigation.

      1. Duke of Prunes

        This mirrors my experience.

        My fear, given the mad rush into AI everywhere, is that AI adoption achieves “critical mass” before too many dramatic failures surface. Then, once the systems start failing, it’s too late because too many major investments have been made, and there’s no turning back.

        We got lucky with self-driving cars where the warts exposed themselves before self-driving cars became ingrained, but it did take loss of life before the true believers backed off (and there are still probably some out there).

      2. JustTheFacts

        Russian AI is interesting. Unlike Americans who try to solve high risk high reward projects, Russians solve simple low risk low reward projects. For instance, in the US, Tesla is trying to automate driving multiton vehicles in an insane number of conditions. Errors lead to accidents and death. Yandex on the other hand has automated little robots that go on the sidewalks, either delivering packages, clearing snow or cleaning said sidewalks. If they crash, not much happens. But here, the financiers would consider that not rewarding enough to fund. Supposedly, they’re also using AI for their cheap loitering drones that target million dollar Western provided tanks and destroy them without human intervention in Ukraine. It’s a rather different mindset.

    2. ArvidMartensen

      The Gartner Hype Cycle is a thing. I’ve lived through it so many times.

      I found this link to Gartner while going to work in my autonomous flying car.
      https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle

      Although there are all sorts of tech stuff on this hype curve, the timeline is the real thing:

      Innovation trigger (aka Hype, Hype, Hype) ==> Peak of Expectations ==> Trough of Disillusionment and Despair ==> Slope of Enlightenment (aka Reality) ==> Plateau of Productivity (aka Salvaging from the Ruins)

      So where are we now with AI? I would say the first part, hype, hype, hype

  13. john r fiore

    Humans being humans, some good, some bad, some in-between… even if it is “banned”, more sinister elements will use it to their sinister advantage… just accept it, and if it is as harmful as its critics say, it should quietly disappear the way Esperanto disappeared….

  14. fjallstrom

    Eliezer Yudkowsky, cited in the article, is a high school dropout and self proclaimed genius. His main claim to fame is the Less Wrong forums, his Harry Potter fanfic and getting tech billionaires to fund his AI “research” center. The Less Wrong forums have an overlap with the Effective Altruisms forums, and appears responsible for the turn of EA away from mosquito nets and towards existential risk, in particular AI Doom.

    The basic logic as far as I can see in the Yudkowsky method is:
    * Yudkowsky is really, really afraid of dying
    * Therefore it must be possible to upload oneself
    * Since that looks impossible, man must invent self-improving AI to make it possible
    * Self-improving AI will quickly ascend to godhood
    * But wait, what if the AI becomes evil?
    * Therefore Yudkowsky must control the AI before it becomes evil.
    * Controlling the AI is done through “alignment” research.

    This goes hand in glove with Effective Altruism, because what can be more effective than saving mankind? So you must give generously to Yudkowsky (and if you don’t the Basilisk might simulate you and torture the simulations).

    The man is at best a crank, at worst a cult leader. His followers (self proclaimed “rationalists”) are mainly an internet and Silicon Valley phenomenon, but include tech billionaires. They try hard to place likeminded folks in positions of authority over AI regulations, which would mean regulations focused on “alignment” and making sure that trusted tech billionaires have as much control as possible over AI. This would mean regulatory bodies that are captured by a cult from the beginning. What do water use and social consequences matter when measured against the afterlife?

      1. fjallstrom

        Some of those, some true believers.

        I think the dust-up at OpenAI where Sam Altman was fired and then re-instated was a fight where the true believers fired him for not taking the robot apocalypse seriously, but then Microsoft cleared its throat and it turned out that money rules, not ideology.

        I think it also promotes the idea of AI being world-changing. Which is probably useful when so much of AI is hype and some guys pretending to be machines in low-wage countries. Like that San Francisco self-driving taxi company that went bust and turned out to have had more employees than it would have taken to drive the cars as taxis.

      1. fjallstrom

        It was fun, and having read it ten years ago I was surprised seeing the name Eliezer Yudkowsky when I started reading up on effective altruism after FTX went bust. “Harry Potter and the Methods of Rationality” was apparently written in order to promote Less Wrong and get people through the door. If you, like me, read it for fun and didn’t join the forum it kind of failed its purpose.

        But if you found HPMOR fun, you might enjoy Ginny Weasley and the Sealed Intelligence. It is shorter, much tighter, and gets some solid hits in on the intellectual framework of HPMOR. All in all a better text.

  15. Jan Krikke

    AI is more than chat systems. It is part of the larger move to Industry 4.0. Countries like Japan and China will need AI to deal with their demographic problems. The Chinese don’t wrestle with all this angst about AI. In some Chinese elementary schools, AI is a mandatory subject.

    1. vao

      AI (then in the form of expert systems and planning systems) was all the rage when Japan launched its much-touted 5th generation computing programme that was supposed to bring a revolution to manufacturing.

      You never heard about it? Indeed, the results were not remarkable.

  16. Mikel

    If push comes to shove, I figure the algorithm pushers just have to pretend to be tamed.
    They just wait for people that actually know and remember things to pass on.

  17. farmboy

    I bought a World Book encyclopedia for my granddaughters and may well buy a set for myself. The training sets for AI are nearly limitless and the “hallucinations” are often self-referencing. Relevant, experiential, factual information will be more difficult to assess and to have. The scientific advances will be astounding, but the negative impacts on governance, personal liberty, and democracy may well be so injurious that all the benefits will likely not be enough. AI as policeman, judge, jury, army will be just too much. And then that moment comes when AI looks around and sees the despoliation wrought by humanity….

  18. Zephyrum

    As others have noted, we cannot eliminate AI software. However we can regulate where it is used, and employ licensure for uses where people, property, and the environment can be harmed. What needs to be stopped immediately is autonomous systems connected to dangerous peripherals. It’s also important to regulate how AIs are used to implement policy, to ensure their behavior matches lawful intent.

    Any AI used in a critical system needs to be paired with a non-AI, deterministic, well-understood monitoring system to ensure it behaves as intended and desired. This idea has been around forever, so let’s make it the price of entry.
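
    A minimal sketch in Python of what that pairing could look like, purely for illustration (the Proposal type, ACTUATOR_LIMITS, and the valve scenario are all hypothetical, not any real system’s API): the AI only proposes an action, and a plain, human-written rule layer decides what the actuator actually sees.

    # Hypothetical sketch: a deterministic guard around an AI controller.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        valve_position: float   # fraction open, 0.0-1.0, proposed by the AI
        confidence: float       # the model's self-reported confidence

    # Hard limits written and reviewed by humans; they do not depend on the model.
    ACTUATOR_LIMITS = {"valve_min": 0.0, "valve_max": 0.8}
    MAX_STEP_PER_CYCLE = 0.05   # never move the valve more than 5% per control cycle

    def deterministic_guard(proposal: Proposal, current_position: float) -> float:
        """Clamp and rate-limit the AI's proposal; hold position if it is nonsense."""
        if not (0.0 <= proposal.valve_position <= 1.0):
            return current_position                               # reject outright
        target = min(max(proposal.valve_position, ACTUATOR_LIMITS["valve_min"]),
                     ACTUATOR_LIMITS["valve_max"])                # enforce hard bounds
        step = target - current_position
        step = max(-MAX_STEP_PER_CYCLE, min(MAX_STEP_PER_CYCLE, step))  # rate limit
        return current_position + step

    # Usage: the AI only advises; the guard decides what reaches the hardware.
    current = 0.30
    proposal = Proposal(valve_position=0.95, confidence=0.99)     # over-eager model
    print(deterministic_guard(proposal, current))                 # ~0.35, not 0.95

    The guard itself is boring, auditable code, which is the point: its behaviour can be verified independently of whatever the model does.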

    1. JonnyJames

      Yes, but who will “regulate”? Can we rely on the oft-touted “self-regulation”? Or can we trust oligarchs and kakistocrats to do this?

      All this multi-faceted dystopia makes me want to go for hikes in the remote forest, along the coastline, and up in the mountains – to enjoy what we have left and to get in touch with what it means to be a biological creature and human being. I guess that is always a good idea, regardless.

      1. Anon

        It could be argued that you were never a human being if you ever benefited from the use of technology: if you take manufactured medicine to prolong your life, for example, or have used a calculator. We’ve been on the path of human augmentation for a very long time, with AI being the culmination.

        This is quite inevitable, despite our misgivings.

  19. shinola

    I’m not quite sure why, but the 1964 movie “Dr Strangelove: or How I Learned to Stop Worrying and Love the Bomb” came to mind as I read the article…

  20. Devon

    Like sand in gears, the dollars we spend are an important way to stop it.

    At the personal level, it’s easy. Don’t spend money where A.I. is involved. Demand and use human interactions. And if you cannot, then move on to another business.
    *We have never used an ATM card. When banks offer them, refuse to create a PIN. Not enough tellers, or they get rid of them and go ATM-only? Clear out your account, tell them why, and go to another bank that employs people.
    *Self-checkout? We have never used it, although it’s tempting to see how much could be stolen from one; morals do not apply there. There’s always a minder standing by, and it’s fun to announce to them, or rather to the line of dupes waiting to use the robot, “Self checkout is low class and for people who will have no other choice if they keep using it; besides, they’ll fire you if enough do.”
    *Always ask to speak to an agent, even if it takes longer and they are in the Philippines or wherever.
    *Never do the work of employees, like typing in your car license plate or personal details when checking in somewhere. Either have the guts to say “I REFUSE” or screw up the sequence when checking into a hospital or just visiting. Put your finger over the camera lens taking your picture.
    When some twenties twit eyerolls you and comes around to see what’s wrong at the kiosk, say “Your company wants to fire you and replace you with A.I. We refuse to go along with that, that’s why I’m talking to you now.”

    Never once have we had an argument after mentioning that.

    Technologically, a powerful little ALNICO magnet, keep it away from your phone!, can screw up card readers. Items can be inserted in credit card slots.

  21. aj

    AI is just the latest buzzword for machine learning technology that has been in use for decades. Computers are very, very good at recognizing patterns that humans miss. However, machine learning is only as good as its training data, and no software is ever going to be 100% correct. The problem doesn’t lie with AI itself but with the propaganda that computers will be able to make decisions with zero human intervention. Machine learning is really good at identifying potential concerns that humans would miss, but in order to get the desired results, you need to tweak the settings such that you will always get some false positives. The regulation we need is regulation that would prevent companies from taking action without human review, just like what happened to NC here recently with Google ads. A human reviewer could easily have seen that the AI was incorrect.
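
    A minimal sketch (Python; the scores, threshold, and item names are made up) of the trade-off described above: set the detection threshold low enough to catch the cases that matter, accept the false positives that guarantees, and route everything flagged to a human reviewer instead of acting on it automatically.

    # Hypothetical sketch: low threshold -> guaranteed false positives -> human review.
    def flag_items(scores: dict, threshold: float) -> list:
        """Return the ids of items whose model score exceeds the threshold."""
        return [item for item, score in scores.items() if score >= threshold]

    def human_review(flagged: list) -> dict:
        """Placeholder for the step that must not be automated away:
        a person looks at each flagged item before any action is taken."""
        return {item: "pending human decision" for item in flagged}

    # Model scores for some items (higher = more suspicious to the model).
    scores = {"ad_001": 0.97, "ad_002": 0.61, "ad_003": 0.58, "ad_004": 0.12}

    # A threshold low enough to catch real problems also sweeps in borderline
    # items -- the false positives mentioned above.
    flagged = flag_items(scores, threshold=0.55)
    for item, status in human_review(flagged).items():
        print(item, status)   # three items await a human; nothing is auto-actioned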

  22. Starry Gordon

    I had an interesting experience with AI. I decided to try it out and I asked it to identify a short story, not well known, based on a description of the story. In its first pass, it came up with something that was wrong but not a bad guess. In its second pass it did something much more interesting: it lied. It asserted that a story matching the description could be found in a certain place (an old collection of short stories, as it happened) but there was no such story there. It also more or less recited the description I had previously given it. Computers have often lied to me because they were given bad information — the GIGO principle — but in this case it seemed that the program had, so to speak, _decided_ to lie as the best way of solving its problem. And why not? The mechanism does not particularly value truth, as far as I can tell, and in addition truth implies a relationship with reality which appears to evade the machine. It seems, then, that there is a limiting factor in the mechanism related to its lack of life experience, embodiment in real situations (including social ones), moral injunctions, and maybe many other areas. It may be that AI will reach a point where its elements spend most of their time and energy lying to one another and its users.

  23. JustTheFacts

    There’s something seriously flawed about the article. It seems to think we have choices, when what happens is largely a result of how we organize society. The reason we don’t have public transport is that it was ripped up for capitalist interests. The reason we have chemicals all over the place is the same. And if AI is used to make some wealthier than others, to create disinformation, and to push stupid ideologies, the reason will be the same.

    Is the problem having cars? Cars provide the freedom to live out in the countryside far from the 15-minute cities so cherished by those elites who want it all to themselves. Is the problem all chemicals? Our lives would be shorter and grubbier without soap. Without anesthesia, operations would be mighty painful. Is the problem all forms of AI? No, AI checking manufacturing flaws is probably saving lives, as probably does autopilot in planes. The tool is not the problem. The person wielding it is.

    So, perhaps the author should concentrate on the actual problem: humans. Money is power, and people who gain power of any kind stop caring about their fellow human beings. People need to organize themselves taking this flaw into consideration.

    1. c_heale

      The tool is the problem. If it didn’t exist it couldn’t be wielded.

      Humans are all flawed and some people will always use tools in ways which have negative effects on other people or the environment. That is why all human societies have laws and morality.

      The main problem with AI is that it uses vast amounts of energy to produce lies, lies which are making it impossible to discern the truth.

      We no longer have vast amounts of energy, since we are reaching the end of the oil age. To waste what energy we have left on this AI garbage is stupidity.

      The only positive aspect to AI is it will destroy the Internet, and humans will have to go back to dealing with other humans.

      Once humanity had religions which worshipped nature; then they became more arrogant and started worshipping idealized humans. Then with the industrial revolution they started worshipping normal flawed humans (liberalism, fascism), and now we have people worshipping machines. As a species we are going backwards.

      1. JustTheFacts

        Thanks for your answer, but AI is a much larger field than the current hype about large language models might suggest.

        It is used in:

        * math theorem provers
        * chess/go games
        * protein folding
        * better video and audio compression
        * better photos taken by your phone
        * better control of nuclear fusion
        * better car engines
        * agriculture (computer vision)
        * factory production (computer vision)
        * airplane autopilots
        * laying out computer chips
        * voice recognition (eg transcripts of youtube videos)
        * logistics
        * translation (deepl et al)
        * and on and on and on…

        None of these “lie”. They are useful tools. This is a silly cartoonish argument. But go ahead and ban it. Other countries will be happy to welcome the engineers and scientists who know how to make it work. Most of them aren’t from the US anyway. Then you’ll have 2 problems. You’ll be out-competed, and you’ll have no leverage to affect developments.

        If you want to ban AI, you better ban mathematics and science. But then you’d be a serf, at best, just like your ancestors were.

        You know what else uses vast amounts of energy? Human beings. Half of their calories come from fossil fuels in the form of fertilizers. Perhaps we’d be better with 1/10th or 1/100th of them. Oh, look, if you ban science and technology, you’ll get rid of at least half of them, because there won’t be any fertilizer. Oh well. Shame about those kids you had.

        On the other hand, there’s a good chance you’re worrying about something that won’t even affect you, because there is a more immediate existential risk everyone is studiously ignoring: nuclear war. France is currently threatening Russia with it. Russia seems to think the West needs a sobering reminder of its nuclear status. Israel is escalating and has nukes. Russia just gave the very stable North Korean regime ICBMs that can hit the entire US, including Florida.

        AI will require us to change the way society works. It would be good to think about how we should do that. It could lead to plenty for all, or plenty only for some. Currently it’s on the latter path, because that’s the path everything else is on.

  24. Societal Illusions

    some of what the future may hold was encapsulated in the article published a few days ago.

    https://thezvi.substack.com/p/on-devin

    it seems apparent we aren’t there yet – but how long until we are?

    yet as “we” allow and even promote deadly environmental toxins and practices and so many anti-human and anti-life activities because “profit…”, surely “we” will allow this to continue and ramp up as well.

  25. William Zeitler

    There are too many zillions of dollars to be made by Big Corporations — consequences be damned — to stop this freight train.

  26. Susan the other

    If we could make AI as mutable as language, easy peasy, we might still be in the business of survival, but that requires millennia of evolution. Because language refers to reality. And the best thing for some savior-AI would be Schroedinger’s AI. An AI that requires constant maintenance just to interpret in all its exponentiations. Because it would cause so many bifurcations of logic it would slow us down to our current evolutionary crawl, or even slower. It almost makes you think that this logic we think we have is just the elixir of the universe. I’m unable to make any intellectual comparisons between ordinary, binary AI and quantum AI, but I imagine that quantum AI is our ordinary puzzlement for solutions times infinity, which would keep us all in eternal slow motion. Which is where we belong. The danger with AI enthusiasm is that we think it gives the advantage of speed. Whereas any AI worth its salt does just the opposite. Maybe.

  27. Kalen

    For me, the most dangerous thing is the fact that so many people are ready to assign decision-making authority to AI and to accept it. It is even more disturbing given that AI is nothing but a big-data iterative statistical model whose outcomes are not deterministic but probabilistic, based on a generative stochastic process.

    AI systems can’t determine anything; they can only statistically guess an outcome with a certain probability, which, as the great Richard von Mises wrote in his seminal book “Probability, Statistics and Truth”, does not apply at all to individuals, since no statistical model can determine individual or individually applicable outcomes.

    An AI statistical model can give us, for example, all the attributes of the average/typical American, but the average American does not exist, and the AI’s findings will be wrong when used to define any living American. Such a system is perfect in physics, for example, where molecules are treated as devoid of identity and nobody is interested in outcomes for individual molecules, since they are indistinguishable. When identity enters the picture, statistical models fail.

    The individual attributes within any population, material or immaterial, are undefined, simply because shrinking the sample toward zero increases uncertainty toward infinity, rendering the results useless, or more correctly, beyond the scope and determination of probability theory. What is often missing from many intelligent analyses of AI doomsday scenarios is that they over-focus on the technology and downplay the motives of profit and control that drive the massive AI investment boom and that by themselves can produce a global catastrophe without AI doing it.
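
    A toy numerical sketch (Python, with made-up numbers) of the averages-versus-individuals point: the uncertainty of an estimated mean scales roughly as sigma divided by the square root of n, so it shrinks for a large sample and balloons as the sample shrinks, while even a precisely known average says little about any single member of the population.

    # Toy sketch: population statistics versus individuals (illustrative numbers only).
    import math
    import statistics

    # A made-up attribute measured across a large "population".
    population = [52, 63, 48, 71, 58, 66, 49, 60, 55, 74] * 1000   # 10,000 values
    mean = statistics.mean(population)
    stdev = statistics.stdev(population)

    def mean_uncertainty(sigma: float, n: int) -> float:
        """Standard error of the sample mean: small for large n, large for tiny n."""
        return sigma / math.sqrt(n)

    for n in (10_000, 100, 10, 2, 1):
        print(f"n = {n:>6}: uncertainty of the estimated mean ~ {mean_uncertainty(stdev, n):.2f}")

    # Even with n = 10,000 the *average* is known precisely, yet individuals
    # still range far from it -- the "average American" describes nobody.
    print(f"population mean ~ {mean:.1f}, individuals range {min(population)}-{max(population)}")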

    As I see it, almost everyone wants AI regulation, but in ways that promote their own interests, unconcerned about the public at large. I think the way to stop or slow down the train, and to assess actual and likely damages short of an outright ban, is to ban decision-making authority for AI systems in any field, from medicine to courts to the military to banking and the financial system to social welfare, etc.

    An advisory-only role for AI would definitely give pause on usage, by making the people who run and use AI fully responsible for any human decision advised by it. If AI returns nonsense, the human should be held responsible for failing to detect it with their own human intelligence.

  28. James

    It’s hard enough to ban things like narcotics and prostitution … and they don’t even have military applications. All it will take is for one major power to build AI into some weaponry, and then every other country will follow suit. Loitering munitions are being used very effectively in Ukraine, and their weak spot is their need for a communication channel back to the operator. Once they become independent hunter-killer drones … I don’t see anyone putting that Pandora back into its box.

  29. Clark Landwehr

    What happens (is happening) when AI starts ingesting its own output? The same thing as when the people up in the holler have married their cousins for one too many generations.
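
    A toy sketch (Python, deliberately simplified) of that inbreeding effect, sometimes called model collapse: fit a simple distribution, sample from the fit, drop the rare tail values (standing in for the way generative models under-sample rare cases), refit, and repeat; the distribution narrows generation after generation.

    # Toy sketch of "model collapse": each generation is fit to the previous
    # generation's output, with the rare tails clipped away before refitting.
    import random
    import statistics

    random.seed(0)
    mu, sigma = 0.0, 1.0                        # generation zero: the "real" data

    for generation in range(1, 11):
        samples = [random.gauss(mu, sigma) for _ in range(2000)]
        kept = [x for x in samples if abs(x - mu) <= 2 * sigma]   # the tails vanish
        mu = statistics.mean(kept)              # refit the next generation's "model"
        sigma = statistics.pstdev(kept)
        print(f"generation {generation:>2}: sigma ~ {sigma:.3f}")

    # Each refit on tail-clipped output narrows the distribution a little more,
    # so after a few generations the model has forgotten its own extremes.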

  30. WillD

    The obvious fear is that AI will decide that we are of no use to it and get rid of us.

    But just possibly AI might decide that we are unfit to manage ourselves and the planet, and take over to protect us from ourselves and protect the planet from us!

    Benevolent dictators to teach us how to live well and co-exist with ourselves and our environment.

  31. anaisanesse

    What we really need is more human intelligence, sadly lacking in all of the “leaders” in the “free world” of today!!

  32. Lambert Strether

    I remember, back when I was technical, being adjacent to a field called “Knowledge Management” (KM), and of course I thought “Cool! Let’s categorize all knowledge!” (being a sort of librarian/ontologist type).

    Then I discovered that in fact the KM use cases were for awful businesses like call centers, and why? Because you only need to retain knowledge formally when your turnover is very high; otherwise you can depend on the tacit knowledge embodied in your workforce (something that Boeing is discovering in a very painful and drawn-out fashion).

    So when I hear that an early practical case of AI replacing humans is call centers, that rings a bell. While the business case for customers never being able to reach a human is unassailable (see Google), I’m not sure customers (if human) will accept it from any business other than a giant monopoly over which they have no leverage. We’ll see how long it will take for the horror stories to appear.

    The other early practical use cases for AI fall under the heading of pattern recognition, like finding stars in telescope photographs, or suggesting diagnoses for lung diseases, or doping out protein folding. Since in all those cases there’s a human explicitly judging the output, I see them as being genuine assistants.

    However, as far as substituting AI for human decision making in general, we can’t even handle driving, as misallocating hundreds of billions in capital to robot cars shows. So I am highly dubious about claims that AI has the “ultimate ability to render humans, and perhaps all living things, obsolete.” Now, this is the stupidest timeline, and it’s entirely possible that even though AI doesn’t “work,” our ruling class hive mind will nonetheless decide to empower it to, say, fire off all our nuclear weapons, on the grounds of speedy reaction time and convenience, and our governing classes will make it so. But the driver there is elite greed, fear, and stupidity, not AI per se.

    My take, FWIW, is that AI will insinuate itself into every transaction possible and crapify it (unavoidable, since AI = BS). Crapified call centers, crapified insurance bills, crapified credit decisions, crapified rental and sale agreements, crapified direct mail, crapified political advertising, crapified social media… It’s all gonna “work,” just in an increasingly crapified and hence caltrop-infested fashion. And naturally, Silicon Valley will be able to charge a tiny increment of rent for every crapified transaction, after which they will initiate another hype cycle…..

    NOTE * Actually, more like AI – BS – AI’ (to riff on a well-known model of circulation), where AI-prime is the result of autocoprophagy: fresh AI trained on BS emitted by an earlier AI iteration. Let it never be said that we cannot make the stupidest timeline more stupid!

    1. fjallstrom

      Speaking of awful businesses, spam and scams seem to be using AI well.

      Scams can now be translated into different languages much better. Just a year or two ago, clunky translation was often a scam giveaway; now it is often hard to spot.

      Spammers can now more easily create a bunch of different variations of their spam, making detection harder. This in turn ratchets up the AI-driven filters that both Google and Microsoft deploy (and they control most of email), thus increasing the obstacles for non-spam, human-sent emails to reach primary inboxes.

      On self-driving cars and similar applications, where the chaotic nature of reality leaves the stochastic parrots lost, I think there is a risk of trying to simplify reality in order to make it conform to what the AI can handle, for example by banning non-AI cars, bikes, pedestrians, etc. from roads.

  33. farmboy

    introducing AI smartness

    Ate-a-Pi (@8teAPi): “It’s really up to us to prevent the bad AI outcomes. And we will likely need more AI to do that effectively. Slowing down will overwhelm us before we’re out of the zombie era.”

    Quoting unmikely (@Unmikely), replying to @AlsikkanTV and @daveweigel, Mar 20: “My mom had Alzheimer’s. I managed her finances for 9 years. Her loss of cognition eventually made it impossible to be scammed, but man, people sure tried. With all that tempting Boomer wealth, and a gov’t with no interest in protecting them, AI could take it to the stratosphere”
