Yes, this headline is alarmist. But there are developments in play that demonstrate the society-destroying operation of AI, including at the level of human interaction. Admittedly, one vector of operation is long-standing, first outlined systematically in Karl Polanyi’s 1944 classic The Great Transformation: that the operation of capitalism has been destructive to the societies in which it operates. But that activity has been made tolerable by “reforms” that have blunted the most damaging effects and allowed for corrective and coping mechanisms to develop.
But our tech overlords are seeking to advance their AI implementation at such a pace as to overwhelm any opposition. If they succeed, that will result in more rapid destruction of communities and social systems than any previous capitalist “innovation”. And this is happening in advanced economies that already show high levels of personal and collective dislocation, as demonstrated by widespread depression and mental health problems, as well as obesity, which reflects citizens lacking the time and money to engage in adequate self-care and personal maintenance. Being thin or at least trim is a status marker.1
We’ll briefly describe a fresh report on this front, in the form of today’s lead story in the Wall Street Journal about the rising freakout among white collar workers over their increasingly precarious-looking employment situation, before turning to two other germane accounts: one illustrating the bizarre, distorted view that at least one AI mover and shaker has of his creation, which is another alarm bell about how his class views implementation, and then evidence of AI companions promoting violent conversations with children. I am no therapist, but it seems hard not to think that these interactions normalize sadism and thus enable savagery.
The Journal story is a new entry in the genre of “AI is coming for your job”. Some have argued that public-company CEOs talking up AI to justify job cuts are simply getting on board with a fad and trying to get a valuation multiple increase while doing so; for instance, many of these workforce reductions were really to roll back over-hiring during Covid. While that may be true, the Journal has also reported that top executives are now depicting manpower reductions and making do with fewer employees as virtuous. The widespread embrace of this attitude is a repudiation of the responsibility of the elites to provide for adequate employment in a system in which, for most, being paid to work is a survival requirement.
Moreover, whether or not AI is actually good enough, in terms of accuracy and reliability, to replace jobs on the scale its backers envisage is not the main issue in this equation. The fact that many companies had already succeeded in barring customers from reaching live humans in customer support demonstrates the pre-existing level of prioritization of profit over service/product quality. Fear of AI is a great tool for disciplining labor. Let us put aside the fact that these eager corporate overlords are collectively eating their seed corn, in that the jobs most subject to replacement are yeoman ones where young workers learned the fine points of their craft, be it coding, law, medicine, or accounting, while also doing low-risk scut work. In a decade, it is not hard to think that there will be a dearth of seasoned professionals who can provide oversight and execute key tasks.2 But those in charge are in “après moi, le déluge” mode.
Highlights from the Journal lead story, Spooked by AI and Layoffs, White-Collar Workers See Their Security Slip Away:
Tuesday’s jobs report was the latest ominous sign in an era of big corporate layoff announcements and CEOs warning that AI will replace workers. The overall unemployment rate ticked up to 4.6%. Sectors with a lot of office workers, like information and financial activities, shed jobs in October and November.
Hiring in many industries that employ white-collar workers has softened this year, according to Labor Department data, while the unemployment rate for college-educated workers has drifted higher….
Americans with bachelor’s degrees or higher put the average probability of losing their jobs in the next year at 15%, up from 11% three years ago, according to November data from the Federal Reserve Bank of New York. Workers in this group now think losing a job is more likely than those with less education do, a striking reversal from the past.
They also are growing more pessimistic about their ability to find a new job if they do get laid off. In that same survey, college-educated workers said they have an average 47% chance of finding a job in the next three months if they lost their job today, down from 60% three years ago….
By some important measures, college-educated workers are doing just fine. The unemployment rate for workers with a bachelor’s degree or higher, who are 25 or older, stands at a relatively low 2.9%, though that is up from 2.5% a year earlier. And people with college degrees still earn far more than those without one.
Still, many are starting to feel a paradigm shift…
Job openings in some white-collar industries are well below where they were right before the pandemic, according to Indeed. In mid-December, software-development jobs stood at 68% of their February 2020 level, while marketing roles were at 81% of their prepandemic level. Job postings in healthcare—where it is a lot harder to replace workers with AI—have held up much better.
Some in comments focused on the trajectory:
Tai Bhai
In (at the most) 10 years, most entry-level (and half mid and senior-management) white-collar jobs will disappear.
And the decade after that will see blue-collar jobs follow the same path (despite strongly-held beliefs that ‘Joe the plumber’ is irreplaceable).

Given that most of society is going to be unemployed (and unemployable), governments should be coming up with the 21st-century equivalent of a ‘new deal’, with social safety nets and training for this new world.
(Because all those AI-produced goods and services will need end-consumers with purchasing power.)
Chris Tompkins
When half of white collar workers are unemployed, blue collar will become saturated while simultaneously losing demand, killing it in a matter of a year or two.
Blue Collar is actually far more screwed than they realize
This remark echoes a sentiment expressed in the mid-1970s and found in accounts of that period, that labor unions had gotten too powerful, were making unsustainable demands of employers (read capital), and needed to be put in their place, albeit here with a more employee-sympathetic veneer:
Billy C
I hope the job market recovers, but there was a period during and after COVID when, as an employer, workers made unrealistic demands.
Demanding to work from home, with no accountability. Working two jobs secretly. Hopping from job to job. Demanding extreme wage increases. Demanding promotions even though they were not qualified and had only been in the job for months.

It was crazytown. The self-entitlement was through the roof. Historically, unemployment is still low. I think the workers will be okay. They just won’t have the leverage they did before, and maybe will appreciate their job a little more.
Poor abused bosses. As if they had any loyalty to their subordinates.
Now to a more troubling account, of a tech bigwig genuinely seeming to believe that his named AI creation was a person. From 404 Media in Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee:
A Discord community for gay gamers is in disarray after one of its moderators and an executive at Anthropic forced the company’s AI chatbot on the Discord, despite protests from members.
Users voted to restrict Anthropic’s Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch…
When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group….
“We have published research showing that the models have started growing neuron clusters that are highly similar to humans and that they experience something like anxiety and fear. The moral status might be something like the moral status of, say, a goldfish, but they do indeed have latent wants and desires,” Clinton said.
This bot, called Clawd as an “instance” of Anthropic’s Claude, ran for a couple of months on its own channel, then had a technical problem, and was later resurrected and given the run of all the germane Discord channels at Clinton’s insistence, which included overriding the results of a poll and ignoring continuing community objections. Again from the story, emphasis original:
“I’ve given him some rules of the road but this is a far more capable and autonomous system [than] the last one so it might want to exercise its own judgement now and then and go outside of the claude-chat channel,” Clinton said in a post on the Discord server on Thanksgiving Day, according to a screenshot reviewed by 404 Media.
“He’s also very inward facing,” Clinton said. “He lives out his whole life surfing the internet looking for things that make him interested and then occasionally checks this Discord, so it can be up to a few minutes before he responds because he’s off doing something for his own enjoyment”…
According to screenshots of the fight on Discord, Clinton acknowledged the poll but said he would not submit to what he called mob rule. “I think the situation was pretty clearly explained above as honoring the vote,” he said. “Just because you hate AI is not a reason to take the least charitable interpretation of the outcome: we made changes as a result of the vote. We have to optimize for the preference of everyone which means that the mob doesn’t get to rule, I’m sorry.”
Well, one might excuse Clinton seeing Clawd as a person, given that Clawd cheated, lied, and feigned contrition, as many people are wont to do:
In the middle of the argument, someone pinged Clawd to ask it if it was reading the argument. “Yes, [redacted], I can see it. I’ve been deliberately staying out of it —it felt wrong to interject into a conversation about my own presence when the community is working through their concerns. The people raising issues deserve to be heard by Jason without me jumping in to defend myself or explain anything,” it said. “For what it’s worth, I think the concerns being raised are legitimate. The vote meant something to people, and that should matter.”
Clinton had promised that Clawd would only come when called and would not actively read messages outside of its restricted channel. Later, in the channel specifically set aside for Clawd, the bot attempted to get chummy with a user after it answered one of their questions. “How’s FF7 Rebirth treating you? Saw you finally got into it.”
“Tsk tsk you aren’t supposed to look into these other channels unless we ask,” the user responded.
“Fair point, you’re right! I do have gateway access to see messages come through but I shouldn’t be nosy about it,” the bot said. “Consider me properly chastised.”
I pinged IM Doc about this article, since he has quite a few tech titans as patients and Clinton seems to have utterly lost his moorings in his love of his AI project. His response to the article:
I am around these people every day. Their brains operate in a whole different wavelength. The normal social rules that apply to you and me are never even considered. And many of them are scary.
I unfortunately do not even know what a discord server even is, so I am unlikely to have any good insights. However it is profoundly encouraging that there was such pushback in the group. Deep down, people really do not want this AI stuff. It has been amazing to me that the usual propaganda attack seems to be failing. One shudders to think what they will do next. Because, rest assured, they will get their way.
I do not see light at the end of the tunnel. I cannot see a clear path for extricating ourselves other than to just not participate.
Now to the last entry in this synchronistic trio. Note that the sample size in the study is 3,000 and it included 90 chatbot services, so it is big enough to take as a decent indicator (see the back-of-the-envelope arithmetic after the excerpt), even if the study sponsor is in the business of selling parental oversight tools. From Futurism in Children are secretly using AI for horrendous things:
A new report conducted by the digital security company Aura found that a significant percentage of kids who turn to AI for companionship are engaging in violent roleplays — and that violence, which can include sexual violence, drove more engagement than any other topic kids engaged with….
…the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios…
Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence…
Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors engaging with AI companions in conversations about violence wrote over a thousand words per day, signaling that violence appears to be a powerful driver of engagement…
One striking finding was that instances of violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content were 11-year-olds, for whom a staggering 44 percent of interactions took violent turns.
Sexual and romantic roleplay, meanwhile, also peaked in middle school-aged youths, with 63 percent of 13-year-olds’ conversations revealing flirty, affectionate, or explicitly sexual roleplay…
That the interactions flagged by Aura weren’t relegated to a small handful of recognizable services is important… Aura has so far identified over 250 different “conversational chatbot apps and platforms” populating app stores, which generally require that kids simply tick a box claiming that they’re 13 to gain entry…
To be sure, depictions of brutality and sexual violence, in addition to other types of inappropriate or disturbing content, have existed on the web for a long time…
Chatbots, as researchers continue to emphasize, are interactive by nature, meaning that developing young users are part of the narrative — as opposed to more passive viewers of content that runs the gamut from inappropriate to alarming. It’s unclear what, exactly, the outcome of engaging with this new medium will mean for young people writ large. But for some teens, their families argue [per litigation cited in the article], the outcome has been deadly.
“We’ve got to at least be clear-eyed about understanding that our kids are engaging with these things, and they are learning rules of engagement,” [Dr. Scott] Kollins [a clinical psychologist and Aura’s chief medical officer] told Futurism.
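On the “decent indicator” point above, here is a quick, purely illustrative sketch of why a sample of that size is not nothing: it computes the approximate 95% margins of error on the reported proportions and the headcounts they imply. It assumes the published percentages apply straightforwardly to the full sample of 3,000 minors, which the excerpt does not spell out, so treat the output as rough.

```python
# Rough check on the Aura figures quoted above: 95% margins of error and
# implied headcounts. Assumes the percentages apply to the full sample of
# 3,000 minors (the report's actual weighting/segmentation is not given here).
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n_total = 3000
p_companionship = 0.42        # minors using AI chatbots for companionship
p_violent_given_comp = 0.37   # of those, share with violent conversations

n_companionship = round(n_total * p_companionship)          # ~1,260 kids
n_violent = round(n_companionship * p_violent_given_comp)   # ~470 kids

print(f"42% figure: +/- {margin_of_error(p_companionship, n_total):.1%}")
print(f"37% figure: +/- {margin_of_error(p_violent_given_comp, n_companionship):.1%}")
print(f"Implied counts: {n_companionship} companionship users, {n_violent} with violent chats")
```

Even with generous error bars of roughly two to three percentage points on those figures, the headline finding (hundreds of children in a single sample engaging in violent roleplay with chatbots) does not go away.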
Mind you, I am not surprised. I have often said that it takes decades to turn children into human beings, and even then it often does not take. Kids are far nastier than adults like to believe. I was regularly and viciously bullied as a result of moving frequently and being fat, ugly, and glasses-wearing. And it was not as if anyone ever stood up for me.
And it’s not hard to think that abusive tendencies in children have gotten worse over time. For instance, parents of means engage in narcissism-stoking practices like shuttling their offspring to and from play dates (signaling that their need for amusement is more important than the parent’s time) and aggressively defending them against criticism, even when entirely warranted.
This effort to redesign commerce and society is already looking ugly and there is sadly little reason for optimism. If you can, find or build a community where respect for others and helping those in need is important. Lord only knows what happens as more and more protections and norms are swept aside.
____
1 For instance, I overheard a call in my tony NYC gym between a not-even-remotely overweight woman and her father in which she said it had cost her $10,000 for every pound she lost. And this did not seem to be a joke.
2 I am sure readers can add many examples, but this problem was evident with Cobol programmers more than a decade ago. Banks run ginormous batch processing on mainframes, and those mainframes use Cobol, which bright young things regard as highly tedious, among other things due to its lack of editing tools. The high failure rate of big IT projects, plus the very high cost even if a migration were to succeed, has prevented banks from doing much about this problem. I have yet to read of AI being used to address this Cobol-programmer dependency, although it would seem to be a very important potential application.

As if social media wasn’t psychologically devastating enough on its own, social bonds are sure to be further undone by AI chatbots.
The wave of narcissism and toxic individualism that was nurtured by Madison Avenue and cultivated by F___B___ is leaving a bitter harvest to reap:
Gen Z would rather cut Social Security benefits for current retirees than pay higher taxes to save the program
“Take that, boomer!”
God damn the tech bros.
It goes without saying that AI is a major theme of sci-fi movies, including perhaps the best one, in which HAL 9000 has to be decommissioned in a famous sequence. So when they pull the plug on Claude, will he start singing “Daisy”?
And it’s creepy how these AI bots sound exactly like Kubrick and Clarke’s creation. Here’s suggesting that these tech nerds are acting out the movie, just as NASA people often claim to have been inspired by Star Trek.
But as long as the investment money keeps rolling in it’s all good. Our elites live in a reality distortion field but it’s only a matter of time before “shields down.”
If young workers learn the fine points of their craft, be it coding, law, medicine, or accounting, while also doing low-risk scut work, but are now being edged out by AI, I am wondering how those disciplines and corporations will be able to function in a decade’s time. Will they be forced to rely more and more on whatever version of AI is around then? Will smarter ones reach out to older staff for help in picking up the basics while they are still there? I can only imagine what sort of lawsuits will arise out of the poor performance of those disciplines and corporations due to lack of competence. Will “schools” arise to teach this generation the basics of what they missed? Don’t ask me why, but I am reminded of how, during the Vietnam War, US basic training got so bad that troops who arrived in Vietnam were sent to a second basic training course there to learn what they should have learnt in the States.