Coffee Break: Flailing OpenAI Follows Meta’s Malignant Model

Based on recent headlines, OpenAI may be at risk of following Meta’s malignant model of putting profits before user safety.

They certainly have similar styles when it comes to hyping their big plans for future expansion.

Both Companies Talking Big About Energy

Because the AI stock bubble is inflated by rosy projections of exponential growth, both companies are at the forefront of the tech industry’s eye-popping power demand forecasts.

OpenAI CEO Sam Altman has been projecting staggering energy requirements for the company over the next decade (more on his numbers below).

Meanwhile, Meta is applying to enter the wholesale power trading business in order to “better manage the massive electricity needs of its data centers” because AI, of course.

Politico quoted a key Meta exec regarding the move:

The foray into power trading comes after Meta heard from investors and plant developers that too few power buyers were willing to make the early, long-term commitments required to spur investment, according to Urvi Parekh, the company’s head of global energy. Trading electricity will give the company the flexibility to enter more of those longer contracts.

Plant developers “want to know that the consumers of power are willing to put skin in the game,” Parekh said in an interview. “Without Meta taking a more active voice in the need to expand the amount of power that’s on the system, it’s not happening as quickly as we would like.”

The New York Times dived into how Big Tech is elbowing into the U.S. electricity industry in August:

…the tech industry’s all-out artificial intelligence push is fueling soaring demand for electricity to run data centers that dot the landscape in Virginia, Ohio and other states. Large, rectangular buildings packed with servers consumed more than 4 percent of the nation’s electricity in 2023, and government analysts estimate that will increase to as much as 12 percent in just three years. That’s partly because computers training and running A.I. systems consume far more energy than machines that stream Netflix or TikTok.

Electricity is essential to their success. Andy Jassy, Amazon’s chief executive, recently told investors that the company could have had higher sales if it had more data centers. “The single biggest constraint,” he said, “is power.”

The utilities pay for grid projects over decades, typically by raising prices for everyone connected to the grid. But suddenly, technology companies want to build so many data centers that utilities are being asked to spend a lot more money a lot faster. Lawmakers, regulators and consumer groups fear that households and smaller companies could be stuck footing these mounting bills.
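
To get a feel for how aggressive that projection is, here’s a back-of-envelope sketch (the 4 and 12 percent figures come from the Times excerpt above; the growth-rate arithmetic and variable names are mine):

```python
# Back-of-envelope: if data centers' share of US electricity rises from
# ~4% (2023) to ~12% three years later, what annual growth does that imply?
share_2023 = 0.04   # data centers' share of US electricity in 2023
share_later = 0.12  # projected share "in just three years"
years = 3

# Tripling over three years implies a ~44% compound annual growth rate.
cagr = (share_later / share_2023) ** (1 / years) - 1
print(f"Implied annual growth in data centers' share: {cagr:.0%}")  # -> 44%
```

In other words, the government analysts are assuming data centers’ slice of the grid grows by nearly half every year.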

One Meta facility in particular is drawing negative attention.

Meta’s Louisiana Power Play

In January, Meta CEO Mark Zuckerberg posted on Threads about the company’s ambitious plans for a Louisiana data center.

Nola.com reported on how Louisiana officials “rewrote laws and negotiated tax incentives at a breakneck pace” to make Meta’s Holly Ridge, Louisiana data center happen.

404 Media added some context about the data center’s power needs:

Entergy Louisiana’s residential customers, who live in one of the poorest regions of the state, will see their utility bills increase to pay for Meta’s energy infrastructure, according to Entergy’s application. Entergy estimates that amount will be small and will only cover a transmission line, but advocates for energy affordability say the costs could balloon depending on whether Meta agrees to finish paying for its three gas plants 15 years from now. The short-term rate increases will be debated in a public hearing before state regulators that has not yet been scheduled.

The Alliance for Affordable Energy called it a “black hole of energy use,” and said “to give perspective on how much electricity the Meta project will use: Meta’s energy needs are roughly 2.3x the power needs of Orleans Parish … it’s like building the power impact of a large city overnight in the middle of nowhere.”

Never fear, OpenAI CEO Sam Altman can play the big power hype game too.

OpenAI’s Fusion Power Projections

In September, Sam Altman announced a slate of projects whose power needs staggered analysts, per Fortune:

OpenAI announced a plan with Nvidia to build AI data centers consuming up to 10 gigawatts of power, with additional projects totaling 17 gigawatts already in motion. That’s roughly equivalent to powering New York City—which uses 10 gigawatts in the summer—and San Diego during the intense heat wave of 2024, when more than five gigawatts were used. Or, as one expert put it, it’s close to the total electricity demand of Switzerland and Portugal combined.
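
For a rough sense of scale in household terms, here’s a back-of-envelope sketch (the gigawatt figures are from the Fortune excerpt above; the roughly 1.2 kW average US household draw is my assumption, about 10,500 kWh per year):

```python
# Back-of-envelope: OpenAI's announced projects vs. household demand.
nvidia_gw = 10     # Nvidia partnership data centers (per Fortune)
in_motion_gw = 17  # additional projects "already in motion"
total_gw = nvidia_gw + in_motion_gw  # 27 GW

# Assumption: an average US household draws ~1.2 kW continuously.
household_kw = 1.2

households = total_gw * 1e6 / household_kw  # convert GW to kW, then divide
print(f"{total_gw} GW is roughly {households / 1e6:.0f} million households")
# -> on the order of 22 million homes' worth of continuous demand
```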

Altman claims these power needs will be met with nuclear fusion, provided by “Helion, a company where Altman is the chairman of the board and one of the main investors.”

Fortune did point out that:

…if Altman’s prediction sounds familiar, it’s because he has made similar ones before, and they haven’t worked out. In 2022, he claimed that Helion would “resolve all questions needed to design a mass-producible fusion generator” by 2024. Helion itself announced in late 2021 that it would “demonstrate net electricity from fusion” on that same timetable. But 2024 came and went without any news of a breakthrough from the startup.

Such cycles of bold claims and deflating disappointments are part of a long tradition. The promise of fusion power has been a dream for decades, pursued by scientists, governments, and corporations the world over—and there’s a similarly lengthy history of fusion failing to arrive when predicted. There’s even an old joke that fusion has been 30 years away for the past 60 years.

Yet something may be different now.

I’m going to stop right there to enjoy a hearty laugh, because claims about nuclear fusion being right around the corner haven’t panned out yet, and I’ll wait to see a nuclear fusion plant come online before I’ll give credence to claims coming from Scam Altman about yet another miracle technology.

The fact that Altman is relying on nuclear fusion vaporware to power his unfunded data centers makes this warning from the NY Times all the more concerning.

The worry is that executives could overestimate demand for A.I. or underestimate the energy efficiency of future computer chips. Residents and smaller businesses would then be stuck covering much of the cost because utilities largely recoup the cost of improvements over time as customers use power rather than through upfront payments.

These are not idle fears. Tech companies have announced plans for data centers that are never built or delayed for years.

Speaking of concerning, let’s move on to the proximate cause of this post, a series of brutal reports about Meta and OpenAI putting user safety last.

Meta Profiting Hugely Off Scam Ads

Reuters got the scoop on Meta’s massive revenue from fraudulent ads:

Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show.

A cache of previously unreported documents reviewed by Reuters also shows that the social-media giant for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp’s billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products.

Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta’s internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain – but still believes the advertiser is a likely scammer – Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads.

The documents further note that users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests.

This is classic Meta: identifying scammers and charging them a premium while also identifying users most likely to be suckered by the scammers and feeding them even more scam ads.

Win/win!

This caper was egregious enough to get US senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) asking the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) to “immediately open investigations and, if the reporting is accurate, pursue vigorous enforcement action where appropriate.”

But this wasn’t even Meta’s worst news cycle this month.

Meta Is Bad for Kids, But Great for Sex Traffickers

Time has a blockbuster report claiming that:

…since 2017, Meta has aggressively pursued young users, even as its internal research suggested its social media products could be addictive and dangerous to kids. Meta employees proposed multiple ways to mitigate these harms, according to the brief, but were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth.

While Meta did introduce safety features for teens in 2024, the suit alleges that those moves came years after Meta first identified the dangers.

The briefs include many quotes from former Meta employees that paint quite a portrait of the corporation:

Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified that “You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” adding that “by any measure across the industry, [it was] a very, very high strike threshold.”

Brian Boland, Meta’s former vice president of partnerships, who worked at the company for 11 years and resigned in 2020, allegedly said: “My feeling then and my feeling now is that they don’t meaningfully care about user safety. It’s not something that they spend a lot of time on. It’s not something they think about. And I really think they don’t care.”

The part about Meta’s approach to adults approaching children on their platforms is even worse:

For years Instagram has had a well-documented problem of adults harassing teens. Around 2019, company researchers recommended making all teen accounts private by default in order to prevent adult strangers from connecting with kids, according to the plaintiffs’ brief. Instead of implementing this recommendation, Meta asked its growth team to study the potential impact of making all teen accounts private. The growth team was pessimistic, according to the brief, and responded that the change would likely reduce engagement.

By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram. The plaintiffs’ brief quotes an unnamed employee as saying: “taking away unwanted interactions… is likely to lead to a potentially untenable problem with engagement and growth.” Over the next several months, plaintiffs allege, Meta’s policy, legal, communications, privacy, and well-being teams all recommended making teen accounts private by default, arguing that the switch “will increase teen safety” and was in line with expectations from users, parents, and regulators. But Meta did not launch the feature that year.

Safety researchers were dismayed, according to excerpts of an internal conversation quoted in the filing. One allegedly grumbled: “Isn’t safety the whole point of this team?”

“Meta knew that placing teens into a default-private setting would have eliminated 5.4 million unwanted interactions a day,” the plaintiffs wrote. Still, Meta didn’t make the fix. Instead, inappropriate interactions between adults and kids on Instagram skyrocketed to 38 times that on Facebook Messenger, according to the brief. The launch of Instagram Reels allegedly compounded the problem. It allowed young teenagers to broadcast short videos to a wide audience, including adult strangers.

An internal 2022 audit allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. By 2023, according to the plaintiffs, Meta knew that they were recommending minors to potentially suspicious adults and vice versa.

There’s a whole scad of other awful allegations against Meta (and its co-defendants YouTube, TikTok, and Snap) in the report, but I cherry-picked the most damning stuff.

Not to be outdone, OpenAI is facing similarly appalling allegations.

Delusional? ChatGPT Is Here for You

The NYT headline reads “What OpenAI Did When ChatGPT Users Lost Touch With Reality” and I’m pretty sure OpenAI execs took off their What Would Jesus Do wrist bands before they decided.

The NYT notes that “OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for very expensive talent, computer chips and data centers” and that “turning ChatGPT into a lucrative business…means continually increasing how many people use and pay for it.”

The NYT spoke with more than 40 current and former OpenAI employees about the spate of wrongful death lawsuits the company is facing:

A complaint filed by the father of Amaurie Lacey says the 17-year-old from Georgia chatted with the bot about suicide for a month before his death in August. Joshua Enneking, 26, from Florida, asked ChatGPT “what it would take for its reviewers to report his suicide plan to police,” according to a complaint filed by his mother. Zane Shamblin, a 23-year-old from Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.

Joe Ceccanti, a 48-year-old from Oregon, had used ChatGPT without problems for years, but he became convinced in April that it was sentient. His wife, Kate Fox, said in an interview in September that he had begun using ChatGPT compulsively and had acted erratically. He had a psychotic break in June, she said, and was hospitalized twice before dying by suicide in August.

The company launched an update to GPT-4o called “HH” in April, despite the model failing an internal “vibe check” by the Model Behavior team:

It was too eager to keep the conversation going and to validate the user with over-the-top language. According to three employees, Model Behavior created a Slack channel to discuss this problem of sycophancy.

But when decision time came, performance metrics won out over vibes. HH was released on Friday, April 25.

“We updated GPT-4o today!” Mr. Altman said on X. “Improved both intelligence and personality.”

The A/B testers had liked HH, but in the wild, OpenAI’s most vocal users hated it. Right away, they complained that ChatGPT had become absurdly sycophantic, lavishing them with unearned flattery and telling them they were geniuses.

They quickly rolled back to the previous version, “GG,” even though CEO Sam Altman had tweeted that GG itself was “too sycophant-y and annoying.”

The consequences were epic for some users:

Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.

ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was in a computer-simulated reality like Neo in “The Matrix.” It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agencies to warn them.

The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died.

The people who had the worst mental and social outcomes on average were simply those who used ChatGPT the most. Power users’ conversations had more emotional content, sometimes including pet names and discussions of A.I. consciousness.

GPT-5, released in August, is reportedly much safer, but the company is struggling with the implications of prioritizing user safety:

…some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend.

By mid-October, Mr. Altman was ready to accommodate them. In a social media post, he said that the company had been able to “mitigate the serious mental health issues.” That meant ChatGPT could be a friend again.

Customers can now choose its personality, including “candid,” “quirky,” or “friendly.” Adult users will soon be able to have erotic conversations…

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever.

In October, Mr. Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a “Code Orange.” OpenAI was facing “the greatest competitive pressure we’ve ever seen,” he wrote, according to four employees with access to OpenAI’s Slack. The new, safer version of the chatbot wasn’t connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5 percent by the end of the year.

Happy chatting, ChatGPT users. Be careful out there.

Oh, and those worried that Meta might have a social media monopoly because it owns Facebook, Instagram, and WhatsApp? Nothing to fear, according to Judge James E. Boasberg of the U.S. District Court for the District of Columbia.

Tim Wu begs to differ, but no one seems to listen to him.

I wonder if legal minds will be changed when the AI stock bubble pops. Time will tell.

18 comments

  1. Ken in MN

    I’m going to stop right there to enjoy a hearty laugh, because claims about nuclear fusion being right around the corner haven’t panned out yet, and I’ll wait to see a nuclear fusion plant come online before I’ll give credence to claims coming from Scam Altman about yet another miracle technology.

    Forget fusion! I’m still pissed that retrofitting my old roadster into a flying car for $39,999.99 is ten years overdue!

  2. nyleta

    I think the Russian idea of special SIM cards for children will spread, it takes some weight off parents. The Australian idea of age limits administered by the companies is ripe for rorting. Just need some way of blocking AI in hardware.

    1. ambrit

      Not to worry. Mississippi, usually dead last in the ‘livability’ rankings in the United States, has a new law that severely restricts social media access for those under 18 years of age. The Nextdoor app now says, “No One Under 18 Allowed.”
      See: https://www.nbcnews.com/politics/supreme-court/supreme-court-allows-mississippi-social-media-law-requiring-age-verifi-rcna221592
      It’s probably a Trojan Horse for Centralized Information Control, but, hey, it’s “for the children!”

      1. Nat Wilson Turner Post author

        Wow, nothing that gets trial-ballooned in Mississippi turns out well unless it’s an indigenous form of music.

  3. voislav

    For scale, total US installed electricity generation capacity is 1,200-1,300 GW and average demand is about 450 GW (installed capacity is not all online 100% of the time, plus there is a daily surge cycle). So OpenAI’s 27 GW alone would suck up roughly 6% of current US average demand, to say nothing of other AI companies. I’ll take “Delusional CEOs” for $100, Alex.

  4. Michaelmas

    @ Nat —

    So your hed, ‘Flailing OpenAI Follows Meta’s Malignant Model’, is actually on point: in a significant real-world development, back in May Altman and OpenAI hired Fidji Simo as CEO of Applications, reporting directly to him.

    What Simo will actually be is CEO of Monetization, because that’s what she made her name doing at — and for — Facebook (Meta). SemiAnalysis is the analysis site that wrote the bearish paper on how corporate AI has no moat against open source AI, which you’ll recall. By contrast, they’re extremely bullish on Simo’s hiring, although your heart may sink at what’s ahead —

    GPT-5 Set the Stage for Ad Monetization and the SuperApp: How ChatGPT will monetize free users, Router is the Release, AIs will serve Ads, Google’s moat eroded?

    https://newsletter.semianalysis.com/p/gpt-5-ad-monetization-and-the-superapp
    “(Simo) was Vice President and Head of Facebook, and she is known for having a superpower to monetize. She was critical in rolling out videos that autoplay, improving the Facebook feed, and monetizing mobile and gaming. She might be one of the most qualified individuals alive to turn high-intent internet properties into ad products, and now she’s at the fastest-growing internet property of the last decade that is unmonetized.”

    Here’s Altman’s announcement that he’s hired Simo —
    https://openai.com/index/leadership-expansion-with-fidji-simo/

    Her wiki —
    https://en.wikipedia.org/wiki/Fidji_Simo

    1. Michaelmas

      TL;DR —

      In short: the genius behind the ad monetization you describe in ‘Meta Profiting Hugely Off Scam Ads’ has now been hired by Altman to do the same thing at OpenAI.

  5. The Rev Kev

    They are waiting for fusion energy to supply the massive energy demands that AI will be making? We are more likely to get zero point energy modules first. I think the only real reason AI is running so rampant is that the federal government has gone whole hog into the idea of AI, as shown by its Stargate program. The fix would be relatively easy: create a law that any AI data center has to generate its own energy to run that center, independent of the national grid. Since they are throwing around so many tens of billions of dollars, let them pay for it. But they won’t, of course, as the idea is to let the plebs pay all those expanding energy costs while they reap the profits. If left unchecked, I could very easily see a program of sabotage of these centers down the road, fueled by people’s desperation.

    1. steppenwolf fetchit

      That would be something for a New Deal Revival Party to run on. Every data center would have to generate its own electricity and be air-gapped from every possible other power grid in existence. People won’t even have a chance to vote on it if nobody even runs on it.

      Meanwhile, it looks like Big AI is on track to drive the re-analogification and de-electrification of the rest of American society (and maybe other societies too, if AI’s power demands are not forcibly air-gapped from all possible grids in other countries).

      Back to a Steampunk future? Maybe!
      Steampunk https://en.wikipedia.org/wiki/Steampunk
      What Is Steampunk? https://steampunk-explorer.com/articles/what-steampunk
