Coffee Break: Nihilists at Google Wrecking Existing Business to Chase AI Growth

In 2016 Google CEO Sundar Pichai announced that “in the long run, we’re evolving in computing from a ‘mobile-first’ to an ‘AI-first’ world.”

In 2018, he famously told Kara Swisher that AI is “one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire.”

It’s generally overlooked that he opened that statement by saying “We don’t take a very optimistic view of AI” and closed by responding to Swisher’s comment that “fire is pretty good” with “(fire) kills people too, we have learned to harness fire for the benefits of humanity but we have to overcome its downsides too.”

And yet, just over four years later Pichai, by then also CEO of Google’s parent company, Alphabet, issued a ‘code red’ and directed multiple department heads including those in charge of research, trust, and safety “to switch gears to assist in the development and launch of AI prototypes and products.”

This directional switch took place only three weeks after the launch of OpenAI’s ChatGPT product.

At the time, The New York Times warned that “because these new chat bots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact” and “that could turn people against Google and damage the corporate brand it has spent decades building.”

The Times also quoted University of Washington professor Margaret O’Mara saying, “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”

That was in late 2022. As of mid-2025, it is clear that Pichai has damaged Google’s brand and its core search product in his rush to compete with OpenAI.

As I tried to understand what Pichai was doing, my initial framework was Cory Doctorow’s “enshittification” concept defined as when “vendors create high-quality offerings to attract users, then they degrade those offerings to better serve business customers, and finally degrade their services to users and business customers to maximize profits for shareholders.”

Over the past year, Ed Zitron’s analysis of Google’s decision-making has deepened my understanding of WTF Pichai is up to. I’ll come back to Zitron in a moment, but for those new to his work, I’d recommend giving these three pieces a quick skim:

  1. The Rot Economy
  2. The Man Who Killed Google Search
  3. The Era Of The Business Idiot

I’ve also found pioneering AI researcher Gary Marcus’ work very useful — he flagged most of the limitations of LLM AI decades ago and has been consistently vindicated in his skepticism that the current OpenAI and Alphabet approach to AI will truly prove “as important as fire.” We’ll come back to Marcus too.

But first I want to touch on an idea that Naked Capitalism readers have seen recently in another context via Curro Jimenez: nihilism.

Curro was quoting the French sociologist Emmanuel Todd’s analysis of Israel’s recent behavior in Gaza and Iran and this quote jumped out at me: “Perhaps in the unconscious depths of the Israeli psyche, being Israeli today is no longer about being Jewish — it’s about fighting the Arabs.”

I would argue that for Pichai and those remaining at Google under his leadership, being Google today is no longer about being the dominant search engine — it’s about fighting OpenAI/ChatGPT.

On Wednesday, I’ll be applying Todd’s formulation to the current leadership of the Democratic Party, which is no longer about being the party of the working class but about fighting off the party’s progressive wing.

Now, let’s dive into Ed Zitron’s explanation of why Google is willing to risk its incredibly lucrative core search business and maybe even destroy the open web itself in a quest to compete with OpenAI.

Zitron is an effective polemicist so let’s let him cook a little:

Google no longer provides the “best” result or answer to your query – it provides the answer that it believes is most beneficial or profitable to Google. Google Search provides a “free” service, but the cost is a source of information corrupted by a profit-seeking entity looking to manipulate you into giving money to the profit-seeking entities that pay them.

The net result is a product that completely sucks.

…That’s because Google has, like every major tech company, focused entirely on what will make revenues increase, even if the cost of doing so is destroying its entire legacy. Google has announced their own “Bard AI” to compete with Bing’s ChatGPT integration, and I’ll be honest – I feel a little crazy that nobody is saying the truth, which is that Google broke the product that made them famous and is now productizing fixing their own problem as innovation.

Venture capital and the public markets don’t actually reward or respect “good” businesses or “good” CEOs – they reward people that can steer the kind of growth that raises the value of an asset. …Sundar Pichai isn’t paid $280 million a year because he’s a “good CEO.” After all, Google has all but destroyed its search product. He’s paid because he finds ways to increase the overall growth of the company (even while their cloud division still loses money), and thus the stock goes up.

Zitron expanded on these ideas in his “Business Idiot” piece:

Our economy is run by people that don’t participate in it and our tech companies are directed by people that don’t experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

The incentives behind effectively everything we do have been broken by decades of neoliberal thinking, where the idea of a company — an entity created to do a thing in exchange for money — has been drained of all meaning beyond the continued domination and extraction of everything around it, focusing heavily on short-term gains and growth at all costs. In doing so, the definition of a “good business” has changed from one that makes good products at a fair price to a sustainable and loyal market, to one that can display the most stock price growth from quarter to quarter.

Ewan Morrison elegantly summarized why Pichai’s commitment to AI is so risky to Google’s search business and the open web itself:

And like Captain Ahab, Pichai knows he must have a crew just as dedicated to his suicidal mission as he is. He might not be nailing a gold coin to the mast and demanding loyalty oaths, but he is aggressively pushing buy-out offers on Googlers who are not down with the program:

Let me wrap with a couple of paragraphs from Freddie de Boer about the issues the whole of tech faces and why it thinks AI is the answer:

the hype is a phenomenon driven by needs that are fundamentally financial in origin. The tech companies need a new suite of products that can restore their eroding profitability and inspire the public the way that the public was inspired in the late 2000s and early 2010s; the financial sector and investors need the tech companies to be the unicorn stocks that they once were. As usual with speculative capitalism, the tail is wagging the dog. When hockey stick growth does not emerge naturally from reality, it will be invented.

Media hype about AI, the almost literal absence of any countervailing narrative, the relentless way that the stodgiest publications inflate the threats/hopes that have been invested in these technologies… it’s all downstream of the desire to feel about tech the way people did in the Obama era and the demand from the moneyed to have the right lottery tickets to buy. A lot of people who are used to getting what they want are looking to let the good times roll again, and “AI” is precisely the kind of vague instrument into which they can throw those hopes. Anyone who points out that the emperor has no clothes is simply told that they don’t understand the technology, and every year that goes by that human life is not seriously disrupted by this technology is just a minor delay. Rinse and repeat.

In future posts, I’ll look at other players in the AI space — OpenAI, Meta, Microsoft, Amazon, and Apple.


15 comments

  1. ciroc

    Tech billionaires have debunked the myth that private companies produce the best products. They charge the highest prices for the lowest quality products. The solution is clear: nationalize search engines and AI.

    1. cfraenkel

      Hey, it worked for Larry Ellison / Oracle. Everyone else is just following in his footsteps.

      Some would point to IBM as the original innovator in this space.

      The rot goes way back.

    2. Vicky Cookies

      You can nationalize the industry, or the industry can privatize the government. As things stand, the top echelons of the tech giants see their interests largely aligned with the ideologues and bureaucrats and power-seekers in the state apparatus. A new trend is top dudes at Palantir and other companies joining the army (and immediately being promoted to Lt. Col.). It goes both ways.

  2. Hickory

    The US definitely has an issue with short-term thinking, and that influences how the gov’t and companies interact with AI, but there’s a lot more going on. Russian President Putin said he was treating AI as on par with nukes in strategic importance. The capacity for AI-enabled weapons to transform war means those who don’t have it are severe underdogs. And while there’s a lot of “ai slop” there is a lot of genuinely impressive capability that is consistently improving, even with occasional bad updates, enshittification, short-term incentives, and so on. Self driving cars are a real thing, esp in China. Autonomous drone swarms that self-select their targets are real. AI isn’t going away, and it will only get better (at least for the paying customers, if not for consumers of “free” services who are actually the ‘product’).

    Just because US companies are structurally stupid (because the owners enforce short term thinking on executives) doesn’t mean AI is not a game changer.

    Fundamentally, what is Google? A search engine! What does a search engine do? It connects people with the information or resources they seek. That is what ChatGPT does, but in a much more engaging, direct way (at least by many users’ standards). To get the ‘answer’ directly instead of sifting through industry technical standards, endless usenet forums, news website archives, or whatever data source – that’s a big shift, and clearly it’s possible even if now it’s not yet super reliable.

    That’s why one of Google’s top execs recently said something like “we will go bankrupt before we lose the AI race”. Their search was already degrading years ago, and financially they were doing just fine. But this LLM chat interface really could eat their lunch, and they see that. It sounds like they don’t know how to preserve their ad biz just yet, but they know they need to be at the forefront of this new tech and rebuilding their revenue streams to incorporate it, not holding on to an old model that just won’t be as competitive. There are a lot of reasons to be upset with Google, but their embrace of LLM-based interactions at the search page, backed by their traditional search engine, seems smart to me. Separate from the chat interface is LLMs’ reasoning capability, or ability to connect disparate data points in meaningful ways. That will transform search. Google would be foolish not to go headfirst into this and get the best people and capabilities early on.

    Don’t get me wrong, I think AI is a disaster for humanity and it’s already having lots of terrible impacts, as NC had documented. But I think ragging on Google for trying to jump on this new tech is not acknowledging the situation they’re in, or the likely future abilities of the tech.

    1. cfraenkel

      Google search sucks because they refused to filter out the spam, because the spam was what brought in the revenue.

      However: “It connects people with the information or resources they seek. That is what Chatgpt does, but in a much more engaging direct way (at least by many users’ standards). To get the ‘answer’ directly”

      The ‘answer’, right or wrong, with no way to evaluate its veracity. At least you were honest enough to put it in quotes.

      If this is the direction society is going, we’re doomed. The FDA evaluating drug safety on what AI hallucinates from the swamp of anti-vax dreck? EPA deciding pollution rules based on the weight of anti-regulation spam out there? Building codes revised to reflect opinion pieces by developers and material vendors?

      Because that’s all that AI is – it’s running a popularity contest to see what the most probable ‘answer’ is for any input. That the results (for now) seem reasonable is just because the spam hasn’t had a chance to overwhelm the valid, human-generated content. Just wait.
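A toy sketch of that “popularity contest” (with entirely made-up counts; real LLMs sample from learned token probabilities rather than raw corpus tallies, but the dynamic is the same): the model favors whatever continuation appears most often in its training data, so flooding the corpus with spam shifts the “answer”.

```python
from collections import Counter

def most_probable(counts: Counter) -> str:
    # The "popularity contest": return the single most frequent continuation.
    return counts.most_common(1)[0][0]

# Hypothetical counts of continuations seen after some prompt in a corpus.
corpus_counts = Counter({"human-written answer": 120, "spam answer": 95})
print(most_probable(corpus_counts))   # the valid answer wins...

corpus_counts["spam answer"] += 50    # ...until spam floods the corpus
print(most_probable(corpus_counts))   # now the spam answer wins
```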

    2. XXYY

      “Self driving cars are a real thing, esp in China. Autonomous drone swarms that self-select their targets are real.”

      These examples are based on pattern matching, which is something that certain AI technologies can do pretty well. Don’t confuse these with LLM applications, which try to find the most probable series of human-authored tokens in their training data to correspond to a prompt.

      I don’t anticipate anything good or useful coming out of LLMs, partly because of their high error rate, and partly because of their tendency to corrupt future training data.

  3. voislav

    Google’s P/E is down to 18, lower and more normal compared to the rest of the tech sector (Apple, Microsoft, and NVIDIA are all around 30-40; Meta is 25). So investors are increasingly valuing the stock based on actual revenue, not some nebulous growth potential.

    The AI race is all about restoring the perception of these tech stocks as having growth potential and pushing the P/E ratios back up without creating actual revenue. It’s a short-term game because you can only maintain the illusion for so long, but long enough to create substantial stock price growth for Pichai and his buddies to collect their bonuses and cash out.

  4. Michael Fiorillo

    Computer sub-literate here, so it’s a given that my opinion/analysis transacts at a very high discount, but I still have to rhetorically ask: if AI is being trained on an ever-increasing mass of Internet data, ever more of which is trash/increasingly enshittified, then isn’t it inherently entropic? We know all too well the quality of most Internet content, and I assume its volume exceeds the total of pre-digital knowledge/information, growing daily if not hourly.

    As humans become ever-more passive and stupefied by tech, why is AI not doomed to become the equivalent of a fiftieth-generation photocopy, or Gresham’s Law applied to information science?
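The fiftieth-generation-photocopy worry has a simple mechanical core: each new “generation” trained only on samples of the previous one tends to lose its rarer content. A toy drift simulation (made-up data, not a real training pipeline) illustrates the effect:

```python
import random

random.seed(0)

# A diverse starting "corpus" of human-written answers.
corpus = ["a"] * 50 + ["b"] * 30 + ["c"] * 20

def next_generation(corpus, size=100):
    # Each generation is built only from samples of the last one;
    # resampling with replacement gradually drops the rarer answers.
    return [random.choice(corpus) for _ in range(size)]

for _ in range(200):
    corpus = next_generation(corpus)

# Diversity can only shrink or hold; over many generations it drifts toward 1.
print(len(set(corpus)))
```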

    1. cfraenkel

      See, it doesn’t take an engineering degree to see the basic fallacy at play! You are entirely correct.

    2. JCC

      Yes. Essentially it is exactly what it says it is: a probability engine based on a Large Language Model database that includes all inputs stored in that database, good or bad, scientifically valid and invalid, copyright-protected or not.

      And now I will break for lunch… hopefully the wood glue has kept the cheese from sliding off my pizza and onto the broiler floor.

  5. Rolf

    Thank you for this post. I find much of what Ed Zitron has to say, e.g., on tech companies and their CEOs in general, fairly perceptive. He identifies the fundamental problem of scaling in gen AI (similar to that which Yves has identified for blockchain), in his post There Is No AI Revolution:

    Putting aside the hype and bluster, OpenAI — as with all generative AI model developers — loses money on every single prompt and output. Its products do not scale like traditional software, in that the more users it gets, the more expensive its services are to run because its models are so compute-intensive.

    And later,

    The only product that OpenAI has succeeded in scaling to the mass market is the free version of ChatGPT, which loses the company money with every prompt. This scale isn’t a result of any kind of product-market fit. It’s entirely media-driven, with reporters making “ChatGPT” synonymous with “artificial intelligence.”
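Zitron’s scaling point reduces to simple arithmetic: when the marginal cost per prompt is positive and revenue per free user is zero, losses grow with adoption rather than shrinking. A back-of-envelope sketch with made-up numbers (integer cents, to keep the math exact):

```python
def monthly_loss_cents(users, prompts_per_user, cost_cents_per_prompt,
                       revenue_cents_per_user=0):
    # Loss = users * (compute cost of their prompts - what they pay).
    return users * (prompts_per_user * cost_cents_per_prompt
                    - revenue_cents_per_user)

# Hypothetical: 1M free users, 30 prompts/month, 1 cent of compute per prompt.
print(monthly_loss_cents(1_000_000, 30, 1))  # -> 30000000 cents = $300k/month
# Tripling free users triples the loss -- scale makes it worse, not better.
print(monthly_loss_cents(3_000_000, 30, 1))  # -> 90000000 cents = $900k/month
```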

  6. Jason Boxman

    “still working at Google at this point” might also mean that most workers, even highly paid tech workers, are still workers who rely on a paycheck and have to make choices in life regarding what kinds of principles you can eat and what kinds you cannot.

    Oh, what a world where we could all say fock you to the American capitalist elite and just quit this exploitative system en masse.

  7. Cato the Uncensored

    I occasionally use AI to double-check that I have covered most or all of the relevant bases of subjects I once knew well but haven’t looked at for some stretch of time.

    It’s truly amazing how much BS hallucination gets injected as fact, which someone with even a rusty degree of subject matter expertise might easily catch, but think of the poor souls who used AI to breeze through their education or training, and who haven’t developed the critical eye to help them distinguish actual fact from AI fact.

    AI is the super-highway to Idiocracy.

  8. Zephyrum

    Excellent post and comments.

    For my part I did a Google search on Ahab’s gold coin because high school was long ago and I wanted a refresher. The Wikipedia article on the Moby Dick coin is pretty good, but I wanted to go a bit deeper. That led me to using ChatGPT with the following transcript: https://chatgpt.com/share/6859bdda-f50c-8004-940c-f7a995af0e2d showing it’s not just math errors you have to watch out for.

    I just find it remarkable that this glib AI pretends to know so much, spins unsupported theories, asserts easily discovered falsehoods, and is obsequious but not regretful when confronted. Reminds me of certain annoying people from high school. Back then people would have blamed the parents, which seems even more justified here and now.

