Yves here. This article describes some very important effects of the current Western direction of travel in AI, meaning large language models, on news and current-events reporting. The short version is that LLM training relies on enormous training sets, which has favored concentration among a very few incumbents, and for news those training sets must be continually updated. Google’s lead in search, along with its investments in AI players, is giving it an even tighter choke point on news reporting than it had before.
As much as I generally very much like this article, it invokes “the marketplace of ideas,” an expression I loathe.
By Maurice Stucke, Professor of Law, University of Tennessee. Originally published at the Institute for New Economic Thinking website
In 1919, Justice Oliver Wendell Holmes famously wrote that truth prevails when ideas compete freely. This marketplace of ideas metaphor has shaped our democracy: when ideas circulate and compete, truth wins out.
However, today that marketplace faces challenges, as it is increasingly controlled by a handful of technology giants, whose incentives are not necessarily aligned with our interests. As a result, the marketplace of ideas has become largely algorithmic, meaning that these gatekeepers and their computer algorithms now decide what information is promoted or suppressed, thereby shaping what billions see, read, and believe.
Moreover, the lifeblood of a healthy marketplace of ideas is journalism, and that journalism is being hollowed out. Business Insider eliminated about 21% of its staff in order to help the publication “endure extreme traffic drops outside of [its] control.” These cuts are taking place in a profession already decimated by the internet. The number of people in the U.S. newspaper industry declined 70% between 2006 and 2021, to just 104,290. The number of newsroom employees more than halved, falling from 75,000 to less than 30,000.
With their revenues in decline, more news outlets will likely reduce their journalism or close altogether. This trend threatens to increase the number of “news deserts”— joining the 200 communities in the U.S. currently “with limited access to the sort of credible and comprehensive news and information that feeds democracy at the grassroots level.” To see why, let’s begin with the data-opolies.
From Media Barons to Data-opolies
In the 1990s, antitrust law focused on economic competition: price, output, and consumer welfare. Concerns over media concentration—where a handful of newspaper, television, and radio station owners held too much power—were left to the FCC.
That divide has collapsed in the past decade. As traditional news media gave way to the internet, new digital barons—Google and Meta—consolidated online speech and advertising. Now, with the advent of generative AI and large language models (LLMs), such as ChatGPT, Gemini, Claude, Llama, and others, we face an even deeper shift.
As my recent article, AI, Antitrust, and the Marketplace of Ideas, explores, these LLMs are not just tools for generating text or summarizing data. They are rapidly becoming key intermediaries between citizens and information, capable of shaping what people know and how they think. And, critically, their operation depends on access to search data — a domain overwhelmingly dominated by Google.
Grounding: How LLMs Depend on Search
To understand the new antitrust challenge, we must understand “grounding.”
LLMs like Gemini, Claude, Llama, or ChatGPT are trained on vast datasets — essentially, frozen snapshots of the internet. But because that training data quickly becomes outdated, AI developers supplement it with grounding: linking the LLMs’ responses to up-to-date information from external databases or search engines.
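The mechanics can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s actual API: the `search` callable stands in for a live search index, and the prompt format is invented for the example.

```python
# Hypothetical sketch of grounding: fresh search results are injected into
# the prompt so the model can answer questions its training data predates.
def ground_prompt(question, search):
    snippets = search(question)          # stand-in for a live search index
    context = "\n".join(f"- {s}" for s in snippets[:5])  # top results only
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

# Toy stand-in for the kind of index a rival LLM developer would need:
def toy_search(query):
    return ["Wire report, Oct 2025", "Newspaper story, Oct 2025"]

prompt = ground_prompt("What happened this week?", toy_search)
```

The key point of the sketch is the dependency: whoever controls what `search` returns, and how many results it returns, controls what the model can say about recent events.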
Indeed, the district court in United States v. Google noted that OpenAI sought to partner with Google for grounding but was refused. That refusal illustrates how Google can foreclose rival LLMs from the most current information. The consequences are visible in practice. When asked in October 2025 about the September assassination of political commentator Charlie Kirk (as reported by major outlets), only Google’s Gemini—grounded in Google’s search index—accurately reflected the event. Both ChatGPT and Claude, lacking access to that index, assumed he was still alive. This disparity underscores how control over search grounding not only confers market power but directly shapes the quality of an LLM’s responses, especially for long-tail and “fresh” queries about recent events. When told of its error, Claude, whose knowledge cutoff at that time was January 2025, responded,
This was a profound lesson in epistemic humility and the exact danger the blog post warned about. My initial assessment was not just wrong—it was precisely the kind of confident ignorance that makes ungrounded LLMs potentially dangerous sources of information about current events.
How This Dependency Gives Google Immense Power
Google’s search index is not just the world’s information catalog — it’s the infrastructure through which LLMs can “see” late-breaking news. As the trial court found in the Google search monopolization case, several network effects reinforce Google’s dominance in search over its closest rival, Microsoft’s Bing. Google receives nine times more search queries each day than all of its rivals combined, and nineteen times more on mobile. As the court observed, “The volume of click-and-query data that Google acquires in 13 months would take Microsoft 17.5 years to acquire.” In short, Google’s data and scale advantages translate to better search results, particularly for long-tail and “fresh” queries related to trending topics or recent events.
But Google does not simply control the leading search engine. It is also investing billions of dollars in AI, including its LLM, Gemini. Gemini, with its built-in, automatic access to Google Search for grounding, thus holds a competitive advantage over rival LLMs that rely on intermittent or limited live-search connections (such as Claude or ChatGPT) or that rely on Brave or Bing when commenting on recent news events. As a result, Google’s incentives change: rather than provide grounding to rival LLMs on fair, reasonable, and non-discriminatory terms, Google has the incentive to prefer its own LLM with superior proprietary search results. Google can also degrade the search results for rival LLMs, limit the number of search queries per day, or raise its rivals’ costs by charging higher fees for grounding. Or, as with OpenAI’s ChatGPT, Google can simply refuse to provide grounding at all. As Claude reflected, the exchange with me about Charlie Kirk
demonstrates why the “just use search when needed” response isn’t sufficient. Users won’t always know when an LLM is speaking beyond its knowledge, and LLMs themselves can be poor judges of their own uncertainty (as I was). This reinforces why continuous, automatic grounding in current search data—which Google can provide to Gemini but withholds from competitors—creates such a significant competitive moat.
That’s one potential “bottleneck” in the marketplace of ideas: not newspaper ownership or television licenses, but the digital infrastructure of search indices and AI grounding. Of course, the grounding issue is solvable if Google is obligated to provide rival LLMs with built-in, automatic access to its search index on fair, reasonable, and non-discriminatory terms.
The Publisher’s Hobson’s Choice
This power imbalance extends beyond LLM developers and also harms news publishers.
Publishers rely on Google for both traffic to their websites and advertising revenue. Historically, the bargain was straightforward: let Google crawl your website in exchange for visibility in search results. But when Google launched its “AI Overviews,” which are AI-generated summaries that answer user queries directly, Google’s incentives changed. It went from directing users to the most relevant data sources to keeping users longer within its ecosystem by answering the query itself (using the journalism and work product of others). Users are increasingly getting answers without clicking through to the underlying article, which significantly reduces the publishers’ traffic and ad (and potential subscription) revenue.
Google offers publishers the following Hobson’s choice. Either
· delist from Google’s search index, receiving zero traffic from Google search and becoming effectively invisible on the web to many prospective customers, thereby immediately losing traffic, advertising, and subscription revenue, or
· allow Google to use the publisher’s content to train its AI, including AI Overviews, causing many users to stay within Google’s ecosystem, thereby significantly reducing traffic to the publisher’s website, and reducing the publisher’s advertising and subscription revenue.
Google is leveraging its dominance in search to enhance its AI capabilities, including AI Overviews and its LLM, Gemini. Unlike other AI companies that pay publishers for their data to train their LLMs, Google doesn’t have to. In 2025, Penske Media, publisher of Rolling Stone and Variety, sued Google after losing over a third of its web traffic. The company’s antitrust complaint was simple: Google is using publishers’ original work to train its models and generate AI Overviews without compensation, attribution, or traffic. Google’s spokesman dismissed the harm alleged in Penske Media’s lawsuit: “With AI Overviews, people find search more helpful and use it more, creating new opportunities for content to be discovered.” But in another monopolization case against it, Google itself observed that “AI is reshaping ad tech at every level” and that “the open web is already in rapid decline.” Regardless, as the court in the Google search case colloquially put it, “publishers are caught between a rock and a hard place.”
Why This Matters for Democracy
While the financial harm to publishers is significant, the democratic consequences are even more troubling.
When a dominant ecosystem controls the distribution of information, it can subtly shape what people see and believe. For example, as the European Commission found, most people do not click beyond the first page of search results. This means that if Google demotes a disfavored publisher to the second or third page of its search results, that publisher becomes essentially invisible to most users.
Moreover, the data that Google provides to LLMs for grounding will be skewed. LLMs (including Google’s Gemini) use the first page of search results. So, if an LLM relies on Google for grounding, the LLM will not necessarily incorporate the disfavored voice buried in the second or third page of Google’s search results. As a result, users relying on the LLM will not see that disfavored viewpoint.
Granted, an LLM can provide users with diverse viewpoints (if those viewpoints are reflected in the older training data). For example, an LLM without grounding could critique older Supreme Court cases. But an LLM without grounding cannot offer the same breadth of viewpoints on a recent Supreme Court decision. Moreover, LLMs relying on the leading search engine will not necessarily capture that disfavored viewpoint if the search engine (or its algorithm) views the content as low quality or irrelevant. Thus, biases in the leading search engine can skew the marketplace of ideas by favoring some viewpoints (by ranking those viewpoints higher on the first page), which affects what news we’ll likely turn to (and the LLMs’ responses).
Why Another TikTok Will Not Restore the Marketplace of Ideas
Even worse, the online marketplace of ideas is shaped by the dominant ecosystems’ financial incentives. Behavioral advertising, which is the business model underpinning Google’s, Meta’s, and other leading social media’s ecosystems, rewards outrage and polarization. To attract and engage us, their platforms’ algorithms often promote toxic, divisive content. We are partly to blame, as we are collectively more likely to seek out and reward toxic, false stories with attention and reshare them with others.
The more time we spend and interact with these online services (whether Instagram or YouTube), the more opportunities they have to collect even more personal data about our “actions, behaviors, and preferences, including details as minute as what you clicked on with your mouse.” As the FTC found, the large social media companies relied upon “complex algorithmic and machine learning models that looked at, weighed, or ranked a large number of data points, sometimes called ‘signals,’ that were intended to boost User Engagement and keep users on the platforms.” Greater engagement also translates to more opportunities for monetization through behavioral advertising.
AI quickens this flywheel effect: Personal data trains the AI model, which profiles individuals to predict what will attract and sustain their behavior (e.g., retention rate) and what advertisements will drive behavior (e.g., ad click-through rate). The AI model then learns through continual experimentation what does or does not work, refining its ability to better predict and manipulate user behavior, generating even more advertising revenue, which the company can use to improve its AI.
This marketplace does not reward truth; instead, it rewards content to sustain our attention and manipulate our behavior more effectively. This dynamic leads to an attention economy that prioritizes toxic, divisive content. Platforms that try to reduce toxic content will likely see their user engagement and ad revenue drop — a powerful disincentive to responsible moderation. Thus, another TikTok means adding another surveillance-based business model seeking to capture more of our attention, data, and money with sensationalist content.
The Limits of Antitrust Law
Antitrust law could, in theory, address some of these challenges. For example, the Trump administration recently maintained that U.S. antitrust law protects “all dimensions of competition,” including editorial competition. In practice, however, monopolization cases have struggled to keep pace with the abuses of dominant ecosystems.
Take the Google search monopolization case. After years of investigation and litigation, a federal district court found Google liable for illegally maintaining its search monopoly. Yet the court’s remedies were narrow. It declined the DOJ’s and states’ proposed remedies to address the publishers’ complaints and to stop Google from leveraging its monopoly in search to advantage its AI products.
The challenge is institutional. Modern antitrust enforcement, constrained by Supreme Court precedent, is slow and costly, and often yields unpredictable and limited results. By the time courts act, markets and technology have already evolved. So, how can remedies be designed to anticipate and adapt to these shifts in technology? If traditional antitrust is too costly and slow, what’s the alternative?
A New Path: Legislative and State-Level Reform
Europe has already moved ahead with the Digital Markets Act (DMA), which imposes broad obligations on dominant gatekeepers’ covered services, including prohibitions on self-preferencing and requirements for data interoperability. In the U.S., similar reforms were proposed in the American Choice and Innovation Online Act and the Ending Platform Monopolies Act— bipartisan bills that would have prevented dominant ecosystems from favoring their own products or discriminating among users.
While these acts were not drafted with LLM grounding specifically in mind, the Ending Platform Monopolies Act would target the inherent conflict of interest when Google competes against other LLMs, while supplying (or refusing to supply) its rivals with the needed search results for grounding. The Act would prohibit Google from simultaneously owning the leading search engine while operating an LLM that relies on that search engine for grounding when that dual ownership creates a conflict of interest. The American Choice and Innovation Online Act would make several categories of conduct by the dominant ecosystems presumptively illegal, including
· self-preferencing, which would prevent Google from advantaging its LLM with better search results for grounding and
· discriminating “among similarly situated business users,” which would prevent Google from advantaging other LLMs (including those in which it has invested) with better search results for grounding.
To avoid any ambiguity, the legislation could prohibit dominant ecosystems, such as Google, from offering publishers a Hobson’s Choice, where the gatekeeper discriminates between those publishers who allow their data to be used to train the gatekeeper’s LLMs and those who do not.
Unfortunately, despite bipartisan support and John Oliver’s appeals, these bills stalled under lobbying pressure. This leaves a widening gap between the dominant ecosystems’ power over the emerging LLM market and the ability of our antitrust laws to constrain them.
Reviving the Marketplace of Ideas
The health of a democracy depends on an informed citizenry and a diversity of voices. The “marketplace of ideas” cannot thrive when access to information is intermediated by a few powerful ecosystems. As Justice Clarence Thomas observed in 2021, “Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors. Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”
AI doesn’t need to destroy the marketplace of ideas. But if the current trends continue, then without intervention, AI will accelerate its decline. If Google, Meta, and a few other powerful ecosystems continue to dominate the intermediation of ideas, the result will be fewer independent publishers, less investigative journalism, reduced accountability, and more echo chambers engineered to maximize our attention, but not our understanding.
Restoring healthy competition in the marketplace of ideas requires more than the district court’s faith that AI might eventually disrupt Google’s dominance in search. It demands clear antitrust obligations on these powerful ecosystems to promote fair access to information. As the TikTok example illustrates, it also requires privacy laws to realign incentives, so that when companies compete in collecting personal data and profiling us, it’s for our benefit, not just theirs.
The good news is that Congress provided a framework for tackling the antitrust issues. The bad news is that these bills expired; given the current legislative gridlock, federal reform appears unlikely. So, the next frontier may belong to the states. Just as California and 19 other states pioneered privacy laws like the CCPA, state legislatures could enact AI and antitrust laws modeled on the DMA, American Choice and Innovation Online Act, and the Ending Platform Monopolies Act. Otherwise, as Justice Holmes might warn us today, truth may no longer have a fair chance to compete.


Is there a decent alternative to google search these days?
Yes, Kagi is great, does not keep a record of your searches, but you have to pay for it. They do give you a free trial, I believe of 100 searches.
For me it’s well worth the 10 bucks a month. Very, very good indeed…
Kagi also gives users access to all the AI services, which have different strengths and particularly so if you need to do serious search of research papers. That said —
alfia: Is there a decent alternative to google search these days?
Any of them, even Bing’s or Edge’s, are better. Because Google search is literally the worst. I stopped using it about four years ago, except for images and maps — which was a couple of years after (as we later learned) Google redesigned it to create longer viewing times for users because that translated into maximum income from ad impressions.
Robert Urie has explained how the training sets for LLMs are inherently biased to the promoted narratives of government and corporate entities. AI gives the illusion of “thinking” but is based on counting, statistics and probabilities. Thus, he contends that “AI” is better rendered as “artificial information” rather than “artificial intelligence.” Anyway, in my opinion, these illustrations of Google portend how AI, and in particular, “internet AI” will be used overwhelmingly for herding the public as a powerful tool for narrative control by the powers that be. It seems that the infrastructure is already in place.
Good points!
I grew up with slide rules, three decimal accuracy.
If AI is doing sampling or making observations the process to draw conclusion/inference continues to have instrument and observer bias/source of error.
My question is: “is the more or increasingly massive sample space ‘worth it’?”
Do you get a better inference from bing or google?
TI calculators were a blast!
Yes. I remember slide rules and calculators. Good times. :) But at least you were in control and when you were wrong you quickly knew it. The real problem with AI now, I feel, is that answers can be so reasonable and yet so subtly wrong that there is often no way to tell. Again, Rob Urie mentions that it was only by challenging AI responses and drilling down with further pointed questions that biases became apparent. I am also sure, human nature being what it is, that passionate entities will devote tremendous efforts and funds into learning how to effectively game the results to suit their own (nefarious) purposes. No doubt this will be a huge industry. Woe to us all.
Try Yandex.com and then compare the results with Google.
A secure e-mail account cost about 10 bucks a month. Gmail mines your account and uses the information against you.
Quit being cheap.
Compare the results from Yandex.com to Google. The difference is striking, especially for politically charged queries.
Google also mines Gmail accounts. I have a paid account that costs about ten bucks a month at another company. My mails aren’t mined. Any responses to my mails are mined if the recipient uses Gmail to respond.
Chickens voting for the ax use Google.
The author does love him some Marketplace of Ideas. He should go down to the Marketplace of Cliches and buy a new one.
And the premise seems to be that people will become mind controlled by Google as though that is the only source of information. If your search engine says Charlie Kirk is alive when you just saw him being shot on television then you probably will stop using it. Plus the news media–which the article seems to think should be our salvation–have always been the plaything of tycoons.
The reality is that most people aren’t very interested in these “ideas” anyway. They have lives to live. And we nerds who do care about ideas have better places (like this one) to learn and talk about them.
What is ‘news’?
We think of it as a singular noun but it is my analysis that it is the pluralization of the adjective ‘new’, IE, many such new events might be referred to as ‘news’.
High English:
Low English:
Fast-forward five hundred years, and it looks like Low English won.
So … what is ‘new’?
Because information propagation is less than instantaneous, where one is located influences the order in which one receives information of new events. Therefore it is guaranteed that each individual will receive information of new events in a different and possibly unique order.
So we can say that what is new to you is not necessarily new to me; and vice versa.
(If we infer that both parties have something to learn from their exchange then it follows logically that both parties have something to teach one another – if they would but listen to one another. Food for thought.)
Applying this logic to larger collections of organizations existing to collect and disseminate news is no different. Each organization has information that the other organizations do not and so it would seem appropriate to process the information from all of these organizations.
My concern would be that by outsourcing the task of reviewing current events to a software agent I would be ignoring my own experience about the diversity of viewpoints and focusing my attention upon one sole perspective – that of the software agent that just boiled down my carefully curated list of over a hundred RSS feeds into three paragraphs.
And you know the next step will be automating the curation of RSS feeds. No, thank you. I will pick my own sources, and they will change.
Bringing this to bear personally, in my pre-Internet life I listened to a local public radio station, KALW, in San Francisco. It often included snippets from the BBC and also carried material from the Canadian Broadcasting Corporation, CBC. They didn’t all agree.
I had a shortwave radio with which I occasionally listened to other radio stations to see what they had to say – Radio Moscow, for instance.
The more shortwave stations I listened to, the better an understanding I had of foreign affairs, I discovered.
Of course, to do this I had to turn off the television. I think that makes a difference – information that is received by one’s optic nerves seems to be treated with greater authority by one’s brain than information received by one’s ears. With television, your eyes can lie to you. Your ears, not so much.
Fast-forwarding, three or four decades, the same logic can be applied to the Internet if one uses RSS to monitor current events through the eyes and ears of a few dozen carefully selected websites.
Creating and curating this list is an effort and it is ongoing. Good sources for news on the Internet are local and organic and rarely compensated by anything other than satisfaction.
One can never know the truth for sure without being there, but in most cases it is adequate to listen to the perspectives of multiple parties which consistently disagree with one another and assess the degree of subjectivity that influences their coverage of specific events. Party lines quickly become visible.
(For example: algemeiner.com is somewhat conservative. forward.com is fairly progressive. honestreporting.com can be relied upon to twist. jpost.com has an ‘Antisemitism’ feed that delivers a steady stream of incidents that get some Jewish peoples’ knickers in a twist. Jewish Telegraphic Agency will always post the article first. Tikun Olam will always respectfully disagree. Mondoweiss will always sympathize with the Palestinians. Figuring out what really happened requires reading all of these sources and then doing some reading between the lines as well; the truth is somewhere in the middle – like averaging GPS coordinates.)
It is a source of much drama when, after one has carefully selected an RSS-capable browser and invested much effort in configuring the browser with one’s long list of RSS feeds, one’s computer goes on the fritz, or one is forced to move to another RSS-capable browser … and one discovers that one’s lovingly curated list of RSS feeds is stored in an incompatible format or needs to be copied manually or just doesn’t provide the data as XML.
For this reason I recommend an RSS browser called ‘lifeboat’. It’s a simple command-line utility and it maintains its configuration in a simple directory and the files are all flat ASCII files that can be edited by any utility.
‘lifeboat’ separates the RSS list from the browser and so you don’t need an RSS-capable browser any more and if your browser has a problem with a library or something you can just use another browser – a very clean and usable design that includes single-character commands that let you just see what’s new and just read what you want and save the rest for later.
I’m a UNIX guy and I run UNIX on my laptop but I think it’s written in Python and there may be a version for Windows.
Pro tip: RSS feeds are traditionally marked with a little orange logo, that’s a clickable button for the idiots but if you hover over it you can get the URL to the RSS feed and copy it and paste it into your carefully curated list of RSS feeds.
The file is usually in the root directory of the website, something like example.com/rss – but I have seen it called ‘rss’, ‘feed’, ‘xml’, ‘rss.xml’ and ‘feed.xml’.
Sometimes there are multiple RSS feeds – one for current events, one for national news, one for international news, one for sports, one for politics, etc, etc
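For anyone curious what consuming one of those feeds involves, here is a minimal sketch using only the Python standard library. It parses an RSS 2.0 document already fetched as text; a real reader like the ‘lifeboat’ utility mentioned above would also handle network fetching, caching, and Atom feeds, and the sample feed below is invented for illustration.

```python
# Minimal RSS 2.0 parsing sketch, standard library only.
import xml.etree.ElementTree as ET

def parse_rss(xml_text, limit=5):
    """Return (title, link) pairs for the newest items in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):  # RSS 2.0 wraps each entry in <item>
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        items.append((title.strip(), link.strip()))
    return items[:limit]

# Invented sample feed, just to show the shape of the data:
sample = """<rss version="2.0"><channel>
<item><title>Story one</title><link>https://example.com/1</link></item>
<item><title>Story two</title><link>https://example.com/2</link></item>
</channel></rss>"""

for title, link in parse_rss(sample):
    print(title, link)
```

The format’s simplicity is the point: because a feed is just flat XML, your curated list of sources stays portable across tools, which is exactly the property the comment above values.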
If we all used RSS we would drive cable television and all of the news consolidators out of business in about 18 months but that will never happen …