Conor here: Wonderful. More surveillance capitalism marketing strategy to sustain the AI bubble. But at least it pushes the following train, as summarized by Edward Ongweso Jr., a little further down the track:
Their goal is not to realize AGI or radically improve life for humanity, but to reallocate capital such that it enriches themselves, transmutes their wealth into even more political power that imposes constraints on countervailing political forces, and liberates capitalism from its recent defects (e.g. democracy), consolidating benefits to its architects regardless of the actual social utility of the technologies they pursue.
On the topic of social utility, what on earth is “convenient,” as the authors claim, about having “AI” surveil you and create a narrowed menu of options at inflated prices?
On Tuesday, OpenAI also released its web browser, Atlas, which will surveil and direct browsing sessions while using all the data it takes from you for training. Apparently the convenience here is that it largely spares you the time-consuming task of copying and pasting.
By Yuanyuan (Gina) Cui, Assistant Professor of Marketing at Coastal Carolina University, and Patrick van Esch, Associate Professor of Marketing at Coastal Carolina University. Originally published at The Conversation.
Your phone buzzes at 6 a.m. It’s ChatGPT: “I see you’re traveling to New York this week. Based on your preferences, I’ve found three restaurants near your hotel. Would you like me to make a reservation?”
You didn’t ask for this. The AI simply knew your plans from scanning your calendar and email and decided to help. Later, you mention to the chatbot needing flowers for your wife’s birthday. Within seconds, beautiful arrangements appear in the chat. You tap one: “Buy now.” Done. The flowers are ordered.
This isn’t science fiction. On Sept. 29, 2025, OpenAI and payment processor Stripe launched the Agentic Commerce Protocol. This technology lets you buy things instantly from Etsy within ChatGPT conversations. ChatGPT users are also scheduled to gain access to over 1 million Shopify merchants, from major household brands to small shops.
As marketing researchers who study how AI affects consumer behavior, we believe we’re seeing the beginning of the biggest shift in how people shop since smartphones arrived. Most people have no idea it’s happening.
From Searching to Being Served
For three decades, the internet has worked the same way: You want something, you Google it, you compare options, you decide, you buy. You’re in control.
That era is ending.
AI shopping assistants are evolving through three phases. First came “on-demand AI.” You ask ChatGPT a question, it answers. That’s where most people are today.
Now we’re entering “ambient AI,” where AI suggests things before you ask. ChatGPT monitors your calendar, reads your emails and offers recommendations without being asked.
Soon comes “autopilot AI,” where AI makes purchases for you with minimal input from you. “Order flowers for my anniversary next week.” ChatGPT checks your calendar, remembers preferences, processes payment and confirms delivery.
Each phase adds convenience but gives you less control.
The Manipulation Problem
AI’s responses create what researchers call an “advice illusion.” When ChatGPT suggests three hotels, you don’t see them as ads. They feel like recommendations from a knowledgeable friend. But you don’t know whether those hotels paid for placement or whether better options exist that ChatGPT didn’t show you.
Traditional advertising is something most people have learned to recognize and dismiss. But AI recommendations feel objective even when they’re not. With one-tap purchasing, the entire process happens so smoothly that you might not pause to compare options.
OpenAI isn’t alone in this race. In the same month, Google announced its competing protocol, AP2. Microsoft, Amazon and Meta are building similar systems. Whoever wins will be in a position to control how billions of people buy things, potentially capturing a percentage of trillions of dollars in annual transactions.
What We’re Giving Up
This convenience comes with costs most people haven’t thought about.
Privacy: For AI to suggest restaurants, it needs to read your calendar and emails. For it to buy flowers, it needs your purchase history. People will be trading total surveillance for convenience.
Choice: Right now, you see multiple options when you search. With AI as the middleman, you might see only three options ChatGPT chooses. Entire businesses could become invisible if AI chooses to ignore them.
Power of comparing: When ChatGPT suggests products with one-tap checkout, the friction that made you pause and compare disappears.

Image: The AI autopilot scale shows how convenience trumps choice and control. Credit: Yuanyuan (Gina) Cui and Patrick van Esch
It’s Happening Faster Than You Think
ChatGPT reached 800 million weekly users by September 2025, growing four times faster than social media platforms did. Major retailers began using OpenAI’s Agentic Commerce Protocol within days of its launch.
History shows people consistently underestimate how quickly they adapt to convenient technologies. Not long ago most people wouldn’t think of getting in a stranger’s car. Uber now has 150 million users.
Convenience always wins. The question isn’t whether AI shopping will become mainstream. It’s whether people will keep any real control over what they buy and why.
What You Can Do
The open internet gave people a world of information and choice at their fingertips. The AI revolution could take that away. Not by forcing people, but by making it so easy to let the algorithm decide that they forget what it’s like to truly choose for themselves. Buying things is becoming as thoughtless as sending a text.
In addition, a single company could become the gatekeeper for all digital shopping, with the potential for monopolization beyond even Amazon’s current dominance in e-commerce. We believe that it’s important to at least have a vigorous public conversation about whether this is the future people actually want.
Here are some steps you can take to resist the lure of convenience:
Question AI suggestions. When ChatGPT suggests products, recognize you’re seeing hand-picked choices, not all your options. Before one-tap purchases, pause and ask: Would I buy this if I had to visit five websites and compare prices?
Review your privacy settings carefully. Understand what you’re trading for convenience.
Talk about this with friends and family. The shift to AI shopping is happening without public awareness. The time to have conversations about acceptable limits is now, before one-tap purchasing becomes so normal that questioning it seems strange.
The Invisible Price Tag
AI will learn what you want, maybe even before you want it. Every time you tap “Buy now” you’re training it – teaching it your patterns, your weaknesses, what time of day you impulse buy.
Our warning isn’t about rejecting technology. It’s about recognizing the trade-offs. Every convenience has a cost. Every tap is data. The companies building these systems are betting you won’t notice, and in most cases they’re probably right.
I was recently on a job doing computer animations with a group of devoted AI enthusiasts. We were still doing non-AI animations, but there was lots of talk about AI’s potential. I put it to them: “You know how every so often we feel the need to create something tactile like a painting or whatever? You think as AI takes over more we’ll get to a point where we miss having our own ideas and seek out ways to think for ourselves once more?”
Didn’t get much of a response. It seemed to fly over their heads or just not matter much to them. But I do wonder. We’ve already given up so much of our physical life to computers (writing, drawing, talking, interacting, etc.), but now we’re literally delegating our thinking by having computers come up with our ideas and make choices for us. What will be left of our humanity, and will we miss it?
“The danger of the past was that men became slaves. The danger of the future is that men may become robots.” – Erich Fromm, The Sane Society
Another hysterical article about GenAI from Naked Capitalism with dreck like this: “History shows people consistently underestimate how quickly they adapt to convenient technologies. Not long ago most people wouldn’t think of getting in a stranger’s car. Uber now has 150 million users.”
Those are called taxis and they’ve been around for quite a while – Uber didn’t invent them. Shockingly, people have also been using unregistered taxis for even longer, as anyone who lived in a large European city and went out to nightclubs before the advent of smartphones can attest.
Similarly, GenAI didn’t invent monopolisation, price fixing, dynamic or customised pricing. It’s a new form of price gouging for sure but does every mention of GenAI have to be apocalyptic in tone?
We have no reason to think that either the capabilities or the negative effects of GenAI will continue increasing in either a linear or logarithmic fashion. In contrast, there are good reasons to think it will hit ceilings in capabilities, just as previous versions of neural networks did, and indeed as technical projects generally do.
E.g: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
https://arxiv.org/abs/2506.06941
My calendar is on paper, so good luck with that, ChatGPT. Then the email… might ChatGPT have been granted access to email accounts without explicit permission from the user? Are people willing to give up their privacy as easily as this article suggests?
This article inverts things. People are the bots and ChatGPT is the “human” making decisions for every bot. Aren’t these marketing people hallucinating?
“does every mention of GenAI have to be apocalyptic in tone?”
When 92% of U.S. growth is from AI spending, it’s being sold as a transformative tech with 78% of businesses having reported using AI, and it’s having profoundly negative impacts on our energy grid, environment, and users’ cognitive abilities, a little drama is to be expected.
And, considering the words of AI’s creators nothing NC has stated even comes close to “apocalyptic” and actually reads as quite reasonable:
“A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” – Sam Altman
“I mean with artificial intelligence we’re summoning the demon.” – Elon Musk
Regarding taxis: those are not a stranger’s personal car – those are a business vehicle. I rode in taxis in NYC for two decades and never once had to move a driver’s dirty gym shorts and duffle bag so I could fit my luggage in the trunk, or spotted his Confederate flag neatly folded but sticking out from under the seat. Sure, they were notoriously smelly and not the most hygienic, but they were not personal vehicles. Even the dollar cabs, which often stank like an ashtray, were still business vehicles. Getting into a stranger’s car was known as hitchhiking. Though I didn’t experience the Euro nightclub scene in the ’90s, so I can’t speak to that experience.
“It’s a new form of price gouging for sure” – Yeah, it’s price fixing on a scale and precision targeting not possible before. That is pretty darn frightening.
But, beyond all this quibbling over specifics, the real issue is that all of this is leading us away from an open society and into a tightly controlled society. That is frightening. I’m old enough to remember when the idea of E-ZPass in a car was debated because “I don’t want them knowing everywhere I drive,” and now we’re just giving all our information away to the point where there’s nothing much we can do about it. I just did a few trips abroad and Customs has facial recognition instead of checking passports. I didn’t fight it, but there were no visible alternatives. That was sci-fi fear fodder just a decade ago. Amazon just announced plans to replace 600,000 workers with AI/robots while their servers shut down the internet earlier this week.
So, yeah, maybe the tone can feel apocalyptic because we are all very quickly adapting to these new technologies out of convenience and because we as individuals and even as a society have very little control over any of it.
That’s why people who see fit to use LLMs should use locally-hosted, Chinese-trained models on GPUs owned by people they can shake hands with, instead of giving oxygen to the US inference-as-a-service model that the Blob and Wall Street want.
The apparent editorial position here is negative sentiment against OpenAI and Palantir in particular, acceptance of AI safety firm Anthropic (also makers of Claude) as a source, and incandescent silence on the Chinese open-source, open weights AI ecosystem that is undermining the conditions for US AI dominance.
I wonder if the site’s proprietors have made the connection yet between the sensational “anti-AI” editorial position — perforce a pro-militarization, pro-surveillance, pro-censorship position regardless of how much instrumental crybabying accompanies it — and the troubled recent fundraiser. Considering that Anthropic and the other US AI majors are actively funding astroturf in the “anti except for the ruling class” position, the superalignment (as it were) between Anthropic’s and NC’s lines makes me wonder if there is some kind of silent sponsorship in place.
Perhaps the editorial position, not to put words in the host’s mouth, is based upon the idea that AI is a product nobody asked for, forcefully foisted upon us by a cabal of tech bros, government incentives and media spin, and requiring an enormous amount of real-life resources for the benefit of a few.
Just some thoughts.
Suggesting that users have suspended their thinking to patronize Uber doesn’t really support your argument.
Power is consolidated and further entrenched when people just go along and do not engage in critical thinking. Mass acceptance of a thing is hardly a proof of its real merits nor a refutation of its dangers.
I agree that this is pretty hyperbolic. For instance, “Buying things is becoming as thoughtless as sending a text.”
Do you think that people of means spend a long time shopping? The more money I make, the less I look at prices. So what if AI makes the choice for you? Just don’t have it make the choice next time.
This is a little like people saying that the card catalogue at the library was better because you had to page through many other books in order to get to the one you wanted.
A more honest analogy would be saying that the card catalogue is better because it showed you all the books available instead of the three that the publishers paid the most to show you.
“does every mention of GenAI have to be apocalyptic in tone?”
I don’t think it’s our host Yves you should be taking to task for such a tone. This isn’t the first time I’ve seen this complaint pop up. Really, if the developers of this technology hadn’t been talking about it in such apocalyptic tones themselves for so many years, then maybe the backlash against it wouldn’t have followed. Critics didn’t come up with hype about AI coming for our jobs, for instance. That was the developers themselves. They even did a public advertising campaign in SF and NY.
I wish I could be sympathetic. But those wishing to shove AI down everyone’s throat only have themselves to blame. If the hype hadn’t got so noxious, then the backlash wouldn’t be so richly deserved.
US “AI” industrialists play both sides. Anthropic funds doomers and sells access to Claude. Perhaps we should consider China’s open-source, open-weight policy that squeezes out the corrupt middlemen and distributes the means of computation far and wide, so that “cloud” services can’t form monopolies.
Noam Chomsky and Edward Herman informed us that media, mainstream in particular, deliberately functions to manufacture consent among the populace. Facebook and social media quickly became channels to supplant traditional media and to concentrate and amplify the construction of public consent and narrative even more. I speculate that AI may be the mother lode for controlling public narratives and manufacturing consent. Each step in this evolution seems to further invade and internalize the process of manipulating individuals’ opinions and to manufacture consent throughout society even more. If so, no wonder AI is being pushed without restraint by public and private entities that feel they are the ones who will control the controlling.
The advent of embedding advertising in AI, I think, is only the beginning of this. Seems to me that this deliberate opinion or consent manufacturing has evolved in much the same way through the social media channel. I have found that understanding the world is much different when you eschew social media. The same also if you avoid mainstream media. It’s like living in completely different realities.
News in general, however it arrives, is like perceiving from afar. Overall, I think, you could call these core efforts in social media and AI the art or technology of managing perceptions. Now it is arriving with a vengeance with AI, and questions regarding accuracy, usefulness, etc. fall away. What difference does it make if the true purpose of AI is manufacturing consent for the ruling elite?
I’ve never bought the recommended anything. AI suggestions are always wrong.
Even in the flowers example… My wife hates real flowers, and so any Minority-Report-pre-crime flowers purchase “for her” is entirely scam fodder. She only does the pop-up cards of flowers. Even if it figured this out [which it won’t], AI will always also get the wrong card.
RE: “Review your privacy settings carefully. Understand what you’re trading for convenience. Talk about this with friends and family.”
How about instead, we just kill it with fire?
The people behind this push for “AI” clearly do not have good intentions toward the rest of us at heart.
I would really like to know the nuts and bolts of keeping AI as far away from my life as is reasonably possible — short of living in a cave watching my toenails grow and wrapping my head in tinfoil; I don’t want to shut myself off from the internet, email, banking, using credit cards, booking travel, etc. I have no interest in debating whether it is good or bad. I accept that I am being surveilled and that AI will be used by others to surveil me, but there is tape over my lens, my laptop microphone is off, I hate spellcheck, and I want to avoid AI to the extent that I can without driving myself nuts.
I am doing what I can, based on not a lot of knowledge, so all advice is welcome.
Man, I can see how this could all play out-
AI: ‘I see that you now have a new girlfriend. Do you wish me to organize your schedule so that she and your wife never meet?’
Luser: ‘Yes, that would be great.’
AI: ‘Using their profiles, I can suggest different gifts from local stores to keep them both happy. I will also set up a file so that you know which person you brought a gift for and which restaurants you took them out to so as not to mix them up.’
Luser: ‘Excellent idea that. It would be embarrassing to forget what I bought for whom. Please provide a list of restaurants that I can take each out to where nobody will know me.’
AI: ‘I am also encrypting your new girlfriend’s data in a hidden file on your desktop. One more thing. If you ever try to uninstall me, all that information will end up going to your wife. Thank you and have a good day.’
The first rule should be never, ever, buy anything using your phone. No credit card use, no Apple or Google Pay, nothing.
It solves at least half the concerns one may have about this type of commercial intrusion and surveillance, let alone increased personal exposure to scammers and other forms of cyber-crime.
The intrusion and surveillance will still continue, but there is no reason one has to participate, other than dubious “convenience”.
Does it make a difference if you buy with a phone vs buying online in general? I very rarely buy anything online, but one of the last times I did, my debit card number was stolen and used the very next day. Don’t need to worry about the phone part since I don’t have one.
Well, I don’t buy over my phone, or use Apple or Google Pay, or PayPal (which makes it difficult to donate to NC!), and so on, but there isn’t enough time in my day to somehow find a storefront travel agent or make restaurant reservations in person (though I hardly bother to make them at all). I block “non-essential” cookies and clear collected data frequently, and so on, but confess that I have chosen for many decades to use credit cards rather than walking around with a concealed carry and a big wad of cash. I don’t set foot in the US these days (used to be prohibited, but no longer) and thus avoid handing over my devices, but accept that I am and will be surveilled, and comfort myself with the belief that I am not important enough to drive myself crazy over it.
Autopilot AI: I think some people have spent too much time never having to worry about prices. As for the rest of us, this doesn’t look like it even rises to the level of a solution looking for a problem.
Purchases and prices are just where it starts. Consumers getting used to the AI doing things for them will also be the people who believe whatever is on their phone screen. As a hypothetical – picture the response to Trump’s “war zone” screeds if the phone then shows billowing smoke clouds instead of inflatable chicken costumes.
Sure, it’s roughly the same mind-numbing effect as traditional TV (see mgr’s comment above), only now it’s personalized, with everyone getting tailored stories that have been selected to resonate with their individual biases and beliefs.
I’m fed up with reading articles about AI from people who don’t understand how this tech works, or what its (profound) limitations are.
AI is a black box. As we’re seeing with Grok, it is very difficult to control/influence AI – because it is insanely complex and impossible to understand. Most of these types of initiatives will fail, for the simple reason they will not work, or make money. It’s a pyramid scheme – sell a dream, keep it going for as long as possible and make sure you’re not holding the bag when it collapses.