The partnership between Microsoft and OpenAI ignited the AI boom but frayed under pressure in 2025 as the halcyon early days of the tech came to an end.
Now, with Google’s Gemini grabbing market share from ChatGPT and Microsoft’s AI-centered Windows 11 turning into a massive headache for the company, things are getting desperate.
— Nat Wilson Turner (@natwilsonturner) January 7, 2026
Microsoft and OpenAI Are Deeply Entangled
This chart from The Information’s reporting (via AI With Kyle) illustrates the Microsoft and OpenAI relationship.
— Nat Wilson Turner (@natwilsonturner) January 7, 2026
Microsoft and OpenAI first partnered in 2019 when the Redmond behemoth invested $1 billion in Sam Altman’s do-gooder non-profit.
Here’s how they described it at the time (savor the prose):
Microsoft Corp. and OpenAI, two companies thinking deeply about the role of AI in the world and how to build secure, trustworthy and ethical AI to serve the public, have partnered to further extend Microsoft Azure’s capabilities in large-scale AI systems. Through this partnership, the companies will accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence (AGI). The resulting enhancements to the Azure platform will also help developers build the next generation of AI applications. The partnership covers the following:
- Microsoft and OpenAI will jointly build new Azure AI supercomputing technologies
- OpenAI will port its services to run on Microsoft Azure, which it will use to create new AI technologies and deliver on the promise of artificial general intelligence
- Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies
I resisted the temptation to include the statements from OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella, but fans of CEO-speak are encouraged to click through and read for themselves.
Why Did OpenAI Need Microsoft?
This paragraph from Wikipedia on the founding and early funding of OpenAI tells the tale:
In December 2015, OpenAI was founded as a not for profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged pledges. According to company disclosures, only $130 million had been received by 2019.
It was co-founder Elon Musk, who’d invested $44 million in OpenAI, who pushed Altman to work with Microsoft rather than Amazon.
Musk wrote, “I think Jeff (Bezos) is a bit of a tool and Satya (Nadella) is not, so I slightly prefer Microsoft, but I hate their marketing dept.”
Altman replied that “Amazon started really dicking us around on the [terms and conditions], especially on marketing commits. And their offering wasn’t that good technically anyway.”
Built-In summarized the company’s thinking and Altman’s falling out with Musk:
…by 2017, the co-founders realized they would need to become a for-profit company to raise enough capital to purchase the computing power necessary to process vast troves of data. When Altman and other founding engineers floated the idea of transitioning to a for-profit structure, Musk initially agreed, according to archived conversations published by OpenAI.
But Musk also insisted on having a majority equity stake, “absolute control” and to be the CEO of the for-profit company, according to OpenAI. Musk stepped down as co-chair in 2018, saying he thought he had a better chance of creating artificial general intelligence through his other company, Tesla.
Altman was then appointed CEO, and OpenAI created a capped, for-profit subsidiary that reports to the nonprofit organization governed by a board of directors. After raising $1 billion from Microsoft in 2019, OpenAI launched ChatGPT in 2022.
Things were going so well that Microsoft put another $2 billion into OpenAI in 2021 (pdf).
Altman and company made productive use of Microsoft’s second round of investment.
Microsoft + OpenAI Kicked Off the AI Boom
OpenAI’s ChatGPT debuted in November 2022 and garnered 1 million users in five days.
In December of that year, Kevin Roose of The New York Times dropped his tone-setting “The Brilliance and Weirdness of ChatGPT” which extolled the powers of the app:
ChatGPT feels different. Smarter. Weirder. More flexible. It can write jokes (some of which are actually funny), working computer code and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.
The technology that powers ChatGPT isn’t, strictly speaking, new. It’s based on what the company calls “GPT-3.5,” an upgraded version of GPT-3, the A.I. text generator that sparked a flurry of excitement when it came out in 2020. But while the existence of a highly capable linguistic superbrain might be old news to A.I. researchers, it’s the first time such a powerful tool has been made available to the general public through a free, easy-to-use web interface.
At that point Microsoft was more than happy to dramatically raise their stake in OpenAI.
Microsoft Upped Their OpenAI Stake to $13 Billion
On January 23, 2023, this joint announcement dropped: “Microsoft and OpenAI extend partnership” promising “Supercomputing at scale, New AI-powered experiences” and that Microsoft would be OpenAI’s exclusive cloud provider.
No dollar amounts were included in that initial announcement, but CNBC described the deal at the time and detailed the money figures:
Microsoft’s once under-the-radar investment is now a major topic of discussion, both in venture circles and among public shareholders, who are trying to figure out what it means to the potential value of their stock. Microsoft’s cumulative investment in OpenAI has reportedly swelled to $13 billion and the startup’s valuation has hit roughly $29 billion.
That’s because Microsoft isn’t just opening up its fat wallet for OpenAI. It’s also the arms dealer, as the exclusive provider of computing power for OpenAI’s research, products and programming interfaces for developers. Startups and multinational companies, including Microsoft, are rushing to integrate their products with OpenAI, which means massive workloads running on Microsoft’s cloud servers.
But Microsoft might have been playing a double game.
According to The Information, “several top Microsoft executives told colleagues they thought OpenAI’s business would eventually fail, even if its technology was good, according to a former manager who discussed it with them.”
That revelation inspired Ed Zitron to write a great piece in June 2025 going into the motivations of Microsoft’s investment into OpenAI which he boiled down to two possibilities:
- Satya Went Out On A Limb, Believing That OpenAI Had Potential To Create Massive Profits and Growth That Never Arrived, And He Wanted Its IP
- Microsoft Always Believed OpenAI Would Die And Invested As A Means Of Acquiring Its IP and Customers, And Didn’t Really Have A Great Plan
Read Ed’s whole piece if you have the time, it clarifies a great deal.
But back to 2023, because that’s when things got really weird.
The AGI Boogie Man
ChatGPT was going so well that the leadership of OpenAI got scared, or at least started telling the public they were scared.
In May 2023, OpenAI execs CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever posted the infamous “Governance of Superintelligence” post claiming “now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”
Brave readers can hopefully handle the opening paragraphs from these reluctant TechBro Cassandras:
Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.
We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
But with great power comes great responsibility, and soon Altman’s OpenAI bros decided he couldn’t be trusted with the superpowers they were building.
Microsoft Foiled The Coup Against Sam Altman
The Verge summed up what happened next:
On November 17th, 2023, OpenAI’s nonprofit board abruptly announced that co-founder and CEO Sam Altman was out. The shake-up came just shy of one year after the launch of ChatGPT, which quickly became one of the fastest-growing apps in history and initiated an industry-wide race to build generative AI.
Over a period of just a few days, the CEO job shuffled between CTO Mira Murati and former Twitch boss Emmett Shear. Meanwhile, hundreds of OpenAI employees said they would leave for jobs at Microsoft, OpenAI’s lead investor, unless the board reinstated Altman. In the end, Altman returned, along with co-founder Greg Brockman and a revamped board of directors.
According to Stella Stylianides Da Silva of the Law Talks podcast, Microsoft’s hand became visible in the resolution of the coup:
Despite attempts to portray an ‘arms-length’ relationship, the true nature of the partnership was exposed in late November 2023 when OpenAI fell into chaos. CEO and co-founder Sam Altman was fired by OpenAI’s board on the basis that he had not been “consistently candid in his communications with the board”. Within a matter of days, Altman had been offered a position at Microsoft, before being reinstated at OpenAI following reported pressure from the company’s employees as well as Microsoft’s CEO Satya Nadella. This involvement from Microsoft demonstrated that it plays a significant role in the operational and strategic decisions of OpenAI.
Charles Duhigg, reporting from inside Microsoft for The New Yorker, described the scene:
(The firing of Sam Altman) was the start of a five-day crisis that some people at Microsoft began calling the Turkey-Shoot Clusterfuck.
…
Microsoft hadn’t been at the forefront of the technology industry in years, but its alliance with OpenAI—which had originated as a nonprofit, in 2015, but added a for-profit arm four years later—had allowed the computer giant to leap over such rivals as Google and Amazon.
…
Nadella called Microsoft’s chief technology officer, Kevin Scott—the person most responsible for forging the OpenAI partnership. Scott had already heard the news, which was spreading fast. They set up a video call with other Microsoft executives. Was Altman’s firing, they asked one another, the result of tensions over speed versus safety in releasing A.I. products?
The piece also highlights Kevin Scott’s influence over OpenAI and Microsoft’s strategy for unleashing LLM-driven AI on the public:
Kevin Scott respected their concerns, to a point. The discourse around A.I., he believed, had been strangely focussed on science-fiction scenarios—computers destroying humanity—and had largely ignored the technology’s potential to “level the playing field,” as Scott put it, for people who knew what they wanted computers to do but lacked the training to make it happen.
…
Scott and his partners at OpenAI had decided to release A.I. products slowly but consistently, experimenting in public in a way that enlisted vast numbers of nonexperts as both lab rats and scientists: Microsoft would observe how untutored users interacted with the technology, and users would educate themselves about its strengths and limitations. By releasing admittedly imperfect A.I. software and eliciting frank feedback from customers, Microsoft had found a formula for both improving the technology and cultivating a skeptical pragmatism among users.
But back to the boardroom drama:
…the Microsoft executives would move to Plan B: using their company’s considerable leverage—including the billions of dollars it had pledged to OpenAI but had not yet handed over—to help get Altman reappointed as C.E.O., and to reconfigure OpenAI’s governance by replacing board members. Someone close to this conversation told me, “From our perspective, things had been working great, and OpenAI’s board had done something erratic, so we thought, ‘Let’s put some adults in charge and get back to what we had.’ ”
And so it was done.
But according to The Verge, “this messy incident shook (Microsoft’s) confidence in OpenAI.”
Loss of Trust in Sam Altman at OpenAI
It wasn’t just Microsoft that lost confidence, Vox.com reported on the internal loss of faith at OpenAI:
Altman’s reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth — someone who claims, for instance, that he wants to prioritize safety, but contradicts that in his behaviors.
For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?
For employees, all this led to a gradual “loss of belief that when OpenAI says it’s going to do something or says that it values something, that that is actually true,” a source with inside knowledge of the company told me.
All of this set the scene for a rocky 2024 between the companies.
Microsoft and OpenAI Drifted Apart in 2024
Time moves fast at the center of a technological and speculative boom, especially when OpenAI was losing $5 billion a year in 2024.
By October of that year, The New York Times was reporting that “Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying:”
Microsoft wouldn’t budge as OpenAI…continued to ask for more money and more computing power to build and run its A.I. systems.
Mr. Altman once called OpenAI’s partnership with Microsoft “the best bromance in tech,” but ties between the companies have started to fray. Financial pressure on OpenAI, concern about its stability and disagreements between employees of the two companies have strained their five-year partnership…Microsoft has started to hedge its bet on OpenAI.
“We have continued to invest in OpenAI at many discrete points in the partnership,” Kevin Scott, Microsoft’s chief technology officer, said in a recent interview. “We are certainly the very largest investor of capital in them.”
But in March, Microsoft paid at least $650 million to hire most of the staff from Inflection, an OpenAI competitor.
The rifts would only expand in 2025 as the tide turned against the AI boom.
Microsoft and OpenAI Couldn’t Weather the Vibe Shift Together
Everything changed for the AI boom in 2025.
In 2023 and 2024, each major model release felt like a revelation, with new capabilities and fresh reasons to fall for the hype. This year, the magic faded, and nothing captured that shift better than OpenAI’s GPT-5 rollout.
While it was meaningful on paper, it didn’t land with the same punch as earlier releases like GPT-4 and 4o. Similar patterns emerged across the industry as improvements from LLM providers were less transformative and more incremental or domain-specific.
…
AI companies received unprecedented scrutiny in 2025. More than 50 copyright lawsuits wound through the courts, while reports of “AI psychosis” — the result of chatbots reinforcing delusions and allegedly contributing to multiple suicides and other life-threatening episodes — sparked calls for trust and safety reforms. While some copyright battles met their end, most are still unresolved, though the conversation appears to be shifting from resistance against using copyrighted content for training to demands for compensation.
Meanwhile, mental health concerns around AI chatbot interactions — and their sycophantic responses — emerged as a serious public health issue following multiple deaths by suicide and life-threatening delusions in teens and adults after prolonged chatbot usage. The result has been lawsuits, widespread concern among mental health professionals, and swift policy responses like California’s SB 243 regulating AI companion bots.
By the middle of the year, Microsoft and OpenAI were openly fussing in the press, as seen in The Wall Street Journal’s “OpenAI and Microsoft Tensions Are Reaching a Boiling Point”:
OpenAI wants to loosen Microsoft’s grip on its AI products and computing resources, and secure the tech giant’s blessing for its conversion into a for-profit company. Microsoft’s approval of the conversion is key to OpenAI’s ability to raise more money and go public.
But the negotiations have been so difficult that in recent weeks, OpenAI’s executives have discussed what they view as a nuclear option: accusing Microsoft of anticompetitive behavior during their partnership, people familiar with the matter said. That effort could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign, the people said.
Such a move could threaten the companies’ six-year-old relationship…
Axios boiled the June 2025 conflict between Microsoft and OpenAI down to brass tacks:
OpenAI needs Microsoft’s go-ahead for a restructuring that OpenAI wants to achieve soon in order to meet commitments it made to recent investors. That means translating Microsoft’s current share of OpenAI profits, up to a certain point, into a specific stake in the company.
Another big sticking point relates to a trigger in their existing deal that calls for Microsoft’s access to be significantly curtailed once OpenAI has reached the threshold of artificial general intelligence.
Right now Microsoft has extremely broad rights until AGI is reached and limited access beyond that point.
Heaps of dollars are on the table, from the size of the stake in OpenAI’s operation that Microsoft gets, to how much server business OpenAI sends Microsoft’s way, to how much the two companies collaborate or compete for consumer and corporate subscriptions.
…OpenAI wants Microsoft to forgo its rights to future profits in exchange for a roughly 33% stake in the restructured company, according to a person who had spoken to OpenAI executives.
In October 2025 Microsoft and OpenAI posted “The next chapter of the Microsoft–OpenAI partnership” which led with this bullet point, “Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.”
Whew, that’s a load off.
But the deal included a lot of actual reality as Bloomberg summed it up:
OpenAI is giving its long-time backer Microsoft Corp. a 27% ownership stake as part of a restructuring plan that took nearly a year to negotiate, removing a major uncertainty for both companies and clearing the path for the ChatGPT maker to become a for-profit business.
Under the revised pact, Microsoft will get a stake in OpenAI worth about $135 billion…In addition, Microsoft will have access to the artificial intelligence startup’s technology until 2032, including models that achieved the benchmark of artificial general intelligence (AGI), a more powerful form of AI that most say does not exist yet.
Microsoft will also continue to be entitled to receive 20% of OpenAI’s revenue…
With the agreement, OpenAI said its corporate restructure is now complete. The company had spent much of this year working to form a more traditional for-profit company, which is considered more attractive to investors. Microsoft, which backed OpenAI with some $13.75 billion, was the biggest holdout among the ChatGPT maker’s investors…
Which was a balm to Microsoft investors who were not happy at headlines like “Microsoft takes $3.1 billion hit from OpenAI investment” and “Microsoft Needs to Open Up More About Its OpenAI Dealings.”
Microsoft Can Now Call OpenAI a Hedge
Seeking Alpha has the happy ending:
Currently, Microsoft owns 27% of OpenAI, translating to a current value of $135B, which is a massive asset on the balance sheet if Microsoft ever had the need for some quick cash. And even if I have a lot of doubts about the company’s valuation, especially considering its relatively low sales figures, it offers Microsoft guaranteed revenue as OpenAI is legally committed to spending an additional $250B on Azure services over the next several years, and Microsoft has access to OpenAI’s IP until 2032. More so, I believe that OpenAI is now more of a strategic hedge rather than a dependency, which was initially the case earlier in the partnership. Microsoft has begun to offer Claude 4 within Office 365, showing investors that it isn’t trapped by OpenAI even if it really likes it. But there are still negatives, as OpenAI is now selling ChatGPT Enterprise, directly targeting the same companies that Microsoft has been targeting with Copilot. They are competitors, but at the same time, Microsoft has skin in the game in both projects. On another note, OpenAI’s deal with Oracle also means that the startup is diversifying its cloud providers, reducing Azure’s leverage.
It’s a good thing too because Microsoft CEO Satya Nadella had to revert to “founder mode” to get it all done.
And we all hate when that happens.


Thanks Nat. I think your articles are getting better and better – really enjoying your work!
Thank you Ben, this one was kind of a departure so I’m glad to hear you enjoyed it.
OpenAI is committed to spending 250 billion dollars on Azure, though it does not have 250 billion dollars and it is losing money hand over fist. It takes a loss every time its product is used, but tries to make up for it in volume. Stock valuations of the publicly traded companies Nvidia, Oracle and Microsoft all seem to have peaked during the fall. My assumption is that this means they are running out of runway, and if so the crash is near. But I have been wrong before in underestimating how long a bubble can last.
On the more philosophical side, it is worth noting that when people in AI talk about safety, in particular existential risk, they don’t mean normal safety concerns, they mean the computer taking over like in Terminator (or possibly The Matrix). It is for practical purposes a cultish belief that springs from the self-styled “rationalists” around the blogs LessWrong and later Slatestarcodex. They have significant overlap with Effective Altruists, Longtermists and (via Slatestarcodex) Neoreactionaries. And they all live in California (more or less).
This is where the AGI/ASI lingo comes from. This lingo, and the belief in being on the verge of creating Data from Star Trek (and then Skynet), was useful in getting the bubble up and running by presenting risks that enhanced the perception of a very powerful technology. I think the coup attempt at OpenAI was the last gasp of the true believers; then they were pushed aside by the money. (Neither side is good.)
All excellent points. I’ve covered the circular financing of AI a couple of times (as have others here at NC) but didn’t have room for more than a couple of references in this one.
I’m especially glad you brought up the “rationalists”, Effective Altruists, Longtermists and Neoreactionaries.
I’ve covered that stuff in my previous post on the Silicon Valley Ideologies and referenced the batshit belief system of Nadella (which I doubt even he actually believes) in Monday’s post, but I’m still grappling with the apparent reality that these brilliant geniuses actually believe this crap.
Also I wonder how much of OpenAI’s floundering since the failed coup has to do with losing the “true believer” talent. ChatGPT is pretty impressive tech, even if it’s not leading to AGI, shouldn’t be implemented all over the place, and in no way justifies the cost.
I would expect this could be the year that OpenAI founders, because they finally have to start forking out cash they don’t have to other companies. Like Oracle. Like CoreWeave. Microsoft has basically been running a scam with OpenAI: they invest $10 billion in OpenAI, but it’s in the form of Azure credits, which OpenAI then cashes in and Microsoft books as $10 billion in revenue out of the $13 billion they last reported. Funny how they stopped reporting AI revenue after that though. Sutskever is another one to watch; he left and started another company that is incinerating cash almost as badly as OpenAI and doesn’t even have a product yet.
Probably the trigger will be when Nvidia reports they didn’t sell more GPUs in a quarter than the previous quarter. Once that happens, I think the skepticism over AI will increase, and these data center deals will go bust because their only collateral is these aging GPUs. It’ll be a massive implosion; the black hole of OpenAI will suck in everything with it when it goes, because it’s all propped up with nothing but lies.
I’m bracing myself because the consequences of these idiot grifters actions will impact everyone on Earth.
There was a recent NC link – by Freddie deBoer of all people – on AI ‘hallucinations’ (apparently I’m not alone in thinking that confabulation is a better term for what AI does). The comments section was more interesting than the needlessly long post. People who defend their use of AI say it’s useful for a domain they’re familiar with because they can spot and discard the obvious errors. This seems to me to be a major epistemological fail. Unless one knows all the information presented, how does one know what the less obvious errors are? Say you ask for the tallest mountains in the world and their elevations. If the results include Lake Superior, sure, you can toss that one out. Why then do people think the elevations won’t be wrong as well? And if you do know all the information, why are you asking AI?
Confabulations can be small as well as large.
To me, “Confabulation” still suggests some kind of agency or faculty that the software doesn’t actually possess, and that supports the broader narrative that this is anything more special than some code with a shitload of stolen content and computation behind it.
I see LLM output as more like a word salad that has been tossed to order to approximate whatever meaning the prompter implied they wanted with the language they used in the prompt.
That’s something I haven’t seen discussed much (maybe I just haven’t seen it): most people aren’t that aware of how (even very small) nuances in the language they use can (sometimes strongly) imply the response they want in return.
In a conversation with another person, the recipient may not notice (or choose to ignore) the suggestion implicit in the phrasing of the request and respond from a place of knowledge and context to deliver a response that is meaningful and factual, even if it is obviously not what the first person wanted to hear. Or they can ask clarifying questions if they don’t know what the first person is talking about, whereas ChatGPT just natters away like ELIZA on steroids.
The whole notion that this is anything other than a self-eating shit sandwich has been bewildering to me from the very beginning, but I guess the basilisk demands it, or something.
Yea it’s pretty terrifying — in fact it’s so terrifying that I wonder if the actual goal of AI isn’t to permanently scramble information for the vast majority of people so we have no way of knowing WTF is happening and can be preyed on more easily.
That’s the question. Wouldn’t this be the place to do it yourself, reuse your earlier work, or get a junior person to make the first pass so they can learn?
Obviously, spending your own time or using staff appears to have a cost. Given the cost of data centers and the risk, it seems the cheaper option.
Know nothing about the “chat” varieties, but we have been using Claude for C++ code reviews pretty successfully. Claude lists out problems / suggestions and the dev reviews them.
We’ve been taking it a step further and asking Claude for evaluations of reported issues/bugs and for that so far it hasn’t been great. Note that these are hard issues even for devs (much of the code is legacy and I think was initially written in C so today’s devs need to get up to speed on the architecture/design of various features). Claude will confidently announce “I have found the root issue” but reviewing the findings shows it doesn’t have a deep enough understanding so what was claimed as the “root issue” may be a weakness in design, but not the actual cause. So the dev has to point out the problem and then iteratively get Claude to revise its findings.
So net it is helpful but not a “super intelligence”.
Re: OpenAI – FYI
Two Chrome Extensions Caught Stealing ChatGPT and DeepSeek Chats from 900,000 Users
https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html
“Prompt Poaching”
As Oscar Wilde quipped, “the unspeakable in full pursuit of the uneatable.”
I hope they all get the indigestion they so richly deserve.
Microsoft 365 has become a joke. I don’t feel comfortable composing documents or building spreadsheets in that environment. Enshittification with a subscription fee. Why would I want to give up control of the storage of the document itself, as well as give up control of the contents of the document to various “agents”? I find it intolerable and am finally making the move to open source.
As for Nadella, unfortunately he appears to be pretty awful at business. How could someone in his lofty position be so stupid as not to realize that AGI is impossible using the LLM approach of OpenAI? Tons of people recognize this. And why would he so myopically focus on AI to the extent that he is oblivious to the fact that he has thereby turned his core product into a shitty, burdensome, highly irritating joke?
I would expect that the stock price of MSFT will be under $300 by later this year.
Nadella is paid very well to be wrong — that is, he’s got to convince investors that Microsoft is still a “growth stock” at all costs because of the massive multiple. Once investors realize that MSFT is a staid incumbent, coasting on the intellectual property infringement of past decades and set for a long, slow, decline, Nadella and other MSFT stock holders will take a long cold bath.
Nadella has said explicitly that he does NOT believe in AGI in the tech bro science fiction sense. I’ve several times tried to send you links to what he’s actually said and been blocked by the NC system.
I would suggest you go look for yourself.