Do moguls like Masayoshi Son, Sam Altman, and Larry Ellison actually believe Artificial Super Intelligence is imminent?
As the polycrisis lurches into a new year, let us take a few moments to question the motivations of some of its leading actors and their stated belief system.
Do They Really Believe in ASI?
This quote from Shanaka Anslem Perera’s Substack got me thinking about the (likely delusional) belief system that is driving the multi-trillion dollar push for Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).
Either the gods are being built in the Texas desert, or the greatest financial delusion in human history is unfolding in real time while sophisticated observers debate quarterly earnings.
It’s one thing to try to understand what’s happening on our planet using observation and reason, but the degree of difficulty increases considerably when one realizes that many leading actors are driven by ideologies and even eschatologies that are at best supra-rational and at worst completely insane nonsense.
World’s Dumbest Money Believes
For a specific example, let’s go back to Perera and his description of the stated beliefs of SoftBank CEO Masayoshi Son:
In the June 2024 transcript of SoftBank’s Annual General Meeting, buried on page forty-seven between a question about dividend policy and a disclosure about cross-shareholding arrangements, Masayoshi Son stopped being a businessman and became something else entirely.
“SoftBank was founded for what purpose?” he asked the assembled shareholders, most of whom had come expecting guidance on quarterly earnings and capital allocation strategy. “For what purpose was Masa Son born?”
The question was not rhetorical.
“It may sound strange,” he continued, his voice carrying the weight of a man who had spent four decades building toward a single moment of revelation, “but I think I was born to realize ASI. I am super serious about it.”
The room did not gasp. The financial press mentioned the comment in passing and moved on to earnings estimates. The sophisticated analysts covering SoftBank stock dismissed it as another grandiose proclamation from a man whose Vision Fund had become synonymous with venture capital excess, a man who had incinerated forty billion dollars on WeWork and whose investment judgment had been publicly questioned by shareholders, regulators, and journalists for years.
Former SoftBank exec Alok Sama put Son’s investment strategy in context for The Next Big Idea Book Club in late 2024:
In the mid-nineties, Masa foresaw the internet revolution. At the height of the 2000 tech bubble, he owned 8 percent of all internet stocks, briefly making him the richest person in the world. He also had an agreement to buy a third of a small online bookseller called Amazon, but he ran short of cash. Later, he bought a struggling phone company five times the size of his SoftBank, based on a vision of connected smartphones—before the iPhone existed. He also made a $20 million bet on a Chinese schoolteacher with “strong and shining eyes” and turned it into the greatest venture investment of all time: a $100 billion stake in Alibaba. His audacious $32 billion acquisition of chip designer Arm Holdings was a bet on a tomorrow of “connected and intelligent things” and is now worth almost $200 billion.
Masa’s unique brand of crazy is a canny competitive strategy. When Masa Son’s capital cannon gets behind a business, the competition frequently folds, as Uber did in China and Southeast Asia after SoftBank invested in its local competitors. Because, as Masa memorably says, in a fight between a smart guy and a crazy guy, the crazy guy always wins.
Sama also details Son’s ASI beliefs:
Ten years before the launch of ChatGPT, Masa Son would talk to me obsessively about AI and the singularity. At the time, this was a largely theoretical future event in which machine intelligence might surpass human intelligence. While Elon Musk’s Neuralink project sought a Vulcan mind-meld with machines to control them, Masa put his faith in companion humanoids with “emotional intelligence.” And while Musk seeks to colonize Mars in preparation for doomsday, Masa remains evangelical about his faith in a benign AI.
And Son is putting his money where his mouth is.
SoftBank Fulfills $40 Billion OpenAI Pledge
In March, Son’s SoftBank promised to invest $40 billion in OpenAI at a $300 billion valuation.
Despite the skepticism of Ed Zitron, SoftBank has closed the deal.
SoftBank has completed its $40 billion investment commitment to OpenAI, sources told CNBC’s David Faber.
The Japanese investment giant sent over a final $22 billion to $22.5 billion last week, according to sources familiar with the matter, who asked not to be named in order to discuss details of the transaction.
SoftBank had previously invested $7.5 billion in the ChatGPT maker and syndicated another $11 billion with co-investors, the Japanese conglomerate confirmed in a press release, with the final aggregate commitment at $41 billion. The investment takes SoftBank’s stake in the company to around 11%.
CNBC reports that SoftBank had to dump other investments to come up with the cash:
Last month, SoftBank liquidated its entire $5.8 billion stake in major AI beneficiary Nvidia.
A different source familiar with the move to sell the stake told CNBC at the time that the sale, combined with other cash sources, would support its OpenAI investment.
SoftBank dumped its entire Nvidia stake for OpenAI. Someone must be a smooth talker.
Sam Altman, Artificial Super Intelligence Messiah
That someone is OpenAI CEO Sam Altman. Here is a recent example of his patter via Politico:
I expect, though, the trajectory of the capability progress of AI to remain extremely steep. We’ve seen just in the two years or three years since ChatGPT has launched, how much more capable the models have gotten. And I see no sign of that slowing down. I think in another couple of years, it will become very plausible for AI to make, for example, scientific discoveries that humans cannot make on their own. To me, that’ll start to feel like something we could properly call superintelligence.
…
One of the things that I have learned continuously is, although we can say the ramp will be very steep, it’s difficult to be very precise that, you know, it’ll happen this month or this year. But I would certainly say by the end of this decade, so, by 2030, if we don’t have models that are extraordinarily capable and do things that we ourselves cannot do, I’d be very surprised.
…
I’ve heard many people describe many different versions of what the relationship between an AI and humanity will be. The one that has always been my favorite is: My co-founder, Ilya Sutskever, once said that he hoped that the way that an AGI would treat humanity or all AGIs would treat humanity is like a loving parent. And given the way you asked that question, it came to mind. I think it’s a particularly beautiful framing. That said, I think when we ask that question at all, we are sort of anthropomorphizing AGI. And what this will be is a tool that is enormously capable. And even if it has no intentionality, by asking it to do something, there could be side effects, consequences we don’t understand. And so it is very important that we align it to human values. But we get to align this tool to human values and I don’t think it’ll treat humans like ants.
Well, that’s very reassuring.
In fact, if I had $40 billion I’d be sorely tempted to light it all on fire, er, invest it all with the man Elon Musk has called Scam Altman.
Just kidding.
One SoftBank investment that Ed Zitron was correct to be skeptical about involved Sam Altman as well as Oracle CEO Larry Ellison.
Is Stargate Building God in the Texas Desert?
POTUS Trump kicked off 2025 with a press conference (full video) to announce a massive American AI infrastructure project, per CNN:
OpenAI CEO Sam Altman, SoftBank CEO Masayoshi Son and Oracle Chairman Larry Ellison appeared at the White House Tuesday afternoon alongside President Donald Trump to announce the company, which Trump called the “largest AI infrastructure project in history.”
The companies will invest $100 billion in the project to start, with plans to pour up to $500 billion into Stargate in the coming years. The project is expected to create 100,000 US jobs, Trump said.
Stargate will build “the physical and virtual infrastructure to power the next generation of AI,” including data centers around the country, Trump said. Ellison said the group’s first, 1 million-square foot data project is already under construction in Texas.
…
“I think this will be the most important project of this era,” Altman said on Tuesday. “We wouldn’t be able to do this without you, Mr. President.”
So far, so good.
Or Maybe Not?
By July, a report in The Wall Street Journal was pouring cold water over the deal:
A $500 billion effort unveiled at the White House to supercharge the U.S.’s artificial-intelligence ambitions has struggled to get off the ground and has sharply scaled back its near-term plans.
Six months after Japanese billionaire Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce the Stargate project, the newly formed company charged with making it happen has yet to complete a single deal for a data center.
Son’s SoftBank and Altman’s OpenAI, which jointly lead Stargate, have been at odds over crucial terms of the partnership, including where to build the sites, according to people familiar with the matter.
While the companies pledged at the January announcement to invest $100 billion “immediately,” the project is now setting the more modest goal of building a small data center by the end of this year, likely in Ohio, the people said.
The same WSJ piece revealed the pop culture inspiration behind Altman’s vision, or at least the branding of it:
Altman has used the Stargate name, shared with a 1994 Kurt Russell film about aliens who teleport to ancient Egypt, on projects that aren’t being financed by the partnership between OpenAI and SoftBank. The trademark to Stargate is held by SoftBank, according to public filings.
For instance, OpenAI refers to a data center in Abilene, Texas, and another it agreed in March to use in Denton, Texas, as part of Stargate even though they are being done without SoftBank, some of the people familiar with the matter said.
Let’s let Ed Zitron explain the sleight of hand behind this bait-and-switch:
I have confirmed that SoftBank never, ever had any involvement with the site in Abilene Texas. It didn’t fund it, it didn’t build it, it didn’t choose the site and, in fact, does not appear to have anything to do with any data center that OpenAI uses. The data center many, many reporters have referred to as “Stargate” has nothing to do with the “Stargate data center project.” Any reports suggesting otherwise are wrong, and I believe that this is a conscious attempt at misleading the public by OpenAI and SoftBank.
…This is an astonishing — and egregious — act of misinformation on the part of Sam Altman and OpenAI. By my count, at least 15 different stories attribute the Abilene Texas data center to the Stargate project, despite the fact that SoftBank was never and has never been involved. One would forgive anyone who got this wrong, because OpenAI itself engaged in the deliberate deception in its own announcement of the Stargate Project.
…
You can weasel-word all you want about how nobody has directly reported that SoftBank was or was not part of Abilene. This is a deliberate, intentional deception, perpetrated by OpenAI and SoftBank, who deliberately misled both the public and the press as a means of keeping up the appearance that SoftBank was deeply involved in (and financially obligated to) the Abilene site. Based on reporting that existed at the time but was never drawn together, it appears that Abilene was earmarked by Microsoft for OpenAI’s use as early as July 2024, and never involved SoftBank in any way, shape or form. The “Stargate” Project, as reported, was over six months old when it was announced in January 2025, and there have been no additional sites added other than Abilene.
There’s definitely a ‘who’s zooming who’ aspect to at least the Stargate deal, but let’s circle back to the January Presidential press conference for some insight as to how Sam “Scam” Altman may have hooked Oracle CEO Larry Ellison.
AI Cure for Cancer?
From the transcript of the January 21 Stargate presser:
Sam Altman: I believe that as this technology progresses we will see diseases get cured at an unprecedented rate.
We will be amazed at how quickly we’re curing this cancer and that one and heart disease. And what this will do for the ability to deliver very high quality healthcare, the costs, but really to cure the diseases at a rapid, rapid rate, I think will be among the most important things this technology does.
Larry Ellison: One of the most exciting things we’re working on, again, using the tools that Sam and Masa are providing is a cancer vaccine.
It’s very interesting. It turns out, I’ll be quick, all of our cancers, cancer tumors, little fragments of those tumors float around in your blood. So you can do early cancer detection. You can do early cancer detection with a blood test. And using AI to look at the blood test you can find the cancers that are actually seriously threatening the person.
But wait, surely there’s another angle.
Larry Ellison’s Big Plans
Audrey of The Drey Dossier provides some psychological insight to the players here:
Something that’s important to know about Larry Ellison is that he is obsessed with cancer, like quite literally obsessed with it. In fact, he’s so obsessed with it that some might say that if he actually wanted to cure it, we wouldn’t still be here talking about it today.
Drey gets into a sidebar about a 1970s DIA program (that she calls a CIA program) that had been named Stargate, but then she gets back to talking about Larry Ellison, AI, and cancer.
There is no way that they’re actually looking to solve cancer. I mean, we all know this, right? And if you don’t know this, then grow up. I don’t know what to tell you because it’s a trillion dollar industry.
Trillion dollar industries are not problems for governments. Plus, if cancer were to go away, then they would lose something even more valuable than those profits.
They would lose their number one pitch for getting away with literally anything. The cancer pitch has been used for decades, wrapped in packages with all very different motives.
…
Bottom line is that cancer gets you in the door. Cancer gets you the regulatory exemptions. Cancer gets you access to intimate data that you shouldn’t have any access to. And the cure may never come, but the data infrastructure becomes much more permanent. Okay, so if Stargate LLC isn’t about curing cancer, then what is it all about?
…
In 2023, Larry Ellison personally invested in a $23 million company called Imagene AI. Not using Oracle’s money, his own money. And Imagene AI was founded by, wait for it, officers of the IDF unit 8200.
…
They’ve been using this allegedly in Israel and Gaza. And what they do is they extract this genomic data from liquid biopsies and they analyze blood samples for fragments of DNA.
…
But in order to find those cancer markers, they need to have a sequence of your entire genome. And once your genomic data is digitized, then it can be stored, copied, analyzed, and sold. You know, the usual. But that still doesn’t answer what Larry Ellison is gonna do with this genomic data once Imagene extracts it. Well, remember Oracle Health?
The company that Larry Ellison owns and controls 9.5 million patient healthcare records through the Cerner acquisition? That Oracle Health?
And those records include your medical history, your treatments, your diagnoses, all of that. And if you’ve had any genomic testing done in a hospital that uses Oracle Systems, which is most major hospitals in the United States, that genomic data is sitting in Oracle’s databases.
So now you have this theoretical structure, right, of Imagene being able to extract these fresh genomic data blood samples from you, Oracle Health storing existing genomic data from millions of patients, and now Stargate’s 10 gigawatts of AI compute infrastructure just ready to process it all.
So maybe Larry Ellison still has his feet on the ground, at least in the sense that he’s putting his hands around our collective throats.
But lest you think he’s not a dreamer, I’ll wrap with some of Ellison’s comments and claims about Artificial Super Intelligence.
“About 18 months ago, when we began to fully grasp what the people at OpenAI and ChatGPT had achieved — a level of artificial intelligence that would actually advance human thinking with neural networks that could answer questions that human brains would struggle with — I made a speech in which I asked, ‘Is artificial intelligence the most important discovery in the history of humankind? Maybe. And we’ll soon find out,’ ” Ellison said at the recent World Governments Summit.
“Today, 18 months later, I think it’s very, very clear: AI is a much bigger deal than the Industrial Revolution, electricity, and everything that’s come before,” Ellison said in a video conversation with former UK Prime Minister Tony Blair.
“We will soon have not only artificial intelligence but also — much sooner than anticipated — artificial general intelligence and then, in the not-too-distant future, artificial super intelligence.”
“We will have incredible reasoning power, the ability to discover things that would elude the human mind because this next generation of AI is going to reason so much faster and discover insights so much faster, whether it’s being able to diagnose cancer in early stages or design therapies, custom-design vaccines for those cancers that are custom-made for your genomics and your specific tumor antigens. So in medicine, we’ll see revolutions in diagnostics and in therapeutics.
“We’ve started a project where we’re gathering satellite imagery from Kenya to California. And we can predict crop yields — so we could actually tell a farmer or an entire country whether they’re going to exceed what they expect from this year’s harvest or they’re going to have a shortfall and need to start preparing for that. We can tell individual farmers that a part of their field needs more irrigation, or part of their field needs additional fertilizer. So we can improve yields on an individual farm, and also on a much larger scale where we can improve yields across countries and even across entire regions of the world.
“I can go on and on, but AI will fundamentally change our lives in medicine, agriculture, and robotics across the board.”
And yes, this is the same Larry Ellison who is building a media empire for his nepo-baby son, David, which we’ve covered previously:
- Trump and Ellison Turn Paramount Into Compliant Media
- Larry Ellison Goes Beyond Oracle Into Military and Media
- Delusion, Deception and Dipshittery: Hasbara on the 8th Front
- Bari Weiss Will Run CBS News for the Ellison Hasbara Empire
- Hasbara Ain’t Cheap, Musk, Ellison, Saudis, All Tapped
- Informational Force-Feeding Keeps Imperial Minions Divided & Distracted


Of the many concerns regarding the AI project raised in the above, I’ll pick just one.
“And so it is very important that we align it to human values. But we get to align this tool to human values and I don’t think it’ll treat humans like ants.”
To just whose human values might he be referring? I believe it was Oscar Wilde who wisely advised against the universal application of the golden rule as tastes may differ.
Confucian-Marxist ones we can certainly hope.
I seriously don’t know what is wrong with these people (Altman, Ellison, Son). Playing amateur shrink, I would say that they all made so much money that it became meaningless, and they needed some challenge bigger than just getting filthy rich, so they turned to AGI and the singularity as some sort of religion.
For someone like Ellison with no medical degree, no scientific background, to think that solving any problem (cancer, climate change, etc.) just requires brute force computing is shockingly naive and shows a pathological degree of hubris.
The blast radius when this madness finally manifests as a financial crisis is going to be epic.
I occasionally think about what it must be like to have $100 billion or whatever to your name. “Regular” people spend the majority of their time thinking about how they are going to afford the things they need for their life, get their kids through college, get a new transmission in the car, save up for a down payment on a house, or pay for groceries for the week. This is not pleasant, by any stretch, but it does give you something to constantly “push against,” if you will. The human species has evolved in an environment where they are constantly trying to figure out how to get what they need for life, and like it or not, I think this is somehow fundamental.
Once you get to a certain level of wealth, however, you almost literally have no problems, at least in the sense I am describing here. To me that sounds like a vacuous and frustrating existence. Nice, sure, but also devoid of any challenges or worthwhile activities. What does that do to you? It seems like a fish trying to live on land.
So I’m not surprised that these very wealthy people often end up being really strange. But it seems like we should not put them in charge of anything.
I wonder if what they think of themselves would not have become what it is if Trump hadn’t had that insane TV program. That beginning was maybe the means by which the process Bandy Lee MD talks about was given a big voltage boost early on? What seemed to me not worth watching for two minutes was actually a drama that injected his style of power tripping into the psyches of millions? I mean isn’t it a kind of preaching a creepiness? Are Russian oligarchs pumped up in mass media like they’re avatars? You know, I guess not. Speaking of media, old time oligarchs didn’t have media (electronics) like the kind that exists today. So many public servants quiet on genocide, but their ignorance re what led up to the SMO has us going around pondering at all kinds of moments…that in 30 more minutes everything could be gone. Accidents.
Maybe it’s guilt and the only way to escape it is to switch a persona that’s immensely disliked into a savior persona?
We can see what they do. Or at least people could see it if they’d wake up. They waste resources big time. It happens by way of their messianic save-the-world programs. We need all these satellites? What happens when countries at odds with each other start blowing ’em up? There’ll be so much space junk folks will have to go back to maps to get to destinations. And a machine that was set to send resources where they’re needed; Musk procured about a thousand monkey wrenches to throw into the thing (thing being a set of institutions made up of humans).
Since they do waste resources, I agree that we should not put them in charge of anything. And set a limit as well.
“power disease” – posited by John Gofman, MD (anti-nuker)
I don’t know how ‘vacuous and frustrating’ being crazy rich is, or has to be. Quite a few generations of aristocrats and tycoons managed to try to do good things for society; fund institutions, support worthy causes, build libraries, cathedrals, universities, whatnot. They can leave behind majestic legacies if they want to. Lack of character in the current crop is just that, a lack in themselves.
Nah the singularity faith most likely started in undergrad, it’s ubiquitous at Stanford.
So, Nat, an additional question I have is what are you thinking about 2026 in terms of your own political strategy?
Personally, I’m going with trying to help MAGA gain real control of the Republican Party.
This arena seems to me to be where all the heavy action will occur over the next three years.
I was fascinated that Susie Wiles implied in her Vanity Fair interview that she was quite focused on trying to make sure that more traditional Republicans regain control of MAGA.
I’m a sucker for expanding this emerging MAGA populist base in order for it to become as big and broad as possible. Such a strange coalition might then just come up with ideas that can look at our seemingly irresolvable problems in new ways.
Such new pathways also seem to require that our respective political frameworks are no longer predetermined by a priori conceptions that determine where we end up.
My main political activity will be posting here and at IanWelsh.net (and possibly a Substack).
I’m limited by time, money and geographical constraints but if I see an opportunity to pitch in on a compelling local race, I might.
In general I’ll be rooting for rebels in both parties and any promising independent or 3rd party efforts.
I will be advising my MAGA friends to join with the Left Populists to do another American 🇺🇸 Revolution.
Well, if leftist populists do an American Revolution it will not be “another” one. I mean, the first one was not by leftists or populists. More like gentry and bourgeoisie, no?
I have a regular walk that brings me along some waterways that are frequented by pairs of Mormon boys, mostly with eyes as blue and naive as the forget-me-not flowers. And I kind of let them approach me and give me the good news! And I stop them and start talking about Jesus who wanted to bring the Jubilee and the clean slates and the real reason he was killed, and that I go after this Jesus, will they join me? I have made no converts so far.
At the risk of veering slightly off-topic, though still AI related, I realized something the other day and have been wondering ever since: when did everyone start sleeping on quantum computing? It’s almost become like the ignored doppelganger of the current AI hype.
That’s not a leading question either; I really don’t have a guess. Is it just not being reported on? Did all the media and millionaires get swept up in the AI mania? Or were there setbacks at the major labs that quietly put it on the deep-freeze?
My understanding is that it’s still an open question whether quantum computing can graduate beyond the laboratory, for both engineering and lingering theoretical reasons. But it seemed to have a much more straightforward (though narrower) value proposition, fewer religious fever-dreams, and way less energy demands than most AI projects do now.
There are reasons to believe that quantum computing has hit one of many bottlenecks that is going to limit progress toward commercial applications for quite some time.
According to this paper, quantum computing is nothing more than a physics experiment: “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog” https://eprint.iacr.org/2025/1237.pdf
As the title suggests, it’s actually quite humorous.
All the scammers are in the AI tent.
This pdf is the best publication I could imagine to read before 2026 started. 12 minutes left… Happy New Year.
Thank you! That is a gem.
Seems to be in the same limbo as the hydrogen power cell… and fusion power plant…
Enough intelligence, ability, and resources currently exists to alleviate much suffering or solve a host of problems in the world. A good deal could be accomplished by NOT doing things – such as so many wars.
And these guys are telling people that AGI or ASI is needed to do what is not being done.
It’s always “something else needs to be invented” before doing what can already be done.
re: PARAMOUNT vs. NETFLIX, related
Since this is also about Ellison, here is an odd pep talk given by one of the managers involved in the fight over WB on Paramount’s side, who is hailing Ellison like there was no tomorrow:
Matt is joined by Gerry Cardinale, founder and managing partner of RedBird Capital, to discuss Paramount’s bid for Warner Bros., why their bid is superior to Netflix’s bid, Larry Ellison and the Middle East’s involvement, whether they will look to raise their bid, and what the next steps will be.
https://podcasts.apple.com/is/podcast/paramount-isnt-giving-up-warner-bros-to-netflix-just-yet/id1612131897?i=1000741915547
I trust Cardinale on the numbers. But if you have to actually work with people, you cannot ignore some “soft” (normal human conduct) factors in film biz.
I can’t prove it, but would it be surprising if many in Hollywood, and those at the levers of WARNER, are quietly appalled by Ellison’s support of Israel and blatant excuses for a genocide?
If you are not allowed to speak the truth in The Town on Israel – although everybody knows what is going on – you will still hold your views and then – just like with the Nov. elections 2025 – express your disdain by casting your vote in silence.
So whatever Ellison did or did not do for SKYDANCE (I don’t believe a single word of that SKYDANCE cheerleading re: Ellison, but okay, maybe I am wrong) – folks don’t trust the Ellisons. I for one am scared of them.
Compared to the Ellisons as of now, Netflix looks like a burglar next to Mexican cartels.
So this deal goes well beyond the usual running-the-numbers and simple moviemaking in Hollywood. This deal, in a covert way, might well be a major moral question of our time. At least to those who are involved…
Thanks for the Cardinale link.
I sometimes get lost when people talk about “reason” and the term “reasoning power” leaves me totally baffled. I know that reason bears some – possibly tangential – relationship to logic. I also know that it bears some relationship to the existing stock of knowledge and commonsense and, if you like, uncommonsense.
As for reasoning power resulting in “insights”, that relies on chance, ie, the probability of combining different elements – loosely, thoughts and experiences – to produce a novel idea which may or may not expand knowledge or provide a new insight which contributes to our understanding of reality (and I include mathematics as a fundamental element of human reality). All of us have novel ideas most of which may, or may not, be of great relevance to us but usually to no-one outside our immediate circle – apart from a psychiatrist or Trump appointee, or the poor bugger who has to read and mark (not scan) term papers.
I fail to understand how LLMs can produce anything more useful than probabilities, surveillance capabilities, mimicry, regurgitating existing knowledge, offer even more hallucinatory responses than Wikipedia or the Huffington Post, or just become a realisation of HAL operating through the internet of things, let alone, “artificial general intelligence and then, in the not-too-distant future, artificial super intelligence”.
I’m not sure what general intelligence is beyond a capacity to recognise and respond more or less appropriately to a range of different but highly specific situations, either in the eyes of others or by producing a desired outcome. I think the essence of the approach adopted by so many Chinese developers makes sense: taking open source material (and adding to it) to develop the machine capacity to deal with a highly specific range of problems, generating a range of alternative solutions, applying the most appropriate in real time, and if that doesn’t work applying the alternatives until one does, then building on that knowledge and making it available to the world. In other words, something cheap, cheerful and possessing specific intelligence capable of providing immediate practical benefits, and not throwing money at an ill-defined problem relying on computing power to gobble up what it can regardless of the IP rights of others in the unrealistic but intuitive hope of generating vast IP profits from monopoly power.
I think the answer to your questions lies in the question above concerning the development of quantum computing. The horizon of “real” AI most probably lies with quantum computing. Penrose speculated that consciousness is a quantum phenomenon. The actual AI we need to be afraid of will come with that superposition capability.
edit: Nov. elections 2024!
Musk wanting to go to Mars is hilarious. Mars currently is a place that resembles Earth’s likely future if we aren’t careful. All he has to do is wait and bide his time here. The Earth will be ready for his terra-forming project. Though fixing things now would seem like a lot more efficient use of time and resources.
Of course, the old saying that “getting there is half the fun” still applies, so there is that.
SpaceX press statement, June 2030:
[This comment was by repeated violator of site Policies by sock puppeting. All comments will be overwritten or removed. Get another hobby rather than pollute this site]
Musk would love to send other suck… people to Mars. So they can die there and he can get his name on the accomplishment and market it to us here. Because no one is coming back from that journey for generations until we set up the infrastructure there to escape Mars’ gravity well.
Maybe the faster “AI” advances, the faster First World public intellect retards. Certainly US college general education (low bar) is imploding, enabling more time to get drunk and laid.
2026 hails the 20th anniversary of Mike Judge’s Idiocracy.
We have arrived
When I think about Sam Altman, I am reminded of a story told by Robert Heinlein in one of his sci-fi novels. He tells of a man who started a rumour that there was gold to be found in Hell. Immediately you had floods of people, both young and old, in their greed rushing into Hell to seek their fortunes. Whole towns and regions were being depopulated as people went charging off into Hell. The man who started this rumour, watching this solid stream of people racing to go to Hell, said to himself that maybe, just maybe, there was something in this rumour of gold to be found in Hell after all, and he himself then followed all those people there as well.
I suppose they couldn’t have copped a more appropriate title from the Ken (not Kurt) Russell film Billion Dollar Brain about a supercomputer, because in that one things end rather badly for the oligarch sponsor from Texas…
[This comment was by repeated violator of site Policies by sock puppeting. All comments will be overwritten or removed. Get another hobby rather than pollute this site]
I personally know enough Silicon Valley computer scientist types to believe that yes, they absolutely do believe it.
The wild thing is that none of the specific, measurable, exciting use cases that these guys keep trotting out (e.g. early cancer diagnosis, customized gene therapy, high resolution crop forecasts) require AGI.
Big training and compute is fabulous for massive pattern correlation using large, domain-specific datasets. But we’ve known how to do that for a while – we can just do it on a larger scale now. Large language models that scrape up everything for AGI don’t have any special advantage here.
More to the point, what’s really going on here is an attempt by a half-dozen or so US-based tech companies with a proprietary stranglehold on global networking to extend that dominance to the Next Big Thing. I think that effort may very well succeed in the U.S. and any other country that the U.S. can bully into submission. The rest of the world will focus on open source software and non-U.S. hardware stacks that are way cheaper, plenty good enough, and readily scale to whatever problem you are trying to solve.
This. Well stated, thank you. My only quibble would be that AGI won’t happen. It’s the bait to get investors et alia onboard with the plan for that proprietary stranglehold.
The Rabid Proprietarianism sorts, of various hues, seem bent on owning the future, today is not enough … future income demands it … hence throwing massive sums of today’s value at it whilst the wheels come off functional society in the West.
Yet as of now the main use of AI and its future variants is military and industrial. On the military side, it seems Russia and China are creating a whole new multilayered data approach – everything is a sensor, and everything shares, creating a real-time picture of a battle space spanning thousands of km. Both their air/missile forces have a completely different doctrine than the West. It’s not a denial of operational space for weapon systems like the F-22/35; it’s a denial of all support for any weapon system. Hence why their airframes are not all about stealth – not that stealth is going to be relevant for long in passive mode. Instead they have the speed, range, and passive/active countermeasures that give them the time needed to select the most critical targets and launch long-range hypersonic missiles – goodbye AWACS/tankers.
Anywho, there’s the whole idea that AGI will cure cancer et al. when such illnesses are largely a result of human activities. Would these people even consider what future illnesses might be related to their own activities – if not physical health, then psychological?
Yes, there is no cure there is largely only prevention.
But that would require a fundamental change in our society.
‘We may not get AGI, but at least we will replace search with narrative generation we can control’
– Anonymous Oligarch
I wonder why none of these techno bros are embracing some of Iain M. Banks’ Culture series names and ideas? Too communist? And those Minds – you really couldn’t corrupt them…
https://theculture.fandom.com/wiki/Mind_(Wikipedia_version)
They don’t even get Dune.
I would not expect them to get that.
There is always some ambiguity in beliefs. Awareness sourced from a belief system serves a more inspirational role, and is often misinterpreted as a logical point of view. For people in a position of financial power it’s one of the easiest traps: you can’t distinguish between what seems logically achievable and what is merely being promised, because there are no criteria by which to determine that – those people are not specialists. The corporate system is not capable of generating fair feedback. Everything that is reported is a “success”. So it’s not surprising that leaders are losing certainty.
>>One of the things that I have learned continuously is, although we can say the ramp will be very steep, it’s difficult to be very precise that, you know, it’ll happen this month or this year.
I’m pushing myself to learn continuously to make my belief system stronger, because I’m actually not sure that “the ramp will be very steep”; and now I need to invert the phrase on the fly to shift semantics toward a precise “this year” prediction. I’m borrowing “this month” and “this year” because I’m actually bothered by the current situation more than by a decade-long time span.
>>But I would certainly say by the end of this decade, so, by 2030, if we don’t have models that are extraordinarily capable and do things that we ourselves cannot do, I’d be very surprised.
As I thought before, the decade is not my priority, but we agreed to claim it as a corporate business target, so 2030 it is – and I need a double negation at this point to combine both: I don’t believe in that, and yes, we don’t have those models.
So, yes, it’s a complete mess in his head, because the reporting system signals “success” while planning relies on brute force – and that scaling quickly becomes visibly vast. Now he is in the state of “we missed some details”. In 2026 that quickly becomes “we’re trying everything”, though everything we could do we already tried in 2025.
Human nature and scientific and corporate culture can’t give us a chance of achieving AGI without decades of sorting out “rights” and “wrongs”. If LLMs can speed up this process, cool, but the dry residue of what we were so proud of in the 20th and 21st centuries will be ridiculously small. Our general knowledge is a bubble. Maybe we need to become better humans first, and here the chances are tiny. Training data is the bottleneck, not the amount of computational power. The quality of our training data is the quality of our civilisation.
Fabulous article, comments, links. Thank you all!
I have always enjoyed Bill McKibben’s writings. I picked up “Enough” (2003) assuming it was about consumerism, thneeds, over-demanding resources and over-producing on a closed-loop, limited spaceship Earth. Nope – all about Ray Kurzweil and his leading-edge pals and the singularity. Great read.
I believe I witnessed the precursor singularity event in Ft. Collins, Colorado (college town) on a Friday afternoon in the 20-teens. It was like something out of the Twilight Zone.
People were wandering around Old Town (the basis for Disneyland’s quintessentially “american” main street) with their heads bent down over their gizmos. I mean EVERYONE. Walking slowly, completely engaged with the smartphone, oblivious to traffic, cars, stop lights, other pedestrians. It was eerie, like a zombie movie. It was about 4 pm, and I was trying to get over to a beautiful old city park, Edora Park, for a late afternoon round of disc golf before the 5 pm throng. When I got to the generally attended but never packed park, it was teeming with zombified folks, walking apparently aimlessly all over the park, alone or in small groups, heads bent over, tethered hard to their gizmo screens.
I queried one group, “What in the heck is going on? Did US launch Nukes?”
“Oh, no, the new Pokémon Go dropped. We are seeking tokens.”
From the NC article:
“It’s one thing to try to understand what’s happening on our planet using observation and reason, but the degree of difficulty increases considerably when one realizes that many leading actors are driven by ideologies and even eschatologies that are at best supra-rational and at worse completely insane nonsense.”
https://www.goodreads.com/book/show/199363.Enough
AI seems like a search engine on steroids, demanding obscene amounts of carbon-based energy and sucking up life-threatening amounts of water.
Maybe we should download the brains of all the power peeps, load ’em onto particles, and laser them off into The Great Beyond of the Universe, and delete their earthly corpus’? 21st Century Guillotines? Freeze, Nationalize, and use their assets to get going on the Real Work of getting the earth, humans, and all other species on some sort of sustainable glide path?
As to the bit about heading out into space, here is another fun book, from Michio Kaku:
https://www.goodreads.com/book/show/36407347-the-future-of-humanity?from_search=true&from_srp=true&qid=XW3Tn7WwHP&rank=10
Good Luck, Everybody Else! (it’s a not-too-PC Family Guy meme). Substitute “Move Fast & Break Things”.
16 seconds: https://youtu.be/LLuaPZWkvZ0
Hubris. Pride goeth before The Fall?
Thanks for the kind words and the book recs! McKibben on hold at the local library!
“I can go on and on, but AI will fundamentally change our lives in medicine, agriculture, and robotics across the board.”
Who is meant by “our lives”?
and
Will the “fundamental changes” be positive or negative? and,
From whose perspective, or by what measure, will positive or negative outcomes be determined?
Are humans to govern AI, ASI, or are AI, ASI to govern humans?
I am going down a rabbit hole so, sorry in advance.
To me, it is the most direct path to controlling all new development/patents and production in the basics/commodities of life through financial engineering.
Shelter/food/health/water…..all brought to you through corporate power..
the ability to control all investment returns in whatever you invest… be it stocks/ bonds/ futures/ hedges/ CDOs/ distribution/ production/ building/ patent exploitation/ taxation/ legislation/ farming/ investigation/ framing opposition/ legal/ rights/ prosecutions/ politics… all integrated through AI/ASI etc. That is what unity means IMHO, and it is the end state (according to my corrupted logic).
This end state is no longer dynamic or evolving but stagnant and detached from evolution and all other life – out on a limb, precarious in both the physical and the metaphysical, constrained and bounded by energy and space.
On the positive side… a static end state cannot endure in a dynamic universe…
If all this fundamental change (investment) is done for private profit devoid of the public space (the control of the human species by human financial/corporate control, or by AI/ASI financial/corporate control) – then is it a violation of the basic tenets of the USA constitution? Maybe, maybe not?
I will blindly take the positive position that implosion will lead to explosion of some nature and in some positive direction – maybe not in my lifetime – but in the knowledge that some future is brighter than no future.
Happy New Year
The modern world has solved the survival equation of human life. Every single person could have food, clothing and shelter.
That not everyone gets the basic necessities of life is due to human greed and other moral failings. What extra abundance will AGI provide that will make human life better? Why would humans share that extra abundance? You will only see 2,000 ft yachts with 20 bedrooms instead of 500 ft ones. Human happiness is a mental puzzle, and AGI does zilch to solve that.
To what extent would any of these AI goals or initiatives fail if the masses reject AI?
I ask because last night I was at a NYE party where tongues were loosened, and I was surprised that AI not only was a VERY hot topic, but also that there is a near-universal and quite vicious hatred for it across the political, class and cultural spectrum. Serious anger and rage like I don’t think I’ve seen before; it felt like people hate this far more than they’ve ever hated Trump or Elon. As other groups overheard “AI” they beelined toward the discussion until the whole party was discussing it. It was astonishing.
It felt like AI is very much being imposed against people’s wills. People feel constrained or forbidden from speaking openly on account of corporate workplaces and careers, feel strongly about it, and need channels to vent. I imagine this was probably happening across the world last night? And judging from what was said, this kind of animosity isn’t going to be reasoned with, negotiated with or easily overturned – it is quite entrenched.
AI may be technology, may even provide benefits, but it is pushed as ideological dogma, and this may be its undoing. The corporate gloating and executive assholery might be pushing us toward a perfect storm of sorts.
It’s in part an Ahab-ian thing: their procedures are logical, while their purposes are mad.
The arrogance of these people is staggering – to think that they can program AGI and ASI when we, the human race, don’t even understand ordinary human intelligence very well.
So far, AI is just advanced mimicry, using huge datasets to feed complex algorithms – think of them as advanced search engines that provide formatted human-like answers instead of pages of URLs. AGI just takes it up a notch or two using ever greater amounts of processing power, but will still be far short of real intelligence as we know it in humans. It will just fool a lot more people into believing they are dealing with a human.
ASI, as it is defined, would require sentience, another aspect of human intelligence we don’t understand well enough. We know what it is to us, but we don’t fully understand how it occurs. And, of course, nobody has a clue how to program it.
Meanwhile, science is starting to recognise non-physical aspects of our minds that are as much a part of our ‘intelligence’ as the physical brain. Much of this new-found knowledge isn’t actually new – it has been known for a long time but not recognised by scientists. That is changing with the discovery of new physics, such as quantum phenomena that defy understanding and confirm just how little we really understand reality and life.
So, to think we have any chance of ever developing ASI with our currently known science and technologies is naive in the extreme, as well as being supremely arrogant – something we humans are very good at!