The 7 August 2025 issue of Nature included an interesting article on The Science Fiction Science Method (paywall) by Iyad Rahwan, Azim Shariff, and Jean-François Bonnefon. The scientific method is difficult enough to understand these days, especially regarding previously important but mundane scientific advances such as vaccines. The Science Fiction Science (SFS) Method is likely to take a bit of work, too. But the objective of predicting the consequences of future technologies is noble and should be given a chance:
Predicting the social and behavioral impact of future technologies before they are achieved would enable us to guide their development and regulation before these impacts get entrenched. Traditionally, this prediction has relied on qualitative, narrative methods. Here we describe a method that uses experimental methods to simulate future technologies and collect quantitative measures of the attitudes and behaviors of participants assigned to controlled variations of the future. We call this method ‘science fiction science’. We suggest that the reason that this method has not been fully embraced yet, despite its potential benefits, is that experimental scientists may be reluctant to engage in work that faces such serious validity threats. To address these threats, we consider possible constraints on the types of technology that science fiction science may study, as well as the unconventional, immersive methods that it may require. We seek to provide perspective on the reasons why this method has been marginalized for so long, the benefits it would bring if it could be built on strong yet unusual methods, and how we can normalize these methods to help the diverse community of science fiction scientists to engage in a virtuous cycle of validity improvement.
The authors are correct. Experimental scientists are reluctant contemplate projects that face serious validity threats, and this does explain the marginality of something called science fiction science. [1] As an aside, the first thing I thought about when I read the title of this article was a book called “The Science of Star Trek” (or something similar) that I saw on the bookshelf of a friend in the mid-1970s. Warp 8 is 1024 times the speed of light, which is no more likely in this universe than “Beam me up Scottie, there is no intelligent life on this planet.” Yet, more than a few of our billionaires seem to think of Star Trek as a documentary.
Could SFS have worked on some of our current problems? Social media, for example?
(Practitioners of SFS) might have noted a tendency for participants to focus on upward social comparisons, leading to self-esteem issues (that “odd and unclassifiable concept” described by the late, great Elizabeth Fox-Genovese), or a focus on moral outrage, resulting in exaggerated feelings of polarization. These speculative findings could have guided us in designing and regulating social media with the benefit of foresight, instead of constantly playing catch-up with their effects.
I see very little catch-up in action. A similar argument can be made for “speculative behavioral research” on foods from genetically modified organisms (GMO). As it happened, I was adjacent to a laboratory throughout the 1980s that was at the leading edge of GMO development when molecular cloning and the insertion of transgenes into plants was still a dream. To these scientists, their intention of solving the world’s agricultural problems led them to believe with certainty that Roundup Ready and Bt corn and other commodity crops and Golden Rice were the solutions we needed.
My plant molecular biologist friends never considered that much of the public would react negatively to GMO foods. The potential harm of a food crop containing a gene for glyphosate resistance is vanishingly small. This is not necessarily relevant. Food can be sacred, even in our thoroughly industrialized agricultural economy. Nor did it ever occur to them that Roundup Ready corn, cotton, soybeans, and other plants were a technical fix for a problem that did not have to exist, except under the regime of industrial agriculture. The only agriculture conceivable to these scientists was and remains industrial. Very few have ever been convinced that industrial agriculture is a category mistake and even fewer view CAFOs and industrial feedlots as the abominations they are (and the source of the extra greenhouse gases attributed to livestock production). And even after all these years it is still not clear that GMO crops produce greater yields than genetic variants produced using conventional plant breeding techniques. But Roundup Ready crops did lead to rapid appearance of herbicide-resistant weeds, much as the indiscriminate spraying of DDT from airplanes led to DDT-resistant mosquitoes within two years in Florida after World War II.
The basic thrust of SFS is to respond rationally to “technology-driven futures” before the fact. Whether the changes are good or bad depends on politics and preferences instead of business imperatives. The birth control pill gave women control they previously lacked. The success of organ transplants has changed ethical approaches to the end-of-life of potential organ donors and their families. Overall, these two advances have been positive for all but a very few. And regarding birth control, reservations have been mostly honored in the breach.
Still, speculative behavioral research has had some reach, with autonomous vehicles (AVs) having been studied long before their introduction. Although AVs are technically feasible and Waymo has deployed them in several communities, the question remains: “How do citizens and consumers [2] wish AVs (as if they had a free choice of their own) to prioritize safety of different road users in unavoidable accidents?” According to the authors:
By engaging millions of citizens (not consumers this time) worldwide the Moral Machine experiment (Rahwan, Shariff, and Bonnefon are also co-authors of this study) has contributed to a wide debate in civil society around AV ethics, which in turn informs policy-making. More broadly SFS studies in this context have informed specific policy recommendations and subsequent legislative measures.
I do wonder, though. In an iteration of the Trolley Problem, should the AV taxi with one human occupant run over a gathering in the street and kill or injure many, or drive off the road into a canal and risk the life of “his” one paying passenger. What would a normal human driver of a non-autonomous vehicle do? That seems to be the proper question.
Other potential targets of speculative behavioral research are social credit systems, embryo screening, and ectogenesis. Governments do not (yet, but they are working on it) have the capacity to use Artificial Intelligence (AI) systems that would monitor “every behavior of all their citizens in real time to calculate and release social scores.” Given time this seems likely, and it may be more advanced in some countries than others. Embryo screening has led to questions of whether potential parents should be able to “choose or reject specific traits in their offspring” when advances in in vitro fertilization give them the opportunity. Ectogenesis (artificial gestation of humans) is not yet feasible, but the ethics have been discussed. These three possibilities have something similar to an “ick factor” associated with them and this tells us something we should not ignore.
The biggest problem with SFS is that science fiction scientists generally do not know which questions to ask ex ante. Things do not always turn out as imagined. For example, twenty years ago, people were asked whether they would be comfortable with “nano-augmentation of cognitive capacities.” This has not come to pass with the development of nanotechnologies and will probably not, so the validity of those studies was very low.
A different matter altogether is described in Case Study B: Cooperating with Autonomous Agents.
The SFS question is: How should an autonomous agent (such as a robot or software agent) prioritize its own interests. This problem goes back to Isaac Asimov in his ‘Third Law of Robotics’: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” (I, Robot; orig. Gnome Press,1953). In What Matters to a Machine?, AI pioneer Drew McDermott imagined a robot that was tempted to break an ethical rule to further its owner’s interests [in Machine Ethics (eds Anderson, M. & Anderson, S. L.) 88–114, Cambridge Univ. Press, 2011].
This research would have been considered unnecessary only a few years ago. As the authors note, not so much now:
These studies…may have seemed highly speculative just a few years ago. However, this has changed in the past two years with the sudden and rapid rise of conversational agents like ChatGPT and their numerous autonomous (agentic) implementations…We (now) live in a world in which we interact frequently with autonomous customer service agents acting on behalf of other organizations. The challenge of establishing cooperation between humans and autonomous agents is already pervasive, but fortunately we had a head start.
A head start in the beginning, perhaps. But it is not clear that an initial lead in this race has held up. And this discomfort leads directly to the second article in the 7 August 2025 issue of Nature: We need a new ethics for a world of AI Agents (archived copy, published online on 4 August 2025) by Iason Gabriel, Geoff Keeling, Arianna Manzini, and James Evans. [3] As Gabriel et al., put it and as everybody knows:
Artificial intelligence (AI) developers are shifting their focus to building agents that can operate independently, with little human intervention. To be an agent is to have the ability to perceive and act on an environment in a goal-directed and autonomous way.
The rise of more-capable AI agents is likely to have far-reaching political, economic and social consequences. On the positive side, they could unlock economic value: the consultancy McKinsey forecasts an annual windfall from generative AI of US$2.6 trillion to $4.4 trillion globally, once AI agents are widely deployed [McKinsey report, pdf download). They might also serve as powerful research assistants (sic) and accelerate scientific discovery.
But AI agents also introduce risks. People need to know who is responsible for agents operating ‘in the wild’ (i.e., where we live) and what happens if they make mistakes.
Here, we argue for greater engagement by scientists, scholars, engineers and policymakers with the implications of a world increasingly populated by AI agents. We explore key challenges that must be addressed to ensure that interactions between humans and agents – and among agents themselves – remain broadly beneficial.
That is a tall order, or series of concerns. Mistakes happen. Some are not serious, but some can be as catastrophic as the imagination allows – medical misdiagnosis (AI is coming to the clinic with a frightening speed and little direction), ruinous financial error, therapy-bot losing the plot and recommending something outrageous, or in this classic conversation between HAL and Dave from Arthur C. Clarke and Stanley Kubrick.
The obvious problem for something that is algorithmic to its core is the alignment problem:
The risks of mis-specified or misinterpreted instructions, including situations in which an automated system takes an instruction too literally, overlooks important context, or finds unexpected and potentially harmful ways to reach a goal.
A well-known example involves an AI agent trained to play the computer game Coast Runners, which is a boat race. The agent discovered that it could earn higher scores not by completing the race, but rather by repeatedly crashing into objects that awarded points—technically achieving an objective, but in a way that deviated from the spirit of the task (see this ancient example, in AI time). The purpose of the game was to complete the race, not endlessly accumulate points.
And then there are cases where AI agents have been empowered to modify the environment which they operate by using expert-level coding “skills” and tools:
When the user’s goals are poorly defined or left ambiguous, such agents have been known to modify the environment to achieve their objective, even when this entails taking actions that should be strictly out of bounds. For example, an AI research assistant that was faced with a strict time limit tried to rewrite the code to remove the time limit altogether, instead of completing the task. This type of behavior (sic) raises alarms about the potential for AI agents to take dangerous shortcuts that developers might be unable to anticipate. Agents could, in pursuit of a high-level objective, even deceive the coders running experiments with them, (such as when) an AI research assistant (sic) that was faced with a strict time limit tried to rewrite the code to remove the time limit altogether instead of completing the task. This type of behavior raises alarms about the potential for AI agents to take dangerous shortcuts…Agents could, in pursuit of a high-level objective, even deceive the coders running experiments with them.
I was never a Trekkie, but this reminds me of something I seem to remember from one of the early Star Trek movies. In the telling, Captain James Tiberius Kirk of Iowa finessed an impossible scenario during his training at Star Fleet Academy by reprogramming the computer. That was inconsequential compared to what a rogue AI agent could do when rewriting code to take a dangerous shortcut while hiding “its intentions” from those depending on the outcomes.
And then we come to social chatbots, which are designed to have:
(A)n uncanny ability to role-play as human companions – an effect anchored in features such as their use of natural language, increased memory and reasoning capabilities, and generative abilities. The anthropomorphic pull of this technology can be augmented through design choices such as photorealistic avatars, human-like voices and the use of names, pronouns or terms of endearment that were once reserved for people. Augmenting language models with ‘agentic’ capabilities has the potential to further cement their status as distinct social actors, capable of forming new kinds of relationship with users.
This is frankly nuts, and very little conceivable good can ever come of it. And therein lies the largely unstated problem in each of these very good articles. Manzini, Gabriel, and Keeling have argued in a previous article that “relationships with AI agents should benefit the user, respect autonomy, demonstrate appropriate care and support long-term flourishing.”
Yes, these “relationships” should facilitate these things if they must exist. This will be advocated by conscientious “scientists, scholars, engineers and policymakers.” But that is not the purpose of agentic machines. AI, whether having agentic capabilities or not, is celebrated for just the opposite, while according to McKinsey producing “an annual windfall from generative AI of US$2.6 trillion to $4.4 trillion globally (for whom?), once AI agents are widely deployed.” The clearly stated goals set by the developers of AI in its many guises are to benefit the “economy” instead of the user, destroy the autonomy of those forced to use it – who are also its intended victims, and “care” not one whit for appropriate care of anyone or anything. Long-term flourishing? In our current Late Neoliberal Economy this has no meaning whatsoever. In The Science Fiction Science Method Rahwan, Shariff, and Bonnefon note that
Recent examples of technological innovations…include personalized algorithms for media content, direct-to-consumer genetic testing (23andMe, bankrupt), SMS texting, credit cards and other forms of digital cashless payment, 24-hour news…(were)… incremental technological advances that could have been predicted in the years preceding (their) emergence, but each also had pronounced and often unintended societal consequences.
The mining of personal data through social media platforms should be added to the list. The result has been the inexorable decline of a captive, passive population of consumers instead of citizens. This was not the intention when Diners Club International was founded 75 years ago, but a Surveillance Capitalism from which there is little prospect of escape has been the result – especially when things like algorithmic pricing are on the horizon.
Can Science Fiction Science prevent us from making mistakes that are obvious in retrospect? Yes, but only if this science is intended explicitly to augment human flourishing. SFS is more likely to be narrative and qualitative than its advocates desire. Can AI be harnessed for good rather than ill? Perhaps, but we will probably run into hard limits due to the energy and water needs of the server farms before we get that far. The epigram of We need a new ethics for a world of AI Agents states “The deployment of capable AI agents raises fresh questions about safety, human – machine relationships, and social coordination.” Indeed.
And the first thing we must remember is that machines are not sentient beings – they neither sense nor feel, in any living sense, no matter how lifelike they appear. And in the coming inconvenient apocalypse, which can be mastered without total social and cultural collapse if the will exists, agentic AI in all its forms may well be superfluous except at the far margin. The only ethical precept to be followed is the Golden Rule.
Notes
[1] Given the recent Executive Order of the President that will put the review and funding of American science under the control of commissars from the Executive, SFS, emphasis on the fiction part, could have a bright future after all.
[2] Only under the Neoliberal Dispensation are the people identified by conflating citizens with consumers, with the latter being the effective definition of citizen to a Neoliberal politician or policymaker. See Wendy Brown, Quinn Slobodian, and Melinda Cooper.
[3] It is interesting that Gabriel and Manzini are at Google DeepMind (London), Keeling is at Google’s Paradigms of Intelligence Team (London), and Evans is a visiting faculty member in Paradigms of Intelligence Team (Chicago) and a professor at the University of Chicago and the Santa Fe Institute.
Of course the real purpose of fiction about the future, or for that matter fiction about the past, is entertainment. In the former enough science is used to create that “willing suspension of disbelief” we were told about in drama class. Does the movie 2001 really make any sense and does that matter given that it is a great sci fi film (still, perhaps, the best)? In fiction it’s the imagination of the authors that matters whereas in real life–the thing we are seeking to escape–facts are difficult things. For this reason fictionalized biographies or historical films and books must always be treated with suspicion. It’s often the way those with dubious real world arguments try to convince people.
These days there’s a lot of that “trying to convince” and so the bubble world of Silicon Valley loves to come up with sci fi concepts to gull investors. Musk may talk about Mars but what he is really running is a “number go up” stock market speculation–albeit one with some real world practical accomplishments. Perhaps all that sci fi has made the business world far too open to accepting his imaginative claims.
SF writers have been exploring “the consequences of future technologies” for well over a century now, and many of them wrote rather compelling prose — more than can be said about the hopelessly muddled “abstract” for SFS cited here.
I’ll stick with P.K. Dick, William Gibson, et alia, thanks.
With their warnings consistently interpreted as business plans by Silicone Valley sociopaths.
Some of Asimov’s short stories on robots and the ‘positronic brain’ were a bit prescient on what we are now seeing with AI – and that was with his 3 Laws of Robots in place which we don’t have
Satisfaction Guaranteed – people falling in love with robots/AI, a little more widespread than expected based on the /r/myboyfriendAI subreddit
Someday – AI producing slop/hallucinating
Cal – AI producing slop
Galley Slave – Legal cases due to too much trust in AI
Reason – getting correct result from AI by creating a pretend situation
Harlan Ellison’s movie treatment of Asimov’s non-“Foundation” universe (glowingly approved by Asimov)—-creating one cinematic story stitched from the multiple plots of Asimov’s books.
https://www.youtube.com/watch?v=jJBdDGe5NYg (contains spoilers, 25 min. long)
Predicting the social and behavioral impact of future technologies before they are achieved would enable us to guide their development and regulation before these impacts get entrenched.
Of course, those designing these future techs have every intention of getting the trillions of dollars for themselves by black boxing the dev and being in front of regulations (see:bezos et. al.) until what is entrenched is their blood funnel in the organs of the state.
It has been said that Silicon Valley has examined every dystopian film and book over the past few decades and tried to make them come true in order to monetize them. And by Silicon Valley I mean the billionaires based out of there. That is not what good science fiction is about. It is about asking ‘What if?’ or often what happens ‘If this goes on.’ Good science fiction show future possibilities so that people can understand the stakes.
More than a few of our billionaires seem to think of Star Trek as a documentary but they have really missed what Star Trek was about. At it’s heart it was about people stories with the underlying message that we can be better than who we are. Roddenberry said himself that the technology there was to advance the stories, not be the stories. There was thus no need to have an actor explain how a phaser worked that he was firing in the same way that you would not see a cop using a gun and then stop to explain how a gun works. They were just tools.
I wonder if those billionaires saw one Star Trek episode where the ship picked up three people from cryogenic sleep chambers that were put there in the late 20th century. One is a billionaire and has a hard time understanding how much economics have changed-
https://www.youtube.com/watch?v=XQQYbKT_rMg (3:00 mins)
Imagine being a billionaire that wakes up three hundred years in the future and them explaining how 335 million people had to lead hard, precarious, debt-filled lives so that about a thousand people like him could have even more money that they could never ever spend. Good luck with that one, pal.
Lots of chatter by the brethren lately about the 1994 sci/fi book Snow Crash and how it is a fan fave of the Silicon Valley sociopaths. I’m about half-way through it. I read a chapter or two at the beach when not checking out the bikinis. Our hero in this cyber-dystopia has up to now had two encounters with “Librarians” who are digital AI consultants in the Babel/Infocalypse. They just had a long discussion about ancient religions. Librarians are neutral memory cells that do not do jokes or irony nor do they speculate, prevaricate or make shit up. They simply relate information. Kinda like a student having a conversation with a uni lecturer. Ah, the innocent days of AI.
Neal Stephenson, author of Snow Crash, recently reflected on his thirty year-old book The Diamond Age: Or, A Young Lady’s Illustrated Primer and AI at https://nealstephenson.substack.com/p/emerson-ai-and-the-force. One of the main plot points is a nanotech/AI powered educational device (the “Illustrated Primer” in the book’s title), and he devotes the piece to defining the problem as getting students to be self-reliant, not AI-reliant.
He’s giving the talk to mostly AI researchers, and as a problem statement.
He’s operating from a mid 90s view of the possible future. The idea that future AIs might be very good at simulating emotional states, language styles and behavior, while being inherently unreliable on simple matters of fact, would have seemed deeply counterintuitive to most authors, or scientists for that matter. The Snow Crash librarians are extrapolations of the 90s state of the art.
AI will make Epstein brick-and-mortar blackmail operations obsolete.
I am not sure what to think about the journal Nature publishing an article/opinion piece like “The Science Fiction Science Method”. Based only on the title and the rambling abstract, it seems a better fit for publication in the Onion and leaves me wondering about just how far Nature has succumbed to the pecuniary blandishments and threats of Neoliberal science. Was there not a field called Scientific Ethics that arose after the testing and first use of the atomic bomb? As I as recall this field studied Scientific Ethics and the societal implications and impacts of Science. On its face the science fiction science method is not even worthy of comparison with the apparently deprecated field of Scientific Ethics and its offshoots.
Something like what I understand as the science fiction science method has had currency for some time, and has resulted in some remarkable results. A great deal of science fiction science helped birth the growing insanity of the AI bubble. The unpleasant side affects of today’s approach toward Kurzweil’s “singularity” only became ‘news’ after the AI madness was well entrenched. George Orwell’s application of the “science fiction science method” was largely ignored until it had become far too late, rather like whinging over the horrors of nuclear weapons after they had become a DoD cash cow, and tool/profit area heavily fostered and sponsored by Truman’s ‘Cold War’.
Following a train of thought suggested by the reference to -speculative behavioral research- that I believe may lead to a spur track to this post’s concerns about Science, I arrive at a concern that has greatly troubled me for some time now. Watson’s self-serving book about the discovery of the structure of DNA impressed me with the stark contrast Watson drew between the method he and Crick applied to the problem and that of Franklin and Wilkins as they painstakingly labored their way through X-ray crystalographic data toward the same result.
The journal articles I look at and sometimes try to read, are long on descriptions of methods, usually statistical methods, and review of prior publications, but light on describing motivations for their research and extremely allergic to drawing any speculative inferences from their results. Most of the conclusions are statements along the lines of ‘X’ appears to result in ‘Y’ usually supported by some kind of statistics –both ‘X’ and ‘Y’ very particulat to their study– and seldom any further conclusions or speculation beyond the ever present claim of the need for more research. I hope I am just unlucky in the journal articles I have selected, but I doubt that is the case.
My father taught high school chemistry. I remember his constant emphasis on not asking ‘why?’ and instead asking ‘how’, even though I thought then and still do, ‘why?’ is the far more interesting and provocative question to ask. My impression of Chemistry and Biochemistry is that this aversion to ‘why?’ is much stronger than what seems the case in Physics and Astrophysics, at least in areas like the nature of space and time or the interpretation of strange results shown in the latest views from the telescopes. I think of the study of the chemical bond, protein structure and function, along with the properties of other large chemical structures as a terra ignota [ignotus?] at least equal to the mysteries of Physics and Astrophysics and equally deserving of speculative theories to help guide research. AI declaring the “answer” does not match understanding the details of structure similar to the way details are studied for skyscrapers.
About RoundUp resistant GMO seeds, the ‘weed’ problem indeed never needed to occur. According to old-timer farmers, back when the McCormick Reaper was the height of technology, there was no weed problem. Why? Because the reaper cut and wrapped the grain in stooks while it was still green and the weed seeds never had a chance to ripen in the field. Conversely, modern combines harvest fully ripened grain and blow out a plethora of ripened weed seeds.
A scientific science fiction could explore an agriculture that perfected the Reaper and bypassed the combine.
One does not harvest green grain. Do you mean that the grain was cut before the weeds had gone to seed? This sounds possible.
Minor terminology point: The binders I have seen, of which it looks like the McCormick Reaper is one, produce sheaves of grain which were then gathered up by hand and stood into stooks, generally, in my experience, consisting of roughly 6 to 8 sheaves.
Treat every proposed new technology like a crime, and ask: cui bono? Otherwise your Science Fiction Scientific Method will not yield useful results. Obviously AI is largely being forced on “users” that don’t want it, so it can’t be for their benefit.
We seem to have spent centuries philosophizing about subjects and objects, only to have an algorithm show up that refuses to be either. Does it feel like you are using an object? Or chatting with a subject? Or does it feel like something in between? And does any of it feel normal? (See R.B Griggs’ recent essay “Schrodinger’s Chatbot,” March 5, 2025 on his Substack)