Advances in Mind-Decoding Technologies Raise Hopes (and Worries)

Yves here. The mind-reading technology is deeply troubling. We already have too many tech-created and tech-enabled losses of privacy. The researchers are troubled too, but that is not stopping them from pressing forward with their research, albeit with some handwaves about the need for legal protection.

By Fletcher Reveley, a freelance writer based in Tucson, Arizona, and a senior contributor to Undark. Originally published at Undark

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT-1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”
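
The article describes the decoding pipeline only at a high level, so a brief illustrative sketch may help fix ideas. One common way to frame such a semantic decoder is to have a language model propose candidate word sequences while a separately trained "encoding model" predicts the brain response each candidate would evoke; the candidate whose predicted response best matches the observed scan wins. The toy Python sketch below shows that framing only. The embedding function, the encoding-model weights, and the candidate list are all stand-ins invented for illustration; this is not the Texas team's code, and the real system is trained on many hours of per-subject scan data.

# Toy sketch of a candidate-scoring semantic decoder (illustrative only).
import numpy as np

RNG = np.random.default_rng(0)
N_VOXELS = 200      # stand-in for the number of recorded voxels
N_FEATURES = 50     # stand-in for the size of a semantic embedding

def embed(text: str) -> np.ndarray:
    # Stand-in for a language-model embedding of a word sequence.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=N_FEATURES)

# Encoding model: maps semantic features to predicted voxel responses.
# In a real system its weights would be fit by regularized regression on
# the training scans; here they are random, purely for illustration.
W = RNG.normal(size=(N_VOXELS, N_FEATURES))

def predict_response(text: str) -> np.ndarray:
    return W @ embed(text)

def score(candidate: str, observed: np.ndarray) -> float:
    # Correlation between predicted and observed responses (higher is better).
    return float(np.corrcoef(predict_response(candidate), observed)[0, 1])

# One decoding step: keep the candidate whose predicted brain response best
# matches the observed scan. A real decoder would draw its candidates from
# the language model's next-word probabilities and search over time.
observed = predict_response("I don't have my driver's license yet") + RNG.normal(scale=0.1, size=N_VOXELS)
candidates = [
    "I don't have my driver's license yet",
    "the weather was cold that morning",
    "she handed me a cup of coffee",
]
print("best candidate:", max(candidates, key=lambda c: score(c, observed)))

The hours of in-scanner podcast listening the article describes are what would fit the equivalent of W above for each participant, which is one intuition for why, as reported later in the piece, the decoder did not transfer from one person to another.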

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

In the last 10 years, the field of neurotechnology has proliferated at an astonishing pace. According to a report by NeuroTech Analytics, an industry research firm, annual investment in the sector increased more than 20-fold between 2010 and 2020, rising to more than $7 billion per year. Over 1,200 companies have crowded into the space, while large-scale government efforts, such as former president Barack Obama’s BRAIN Initiative, have unlocked billions in public funding. Advances in the field have proved life-changing for individuals living with conditions like Parkinson’s, spinal cord injury, and stroke. People who cannot speak or type due to paralysis have regained the ability to communicate with loved ones, people with severe epilepsy have significantly improved their quality of life, and people with blindness have been able to perceive partial vision.

But in opening the door to the brain, scientists have also unleashed a torrent of novel ethical concerns, raising fundamental questions about humanity and, crucially, where it may be heading. How society chooses to address the ethical implications of neurotechnology today, scientists like Yuste argue, will have profound impacts on the world of tomorrow. “There’s a new technology that’s emerging that could be transformational,” he said. “In fact, it could lead to the change of the human species.”

For Huth — a self-confessed “science fiction nerd” — the expanding frontiers of BCI technology are a source of great optimism. Still, in the weeks and months following the decoder experiments, the unsettling implications of the device began to nag at him. “What does this mean?” he recalled thinking at the time. “How are we going to tell people about this? What are people going to think about this? Are we going to be seen as creating something terrible here?”


Yuste knows well the feeling of being unsettled by one’s own research. In 2011, more than a decade before Huth and Tang built their decoder, he had begun experimenting on mice using a technique called optogenetics, which allowed him to turn specific circuits in the animal’s brains on and off like a light switch. By doing so, Yuste and his team found that they could implant an artificial image into the mouse brains simply by activating brain cells involved in visual perception. A few years later, researchers at MIT showed that a similar technique could be used to implant false memories. By controlling specific brain circuits, Yuste realized, scientists could manipulate nearly every dimension of a mouse’s experience — behavior, emotions, awareness, perception, memories.

The animals could be controlled, in essence, like marionettes. “That gave me pause,” recalled Yuste, later adding, “The brain works the same in the mouse and the human, and whatever we can do to the mouse today, we can do to the human tomorrow.”

Yuste’s mouse experiments came on the heels of a remarkable decade for neurotechnology. In 2004, a quadriplegic man named Matthew Nagle became the first person to use a BCI system to restore partial functionality; with a small grid of microelectrodes implanted in the motor cortex of his brain, which, among other things, is responsible for voluntary muscle movements, Nagle was able to control his computer cursor, play pong, and open and close a robotic hand — all with his mind. In 2011, researchers at Duke University shared that they had developed a bidirectional BCI that allowed monkeys to both control a virtual arm and receive artificial sensations from it, all through stimulation of the somatosensory cortex, which processes senses including touch. This paved the way for prosthetics that could feel. The types of movements possible with BCI-controlled robotic arms also improved, and by 2012 they could manipulate objects in three dimensions, allowing one woman with paralysis to sip coffee simply by thinking about it.

A genetically engineered mouse in Rafael Yuste’s lab exhibits a surgically implanted headplate to record and manipulate neuronal activity. Yuste uses techniques such as optogenetics to conduct experiments that turn specific circuits in the animals’ brains on and off like a light switch, allowing them to be controlled, in essence, like marionettes. Visual: Kitra Cahana for Undark

Meanwhile, other researchers were beginning to investigate the possibilities of using BCIs to probe a wider range of cognitive processes. In 2008, a team led by Jack Gallant, a neuroscientist at the University of California, Berkeley, and Huth’s former adviser, made a first step toward decoding a person’s visual experience. Using data from fMRI scans (which measure brain activity by assessing changes in blood flow to different regions), the researchers were able to predict which specific image, out of a large set, a study participant had seen. In a paper published in the journal Nature, the team wrote: “Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.”

Three years later, a postdoctoral researcher in Gallant’s lab, Shinji Nishimoto, went beyond Gallant’s prediction when he led a team that successfully reconstructed movie clips from recordings of participants’ fMRI scans. “This is a major leap toward reconstructing internal imagery,” Gallant said in a UC-Berkeley press release at the time. “We are opening a window into the movies in our minds.” Just a year later, a Japanese team led by Yukiyasu Kamitani threw that window open fully when they successfully decoded the broad subject matter of participants’ dreams.

But as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada, China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH had envisioned for the 12-year span of the BRAIN Initiative itself.

Yuste and the others had made progress in developing an ethical framework for these emerging technologies. But in the clamor of innovation, the question became: Would anyone pay attention?


When Huth and Tang’s semantic decoder began to yield results in the University of Texas experiments, Huth had two conflicting reactions. On the one hand he was gleeful that it worked and that it held promise as a communication aid. But it also stirred deep apprehensions about the misuse of such technology. His mind leapt to dystopian scenarios: thought police, forced interrogations, unwilling victims strapped to machines. “That was the first thing we were kind of scared of,” he said.

Like Yuste before them, Huth and Tang began a period of deep introspection about the ethics of their work. They read widely on the topic, including the Morningside Group’s 2017 article in Nature and a 2020 paper by a team led by Stephen Rainey, a philosopher at Oxford University. Although future uses of such technologies would perhaps be beyond their control, it nevertheless became clear to them that certain practices should be completely off limits — decoding from a resting state, when a subject is not actively performing a task, for example, or decoding without the participant’s knowledge. Brain decoding should not be used in the legal system, they determined, or any other scenario where fallibility in the process could have real-world consequences; in fact, it should only be used in situations where decoded information could be verified by the user. (People with locked-in syndrome, for example, should be asked yes or no questions to verify the decoded information is correct.) Furthermore, Huth and Tang concluded that employers should be prohibited from using brain data from their employees without consent, and that it was essential for companies to be transparent about how they intend to use brain data collected through consumer devices.

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional near-infrared spectroscopy, or fNIRS. Although fNIRS offers lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices simply by blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
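
For readers curious what "blurring" the data means in practice, spatially smoothing each fMRI volume so that neighboring voxels blend together is one simple way to approximate the coarser resolution of a wearable device. The snippet below is a minimal illustration of that idea; the data shape, the use of a Gaussian filter, and the kernel width are assumptions made for the example, not the parameters the researchers used.

# Illustrative only: smooth a toy fMRI volume to mimic lower-resolution fNIRS.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy single-timepoint fMRI volume (x, y, z voxels).
fmri_volume = rng.normal(size=(64, 64, 32))

# Spatial smoothing blends neighboring voxels, approximating the coarser
# spatial resolution of a wearable fNIRS headset.
simulated_fnirs = gaussian_filter(fmri_volume, sigma=3.0)

# A decoder could then be re-evaluated on the smoothed data to estimate how
# much accuracy is lost at the lower resolution.
print(round(fmri_volume.std(), 3), round(simulated_fnirs.std(), 3))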

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties.) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails were put in place.

But while Huth and Tang were grappling with the ethical consequences of their work, Yuste, halfway across the country, had already gained clarity about one thing: These conversations had to move out of the theoretical, the philosophical, the academic, the hypothetical — they needed to move into the realm of the law.


On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

Over the next several years, Yuste traveled to Chile repeatedly, serving as a technical adviser to Girardi’s political efforts. Much of his time was spent simply trying to raise awareness of the issue — he spoke at universities, participated in debates, gave press conferences, and met with key people, including, Yuste said, one three-hour sit down with Chile’s then-president, Sebastián Piñera. His main role, however, was to provide guidance to the lawyers crafting the legislation. “They knew nothing about neuroscience or about medicine, and I knew nothing about the law,” Yuste recalled. “It was a wonderful collaboration.”

Meanwhile, Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

The resounding legislative victory in Chile was an encouraging first step for the incipient neurorights movement. But Yuste and Girardi also realized the limitations of legal protections at the national level. Future technologies, Girardi explained, would easily traverse borders — or exist outside of physical space entirely — and would develop too rapidly for democratic institutions to keep apace. “Democracies are slow,” he said. It takes years to pass a law and “we’re seeing the rate at which the world is changing. It’s exponential.” National regulations could provide some useful legal guardrails, Yuste and Girardi realized, but they would not be sufficient on their own.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant on Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

The advantage of extending interpretation of existing laws, Genser explained, is that signatories to those treaties would be obligated to immediately bring their domestic laws into compliance with the new interpretations — a way to stimulate action on neurorights at the international and national levels simultaneously. In the case of the ICCPR, Genser said, “there would be a clear implication for all states — 170-plus states party to that treaty — that they now need to provide a domestic right of mental privacy in order to comply with their obligations under the treaty.”

But even though Genser believes this avenue would provide the most expedited path towards enshrining neurorights in international law, the process would nevertheless take years — first for the various treaty bodies to update their interpretations, and then for national governments to wrestle their domestic laws into compliance. Legal guardrails always lag behind technological progress, but this could become especially problematic with the accelerating pace of neurotech development.

This lag is deeply problematic for people like Girardi, who question whether institutions are capable of withstanding the changes to come. How, after all, can the law keep up when humans are living in the world at the speed of light?


But while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

The results suggest that for now, at least, nightmarish scenarios of nonconsensual mind-reading remain remote. With these ethical concerns attenuated, the scientists have shifted their focus to the positive dimensions of their invention — its potential, for example, as a tool to restore communication. They have begun collaborating with a team from Washington University to research the possibility of a wearable fNIRS system that is compatible with their decoder, perhaps opening the door to concrete medical applications in the near future. Still, Huth readily admits the value of dystopian prognosticating, and hopes it will continue. “I do appreciate that people keep coming up with new bad scenarios,” he said. “This is a thing we need to keep doing, right? Thinking of ‘how could these things go wrong? How could they go right, but also how could they go wrong?’ This is important to know.”

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September, neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

The energetic pace of Yuste’s advocacy work is perhaps motivated by a conviction that the window to act is rapidly closing, that the world of tomorrow no longer looms on some faraway horizon. “They used to ask me, ‘When do you think we should get worried about mental privacy?’” he recalled. “I’d say ‘Five years.’ ‘And how about worrying about our free will?’ I said ‘10 years from now.’ Well guess what? I was wrong.”

Huth agrees that now is the time for action. These technologies may still be in their infancy, he explained, but it is far better to be proactive in establishing mental protections than to wait for something terrible to happen.

“This is something that we should take seriously,” he said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”


27 comments

  1. ciroc

    Just before my grandfather died late last year, he tried to talk to my mother and me, but we could not understand what he was saying. If we could have extracted his thoughts from his EEG, we would have known what his last wishes were.

  2. Steve M

    What a truckload of possibilities. Like science fiction indeed.

    My mother has dementia. Would that be a comedy or horror?

    How fun would it be to aim that thing at the candidates during a debate?

    How many times have I finished a sequence of events with the words “What the f**k was I/the other thinking?” So I’ll believe it when I see it.

    After 150 years or so, they still can’t get my car to run properly. So, what’s this latest technology?

    Et cetera

    1. Randall Flagg

      Bringing a type of “Minority Report” so to speak, to all of us.
      As the tag line goes, “If you see something, say something.”
      This time, “If they think it, report it”…

      And as Steve M suggests above, what fun to get into the heads of any of our political leaders.

      Can’t wait until the “South Park” guys do an episode on this…

    2. JBird4049

      >>>What could go wrong?

      Being as I believe that Edward Murphy was a sunny optimist, I believe that it is much more than we can imagine that can go wrong. Anything I write will be an understatement.

      What is important to understand is that it is not what the Brain Reader will be able to do, it is what everyone will be bamboozled into believing it did. Drug tests, DNA tests, fingerprints, gunshot detectors, and AI are routinely misunderstood, manipulated, misused, or just falsified to convict the innocent often by state sanctioned “experts.” Claims of being able to mind read, especially if they were true, are horrifying, but any results, even failed ones, would be used by the state to prosecute and convict people regardless of their guilt or innocence.

  3. SocalJimObjects

    The possibilities are endless: Inception Part 2, Snow Crash, The Matrix, Neuromancer, etc. Forget ChatGPT, we are going to pipe the entire human knowledge into the brain, creating the ultimate intelligence, heck if they can get this working before the election, Biden might actually sound intelligent for once.

    I think I will still bet on the Peak Energy guys being right eventually and putting a stop on all this nonsense soon enough.

    1. Late Introvert

      Any technology that requires huge amounts of energy will fail in the coming years. I don’t have links, that’s just my gut, and my gut has been right more than wrong in my 5 decades.

  4. Xquacy

    That is no ‘mind-reading’ tech any more than a video camera is a ‘physical law detecting’ tech. It is a device that pattern matches MRI scans with vocalized speech. Its effectiveness is contingent on MRI scans being generalizable across every individual brain, which is promptly denied by the findings, and upon there being a linear correlation between brain scans, vocalized speech, and thought:

    First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training.

    The above quote casts doubt on even a correlation between the first two, let alone all three. It is entirely likely that such engineering approaches will themselves become a benchmark of what is happening in the brain, absent any other technique to determine correlation between brain activity and speech. But that is hardly a serious effort to advance an understanding of the mind.

    What is interesting about engineering applications to monitor, emulate or control human behaviour (say including LLMs and the like) is not the technology itself – which is rather underwhelming in its results – but how enamoured many people are by them—with an almost mystical belief in their power, which is suitably accompanied by standards of expectation one finds among members of a religion rather than among scientists. Suits the authorities just fine, who can razzle dazzle the rabble with such technology and act like it’s the cutting edge of truth finding, while the rabble, still dazed and stupefied, accept the most flexible interpretation of the facts in administration of guilt, for to go against such high tech verdicts would be to go against technology itself!

    In short, I see such applications of technology for social engineering in the same way I see the use of dense and obtuse language in the fields of economics and law.

    1. lyman alpha blob

      Thank you. I had a similar take on the article – it reminded me of one I read years ago with claims that dolphin language had been decoded. They had basically hooked up tapes of recorded dolphin sounds to one of those party light machines. Certain dolphin sounds caused the lights to flash in certain ways, and you could then find patterns in the lights. When you got into the details, in no way whatsoever did the patterns of light correlate to definite dolphin “words”. It was all just a party trick that somehow made it through whatever review process the Science Daily website had at the time. I’m sure that dolphins do think and communicate with each other, but human beings are far from understanding it.

      I’m quite sure I could develop a “machine” (which could be a cardboard box with some knobs drawn on it) that could determine my own thoughts, with me to verify its accuracy. I’m also quite sure I could not do it for you or anyone else.

  5. vidimi

    going to be interesting to see the innovations to beat these brain-reading algorithms. Creating eclectic, jumbled thoughts will become an artform.

    1. vao

      In 1937, famous French author André Maurois published the short story “La machine à lire les pensées” (the mind-reading machine).

      In it, he described a situation where the exact kind of device discussed in the article becomes a commodity — everybody can buy a portable one and decipher the thoughts of people near-by. Of course, mayhem and paranoia ensue.

      For a while. People’s thoughts are mostly disordered and jumbled, and when not, generally mediocre, confused, pedestrian, and uninteresting. Furthermore, a whole market of books and techniques to resist the mind-reading machines springs up with such advice as to read or think about boring things, or to solve riddles and the like.

      The devices end up falling into disuse, being switched on occasionally only for entertainment.

    2. caucus99percenter

      Tenser, said the tensor …

      A jingle used by a murderer to block telepathic reading of his mind. See Alfred Bester’s classic sci-fi novel The Demolished Man

    3. t

      I suppose atypical people might be less concerned than the average bear. But I pity those with intrusive thoughts – which I think is most people, at least sometimes.

  6. Goat_farmers_of_the_CIA

    The whole problem with neuroscience’s approach to reality, its epistemology, is not far from KLG’s criticism of modern medical research: it’s evidence based, instead of science based. In other words, it’s all about correlations without figuring out the underlying processes and systems. Neuroscience has one big elephant in the room that sometimes feels like it’s really a bête noire: the nature of consciousness. That’s one concept and phenomenon that goes much, much deeper than correlations between phonemes and morphemes and brain waves, which just like LLMs, never go beyond the merely formal, superficial level of language. But because neuroscience, like much medical research, has become profoundly unethical due to an MBA-like focus on the bottom line, it can’t go beyond that surface as it doesn’t provide immediate profits and/or research grants. Xquacy’s comment above was right on target and just confirmed what I suspected. Yet another instance of neuroscience vaporware.

  7. Susan the other

    I can see it now. A huge new market for the Thought Dildo. Obviously, since you can’t stick it in your head, everyone will be sticking their heads up it. What a profit center. This sounds like “full steam ahead and damn the complexities.” and etc.

  8. New_Okie

    While the mind reading potential is certainly the stuff of dystopian sci-fi, I am most curious about the potential to implant memories in mice and manipulate them “like marionettes”. Do these memories last after the device is removed? Or what about conditioning applied while the device was on? Would that not last long after it was removed?

  9. GDmofo

    ‘ dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.” ‘

    I can already see the Amazon patents based on this rolling in. I wonder if they’ll integrate into the wage-cage?

  10. BrooklinBridge

    What could possibly go wrong (especially as long as they create laws to take care of it)?

    In a world where genocide is never used, never mind thought of by the very people whose parents and grandparents were subjected to it, it’s comforting to know that no one would ever ever misuse mind reading/influencing technology. We just need a nice soothing sounding framework as it were in which it is specified when and where such invasive capacity should and shouldn’t be used and some ambiguous laws thrown in for good measure, and presto voila, safe and sound that no one will succumb to the desire for revenge or profit or power. For proof, you can ask the Gazans about the moral and ethical issues involved and you can give a quick phone call to a big company such as Experian (they will pick up before you finish dialing) about the ironclad technical side of never ever, EVER losing their customers’ sensitive data, including SS numbers of millions. AND, we can be comforted that if it DID somehow happen, these players could argue they had their fingers crossed behind their backs when they made the promises.

    So why worry…

  11. Susan the other

    So how will we, one day far away, actually create synthetic thoughts from the memory traces of sensory and extrasensory inputs and from thence to vocal sounds originating in strange combinations of primal warning noises and thence to images scratched in wood and rock and thence to a code of scratches which gradually permute in complexity to a lexicon of logical reactions which grow to become a vast etymology which spirals in on itself under too much confusion and dissonance to become contradictory and absurd? And we can actually see and hear this absurdity and it makes us laugh, but our ability to create the proper word seems to stop at that point. But miraculously, in unison, we all recognize what was funny.

  12. Tom B.

    A pretty good recent book on the topic:
    “The Battle For Your Brain”
    https://us.macmillan.com/books/9781250272966/thebattleforyourbrain
    It’s not a hyper alarmist klaxon call, but the technology discussed is much further along than I had realized. I can see much potential for abuse by the usual suspects. Real-time adaptive advertising, mandatory employee attention monitoring (already emerging in some industries like mining, construction and transport), even more annoying user interface gimmicks.

  13. WillD

    There is no question that this technology will be used by governments and other state-sponsored actors for nefarious purposes – just like with all other advanced tech.

    No laws, courts, or elected officials will be able to stop it – simply because they are powerless when it comes to even knowing about such weapons, and even if they knew and legislated to control their use, the people behind them act totally outside the law, just as they always do now.
