Tech Companies Are Getting Into Neurotechnology. Should We Worry?

Yves here. The photo for this article shows a man wearing virtual reality goggles. The article makes clear that the data sources come from neuronal activity, such as neuromotor signals, presumably captured via skin monitors (think of a less intrusive version of the stickies and leads used for EKGs). Of course, these developments were long anticipated in science fiction; see Neuromancer and the many tales with brain implants and related human capability enhancement as a major plot device.

The concerns raised here are of yet more individual data capture and sale and loss of privacy. Paranoid Luddites like me cannot fathom why so many are cavalier about this sort of thing.

Back to the goggles. The fact that one use case is better VR makes me wonder whether people who don’t have binocular vision (and so have no depth perception1, can’t use VR, and therefore would not be included in datasets from VR-type applications) will be excluded from some of these “advances,” at least for a while.

By Michael Nolan, a science and technology writer. His writing covers neurotechnology, data privacy and emerging neuroscience research. Originally published at Undark

The past few decades of neuroscience research have produced a wide array of technologies capable of measuring human brain activity. Functional magnetic resonance imaging, implanted electrode systems, and electroencephalograms, or EEGs, among other techniques, have helped researchers better understand how our brains respond to and control our bodies’ interactions with the world around us.

Now some of these technologies — most notably, EEG — have broken out of the lab and into the consumer market. The earliest of these consumer-facing neurotechnology devices, relatively simple systems that measured electrical signals conducted across the skull and scalp, were marketed mostly as focus trainers or meditation aids to so-called “biohackers” seeking to better themselves through technology. However, tech industry giants have lately taken notice, and they are exploring inventive new ways to make use of the inner electrical conversations in our brains.

In 2019, Meta, then still known as Facebook, paid nearly $1 billion to purchase CTRL-Labs, a startup whose flagship product was a wristband that detects neuromotor signals, allowing the wearer to manipulate a computer system using a range of forearm, hand, and finger movements. Last year, Snap, the parent company managing Snapchat, spent an undisclosed sum to acquire NextMind, whose headset uses EEG technology to let a user “push a virtual button simply by focusing on it.” Even Valve, the video game publisher that manages the massive Steam video game store, has partnered with brain-computer interface developer OpenBCI, with an eye toward integrating brain-computer interfaces into virtual reality headsets.

The promise of these systems is to give users a new, potentially more widely accessible way to control computers — an alternative to standard interfaces such as mice, handheld controllers, and touchscreens. What is sure to appeal to tech industry behemoths, however, are the troves of real-time data that these devices collect about a person’s neuronal activity. This latest revolution in neurotech could conceivably yield a windfall for companies like Meta and Snap, which have built their business models around data-driven advertising. For the average consumer, however, it may portend a new kind of threat to data privacy — one that regulators seem woefully unprepared to corral.

Companies like Meta and Snap make substantial profits by collecting data on users’ web activity, using those data to identify highly specific target demographics for advertising clients, and selling access to user information to third-party businesses and researchers. A key tenet of this model is the idea that, with enough information about individuals and their habits, developers can divine, with fine-grained specificity, how a certain person will respond to certain advertisements. To that end, companies might use feedback surveys to try to determine whether an ad was successful, or track people’s online interactions with ads through measures such as clickthrough rates or the time a person spends hovering their mouse pointer over a given image or video.

Tracking a person’s brain activity in real time, however, could in theory offer a more reliable, more precise, and personalized representation of an ad’s effectiveness. In laboratory experiments, researchers have shown that certain EEG signals can be used to accurately detect when a person has seen a strong sensory stimulus, or suddenly starts paying attention to something new. These signals, called event-related potentials, can in turn be used to gauge user interest and assess advertisement effectiveness. For platforms like Snapchat and Meta, it could herald a faster, more accurate way to get feedback about ad performance.
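To make the averaging idea behind event-related potentials concrete, here is a toy sketch in Python. Everything in it is synthetic and hypothetical (the epoch length, peak location, and amplitudes are invented for illustration, not drawn from any real system): a single stimulus-locked EEG epoch is dominated by noise, but averaging many epochs lets a consistent evoked response emerge.

```python
import random

random.seed(0)

def synthetic_epoch(has_erp, n_samples=100, peak_at=30):
    """One stimulus-locked EEG epoch: Gaussian noise, plus a small
    deflection around `peak_at` if an evoked response is present.
    Purely synthetic toy data."""
    epoch = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    if has_erp:
        for i in range(peak_at - 5, peak_at + 5):
            epoch[i] += 2.0  # evoked deflection, buried in single-trial noise
    return epoch

def grand_average(epochs):
    """Average across trials, sample by sample."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

# Average 200 trials: the evoked peak stands out; noise averages toward zero.
avg = grand_average([synthetic_epoch(True) for _ in range(200)])
peak_region = sum(avg[25:35]) / 10
baseline = sum(avg[60:70]) / 10
print(f"mean amplitude near peak: {peak_region:.2f}, baseline: {baseline:.2f}")
```

The point of the sketch is only the statistics: with enough trials, even a response much smaller than the trial-to-trial noise becomes detectable, which is what makes attention-related signals measurable at all.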

The practice of measuring neurological activity to gain insights into consumer behavior, known as neuromarketing, has been around since the early 1990s. Neuromarketing methods have so far been deployed only in controlled research environments, and it’s unclear how well, if at all, they will work in the wild. Still, the recent moves by ad-revenue-driven social media platforms to develop brain-computer interface technology suggest that neuromarketing might be on the cusp of going mainstream. With companies like Meta and Snap already investing billions of dollars into virtual and augmented reality, it is not a stretch to imagine them integrating EEG signal collection into the suite of user data already being collected through head-mounted VR and AR devices. In fact, OpenBCI, which is collaborating with Valve, has already integrated EEG into its Galea VR headset.

Social media firms have long aggregated user data for the purpose of targeted advertising, but the prospect of adding neurological data to this brokerage represents uncharted territory laden with risks.

For one thing, it’s not clear what neuromarketing would mean for the user experience. Neuromarketing metrics are produced from measurements of basal electrochemical reactions in a person’s brain — they are less a genuine measure of whether someone is interested in a product than they are the neurological equivalent of a knee-jerk reflex test. Algorithms that optimize advertising content based on neuromarketing metrics could potentially lead developers to pepper users with the most eye-catching stimuli possible, turning EEG-integrated VR use into a bombardment of weapons-grade annoyance.

Large-scale neuromarketing could also have unforeseen negative consequences for data privacy. If platform companies like Meta and Snap were to connect even rough measurements of a person’s brain activity with the dauntingly large stores of data they already record — including information on users’ location, buying habits, and online activity — it could provide them with a much more complete image of their users than the average person might be comfortable handing out. Although the capabilities of EEG and other neurotechnologies fall far short of mind reading, they capture sensory reactions that users have little if any control over, and that could in theory reveal attentive responses to intrusive environmental stimuli a user didn’t intend to focus on.

Algorithms linking heightened neural responses to a world of distractions may erroneously flag arbitrary interactions as important or meaningful.

Meanwhile, laws and regulations of neural data privacy are not just behind the curve — they are nearly nonexistent. Legislation such as Europe’s General Data Protection Regulation gives individuals some control and protection over their own digital footprint, and at least two states in the U.S. have enacted biometric privacy laws that protect people from unknowingly being subjected to physiological measurements in public spaces. But some experts have argued that neural data privacy is a special case that requires a new regulatory approach. So far, technology firms looking to build out neuromarketing efforts and other neural data monetization schemes have largely been left to police themselves.

That should be enough to give all of us pause.

____

1 Those with no depth perception function pretty normally because they can judge distance by motion against a background. But sports like golf, where you have to “see” how far away the ball is while keeping your head still, are probably not on.


19 comments

  1. fjallstrom

    I think this is mostly vaporware, which is its own set of problems.

    I actually have a neurological implant that helps with management of a particular pain situation. It is very much not smart tech, and I am glad it is that way. Simply put, it puts an electromagnetic field over nerves that are involved in the pain, and for many patients it works. Now, I am a curious person and I have an education that included an understanding of electromagnetic fields, so I have tried to look into how it works. And as far as I can tell, nobody really knows (but if it works, it works). Understanding a nerve signal for pain should be much simpler than brain activity. This is mature technology with an established market, and a larger potential market if the mechanisms were better understood.

    There is also the case of the dead fish MRI study years ago, that indicated that much of the brain activity goes on in the head of the researcher, not the object (as dead fish probably don’t think).

    This is not to say that there won’t be problems. But I think they will be more akin to what we see today in neurological implants: vaporware leading to unnecessary suffering for test animals and test persons, side effects (including nausea and migraines), and abandoned technology with patents that hinder health care from using the technology and make care unnecessarily hard. With a side dish of wasted resources and regulatory changes that just so happen to benefit capital and surveillance.

    Like the Metaverse, I simply don’t think the technology is there to make consumer products that people actually want to buy.

      1. podcastkid

        Just in case you could hazard a guess (or may know), RonR, if for instance the area’s say on the left side of back sort of under the belt, do you think their adhesives are necessary, or would you say two LARGE bandages might work just as well? For worker who moves all day long all over the place.

        The problem’s a matter of vertebral facets; don’t even know if this tech would touch the pain.

  2. John R Moffett

    As a neuroscientist I find the merger of capitalism and neuro-interfaces a bit disturbing, because the ends are not at all benevolent. When used to help quadriplegics, this type of interface is very helpful, but that is not what we are talking about here. The described uses here are just one more abuse that capitalism wants to impose on us. I predict this won’t work the way they claim it will, and will probably fade into obscurity after many failed attempts to use these methods for marketing purposes.

    1. GramSci

      Thank you. And in the case of quadriplegics, the use case is more one of the human brain learning to use the prosthetic than the technology “reading” the brain.

      But if you want to try DIY brain reading, there’s a kit you can buy

    2. Mildred Montana

      Too much money today chasing too few opportunities. Ergo, much hype. Ergo, much wasted money.

      Throwing money at a problem doesn’t necessarily solve it. The history of science shows that answers to perplexing problems are found only after a decades-long accretion of knowledge (and a helpful dose of serendipity). Money is a minor factor—if a factor at all.

      But big tech is forever coming up with new answers. To quote a financial writer I respect: “Innovations are like genetic mutations. Most are mistakes. Most fail.”

  3. Michaelmas

    How could this vacuuming up of neurological data play out? One way is like this —

    Advertising in Dreams is Coming: Now What?
    https://dxe.pubpub.org/pub/dreamadvertising/release/1

    ‘Coors recently announced a new kind of advertising campaign. Timed for the days before Super Bowl Sunday, it was designed to infiltrate our dreams [1]. They planned to use “targeted dream incubation” (TDI) [2] to alter the dreams of the nearly 100 million Super Bowl viewers the night before the game—specifically, to have them dream about Coors beer in a clean, refreshing, mountain environment—and presumably then drink their beer while watching the Super Bowl. Participants in what Coors called ‘the world’s largest dream study’ would get half off on a 12 pack of Coors; if they sent the link to a friend who also incubated their dreams, the 12 pack was free. With this campaign, Coors is proudly pioneering a new form of intrusive marketing. “Targeted Dream Incubation (TDI) is a never-before-seen form of advertising,” says Marcelo Pascoa, Vice President of Marketing at Molson Coors [3] ….’

    Meta’s New Headset Will Track Your Eyes for Targeted Ads
    https://www.gizmodo.com.au/2022/10/metas-new-headset-will-track-your-eyes-for-targeted-ads/

    ‘This week Meta revealed the Meta Quest Pro, a new virtual reality headset that costs about as much as a pre-inflation mortgage payment. It’s a sleek device, with upgraded hardware, advanced features — and cameras that point inward to track your eyes and face.

    ‘To celebrate the $US1,500 ($2,082) headset, Meta made some fun new additions to its privacy policy, including one titled “Eye Tracking Privacy Notice.” The company says it will use eye-tracking data to “help Meta personalise your experiences and improve Meta Quest.” The policy doesn’t literally say the company will use the data for marketing, but “personalizing your experience” is typical privacy-policy speak for targeted ads’

    1. Terry Flynn

      Eye tracking is (unfortunately?) real and has been successfully shown to predict real-life decisions….Another neurological/physiological outcome that is more easily captured “outside the lab or without VR headsets” is speed of decision making. It has gone from “traditional basic animal research” all the way up to in-the-field research, and it predicts real-life purchasing decisions well.

      I’m going to refrain from commenting on ethical issues….. Partly because I have a major declaration of interest: I’m a co-author on some of the key papers that first showed choice modelling and real life decisions could be predicted based on how fast the respondent clicked on options in online (structured) experiments. References available in my Google scholar profile if you’re interested: Terry N Flynn (not the marketing one based in Canada).

      My involvement ended when I exited academia but various neurological and physiological outcomes were taking off around then and found to be surprisingly good at predicting real life human decisions.

  4. Sarah Henry

    “Targeted dream incubation” sounds like something out of a horrifying dystopian novel…that being said, I doubt there’s any truth behind the claim that the contents of people’s dreams are manipulable by conscious will in the way the ad industry (among others) might think. In the Coors “study”, for example, there’s no way to tell if the participants were influenced by some mysterious breaking-and-entering into their personal unconscious, as opposed to the up-front offer of a 50% product discount for being a study participant, or the promise of a totally free 12-pack for as low-effort an activity as sending a friend a video link. Designing a silly and unscientific “study” like this for marketing purposes merely adds the vague aura of “science” and “innovation” to an otherwise boring promotional discount program. Coors didn’t need to actually infiltrate anyone’s dreams for their marketing purpose in all this to be accomplished. They get all the attention they want just by associating themselves with the emotionally charged idea that they could.

    Also, shame on Dr. Barrett for lining her pockets with this steaming pile of [family blog].

  5. TomDority

    “unknowingly being subjected to physiological measurements in public spaces”
    Given that politicians have had extensive PR staff at every level for decades, and given the vast amount of marketing research and development during that time, it is no surprise that “laws and regulations of neural data privacy are not just behind the curve — they are nearly nonexistent,” because regulation would just be killing the goose that lays golden eggs.

  6. ArvidMartensen

    It’s all about crowd control. Everything that capitalism and government does is about imposing the will of a very few on the billions of many.
    I think the ultimate dream is that understanding the brain in full will enable the few to have full control of everything we think and do. We will do as we are told!

  7. Anders K

    Speaking as someone without binocular vision, I can say that VR glasses do still give a better impression of “being there” – but this is due mostly to better correspondence between movement and change of perspective in the game. It is better than a flat screen plus eye tracking (though that does approximate a portal to me, since I’ve never had depth perception to my knowledge).

    Hopefully my lazy eye plus lack of binocular vision will help me by making the data gathered about me less usable in the general models used, but who knows, maybe at some point I just won’t be compatible with VisionLogin or whatever they come up with for phones and computers.

    1. Terry Flynn

      Eye tracking was something my colleagues at Manchester Uni were doing….. Not me. However from what I understood, lack of binocular vision was “just” a complication and not a total impediment to eye tracking in terms of ascertaining what the respondent valued most…… So “good” or “bad” depending upon its use….. Since the sensor only had to track where in the x, y plane your eye was looking…… Depth of field was irrelevant.

      Katherine Payne led the group….. In case you want to chase current clinical articles…..

  8. WillD

    Capitalism is not known for its restraint or self-discipline, so if a technology can be exploited for multiple profitable purposes, it will be, regardless of the concerns of some users. In recent decades we’ve seen how invasive technology has become, not just intruding on our privacy, but sucking up vast amounts of personal information for profit, and more disturbingly for surveillance purposes.

    This will be no different, and while it may be highly beneficial to a number of people suffering illness, it will also be more widely sold to healthy people. It will simply increase the amount of highly personal information available to tech companies, and to governments!

  9. Matthew G. Saroff

    The problem here, as shown by Musk’s implants, is that these guys ignore the rules with impunity.

    Unless and until regulators and prosecutors are willing to frog march senior executives out of their offices in handcuffs, this remains a menace.

  10. m-ga

    Neuroscientist here. Getting usable data with EEG or NIRS is difficult with either research or clinical equipment. Such equipment is many times more sensitive than anything which will find its way into a VR headset.

    With VR kit you might, at most, get some crude measure of neural oscillations using EEG. I took part in an experiment like this. I had to play a 3D shark game using VR goggles, and there were a couple of sensors which were clamped onto my frontal lobes as part of the goggle attachment. The electrode placements were off and data were junk. Collect enough of this type of data, and you might get some gimmicky correlation. But it’s not telling you anything useful about the person playing the game.
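For readers curious what a “crude measure of neural oscillations” might look like in code, here is a toy sketch. Everything is synthetic and simplified (invented sampling rate, a fake 10 Hz alpha rhythm injected into noise, a naive DFT instead of a proper spectral estimator); it only illustrates the idea of comparing band power, not any real clinical or consumer pipeline.

```python
import cmath
import math
import random

random.seed(1)

FS = 128  # sampling rate in Hz (illustrative)
N = 256   # two seconds of "recording"

# Synthetic channel: a 10 Hz (alpha-band) oscillation buried in noise.
signal = [math.sin(2 * math.pi * 10 * t / FS) + random.gauss(0, 1.0)
          for t in range(N)]

def dft_power(x):
    """Naive DFT power spectrum; fine for a toy example of this size."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

power = dft_power(signal)
hz_per_bin = FS / N  # 0.5 Hz per frequency bin

def band_power(lo_hz, hi_hz):
    """Total power between two frequencies."""
    lo, hi = int(lo_hz / hz_per_bin), int(hi_hz / hz_per_bin)
    return sum(power[lo:hi])

alpha = band_power(8, 12)   # band containing the injected rhythm
beta = band_power(16, 24)   # control band: noise only
print(f"alpha power: {alpha:.1f}, beta power: {beta:.1f}")
```

Even this crude comparison separates the bands cleanly on clean synthetic data; the commenter’s point is that with badly placed consumer electrodes, the real-world version of this measurement is far noisier and far less meaningful.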

    Eye-tracking is a totally different story. Those data are detailed, very useful, and plausibly collected from VR goggles. Little benefit to the user (and super-creepy), but highly beneficial to the companies collecting the data, e.g. for marketing purposes. That said, I can see applications in 3D gaming. For example, in first person shooters the game could be coded to create enemies where you are or are not looking, depending on story dynamics. Or else, storylines could change depending on whether you’ve actually read some text which was displayed earlier on in the game. Maybe users will agree to eye tracking if it makes games more exciting.

    1. m-ga

      Just thought to add – one can get useful data from pupil size as well. For example, it correlates reliably with attention:

      https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0102463

      I can totally see a technology company implementing both pupillometry and eye tracking in a VR headset. They could probably do so better than current generation lab equipment, which is quite fiddly. If so, we could use the technology in scientific research. Augmented reality goggles would be ideal, so that participants can see through to the experimenter. This could be a very affordable way for many labs to upgrade facilities.

      Anyway, I’d guess it’s eye tracking and pupillometry which is really in play with these VR headsets (i.e. rather than a brain imaging technology such as EEG or NIRS).

  11. Phichibe

    Very interesting topic, excellent article, and as usual some very informative comments from the NC commentariat. I’d add just a couple of points. First, a few years ago a research team at Caltech announced that they had successfully estimated the images shown to a cat into whose brain a number of electrodes had been inserted. The signals thus collected had been used to train a neural network program, and the model worked. Obviously, requiring surgical implants will impede rapid adoption, but I strongly suspect that it is only a matter of time and Moore’s law before the signal processing is powerful enough to allow surface electrodes to read the signals accurately. Ten doublings of processing power yields a thousandfold increase, and will take between fifteen and twenty years, so that’s probably the time frame we’re looking at before “Neuromancer”-style tiaras are effective and probably not too expensive.
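The commenter’s back-of-envelope arithmetic checks out. Assuming a doubling period of 18 to 24 months (the classic Moore’s-law range, taken here as an assumption), ten doublings give roughly a thousandfold gain over fifteen to twenty years:

```python
doublings = 10
gain = 2 ** doublings              # 1024, i.e. roughly a thousandfold
years_at_18_months = doublings * 1.5  # 18-month doubling period
years_at_24_months = doublings * 2.0  # 24-month doubling period
print(f"{doublings} doublings -> {gain}x in "
      f"{years_at_18_months:.0f} to {years_at_24_months:.0f} years")
```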

    Last point: my guess is that long before then, neuroscience will allow very reliable lie detectors to be built using this tech. Not thought reading, but a simple binary estimation problem: is the subject telling the truth, or are they fabricating lies? When this threshold is crossed I expect we’ll see some very thorny issues of legal admissibility and self-incrimination come to the forefront. Imagine if a defendant is presented with the choice of a “brain reading” to decide truthfulness or the court drawing a negative inference. Then we’ll move from William Gibson territory to Philip K. Dick’s – another great of the SciFi canon. Buckle up, buckaroos, it’s going to be a bumpy ride.

  12. podcastkid

    I appreciate your general conclusion on all this, Arvid. I was writing out some general impressions myself this morning (these won’t be germane specifically to tech Companies getting into neurotechnology, but it seems they may jive with your impressions…have to apologize for their possibly coming across very general)…

    Unfortunately it seems that the Fed, the Street, and the MICIMATT have arrived in a mutual cooperation relationship that works for them, and, like that train that derailed in E Palestine, will be hard to stop [like all bomb trains for that matter]. Count’em, 8 “institutions.” With “industry” [what it’s thought to be now] and media you can see especially how really the whole thing’s in sixth gear. The whole thing together to me seems to be attempting (as if it had a mind) to be re-wiring how we actually think…which I will call a “hassle.”

    Industry is maybe trying to tell us ISMW the laid off tech workers will all go to making chips? Obviously this won’t happen…at least not soon enough. Bad relations with China and not-soon-enough chip foundries here are two reasons I don’t think consumers will get what they want [btw, I question what consumers want]. Blackouts in CA, TX, LA, & FL are one reason the present consumer mindset won’t ever get what it wants. Stopped reading Ellul books too soon as more and more came out; I’m suspecting “The Technological Bluff” covers most of this.

    Ever since the AIDS thing I’m wary of Gary Null. However, Michael Hudson just shared two interviews Null did in one podcast [Null probably still remains up on supplements IMO]. Everything Hudson said jives with “the situation” I outlined above, and where it will go. What I find amazing about “the situation” is that it’ll remove customer service jobs at Target (Hudson mentions), but at the same time hire people to take out inappropriate words in Roald Dahl’s books. This is what I mean by mess’n-with-our-minds. Also, living by the app (or searching for decent news rather than having it right there) is messing up our minds by having us focus on small symbols too much of the day…rather than getting outside, or to gym, or to pool, or in yard. Doesn’t matter if jobs that are necessary are disappearing [and train wrecking the whole “real” economy]. The only thing that matters is that WE AGREE to get a chip in index finger and chip in thumb. We will be tormented by doubt after agreeing, and it seems that’s where “the situation” mainly wants us to be…tormented/hassled.

    You could actually make it HAMICIMATT…hassle allocating?

    https://archive.org/details/technologicalblu0000ellu/page/n9/mode/1up

    things strike me like the strike O’Donnell https://www.nakedcapitalism.com/2022/12/dethroning-the-dollar-why-the-alternatives-are-not-ready-for-prime-time.html#comment-3822600
