Why a Focus on “Fake News” and Facebook Misses the Internet’s Real Problems – and Solutions

Yves here. Needless to say, I have little sympathy for the handwringing over “fake news,” and worse, the plans to regulate content provision on the Internet. There is no evidence that “Russian” campaigns on social media had any impact on election results, and political scientists like Tom Ferguson have debunked the idea. In addition, Marina Bart explained why Cambridge Analytica’s claims about its ability to sway voters were hogwash.

In recent years, Google has repeatedly changed its search algos to downgrade alternative sites, and we now barely appear in search results where our original reporting used to dominate.

By Jennifer Cobbe, the co-ordinator of Cambridge University’s Trustworthy Technologies strategic research initiative, which researches trust, computer and internet technologies. She researches and writes on law, tech and surveillance issues. Originally published at openDemocracy

Yesterday morning, the House of Commons Digital, Culture, Media and Sport Select Committee published its long-awaited final report into disinformation and ‘fake news’. The report – which follows a long and at times dramatic investigation – is full of interesting and insightful details about political microtargeting (the targeting of political messaging to relatively small groups of people) and the spread of disinformation.

But the report’s myopic focus on one company – Facebook – means that it misses the bigger picture – including the internet’s dominant variety of capitalism.

It is of course welcome that attention is being paid to these problems, and there is much in the Committee’s report that’s good. The report is undoubtedly right to find that Britain’s electoral laws are woefully inadequate for the age of the algorithm and are badly in need of reform. Its recommendation that inferences drawn from analysis of other data about people should be more clearly considered to be personal data likewise seems eminently sensible.

Is It OK to Manipulate People to Extract Their Money, Just Not for Politics?

But there are also clear shortcomings. Focusing on disinformation itself as a target for regulation brings an obvious problem. By calling for interventions based on ‘harmful’ content, the report asks the Government to step into the dangerous territory of regulating lawful political conversations between people. Are private companies to be mandated to police these communications on the Government’s behalf? There are numerous good reasons why this is deeply undesirable (not to mention incompatible with human rights laws).

The biggest oversight, however, is in diagnosing disinformation as essentially a problem with Facebook, rather than a systemic issue emerging in part from the pollution of online spaces by the business model that Facebook shares with others: the surveillance and modification of human behaviour for profit.

‘Surveillance capitalism’, as it’s known, involves gathering as much data as possible about as many people as possible doing as many things as possible from as many sources as possible. These huge datasets are then algorithmically analysed so as to spot patterns and correlations from which future behaviour can be predicted. A personalised, highly dynamic, and responsive form of behavioural nudging then seeks to influence that future behaviour to drive engagement and profit for platforms and advertisers. These targeted behaviour modification tools rely on triggering cognitive biases and known short-cuts in human decision-making. Platforms and advertisers extensively experiment to find the most effective way to influence behaviour.
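To make that loop concrete, here is a deliberately simplified sketch in Python. The profile fields, message variants, and response rates are invented for illustration and are not any platform’s actual code; the point is only the shape of the mechanism: predict how each person will respond to each variant, show whichever variant is expected to work best, and reserve a slice of traffic for continuous live experimentation.

```python
import random

def predicted_click_rate(profile: dict, variant: str) -> float:
    """Stand-in for a model trained on mass behavioural data (numbers invented)."""
    rate = 0.02
    # Exploit a known cognitive shortcut: scarcity framing works on impulsive profiles.
    if variant == "scarcity" and profile.get("impulsive"):
        rate += 0.05
    # Social-proof framing works on people who follow many accounts.
    if variant == "social_proof" and profile.get("follows_many"):
        rate += 0.03
    return rate

def choose_nudge(profile: dict, variants: list, epsilon: float = 0.1) -> str:
    """Epsilon-greedy: usually show the best-predicted variant, sometimes experiment."""
    if random.random() < epsilon:
        return random.choice(variants)  # keep running live experiments on real users
    return max(variants, key=lambda v: predicted_click_rate(profile, v))

print(choose_nudge({"impulsive": True}, ["plain", "scarcity", "social_proof"]))
```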

Without looking at surveillance capitalism, it’s impossible to understand microtargeting in its wider context. It’s impossible to understand the desires for profit and market position driving these practices. And it’s impossible to understand that the same behaviour modification tools are sold to advertisers, political parties, and anyone else who’s willing to pay. Without considering these practices within surveillance capitalism more generally, the report seems to implicitly accept that manipulating people through psychological vulnerabilities is fine if you’re doing it to extract their money, but not if you’re doing it for politics.

Notably, both Google and YouTube, its subsidiary, were largely omitted from the report. They get the odd mention, but it’s clear that the Committee was too fixated on Facebook to pay them sufficient attention. Google invented surveillance capitalism and remains arguably its foremost practitioner, with significant influence over the world’s access to information. And YouTube (also running on a surveillance business model, naturally) has serious problems of its own in terms of promoting violent extremism, disinformation, and conspiracy theories. This led the academic Zeynep Tufekci, writing in the New York Times last year, to describe YouTube and its video recommendation system as “[maybe] one of the most powerful radicalizing instruments of the 21st century”.

It’s not “Fake News” That’s the Problem, It’s the Algorithms That Disseminate It

This brings us to the second aspect missed by the Committee: the increasingly prevalent algorithmic construction of reality. Take disinformation. As noted above, the report focused on false content itself. This seems to have missed one of the key routes by which an individual piece of content can become a systemic problem worthy of attention. In the grand scheme of things, a YouTube video about a wild conspiracy theory doesn’t really matter if it’s only seen by 10 people. It matters if people watching relatively innocuous content are driven towards it by YouTube’s recommendation system. It matters if it’s algorithmically promoted by YouTube and then seen by 10 million people.

Platforms might argue that they can’t be held responsible for the content they host or for the actions of their users (outside of things which are clearly illegal). But recommending content is not simply hosting it, and it is not a neutral act. Platforms selectively target content (including advertising) through their recommender systems so as to show us what they think will keep us engaged with their services, bring them revenue, and help them build market share. Make no mistake – through these platforms we do not get a true picture of what’s going on in the world. The spaces we inhabit online are viewed through the lens of corporate desires. What we encounter is algorithmically mediated to suit the platforms’ interests. While microtargeting is increasingly recognised as manipulation, this is a softer, perhaps more insidious form of corporate algorithmic influence.
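A minimal sketch of that selection step, with made-up items and scoring functions rather than any real platform’s ranking code, shows how a feed ordered purely by predicted engagement and expected ad revenue will put whatever keeps people watching at the top, with no term anywhere for accuracy or public value:

```python
def rank_feed(candidates, predict_engagement, ad_value, ad_weight=0.5):
    """Order a feed by expected engagement plus expected revenue, and nothing else."""
    return sorted(
        candidates,
        key=lambda item: predict_engagement(item) + ad_weight * ad_value(item),
        reverse=True,
    )

feed = rank_feed(
    [{"id": "measured_explainer"}, {"id": "outrage_clip"}],
    predict_engagement=lambda item: 0.9 if item["id"] == "outrage_clip" else 0.2,
    ad_value=lambda item: 0.1,
)
print([item["id"] for item in feed])  # the outrage clip is ranked first
```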

Unsurprisingly, various actors have learned how to game these systems to boost the audience for their content, including conspiracy theorists and extremists. Bots and other fake accounts are often used to take advantage of the algorithmic construction of online space and manipulate content rankings. This allows them to game trending topics so as to shape discourse more generally and drive fringe ideas into the mainstream (contrary to a common misconception, bots are usually deployed not to change the opinions of the real users they contact, but to manipulate these rankings).
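A toy example, again with invented accounts and tags rather than any platform’s real trending algorithm, shows why count-based trending signals are so easy to game: to a naive counter, a few hundred coordinated accounts posting the same tag look exactly like genuine interest.

```python
from collections import Counter

def trending(posts, k=3):
    """Naive trending: the k most-posted tags in the current window."""
    return Counter(tag for _, tag in posts).most_common(k)

organic = [(f"user{i}", "local_news") for i in range(150)]
botnet = [(f"bot{i}", "fringe_theory") for i in range(400)]  # coordinated fake accounts
print(trending(organic + botnet))  # the fringe tag outranks the organic one
```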

The influence wielded by surveillance platforms through personalisation gives them significant means to shape the online public sphere. They are, of course, motivated by profit and duty to shareholders rather than by public good and duty to wider society. You might think that this is fine – they are, after all, private corporations. But while television, mass media, and the advertising industry have long shaped our world, never before have private companies had such influence over the construction of the everyday reality we inhabit. Never before have they exercised such influence over the private activity of individuals talking to other individuals about their lives. They do so without any democratic legitimacy, and with little transparency over their processes or accountability for their actions.

To properly address the problems of manipulation, disinformation, and violent extremism fermenting on online platforms, future regulation must properly acknowledge the role of surveillance capitalism – not just through targeting tools but in the algorithmic construction of online spaces. Future regulation should recognise that content isn’t necessarily the problem in and of itself. It must consider the active role of platforms in promoting content, and establish minimum standards for doing so (in the form of paid-for advertising or otherwise). This approach benefits from largely sidestepping much of the content regulation debate. Regulating the use of technical systems by corporations rather than intervening in communications between individuals means that people should still be free to post, view, or share anything that is not illegal. Freedom of expression demands nothing less.

Surveillance companies’ exorbitant profits and their influence on the construction of our reality are in large part driven by their use of recommender systems. That must come with responsibility in some form for what they’re algorithmically disseminating. They will argue that being more careful with recommender systems could result in lower revenues. In 2018 Google brought in $136 billion; Facebook took $56 billion. They can afford to take the hit. Perhaps that should be understood as the cost of doing business in future. This industry wouldn’t be the first to have its practices and its profits reined in by regulation for the good of society.

Because of its restricted focus, the usefulness of many of the solutions proposed in the DCMS Committee’s report is somewhat limited. That’s disappointing. But all is far from lost, and there are other directions for progress. To get there, we need to think bigger than Facebook. It’s time to acknowledge the role of surveillance capitalism in these systemic issues. It’s time to recognise that the problem isn’t just content – it’s dissemination and amplification by algorithm to maximise profit at all costs.


32 comments

  1. Ian Perkins

    You say “the report seems to implicitly accept that manipulating people through psychological vulnerabilities is fine if you’re doing it to extract their money, but not if you’re doing it for politics.”
    Exactly.

  2. Carolinian

    This isn’t very convincing. The proposed premise

    while television, mass media, and the advertising industry have long shaped our world, never before have private companies had such influence over the construction of the everyday reality we inhabit. Never before have they exercised such influence over the private activity of individuals talking to other individuals about their lives. They do so without any democratic legitimacy, and with little transparency over their processes or accountability for their actions.

    And while that may be true, what the article doesn’t get is that it’s still “television, mass media, and the advertising industry” that are shaping our world with very little accountability or legitimacy. For example, will a bogus YouTube video have greater impact because some algorithm promotes it to the top of a YouTube recommendation list, or because the BBC and NY Times vouch for it as being authentic? It’s quite possible that the public isn’t nearly as gullible as academic writers like this one assume and that they, in the main, reject rumors and conspiracy theories until they see someone on their televisions (people still watch for hours a day) promoting them. What the fake news scaremongers are really saying is “your fake news is crowding out our fake news.”

    Undoubtedly fake news on the internet is an issue – a minor one. Fake news from our respected news outlets is a huge issue, or should be. Let’s deal with the huge problem before getting in a tizzy about the minor one.

    1. barefoot charley

      I take your point and agree, Carolinian. What the report doesn’t address is that predictive algorithms don’t just pander to you, they steer you. Their offerings mash you deeper into your quest. This means in porn, for example, a boy searching oral sex will not repeatedly find the same cheerful girls, but rather ever deeper, darker devolutions climbing up through his searches. That’s the algorithmic manipulation: ‘predictive’ powers are simply projective (I’m tempted to say ‘projectile’); they project consumers toward vulnerabilities of consumption on behalf of advertisers and eyeball-capture. They may worsen your desires. The most classic example of this, of course, is our on-line 2016 election. I appreciate her essential point that surveillance capitalism’s business plan operates in our politics as it does everywhere, and everywhere it’s no good.

      1. Carolinian

        I readily admit that the internet and the SV bigs set out to psychologically manipulate but my point was that TV and advertising have been doing this for decades and arguably a lot more effectively. The web may sometimes grease the wheels of terrorism or crime but the MSM often openly grease the wheels of state terrorism and then minimize the consequences. Perhaps human editors working for the corporate media are a lot more harmful than the algorithmic versions.

        To be sure the personalization possible on the web adds a new wrinkle to Mad Av’s toolkit. But if manipulation is the issue then there are bigger concerns imo. Decades ago people worried quite a lot about television and its influence on children or politics. Lately, not so much.

        1. steve

          This is bollocks, Carolinian. You can’t equate media platforms that had to optimize for the interests of millions of viewers with algorithms that can do it for an audience of one. It’s disingenuous to suggest otherwise, but not surprising, since you seem to think the BBC being fake news is a bigger problem. Maybe the reason people worry less about television is that the younger folks aren’t watching it? They are online instead. 44% decline since 2012. https://www.weforum.org/agenda/2018/05/consumers-will-spend-more-time-using-online-media-than-watching-tv-in-2018/

          1. Carolinian

            And those young people are so in charge versus what Lambert aptly terms the “withered gerontocracy”? It’s dubious whether this older cohort’s interest in the internet extends much beyond Twitter whereas the national newspapers and cable shows are where they want to be and what they follow. Wars fought, millions of refugees sent packing based on MSM misinformation strikes me as a pretty big problem, perhaps bigger than teen addiction to their smartphones.

            Plus even conceding your point there seems to be considerable doubt about how effective online advertising is or how to measure that. It could be that the people Google and Facebook are really scamming with their personalization are the ones who buy the ads.

            What I’m basically claiming is that the politicians and MSM pundits are scaremongering about the internet for fear that it will become more powerful than they are and a medium that they can’t control. This genie needs to be stuffed back in its bottle.

            1. Young

              One needs to consider the significant difference between MSM clowns and SV monsters:

              In the old days, MSM fed the fake news through a tube to a group of people gathered around a TV during a somewhat limited period of the day. There would have been a discussion about the news, whether it was fake or not.

              Now, the personalized fake news is delivered to my pocket non-stop until I give up and believe it is real. Besides, it happens where nobody else knows; otherwise, I would have a chance to hear the opposing argument.

      2. Anarcissie

        ‘What the report doesn’t address is that predictive algorithms don’t just pander to you, they steer you.’

        But do they? Is there any science about this that I can read? There doesn’t seem to be any specifically mentioned in the article or the comments so far. Such a study would compare the behaviors of populations upon which predictive algorithms had been used with those upon which they had not, and note aggregate differences. Material behaviors, material metrics — not just someone’s interpretations about what some other people may be doing or thinking.

  3. ChiGal in Carolina

    Thanks for sharing this excellent piece. The money quote:

    Regulating the use of technical systems by corporations rather than intervening in communications between individuals means that people should still be free to post, view, or share anything that is not illegal. Freedom of expression demands nothing less.

    But does it need tweaking, given that even NC is incorporated, I believe? That is, what we read and post here is communication between a corporation and individuals, not just individuals.

  4. Susan the Other

    If it’s just profit we are talking about, that’s one thing. If it is broad-spectrum surveillance of your preferences to determine what your needs and expectations are for political manipulation and crowd control, that’s entirely different. Why is Bezos raking in huge contracts from US intelligence? This is obvious, and we don’t need to think too hard about it. If it is for surveillance capitalism, it could be controlled. “Triggering cognitive and subconscious biases to promote impulse buying” is like a digital form of pandering. Which is easy to debunk. All those “Selected just for you!” flash-up ads could be required to run a simultaneous opposition ad. That’s what German TV did in the 60s. I remember one – an ad about beer, and immediately following it there was a cute 10-second cartoon of a happy little drunk burping up bubbles. Just enough to defuse the impulse. When it comes to more sinister manipulation, the best defense is good information, and that requires an open internet. Open to all sides. Same theory, but the internet is now an infinite space. In the end, just like in a court of law, reasonableness itself is expected to be the winner.

    1. Off The Street

      Repeal of the Fairness Doctrine some 30ish years ago seemed like an invitation to roll out Pavlov 2.0, bigger, badder and uncut as the next neo-lib iteration of a social model. Ads got more weaponized in the next decade as enterprising dot-commers realized that anything could be monetized, limited only by their imaginations and not bounded by their scruples. Say what you will about attorneys but at least they have a hint of ethics training that is lacking elsewhere.

  5. skk

    Because the author mentions recommender systems often, it seems that when she talks of algorithmic reality, promotion of content, and regulation of technical systems, she means recommender systems. She’s picking the wrong target. Recommender algorithms are indeed about increasing engagement and increasing profit, but it’s NOT by disseminating content that the corporation wants you to see, it’s by promoting content that the algorithm calculates as something YOU’d want to see. So if I watch a 9/11 CIA-did-it vid, which I do (and they did too :-) ), then YouTube recommends a whole host of other 9/11 conspiracy vids, but also Kennedy assassination, RFK assassination, various mysterious flight crashes and so on.

    The algorithm “learnt” the values for its hyperparameters from general past history, not just from me, and those values are the ones that lead to the highest engagement. That’s the proof required of a recommender system: that using it results in higher customer engagement.
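    As a rough illustration of that point (with made-up watch histories and video names, nothing like YouTube’s actual system), a simple co-watch recommender surfaces other conspiracy titles after one conspiracy video simply because that is what co-occurs in past viewing data:

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Invented watch histories; each set is one viewer's recent videos.
    watch_histories = [
        {"911_doc", "jfk_doc", "moon_hoax"},
        {"911_doc", "jfk_doc"},
        {"cat_video", "cooking_show"},
    ]

    # Count how often each pair of videos is watched by the same person.
    co_watch = defaultdict(int)
    for history in watch_histories:
        for a, b in combinations(sorted(history), 2):
            co_watch[(a, b)] += 1
            co_watch[(b, a)] += 1

    def recommend(seed, k=2):
        """Recommend the videos most often co-watched with the seed video."""
        scores = {b: n for (a, b), n in co_watch.items() if a == seed}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("911_doc"))  # other conspiracy titles, because that is what co-occurs
    ```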

    There is a huge problem with the surveillance of our activities online, of course. Here’s a suggestion for fixing the surveillance marketing aspect. The article is historical and I agree with the history, and it’s in the Linux Journal so you’ve gotta trust that, surely.

    1. Carolinian

      That’s an interesting link with its suggestion of a “Badtech Industrial Complex.” It could be that the dirty secret re mass surveillance is that it doesn’t work. Perhaps having too much poorly targeted data can create a picture just as cloudy as one made from no data at all. The NSA’s much discussed “haystack problem” comes to mind.

      1. coboarts

        I’ve been trying to ‘teach’ youtube for years to suggest the right music videos – it’s pathetic, all techno hype – oo ai.

  6. Inton

    The elephant in the room: trust.

    This is basically a debate over how to make sure that the known-untrustworthy sources favored by established interests maintain a dominant mindshare.

    If simply diminishing the mindshare of untrustworthy sources in general were the issue it would mostly just be a matter of making sure that trustworthy sources were widely available, and building up a track record of reliable reporting so that people will come to trust those sources in preference to bogus ones. But it was the neoconservatives who set up the current order of things–and being good little Plato fanatics, they were and are fervent believers in the Royal Lie. Ergo to go about earning the public’s trust simply by consistently telling the truth is right out. And so down the totalitarian rabbit hole we go…

  7. thesaucymugwump

    “Platforms might argue that they can’t be held responsible for the content they host or for the actions of their users (outside of things which are clearly illegal). But recommending content is not simply hosting it, and it is not a neutral act.”

    Unfortunately Section 230 of the Communications Decency Act grants platforms pretty much complete immunity from liability for user-posted content, something the rest of the world does not like. The problem will continue until Section 230 is repealed.

  8. thesaucymugwump

    “Google has repeatedly changed its search algos”

    Google is an active manipulator of behavior, with the best example being the “Islam is” episode. In early 2010, Google’s search results for “Christianity is” (and other religions) were the expected: BS, nonsense, not a religion, wrong, not what you think, etc. However, “Islam is” returned nothing, which would be impossible for an unmodified, unbiased algorithm. Google should have been regulated by the government immediately after that episode, especially given its 2/3 share of the search market.

    1. notabanker

      I’ve been using this for about a year and I’m about ready to bin it. It spits out pages and pages of SE optimized crap. Sure it may not track you, but it directs you to online marketing garbage. And the maps are horrible. I am by no means a google fanboi, but I’ve about had it with duck duck go.

      1. Arizona Slim

        Gaming the search engines at DuckDuckGo? And here I thought I was the only one who noticed it.

        OTOH, I was doing a bit of searching this morning. I’ll admit that I was looking for dirt on a company that I had dealings with last year. Couldn’t find any. And said dirt used to be all over the search results. Methinks that the company hired one of those outfits that removes negative results from search.

      2. Fred

        Use the !m modifier and it takes you to Google Maps. I have not noticed many garbage pages; can you supply an example?

    2. Off The Street

      or try some other direct or meta options for variety:

      Dogpile.com
      Lite.qwant.com
      Searx.me
      Startpage.com
      Unbubble.eu
      Yandex.com
      Yippy.com

  9. clarky90

    Some history-rhymes…..to keep us awake at night! yikes

    On March 5, 1946, Winston Churchill delivered his “Iron Curtain” speech at Westminster College in Fulton, Missouri, in which he said: “From Stettin in the Baltic, to Trieste in the Adriatic, an ‘iron curtain’ has descended across the continent, allowing police governments to rule Eastern Europe.”

    https://en.wikipedia.org/wiki/Jussie_Smollett

  10. jfleni

    Nobody but some with a tendency to be morons pays any attention to BUTTBOOK which is just a silly mailing list after all, so much of the attention is both spurious and useless!

    1. Jonathan Holland Becnel

      Idk. I post radical shit all the time and my non political friends like it all the time.

      I’ve been on Facebook since the beginning in 2004 with my lsu.edu account posting FACTS so my friends won’t be total morons.

      If the Arab Spring happened cuz of social media, surely it is a good thing.

      #NationalizeFacebook

      1. Lambert Strether

        > If the Arab Spring happened cuz of social media

        It didn’t. The organizers came out of the Egyptian labor movement (there was a brief manufacturing renaissance in the last years of the Mubarak regime). Social media only played a part after the Tahrir Square events got international attention, and there was still a good deal of communication in person, on paper, and in the coffee shops and mosques. (There was also a shadowy involvement of color revolution ideas, but whether that included funding and organizing, I don’t know.)

  11. Steven Greenberg

    I think we should depend on the cure we have always depended on: diversity of opinion. If anything is a new threat, it is the level of monopoly that we now allow. The courts and regulatory agencies seem to think that if something keeps consumer prices down, then a monopoly is not a problem. The naivete of that belief is in itself a problem. It has always been true that when somebody wants to become a monopoly, they may undercut prices until they drive everybody else out, and then raise prices – the drug companies who are raising their prices by thousands of percent are perfect current examples of this. Other monopoly tactics include using status to pressure suppliers, or to pressure wages and working conditions. Insisting on tax breaks is another monopoly benefit, as is being too big to jail.

    The problem with monopolies is power, whether or not it is good or bad for consumer prices. If we can’t get rid of a monopoly because it is a natural monopoly, then maybe the government needs to provide the service without a profit motive or a control motive. Well that might be a little hard to imagine. So better to just break up monopolies and prevent them from forming.

    1. Steve

      Well said. The ability to wield power beyond the provision of a product or service is the core problem. We all know power ultimately corrupts.

  12. level

    This piece of fluff can be summed up as, “Advertising Doesn’t Work.”

    Bullshit. Advertising works. You really think Pepsi spends 3 Billion Dollars each year for something that does not work?
