By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She now spends most of her time in Asia and is currently researching a book about textile artisans. She also writes regularly about legal, political economy, and regulatory topics for various consulting clients and publications, as well as scribbles occasional travel pieces for The National.
To those who’ve been paying attention to the chatter that Mark Zuckerberg is considering a run for President, yesterday’s Facebook announcement outlining measures the company intends to implement against “information operations” that attempt to manipulate public opinion will come as no surprise.
The company’s white paper, Information Operations and Facebook, implicitly endorses the true-Blue Team Democrat narrative on the 2016 election. Rather than recognising that the dogs just didn’t want to eat the dog food and pull the lever for Hillary, the problem is traced to evil machinations on the part of foreign governments.
A couple of disclaimers up front. This post is not intended to be the last word on this subject. Nor will I parse Facebook’s intended counter-measures against these nefarious information operations in any detail. Instead, I serve up these thoughts alongside some criticism of how the issue has been covered, with the intention of sparking discussion among the commentariat about the broader issues raised.
Electoral Impact of Information Operations
Permit me to quote at length from the section of Facebook’s white paper rather snoozingly headed “A Case Study of a Recent Election”:
During the 2016 US Presidential election season, we responded to several situations that we assessed to fit the pattern of information operations. We have no evidence of any Facebook accounts being compromised as part of this activity, but, nonetheless, we detected and monitored these efforts in order to protect the authentic connections that define our platform.
One aspect of this included malicious actors leveraging conventional and social media to share information stolen from other sources, such as email accounts, with the intent of harming the reputation of specific political targets. These incidents employed a relatively straightforward yet deliberate series of actions:
• Private and/or proprietary information was accessed and stolen from systems and services (outside of Facebook);
• Dedicated sites hosting this data were registered;
• Fake personas were created on Facebook and elsewhere to point to and amplify awareness of this data;
• Social media accounts and pages were created to amplify news accounts of and direct people to the stolen data.
• From there, organic proliferation of the messaging and data through authentic peer groups and networks was inevitable.
Concurrently, a separate set of malicious actors engaged in false amplification using inauthentic Facebook accounts to push narratives and themes that reinforced or expanded on some of the topics exposed from stolen data. Facebook conducted research into overall civic engagement during this time on the platform, and determined that the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared during the US election.
In short, while we acknowledge the ongoing challenge of monitoring and guarding against information operations, the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues.
Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017. (citations omitted).
Just a couple of points here. First, notice that in the paragraph alleging “malicious actors leveraging conventional and social media to share information stolen from other sources, such as email accounts, with the intent of harming the reputation of specific political targets”, the white paper conveniently fails to mention that the information being shared just happened to be true. As far as I’m aware, no one has challenged the veracity of the compromising information released prior to the election, such as the John Podesta emails published by WikiLeaks. Surely, if the information were bogus, we would have seen rebuttals by now.
Do you see the words “fake news” anywhere here? Or any equivalent? I didn’t think so.
Second, even in a white paper that presents a rationale for why Facebook has decided it must take action against “false amplification”, the authors want to have it both ways, conceding that “the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues.” So, Facebook, is false amplification a problem, or isn’t it? And if it is a problem, how serious is it? Is it sufficiently serious to warrant offsetting measures? Announced measures such as cracking down on false accounts seem sensible, but I do worry about the precedential implications of Facebook’s decision to take countervailing measures in a situation where the truthfulness of the information being shared is not disputed– especially in light of all the high-minded rhetoric in the rest of the white paper about preserving the proprieties of civic engagement.
Politics ain’t beanbag, and why shouldn’t political discourse– which frequently concerns what William Connolly in his The Terms of Political Discourse has called “essentially contested concepts”– be intense? After all, life and death issues are at stake. Connolly’s work implies that the relentless quest for bipartisan consensus may be a fool’s errand. Given what’s up for grabs in political decisions, should we expect political debate to be decorous? To put it another way, is a tamping down of disagreement necessarily a virtue?
And then we have the final suck-up to the Team Blue narrative (cue spooky music here):
Facebook is not in a position to make definitive attribution to the actors sponsoring this activity. It is important to emphasize that this example case comprises only a subset of overall activities tracked and addressed by our organization during this time period; however our data does not contradict the attribution provided by the U.S. Director of National Intelligence in the report dated January 6, 2017.
In other words, to state the obvious, readers are expected to fill in the blanks and conclude that the Russians did it– even though, as regular readers know, there are multiple problems with that story.
Amplifying Facebook’s Pro-Team Blue Message
As was no doubt intended, the Facebook white paper is being spun to amplify the Team Blue message. Somewhat amusingly, Facebook’s announcement that it’s going to wage war on false amplification is itself being amplified to reinforce Team Blue’s narrative.
Let me provide a flavour from some accounts.
BBC. See, for example, the BBC’s take. First we get the hysteria:
Facebook has admitted that it observed attempts to spread propaganda on its site, apparently orchestrated by governments or organised parties.
The firm has seen “false news, disinformation, or networks of fake accounts aimed at manipulating public opinion”, it revealed in a new report.
“Several” such cases during the US presidential election last year required action, it added.
Some of the activity has been of a “wide-scale coordinated” nature.
We need to read down to the fourth graph (excluding the bolded sub-head) before the BBC tells us: “Fake accounts were created to spread information stolen from email accounts during the 2016 US presidential election, the firm noted, though it said the volume of such activity was ‘statistically very small’”– and even this quote omitted the crucial qualifier (taken from the white paper text reproduced above) “compared to overall engagement on political issues”. Moreover, the BBC also failed to share the inconvenient truth that the stolen information was highly relevant to the election and just happened to be true.
CNET. Let’s turn to CNET now. Again, bear with me as I quote the first four paragraphs:
Facebook is beefing up its attack on fake news by targeting coordinated campaigns aiming to use false information to sway political opinions on the social network.
Facebook has evolved into a forum for political debate in recent years, but some organizations have used the network to distort political sentiment for a specific geopolitical outcome, including during the recent elections in the US and France, Facebook said Thursday in a white paper (PDF). The social media giant said it has a responsibility to keep its community safe for authentic civic engagement, free from the influences of what it calls “information operations.”
Facebook explained, “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.”
The abundance of fake news on the internet in the lead-up to President Donald Trump’s victory last year has become a hot-button issue, entangling tech giants like Facebook and Google. Numerous allegations say the fake news shared on the social networks helped Trump win (links omitted).
Now, I read this as suggesting at minimum that the claim “fake news shared on the social networks helped Trump win” is consistent with the Facebook white paper. But that’s not at all what the relevant section said now, is it?
I could go on in a similar vein, but I’ll stop here, since I said at the outset this post is by no means intended to be comprehensive, and I risk trying readers’ patience by continuing to quote further examples.
Before I conclude, I’d like to make one final point. Sources such as CNN Money (as well as the BBC and CNET accounts quoted above) are reading the latest Facebook news as a major reversal:
Facebook owning its role in shaping current events is a big change from what CEO Mark Zuckerberg initially said after the U.S. election.
“I think the idea that fake news on Facebook … influenced the election in any way is a pretty crazy idea,” Zuckerberg said at a conference in November.
He later walked back those comments, and said in February that it was Facebook’s responsibility “to amplify the good effects and mitigate the bad.”
Now, perhaps I’m missing something here, as this is not an issue in which I can honestly claim to have immersed myself in all the relevant details. So in what I write here I’m relying instead on the plain words of the white paper. And I reiterate that the relevant text of that paper devoted to the 2016 US presidential election says nothing about fake news, but instead discusses at length information operations that included “malicious actors leveraging conventional and social media to share information stolen from other sources, such as email accounts, with the intent of harming the reputation of specific political targets.”
Rather, another section of the white paper identifies the core problem here as “False amplification, which we define as coordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g., by discouraging specific parties from participating in discussion or amplifying sensationalistic voices over others). We detect this activity by analyzing the inauthenticity of the account and its behaviors, and not the content the accounts are publishing” (my emphasis; citations omitted). Notice, however, that what’s being discussed here isn’t “fake news” at all but the amplification of news that is anything but.
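To make that behavior-versus-content distinction concrete, here is a deliberately simple sketch of what scoring an account purely on its behavior might look like. To be clear: this is my own toy illustration, not Facebook’s actual (and proprietary) system, and every signal, threshold, and field name below is invented for the purpose.

```python
# Toy illustration only: a behavioral "inauthenticity" score that never
# looks at what an account posts, only at how it behaves. All signals,
# thresholds, and field names are invented; Facebook's real systems are
# proprietary and doubtless far more sophisticated.

from dataclasses import dataclass

@dataclass
class AccountActivity:
    age_days: int            # how long the account has existed
    posts_per_day: float     # average posting rate
    repost_ratio: float      # fraction of posts that are verbatim reshares
    synchronized_peers: int  # accounts sharing the same links within seconds

def inauthenticity_score(a: AccountActivity) -> float:
    """Crude behavioral score in [0, 1]; higher means more bot-like.

    Note that nothing here inspects *what* the account says, mirroring
    the white paper's stated focus on behavior rather than content.
    """
    score = 0.0
    if a.age_days < 30:
        score += 0.25   # freshly created account
    if a.posts_per_day > 50:
        score += 0.25   # superhuman posting cadence
    if a.repost_ratio > 0.9:
        score += 0.25   # pure amplification, no original activity
    if a.synchronized_peers > 10:
        score += 0.25   # coordinated, near-simultaneous behavior
    return score

# A week-old account firing off 200 verbatim reshares a day in lockstep
# with dozens of peers scores as maximally suspect.
bot = AccountActivity(age_days=7, posts_per_day=200.0,
                      repost_ratio=0.98, synchronized_peers=40)
print(inauthenticity_score(bot))  # -> 1.0
```

The point of the toy is simply that such a scorer never reads a single word the account publishes, which is exactly why it is orthogonal to the question of whether the amplified material is true or false.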
So I’m not sure how much of a reversal the latest action actually represents. While the wink and nod to the report saying the Russians did it is clear (as quoted above), that’s certainly not the same thing as saying fake news swayed the election.