By embracing biometric surveillance, governments across the West are hurtling down a path that could lead us all to a very dark place.
The Chinese Communist Party’s widespread use of facial recognition programs and other forms of artificial intelligence (AI) technologies to track, monitor and, where possible, evaluate the behavior of members of the public has received ample attention from Western media in the past few years. Myriad reports, some overblown, have been published on China’s creeping introduction of a Social Credit System. By contrast, far less attention has been paid to the increasing use of many of the same highly intrusive technologies by so-called “liberal democracies” in the West.
In 2019, IHS Markit released a report showing that while China may be leading the way when it comes to using surveillance cameras to monitor its population, it is far from an outlier. According to the report, the United States had almost as many surveillance cameras as China, with roughly one for every 4.6 people (compared with China’s one for every 4.1). The UK was not far behind with one for every 6.5. The report also forecast that by 2021 there would be more than one billion surveillance cameras on the planet watching us.
EU Plays Catch Up
Now the EU is on the verge of building one of the largest facial recognition systems on the planet, as the recent Wired UK article “Europe Is Building a Huge, International Facial Recognition System” reports:
The expansion of facial recognition across Europe is included in wider plans to “modernize” policing across the continent, and it comes under the Prüm II data-sharing proposals. The details were first announced in December, but criticism from European data regulators has gotten louder in recent weeks, as the full impact of the plans have been understood.
“What you are creating is the most extensive biometric surveillance infrastructure that I think we will ever have seen in the world,” says Ella Jakubowska, a policy adviser at the civil rights NGO European Digital Rights (EDRi). Documents obtained by EDRi under freedom of information laws and shared with WIRED reveal how nations pushed for facial recognition to be included in the international policing agreement.
The first iteration of Prüm was signed by seven European countries—Belgium, Germany, Spain, France, Luxembourg, the Netherlands, and Austria—back in 2005 and allows nations to share data to tackle international crime. Since Prüm was introduced, take-up by Europe’s 27 countries has been mixed.
Prüm II plans to significantly expand the amount of information that can be shared, potentially including photos and information from driving licenses. The proposals from the European Commission also say police will have greater “automated” access to information that’s shared. Lawmakers say this means police across Europe will be able to cooperate closely, and the European law enforcement agency Europol will have a “stronger role.”
The UK, while no longer part of the EU, is hurtling down a similar path. And it is targeting the most vulnerable and impressionable members of society: school children. As I reported in October, nine schools in the Scottish region of North Ayrshire started using facial recognition systems as a form of contactless payment in cashless canteens (cafeterias in the US), until a public outcry put paid to the pilot scheme.
But the Tory government is doubling down. According to a new report in the Daily Mail, almost 70 schools have signed up for a system that scans children’s faces to take contactless payments for canteen lunches while others are reportedly planning to use the controversial technology to monitor children in exam rooms. This time round, however, the government didn’t even bother informing the UK Biometrics and Surveillance Camera Commissioner Fraser Sampson of the plans being drafted by the Department for Education (DfE):
Professor Fraser Sampson, the independent Biometrics and Surveillance Camera Commissioner, said his office was unaware the DfE was drafting new surveillance advice in response to the growing trend for cameras in schools.
‘I find out completely by accident a couple of weeks ago by going to a meeting that the Department for Education has drafted a code of practice for surveillance in schools which they are about to put out to the world to consult,’ he told The Mail on Sunday.
‘And they [DfE] said. “What do you think of it?” And I say, “What code?” We had no idea about it. And having seen it, it would have benefited from some earlier sharing.’
Similar facial recognition systems have been used in the US, though usually as a security measure.
Privacy advocates have warned that the growing use of facial recognition systems in school settings could serve a far more insidious goal: to condition children to the widespread use of facial recognition systems and other biometric technologies. In my book Scanned I cite Stephanie Hare, author of Technology Ethics, who argues that it is about normalizing children to understand their bodies “as something they use to transact. That’s how you condition an entire society to use facial recognition.”
The Battle for Privacy
As the EU prepares to pass its own long-awaited AI Bill, which will set out the rules for the development, commodification and use of AI-driven products, services and systems across the 27-member bloc, a battle has broken out between proponents of the technologies and privacy advocates. The former, which include intelligence agencies, law enforcement bodies and tech companies, argue that emerging biometric technology such as facial recognition programs are necessary to catch criminals. The latter have called for an outright ban on the technologies due to the threat they pose to civil liberties.
They include Wojciech Wiewiórowski, who leads the EU’s in-house data protection watchdog, the European Data Protection Supervisor (EDPS), which is supposed to ensure the EU complies with its own strict privacy rules. In November 2021 Wiewiórowski told Politico that European society is not ready for facial recognition technology: the use of the technology, he said, would “turn society, turn our citizens, turn the places we live, into places where we are permanently recognizable … I’m not sure if we are really as a society ready for that.”
In a paper published in March, the EDPS raised a number of concerns and objections about Prüm II:
- The Prüm II framework does not make clear what sort of circumstances will warrant the exchange of biometric data such as DNA or the “scope of subjects affected by the automatic exchange of data”.
- “The automated searching of DNA profiles and facial images should only be possible in the context of individual investigations of serious crimes and not of any criminal offense, as provided for in the proposal.”
- “The alignment of the Prüm framework with the interoperability framework of the EU information systems in the area of justice and home affairs” requires careful analysis of its implications for fundamental rights.
- “The EDPS considers that the necessity of the proposed automated searching and exchange of police records data is not sufficiently demonstrated.”
Flaws in the System
The problem is not just about privacy; it is about the inherent flaws within facial recognition systems. The systems are notoriously inaccurate on women and those with darker skin, and may also be inaccurate on children whose facial features change rapidly. Wired magazine reported in 2019 that “US government tests find even top-performing facial recognition systems misidentify blacks at rates five to ten times higher than they do whites.”
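To see why even small error rates matter at population scale, here is a rough back-of-envelope sketch. All numbers are hypothetical illustrations, not figures from the studies cited above:

```python
# Illustrative calculation (hypothetical numbers): even a matcher that is
# "99.9% accurate" produces many false matches when applied to crowds.

def expected_false_matches(crowd_size, watchlist_members, false_positive_rate):
    """Expected number of innocent people wrongly flagged when a crowd is
    scanned against a watchlist.

    crowd_size          -- total people scanned
    watchlist_members   -- people in the crowd who really are on the list
    false_positive_rate -- per-person probability of a wrong match
    """
    innocents = crowd_size - watchlist_members
    return innocents * false_positive_rate

# Scanning 100,000 passersby for 10 genuine watchlist members at a 0.1%
# false-positive rate flags roughly 100 innocent people -- about ten false
# alarms for every true hit. An error rate 5-10x higher for one demographic
# group concentrates those false alarms on that group.
print(expected_false_matches(100_000, 10, 0.001))  # ~99.99
```

The point of the sketch is the base-rate problem: because almost everyone scanned is innocent, false positives swamp true matches long before the error rate looks alarming on paper.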
There are also concerns about who the data will end up being shared with or managed by and just how safe it will be in their hands. In December 2021, the civil liberties group Statewatch reported that the Council of the EU, where the senior ministers of the 27 Member States sit, not only intends to extend the purposes for which biometric systems can be used under the EU’s proposed Artificial Intelligence Act but is also seeking to allow private actors to operate mass biometric surveillance systems on behalf of police forces.
In February Statewatch published another report titled “Building the Biometric State: Police Powers and Discrimination.” In it the group warned that the EU’s rapid expansion of biometric profiling at borders and identity checks within the 27-member bloc is likely to lead to increased racial profiling of ethnic minority citizens and non-citizens.
The report comes at a time when the EU’s ‘interoperability’ initiative and border policies are placing a renewed impetus on the biometric registration of foreign nationals and an increase in identity checks within the EU, in the hope of detecting individuals lacking the correct documentation. It argues that the treatment of skin colour as a proxy for immigration status, the existence of a huge database holding data solely on foreign nationals (the Common Identity Repository, currently under construction) and explicit policy instructions to step up identity checks to enact removals are likely to exacerbate the racist policing and ethnic profiling that are endemic across the EU.
Meanwhile, UK and US police forces have been piloting live facial recognition systems, which allow authorities to compare live images of passersby on a street with those on a specific watch list. Some jurisdictions, including the US and the UK, and soon the EU if Prüm II becomes reality, are using retrospective facial recognition, which is arguably even more controversial since it enables police to check against a far broader range of sources, including CCTV feeds and images from social media.
“Those deploying it can in effect turn back the clock to see who you are, where you’ve been, what you’ve done and with whom, over many months or even years,” Jakubowska told Wired magazine, adding that the technology “can suppress people’s free expression, assembly and ability to live without fear.”
One major problem, as with the application of many surveillance technologies, is mission creep. As Sky News reported last week, police in the UK have been given guidance allowing them to use live facial recognition technology not only to identify suspects but also potential witnesses to and victims of a crime.
Fraser Sampson, the biometrics and surveillance camera commissioner, has described the idea as a “somewhat sinister development”. It “treats everyone like walk-on extras on a police film set rather than as individual citizens free to travel, meet and talk,” he warned.
The commissioner was responding to guidance from the College of Policing which suggested victims of crimes and potential witnesses could be placed on police watchlists.
“How commonplace will it become to be stopped in our cities, transport hubs, outside arenas or school grounds and required to prove our identity?” asked Mr Sampson.
The former officer for West Yorkshire Police and the British Transport Police said: “The ramifications for our constitutional freedoms in that future are profound.
“Is the status of the UK citizen shifting from our jealously guarded presumption of innocence to that of ‘suspected until we have proved our identity to the satisfaction of the examining officer’?”
The public backlash against facial recognition software is growing. As the Wired UK article notes, dozens of cities across the US have even banned police forces from using the technology. The European Parliament is debating whether to ban the police use of facial recognition in public places as part of its AI Act, even as the Commission lays the ground for more “automated” access to biometric data. In the UK, the civil liberties group Big Brother Watch warns that police and private companies “have been quietly rolling out facial recognition surveillance cameras, taking ‘faceprints’ of millions of people — often without you knowing about it.”
The U.S. company Clearview, which received seed funding from Palantir co-founder Peter Thiel, is facing several lawsuits in the United States for scraping people’s photos from the Internet without their permission. At least 600 law-enforcement agencies in the U.S. have used Clearview’s services, but the company’s use of people’s photos without their consent has been declared illegal in Britain, Canada, France, Australia and Italy. It was also recently fined €20 million by Italy’s data protection agency, instructed to delete any data it holds on Italians, and banned from any further processing of citizens’ facial biometrics.
Another Avenue of Opportunity: War
But Clearview has found a whole new avenue of opportunity: the war in Ukraine, as the New York Times recently reported:
In the weeks after Russia invaded Ukraine and images of the devastation wrought there flooded the news, Hoan Ton-That, the chief executive of the facial recognition company Clearview AI, began thinking about how he could get involved.
He believed his company’s technology could offer clarity in complex situations in the war.
“I remember seeing videos of captured Russian soldiers and Russia claiming they were actors,” Mr. Ton-That said. “I thought if Ukrainians could use Clearview, they could get more information to verify their identities.”
In early March, he reached out to people who might help him contact the Ukrainian government. One of Clearview’s advisory board members, Lee Wolosky, a lawyer who has worked for the Biden administration, was meeting with Ukrainian officials and offered to deliver a message.
Mr. Ton-That drafted a letter explaining that his app “can instantly identify someone just from a photo” and that the police and federal agencies in the United States used it to solve crimes. That feature has brought Clearview scrutiny over concerns about privacy and questions about racism and other biases within artificial-intelligence systems.
When the use of Clearview by Ukraine’s government was first announced, the company said it had scraped more than 2 billion images from the Russian social media platform VKontakte. According to a Reuters report, Ukrainian soldiers could use the technology not only to identify fallen Russian soldiers but also to weed out Russian operatives at checkpoints. Privacy campaigners are rightly up in arms. One of their biggest concerns is that facial recognition technologies are prone to making mistakes. And in war mistakes can have deadly consequences.