UK Home Secretary’s Shocking Admission About the Emerging AI Surveillance State

“When I was in justice, my ultimate vision… was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon.”

The UK Home Secretary, Shabana Mahmood, the person nominally in charge of the UK’s police, justice system and MI5, sat down for an in-conversation event with the former Prime Minister (and the current Labour government’s éminence grise) Tony Blair, organised by the Tony Blair Institute for Global Change. In that conversation, she made a chilling admission about the government’s ultimate goal of AI surveillance.

From the Daily Telegraph, the only mainstream newspaper (besides Scotland’s The National) to cover the story:

“AI and technology can be transformative to the whole of the law and order space.

“When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.

“Similarly, in the world of policing, in particular, we’ve already been rolling out live facial recognition technology, but I think there’s big space here for being able to harness the power of AI and tech to get ahead of the criminals, frankly, which is what we’re trying to do.”

Bentham, an 18th-century philosopher and social theorist, promoted the Panopticon as a circular prison with a central inspection tower from which a single guard could observe all inmates at all times without himself being seen.[1]

Mahmood’s admission is shocking not so much for its actual message as for the candour with which it was conveyed. Seasoned NC readers will not be surprised that the British government, like many other governments, is trying to build an AI-enabled panopticon. Over the years we’ve covered in some depth the totalitarian thinking behind Bentham’s Panopticon, as well as the myriad threats posed by the construction of a digital panopticon.[2]

That said, this is probably the first time that a senior government official of a major Western nation has openly admitted to building a digital panopticon in order to keep criminals under total, constant surveillance. Such a system could quickly be expanded to the broader population — indeed, in the UK’s case that expansion is already under way through the nationwide rollout of live facial recognition systems, reports The Telegraph:

As justice secretary, Ms Mahmood proposed a major expansion of GPS tagging of criminals to create “virtual prisons” for offenders punished in the community. Since moving to the Home Office, she has announced a planned nationwide rollout of police-operated live facial recognition cameras.

Most senior government officials tend not to boast, in public at least, about building a digital panopticon (even if that’s exactly what they’re doing) for an obvious reason: it’s totally dystopian, the Stasi on steroids. But as we’ve been warning, the Starmer government has turbocharged the UK’s slide into dystopia, including by scaling back trial by jury, escalating its attacks on lawful speech and rolling out digital identity despite massive public opposition.

Like most governments in the West, the Starmer administration is also pushing for online age verification, which is essentially a Trojan horse for digital identity. As Claire Fox, a member of the House of Lords, warns in the clip below, “this is a threat, potentially at least, to adult civil liberties and the right to privacy, and effectively means we’ll have to digitally verify to participate in the public square.”

Yet there is scant reporting in the legacy media on these sweeping changes. As Jonathan Cook notes in the tweet below, Mahmood is effectively telling the British public that they are prisoners and that the government quite literally plans to become Big Brother using AI. Yet barely a word about it has appeared in the press (with the notable exceptions of The Telegraph and The National). This, of course, is also a feature, not a bug.

The panopticon, in its original conception, is so controversial that it has rarely been properly attempted (though elements of it have been incorporated into the design of some prisons). The closest real-world example, the Presidio Modelo complex in Cuba, built in the 1920s, was so plagued by corruption and cruelty that it was abandoned. As Collingwood explains in the above tweet, the psychological effects of living under such conditions were considered too cruel even for prisoners to endure:

Many indeed consider the entire concept of the Panopticon the foundation of the theory of totalitarian regimes in operation: if it could seek to arrange society so [that] every citizen may be watched at any time but cannot know whether they are being watched or not (e.g. the telescreens in Nineteen Eighty-Four’s Oceania) a regime could force all citizens to act as though they were being watched at any given time.

And this is what our Home Secretary—the office in charge of the police and MI5 and the justice system—wants to impose on us. This is her dream society. Not even joking or embellishing.

It is as if people like Mahmood read Orwell’s 1984, Huxley’s Brave New World, Philip K. Dick’s The Minority Report and a host of other dystopian works and came away with an instruction manual. Meanwhile, their big tech paymasters came away with new business models.

Here’s Oracle’s Larry Ellison, the Tony Blair Institute’s largest donor, bragging about how an AI surveillance state will ensure that people are always on their best behaviour:

What often gets ignored in what little public discussion is allowed on this topic is the fact that AI surveillance systems are as prone to error as most other AI systems. The biases in facial recognition systems are already well documented. Just last month, the UK Home Office admitted that facial recognition technology is more likely to incorrectly identify black and Asian people than their white counterparts at certain settings.

Yet the systems are still being rolled out nationwide, inviting the question: feature or bug?
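To make concrete what “more likely to incorrectly identify… at certain settings” can mean in practice, here is a minimal, purely illustrative Python sketch of how one might compare a face-matching system’s false positive rate across demographic groups at different match-score thresholds. The group labels, scores and threshold values are invented for illustration and are not drawn from any actual Home Office or vendor evaluation.

```python
# Hypothetical illustration: comparing false positive rates across
# demographic groups at different match-score thresholds.
# All data below is invented; real evaluations of live facial
# recognition use large, curated test sets.

from collections import defaultdict

# Each record: (demographic_group, match_score, is_true_match)
# match_score is the similarity the system reports; is_true_match says
# whether the probe really was the person on the watchlist.
test_results = [
    ("group_a", 0.91, False), ("group_a", 0.62, False), ("group_a", 0.97, True),
    ("group_b", 0.88, False), ("group_b", 0.93, False), ("group_b", 0.95, True),
    ("group_b", 0.90, False), ("group_a", 0.55, False), ("group_b", 0.96, True),
]

def false_positive_rate(records, threshold):
    """Share of genuine non-matches the system would still flag at this threshold."""
    non_matches = [r for r in records if not r[2]]
    if not non_matches:
        return 0.0
    flagged = [r for r in non_matches if r[1] >= threshold]
    return len(flagged) / len(non_matches)

def fpr_by_group(records, threshold):
    """False positive rate broken down by demographic group."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[0]].append(rec)
    return {g: false_positive_rate(rs, threshold) for g, rs in groups.items()}

# The same system can look even-handed at one threshold and skewed at
# another, which is why such findings get qualified with "at certain settings".
for threshold in (0.85, 0.90, 0.95):
    print(threshold, fpr_by_group(test_results, threshold))
```

In this toy data the two groups show identical (zero) false positive rates at the strictest threshold but diverge sharply at looser ones, which is the kind of threshold-dependence the qualified official wording glosses over.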

There was also the recent controversy over West Midlands Police’s handling of its AI search for data about alleged violence by Israeli football fans, which generated false claims about a match that never took place. The ensuing scandal undermined public confidence in officers’ use of the technology, reports The Telegraph.

Perhaps most disturbing of all was Mahmood’s response to the scandal: she told MPs that AI was an “incredibly powerful tool that can and should be used by our police forces” but said it needed to be regulated in a way that was “always accurate”. In other words, for AI systems to function correctly, all that is needed is the right sort of regulation. It’s a perfect example of why bureaucrats should be kept as far away as possible from these technologies.

Meanwhile, the energy and water demands of the data centres needed to run these AI systems are growing exponentially, as BlackRock Chairman and CEO Larry Fink laid out at Davos two years ago. Fink, who is now interim co-chair of the World Economic Forum, said that data centres would need “massive power”, which is a huge “investment opportunity”.

But it won’t just be big tech companies and financial giants footing the bill; it will be us, as Yves noted in her preamble to a recent cross-posted article by Jomo Kwame Sundaram on the growing global revolt against data centres:

I find it remarkable that data center, as in AI, new energy demands are being foisted onto all customers. Those demanding incremental and higher cost energy should pay more. But instead the population at large is subsidizing AI, both through said energy costs plus also bearing the cost of additional pollution.

And this is perhaps the darkest part of all (as well as the greatest insult): not only do governments and the corporations they serve and represent plan to use AI to imprison and enslave us; they also want us to pick up the tab along the way.

 


[1] According to a 2015 article in The Guardian, it wasn’t actually Bentham himself who came up with the idea:

“The panopticon wasn’t originally Bentham’s idea. It was his brother’s,” says Philip Schofield, professor of the History of Legal and Political Thought and Director of the Bentham Project at UCL.

“His brother Samuel was working in Russia on the estate in Krichev and he had a relatively unskilled workforce, so he sat himself in the middle of this factory and arranged his workforce in a circle around his central desk so he could keep an eye on what everyone was doing.”

Bentham went to visit his brother in the late 1780s, saw what he was doing, and decided the centralised arrangement could be applied to all sorts of different situations – not just prisons but factories, schools and hospitals.

Bentham managed to persuade the prime minister, William Pitt the Younger, to fund a panopticon National Penitentiary, but a stream of problems eventually meant the project was abandoned. Bentham never saw a panopticon built during his lifetime.

[2] In 2011, former NC stalwart Phillip Pinkerton wrote in an article on Jeremy Bentham’s theory of Marginal Utility:

The original theorist of utility was, of course, Jeremy Bentham who, as we have noted, also came up with the Panopticon. The Panopticon, as noted, was a totalitarian prison system wherein every prisoner was to be watched constantly by a central observer who monitors their behaviour. Bentham thought that this model could be extended to a variety of social institutions, giving rise to a terrifying vision of a totalitarian hell which was later to be captured in 20th century novels such as Orwell’s ‘1984’ and Huxley’s ‘Brave New World’.

Bentham’s vision was downright paranoid, of course. But it says a lot about the psychology behind his theorising. This man was not a prophet of human freedom and actualisation. No, he was the harbinger of a dark vision of totalitarian control. And his theory of utility was but another manifestation of his own slightly villainous technocratic tendencies.

In a January 2025 article on the Swiss government’s decision to table a second referendum on digital identity adoption, four years after two-thirds of Swiss voters rejected the government’s proposed e-ID system, we wrote:

One of the main reasons why there is (seemingly) no alternative to digital identity is that there are simply too many powerful interests aligned behind it. It is the keystone of the new panopticon of digital public infrastructure (DPI) being constructed around us.

For governments and national security agencies, the benefits are clear: expanded power and control at a time when economic conditions are about to get significantly worse for the vast majority of the population. For big tech companies, it will mean new opportunities to amass even more data over our lives, which they will then be able to transform into even more revenues and profits. For central banks and the TBTF banks whose interests they predominantly serve, it will mean even more centralised financial power in their hands.

In May of the same year, we cross-posted a piece by John P. Ruehl, a world affairs correspondent for the Independent Media Institute, that focused primarily on China’s evolving social credit system but also explored the “sprawling and mostly unregulated personal scoring” systems that have been quietly built by US companies. The article finished with this stark warning:

Though promoted as tools to encourage good behavior and deter bad conduct, these [social credit] systems amplify social pressure and push societies toward a digital panopticon — a state of constant surveillance driven by government and commercial incentives. These models will continue to mature and become more dangerous in the U.S. and other countries that lack adequate data protection. Without strict limits on surveillance by both governments and corporations, fears of AI misuse, algorithmic bias, false correlations, and harmful feedback loops will only grow as these scoring systems govern more of everyday life.

 
