Join any debate about the potential use of facial recognition technology by the police, and you’ll invariably end up imagining the pursuit of a suspected terrorist gone to ground. This shouldn’t be surprising. Often accused of being ineffective and intrusive, facial recognition has always been a controversial technology – as such, maintaining a sense of proportionality when considering its use is vital. But when a suspected terrorist escapes from prison and needs to be recaptured, few would disagree that the ends, in that scenario at least, justify the means.
Such is the case with Daniel Khalife. Arrested for terrorism offences earlier this year, Khalife was awaiting trial at Wandsworth Prison, a Category B facility, when on Wednesday he escaped by clinging to the underside of a delivery van. The Metropolitan Police have warned that Khalife, an ex-soldier, may use his “military ingenuity” to avoid detection. Officers are now watching ports and airports in case the suspected terrorist attempts to leave the UK, and have mounted at least one search of Richmond Park in case Khalife has chosen to sleep rough in the area.
Using facial recognition to help catch Khalife as he moves through London, then, would seem to make practical sense – combining the technology with the city’s extensive CCTV network would automate a task that is currently occupying hundreds of police officers. Khalife is also considered dangerous to the public. The police may therefore consider themselves under a moral, and even a legal, obligation to use facial recognition technology in the pursuit of their suspect.
Is this a situation where the use of facial recognition by police is justified?
To answer it, we must weigh proportionality. To identify a suspect, police facial recognition technology naturally needs to scan every single face that passes within view of the camera lens, criminal or not. This means that everyone in shot has surrendered some measure of personal privacy. The hunt for a suspected terrorist would also necessitate deploying facial recognition systems on a massive scale.
Police suspect that Khalife will attempt to leave the UK at the earliest opportunity. It would make eminent sense, therefore, to use facial recognition at all logical exit points, like Heathrow Airport or the Port of Dover – locations visited by thousands of individuals every day, all of whose biometric data would suddenly be parsed, analysed and stored by British law enforcement.
It might help for the government to state explicitly that, in cases where a dangerous criminal needs to be apprehended, the police are well within their rights to deploy facial recognition systems to catch them. But how dangerous would that criminal need to be? It isn’t entirely clear whether Khalife is dangerous enough to justify the massive surveillance dragnet that would be required to catch him.
According to the Metropolitan Police’s head of counter-terrorism, the force has “no information which indicates, nor any reason to believe, that Khalife poses a threat to the wider public”. This risk assessment is seemingly corroborated by the fact that he was being detained in a Category B prison, rather than the Category A facilities used to house some of the UK’s most dangerous criminals.
The efficacy of the technology itself must also be considered. Stories have abounded in recent years of facial recognition algorithms failing to accurately identify individuals with darker skin tones, elevating the risk of false positives. Not only is this potentially unlawful, but the very perception of differential impact damages public trust and confidence in the use of surveillance more generally.
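A rough back-of-the-envelope sketch makes the point about scale (the passenger figure and error rate below are illustrative assumptions, not measurements of any deployed system): even a system that sounds accurate produces a steady stream of false matches when it scans thousands of innocent faces in search of a single suspect.

```python
# Illustrative base-rate arithmetic. All figures are assumptions for
# the sake of the example, not properties of any real system.
daily_passengers = 200_000      # assumed daily footfall at a major airport
false_positive_rate = 0.001     # a nominally "accurate" 0.1% error rate
suspects_present = 1            # the one person actually being sought

expected_false_alarms = daily_passengers * false_positive_rate
print(f"Expected false matches per day: {expected_false_alarms:.0f}")
# Even at 0.1%, roughly 200 innocent travellers would be flagged daily,
# each stop consuming officer time and eroding public trust.
```

At those numbers, nearly every alert is a false alarm – a reminder that accuracy statistics only make sense when set against the base rate of who is actually being searched for.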
Police also have to remain accountable for their use of facial recognition technology. Practically speaking, that means maintaining accessible and effective channels through which government and law enforcement agencies can respond to public grievances about the technology’s use. This is why, for example, researchers at Sheffield Hallam University have designed an accountability framework specifically for the use of AI in law enforcement, in partnership with Europol and the Fundamental Rights Agency. That project has been shared with the Alan Turing Institute and the government’s public consultation on AI regulation.
We will see more policing scenarios where people will ask what contribution biometric technology might have made, should have made or did make. How accountable the police and their technology partners are for the answer is a pressing surveillance question facing governments around the world.