The European Parliament this week called for a ban on the use of facial recognition by police. Such a ban could open a new point of divergence from the UK, where the use of live facial recognition in public spaces by police and businesses is legal, albeit subject to strict conditions. And while privacy campaigners in the UK have called for an outright ban, they may have a harder time affecting policy than their European counterparts.
The growing sophistication of facial recognition technology has given rise to real-time applications, in which individuals can be identified en masse in live CCTV footage. This has many potential uses, such as law enforcement, private security and marketing, but privacy campaigners warn that it also presents many dangers. These include the accumulation of biometric data and the risk of misidentification and racial discrimination.
On Wednesday, MEPs voted overwhelmingly in favour of a resolution calling for a “moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification,” as well as “a permanent prohibition” on automated analysis of other biometrics such as gait, fingerprints, DNA, voice, and behavioural characteristics.
The resolution also calls for “a ban on the use of private facial recognition databases in law enforcement”. It makes particular mention of Clearview AI, a US company that claims to have collected a database of three billion facial images. In August, BuzzFeed News revealed that law enforcement agencies around the world, including nine in the UK, had trialled or used Clearview AI’s services.
Although the vote is non-binding, the resolution gives an indication of how the European Parliament may vote on the EU’s upcoming AI Act. The draft act prohibits “real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement unless certain limited exceptions apply”. However, campaign group European Digital Rights (EDRi) has argued that stronger restrictions are needed.
Police facial recognition in the UK
In the UK, meanwhile, human rights organisations including Big Brother Watch and Liberty have called for an outright ban on live facial recognition by police and businesses. “Police and private companies in the UK have been quietly rolling out facial recognition surveillance cameras, taking ‘faceprints’ of millions of people — often without you knowing about it,” Big Brother Watch says in its campaign materials. “That’s biometric data as sensitive as a fingerprint.”
In 2019, the Information Commissioner’s Office (ICO) published an opinion on the use of live facial recognition by police forces. It clarified that such use is governed by Part 3 of the Data Protection Act 2018, which covers data processing by law enforcement. Under those provisions, biometric data processing must be ‘fair’ and ‘based on law’, meaning its legal justification must be clear and precise, and it must either have the explicit consent of the subject or be ‘strictly necessary’ for law enforcement purposes. The police must also have an ‘appropriate policy document’ in place.
But the legality of police use of live facial recognition was tested in a case against South Wales Police last year. Human rights campaigner Ed Bridges challenged the force's use of facial recognition, which it had used to scan up to 500,000 people on 60 occasions since 2017. Bridges argued that this infringed those citizens' right to privacy. An initial ruling in favour of the police was overturned last August by the Court of Appeal, which found that the legal framework that South Wales Police relied on to justify its use of the technology did not offer sufficient privacy protection.
“It is time for the government to recognise the serious dangers of this intrusive technology," Liberty lawyer Megan Goulding said at the time. "Facial recognition is a threat to our freedom – it needs to be banned.”
The Court of Appeal concluded that a clearer legal framework for police use of live facial recognition (LFR) is needed. The Home Office and College of Policing are updating the relevant guidance, according to the ICO.
Meanwhile, the UK's Surveillance Camera Commissioner has since updated its best practice guidance for police use of surveillance cameras. And in August, the government launched a consultation on revisions to the Surveillance Camera Code of Practice, which applies to police and local authorities.
Notwithstanding these adjustments, the UK's regulatory position on facial recognition has been consistent in allowing LFR in strictly limited circumstances, says Edward Machin, an associate in the data, privacy and cybersecurity practice at law firm Ropes & Gray. "The Court of Appeal ruling and the ICO's guidance make it clear that the rules are very strict here, and you can't use facial recognition wherever you want."
While human rights groups Liberty and Big Brother Watch have scored some important victories, Machin believes the political momentum against live facial recognition is stronger in the EU. "Citizen rights groups just have much more power there," he says. "There's a groundswell of opinion in the EU that has allowed legislators to ride that wave a little bit easier than in the UK, where Big Brother and Liberty are a loud voice and a good voice, but they are pushing against a much stronger tide to get that change."
This means that live facial recognition could become a point of divergence between the UK and the EU. The UK government has indicated that it plans to move the country's data protection rules away from GDPR, to create a more 'pro-innovation' regime. Privacy campaigners "are very focused on the new data strategy... and their view is that it waters down rights," says Machin.
Meanwhile, UK police forces are continuing to implement facial recognition technology. In August, the Mayor of London's Office approved a £3m "Retrospective Facial Recognition" system that will allow the city's Metropolitan Police to compare new footage of faces against an archive of images. The force says it is consulting with the London Policing Ethics Panel (LPEP) "about the required governance controls for the system's use" and has completed a draft equality impact assessment.
But the system could “suppress people's free expression, assembly and ability to live without fear”, EDRi policy advisor Ella Jakubowska told WIRED last month.