The UK government should introduce standardisation, regulation and transparency around the use of advanced technologies in police and law enforcement, a House of Lords committee says. The Lords described the haphazard way in which such systems have been implemented as a “new Wild West”, and say a more joined-up approach is needed to avoid miscarriages of justice.
In a report, ‘Technology rules? The advent of new technologies in the justice system’, published today, the Lords’ Justice and Home Affairs Committee acknowledges that there are “benefits” to using artificial intelligence (AI) and facial recognition technology (FRT) in law enforcement, but says that AI technologies can have “serious implications for a person’s human rights and civil liberties”.
The committee also expressed surprise at the “proliferation of AI tools” being used without proper regulation or oversight, particularly by police forces across the country. Rather than the “scrutiny” it considered essential, the committee found a “landscape in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.”
The government has until May 30 to respond to the report.
Police must be transparent about use of advanced technologies
In its report, the committee recommends that, to “facilitate scrutiny”, the government create a “mandatory register of algorithms” used by the police and justice system, and introduce a “duty of candour” on the police. This would ensure full transparency over the police’s use of AI “given its potential impact on people’s lives, particularly those in marginalised communities”.
It has also recommended that a national body be established to set “strict scientific validity and quality standards” that new technologies would need to adhere to.
“While we found much enthusiasm about the potential of advanced technologies in applying the law, we did not detect a corresponding commitment to any thorough evaluation of their efficacy,” says the report.
“Proper trials methodology is fully embedded into medical science but there are no minimum scientific or ethical standards that an AI tool must meet before it can be used in the criminal justice sphere,” it continues. It also says that most public bodies “lack the expertise and resources” to carry out evaluations and that procurement guidelines do not address their needs.
“As a result, we risk deploying technologies which could be unreliable, disproportionate, or simply unsuitable for the task in hand,” the report concludes.
Police training in advanced technologies needs to be mandatory
The committee has also called for mandatory training of technology users and the development of local ethics committees within police forces. This follows evidence that there is “little mandatory training for policing” in general, let alone in FRT and AI.
Individuals should be “appropriately trained” in the limitations of the tools they are using: “They need to know how to question the tool and challenge its outcome and have the correct institutional support around them to do that,” the report says.
As it stands, there is no obligation for the “consistent training of officials”, including police officers, in the use of AI and FRT systems. The report makes clear that officers and officials need training on the use of the tools themselves, as well as “general training on the legislative context”, on the possibility of bias, and on the need for “cautious interpretation of the outputs.”
The committee also recommends local ethics committees because the “law enforcement community [has] particular powers to withhold liberty and to interfere with human rights”.
Committee ‘terrified’ by evidence provided on policing and advanced technologies
The report follows a ten-month inquiry that heard from experts in AI and law enforcement, as well as MPs including Home Secretary Priti Patel and UK policing minister Kit Malthouse.
Speaking to Tech Monitor, the committee’s chair, Baroness Hamwee, says the committee felt “terrified” early on in the process because AI and other new technologies appeared to be “outpacing” users’ knowledge of the systems. “If it messes up, it has really serious repercussions,” she says.
Better training to challenge the “culture of deference” to technology is one of the report’s most important objectives, Baroness Hamwee says. She argues there is an assumption that the technology must be right, but evidence presented during the inquiry showed this is not always the case. “It’s easy to think that the computer must be right, click on a button on the screen and just keep going,” she says. “Training has got to get over that.”
She also compares the use of advanced technologies to the use of DNA evidence, which she says took a while to “bed down”. Because advanced technologies will be used in court proceedings in much the same way, training is needed for everyone across law enforcement and the justice system.
Regulation of advanced technology in policing is needed, not just guidance
Last week, the College of Policing released guidance on the use of live FRT, but Baroness Hamwee says mandatory regulation is needed rather than guidelines. “I don’t think any of us are great enthusiasts for having lots of legislation, but you’ve got to have guidance which is mandatory,” she says.
This is why the committee has recommended a central register of AI systems and a certification scheme, so that police forces know that what they are purchasing meets the required quality standards. These recommendations aim to address the committee’s concerns about technology procurement.
“We heard of some horror stories from the United States about programs being sold as loss leaders,” Baroness Hamwee explains. In some instances, an agreement would be struck, only for operating (OpEx) costs to rise as updates were needed. There were also concerns about who “owned” the data, she adds.
Transparency is essential for scrutiny
A report published by the Centre for Data Ethics and Innovation (CDEI) this week found that only 13% of more than 4,000 respondents felt they could offer a full explanation of AI.
Respondents said they had “high trust” in the police compared with social media companies. But if police forces are not trained in the technology and make ill-informed decisions based on an algorithm’s output, people could be wrongly convicted.
"What must it be like to be charged and convicted of a criminal offence and imprisoned on the basis of a technology which you don't understand and you can't question?" asks Baroness Hamwee. This is why, she says, there needs to be more transparency about the advanced technologies the police and other law enforcement agencies are using; so they can be scrutinised.
What is the industry reaction?
Rick Muir, director of UK policing think tank the Police Foundation, says the report will be “very welcome”.
"Their basic point that we need to make sure that this whole area of AI in relation to the justice system is much better regulated, and there is much stronger oversight, is something that I strongly agree with," he told Tech Monitor. He also agrees the number of police forces using different technologies is a concern.
Muir believes the police will also welcome the report, as they have been left in the “unsatisfactory position” of being advocates for the new technologies being used. He says there needs to be more national debate and discussion about whether new technologies are appropriate for use. This would help avoid situations where individual forces run into difficulties, such as when South Wales Police was found to have acted unlawfully in its use of live FRT.
Muir also says there is a case for more national procurement of AI and other technologies in policing, which he supports because it would be a “more effective way of doing procurement”. However, he would opt for a national framework rather than a national body regulating procurement.