Client-side mobile scanning, which provides a back door to look into people’s phones and other devices, has received the backing of two UK cybersecurity chiefs who say the method could be used to find images of child abuse. In a research paper, Ian Levy, technical director of the National Cyber Security Centre (NCSC), and Crispin Robinson, technical director for cryptanalysis at GCHQ, write that they see no reason why the controversial technique cannot be implemented safely.
In response to the paper, academics and researchers have raised concerns that the use of client-side scanning could create a culture of mass surveillance and infringe on people’s privacy, as well as introduce new security risks. Others argue it will simply drive criminals to use other methods to share child sexual abuse material (CSAM).
Levy and Robinson wrote their research paper on how client-side scanning techniques could help tackle child abuse if implemented safely. They say that while more work is needed, there are “clear paths to implementation” that would have the requisite privacy and security properties. The paper analyses two harm archetypes which, the authors claim, demonstrate that it is possible to provide strong user-safety protections while ensuring “that privacy and security are maintained for all”.
The authors wrote that while issues of security are often raised by those opposed to client-side scanning, “we do not believe that the techniques necessary to provide user safety will inevitably lead to these outcomes”.
They use the example of hash matching, where a unique identifier is given to abuse material so that it can be automatically detected and removed from online platforms. “Hash matching and other related technologies will identify exact or, in the case of perceptual hashes, near matches to previously seen content (usually images or video) that has been classified by a trusted source (typically one or more NGOs) as illegal,” the authors write. They go on to say that this type of approach has very high precision and matches are usually subject to human review before being sent to authorities.
“There is a small risk that the child safety NGO may have misclassified an image, but the human review step mitigates the consequences of this, along with the impact of false positives from the detection algorithm,” they say.
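To make the mechanism concrete, below is a minimal sketch in Python of the exact-match case. The digest list and helper names are hypothetical, and production systems such as Microsoft’s PhotoDNA use perceptual rather than cryptographic hashes so that near-duplicates also match; the point it illustrates is that a match triggers human review rather than an automatic report.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known material,
# supplied by a trusted source such as a child-safety NGO.
# (The digest below is a placeholder, not a real entry.)
KNOWN_DIGESTS = {"0" * 64}

def exact_hash_match(image_bytes: bytes) -> bool:
    """Return True if the image is byte-identical to known material."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_DIGESTS

def handle_upload(image_bytes: bytes) -> str:
    # A match is not reported automatically: it is queued for human
    # review first, which mitigates both misclassified database
    # entries and false positives from the detection algorithm.
    if exact_hash_match(image_bytes):
        return "queued_for_human_review"
    return "allowed"
```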
In addition to this, Levy and Robinson say that machine learning can be used to classify content to identify previously unseen CSAM, or conversations that are likely to be related to child sexual abuse, whether between offenders or between an offender and a child. In practice, they write, these classifiers are deployed with parameters that give high precision, though the technique will “always produce significant false positives”, so human moderation is needed.
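As an illustration of what deploying a classifier “with parameters that give high precision” can mean in practice, the sketch below picks the score cutoff that meets a precision target on labelled validation data; items above the cutoff would still go to human moderators, not straight to the authorities. The data and the threshold_for_precision helper are invented for the example.

```python
def threshold_for_precision(scores, labels, target_precision=0.99):
    """Return the lowest score cutoff whose cumulative precision on
    the validation data still meets the target."""
    pairs = sorted(zip(scores, labels), reverse=True)  # most confident first
    tp = fp = 0
    cutoff = None
    for score, is_positive in pairs:
        tp += is_positive
        fp += not is_positive
        if tp / (tp + fp) >= target_precision:
            cutoff = score  # everything scoring >= cutoff clears the bar
    return cutoff

# Hypothetical validation set: model scores with ground-truth labels.
scores = [0.99, 0.97, 0.90, 0.85, 0.60, 0.40]
labels = [True, True, True, False, False, False]
print(threshold_for_precision(scores, labels))  # -> 0.9
```

Even a cutoff tuned this way only guarantees precision on past data, which is why the authors stress that significant false positives remain and human moderation is unavoidable.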
However, in an end-to-end encrypted environment the service provider cannot inspect content on its servers, so the scanning would have to happen on the user’s device, the experts say – this is known as client-side scanning.
What is client-side scanning?
Client-side scanning is a broad term that refers to systems that scan message contents such as images, videos, text and other files for matches against a database of objectionable content before the message is sent to the intended person or device. It is used by anti-virus software to find and disable malware on computers.
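What distinguishes the client-side variant is where that check runs. Below is a minimal sketch, with hypothetical names throughout: the match happens on the sender’s device, before the message is encrypted and transmitted.

```python
import hashlib

# Hypothetical on-device database of digests of known objectionable
# content; deployed proposals ship perceptual (and often blinded)
# hashes rather than plain SHA-256 digests.
LOCAL_DATABASE = {"0" * 64}

def send_message(plaintext: bytes, recipient: str) -> str:
    """Scan on the device, before end-to-end encryption takes place."""
    if hashlib.sha256(plaintext).hexdigest() in LOCAL_DATABASE:
        return "flagged_for_review"  # hypothetical handling step
    # ciphertext = e2ee_encrypt(recipient, plaintext)  # then transmit
    return "sent"
```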
Law enforcement agencies have argued this method is required to access messages and other data in order to help identify and prevent the sharing of objectionable content. But opponents say its implementation would render end-to-end encryption, which offers a higher degree of privacy, ineffective.
In summer 2021, Apple announced a feature for its iOS called ‘NeuralHash‘, which sought to detect known child sexual abuse images by running on the user’s device rather than on the company’s servers. The underlying algorithm used a convolutional neural network (CNN) to compute an image’s hash and identify near matches to known abuse images, such as cropped, rotated and resized versions.
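Apple has not published NeuralHash in a reimplementable form, but a much simpler perceptual hash, the classic ‘average hash’, illustrates the general idea: hashes that change only slightly under edits such as resizing, with near matches found by Hamming distance rather than exact equality. The sketch below uses the Pillow imaging library and illustrates perceptual hashing in general, not Apple’s algorithm.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """64-bit 'average hash': greyscale, shrink to 8x8, then one bit
    per pixel depending on whether it is brighter than the mean.
    Small edits to the image flip only a few bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Two files are treated as near matches if their hashes differ in
# only a few bits, e.g.:
# hamming(average_hash("a.jpg"), average_hash("a_resized.jpg")) <= 5
```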
NeuralHash sparked a backlash from privacy campaigners, and when technical details of the system were released, several scripts appeared demonstrating how it could be exploited to attack devices. Apple subsequently announced it was delaying implementation of the technology.
The Internet Society, a non-profit organisation which says it is dedicated to building an open, secure and trustworthy internet, believes that client-side scanning would compromise the privacy and security that users assume and rely on. “By making the contents of messages no longer private between sender and receiver, client-side scanning breaks the end-to-end encryption trust model,” it says.
Will vendors implement client-side scanning?
Apple’s quick shift away from ‘NeuralHash’ following criticism is an example of vendors’ reluctance to push something that would make their devices less secure, and thus impact their reputation and profits, says Professor Alan Woodward from the Surrey Centre for Cyber Security at the University of Surrey.
He told Tech Monitor that companies like Google and Apple will not implement client-side scanning on their devices if the wider ecosystem is not willing to accept it. Google and Facebook, for example, have opted for end-to-end encryption precisely so that they cannot hand message contents to government agencies, even with a warrant, he says, because being able to do so would be bad for business.
He continued that while citizens might want to tackle child abuse, he doesn’t think they would want Big Tech companies scanning their phones, as they wouldn’t know where the results would be reported or whether they would be misused. He adds that if Apple and Google opted to build client-side scanning into their iOS and Android mobile operating systems, they would lose customers and so wouldn’t “go down that route”.
Woodward added that there are other ways to monitor for CSAM, such as using metadata. He explained that Facebook uses this method to monitor for patterns of behaviour associated with child sexual abuse offending. “Rather than blanket surveillance, you can actually start to zero in and can conduct a more targeted form of surveillance on people that look like they’re going to be of interest,” he said.
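To illustrate the distinction Woodward is drawing, a metadata-based approach scores behavioural signals without reading any message content. Every signal, weight and threshold below is invented for the sketch; real systems combine far more features and are tuned and audited carefully.

```python
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    # Hypothetical behavioural signals; no message content is read.
    new_contacts_per_day: float
    share_of_unreciprocated_chats: float
    account_age_days: int

def risk_score(m: AccountMetadata) -> float:
    """Toy weighted score computed over metadata only."""
    score = min(m.new_contacts_per_day / 20, 1.0) * 0.5
    score += m.share_of_unreciprocated_chats * 0.3
    score += 0.2 if m.account_age_days < 30 else 0.0
    return score

# Only accounts above a threshold get a closer, targeted look.
flagged = risk_score(AccountMetadata(35, 0.9, 10)) > 0.7
print(flagged)  # True for this invented example
```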