Apple’s latest plan to combat the spread of abusive images online involves automatically scanning devices for evidence of child sexual abuse material (CSAM). The move has pleased some governments and worried privacy advocates, and businesses may feel a knock-on effect: employers could find themselves implicated by their employees’ activity, or in breach of client privacy regulations.
The new scanning technology will be implemented by Apple later this year and will detect and report child abuse material to law enforcement agencies. Many cloud service providers, such as Dropbox and Microsoft, already scan content once it has been uploaded to the cloud. The difference here is that Apple’s CSAM detection tech, called NeuralHash, will run on the user’s device itself, rather than waiting for potentially harmful content to be uploaded.
How will Apple scan user devices?
According to Apple, NeuralHash will allow the company “to detect known CSAM images stored in iCloud Photos.” Before an image is stored in iCloud Photos, “an on-device matching process is performed for that image against known CSAM hashes”, or image codes. If the code of a picture on a user’s device matches one from the CSAM database, provided by the US National Center for Missing & Exploited Children (NCMEC), the image, and the user, will be flagged. The system will initially be rolled out in the US, but GDPR may make it difficult to implement in the UK and Europe.
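In rough terms, the on-device check amounts to computing a code for each photo and looking it up in a list of known codes. The Swift sketch below illustrates that idea only, under simplifying assumptions: it uses an ordinary SHA-256 hash and a plain local set as stand-ins, whereas Apple’s NeuralHash is a perceptual hash designed to survive resizing and re-encoding, and the real matching runs inside a cryptographic protocol (private set intersection) so neither the hash database nor individual match results are exposed in the clear on the device.

```swift
import CryptoKit
import Foundation

// A minimal sketch of hash-based image matching, assuming a plain local lookup.
// Apple's actual system uses NeuralHash (a perceptual hash) checked via a
// private set intersection protocol; SHA-256 and a Set are used here purely
// for illustration and only match byte-identical files.

/// Compute a stand-in "image code" for a photo's raw bytes.
func imageCode(for photoData: Data) -> String {
    let digest = SHA256.hash(data: photoData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

/// Hypothetical known-hash database. In the real system the hashes are
/// supplied by NCMEC and shipped to devices in an encrypted, unreadable form.
let knownImageCodes: Set<String> = []

/// Returns true if the photo's code matches a known entry, i.e. the image
/// would be flagged before it is uploaded to iCloud Photos.
func shouldFlag(_ photoData: Data) -> Bool {
    knownImageCodes.contains(imageCode(for: photoData))
}
```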
Loosening data privacy for the protection of children and vulnerable people online has long been a goal for governments around the globe. “Apple has always been pressured by law enforcement to open up more things,” says Lynette Luna, principal analyst at GlobalData. “It is walking a tightrope of protecting privacy but also fighting terrorism and other crimes like [CSAM] and child sex trafficking.” Late last year, governments from the ‘Five Eyes’ security alliance – the US, UK, Canada, Australia and New Zealand – and other countries wrote an open letter outlining the need for such openness, stating: “Enabling law enforcement access to content in a readable and usable format where an authorisation is lawfully issued, is necessary and proportionate.”
News of Apple’s new CSAM detection initiative leaked last week when Matthew Green, a cryptography professor at Johns Hopkins University, discussed the technology in a series of tweets. He was clear in his opinion that this level of scrutiny on user devices would be a “bad idea”, adding that “initially I understand this will be used to perform client-side scanning for cloud-stored photos. Eventually it could be a key ingredient in adding surveillance to encrypted messaging systems.”
Privacy advocates are similarly worried. Online rights charity Privacy International described the move as “a technical approach [that] can be widened to include other categories of images and content”. According to the charity, this kind of scanning has already been applied for counter-terrorism purposes, and there are consistent calls for tech companies to use a similar approach to identify copyright infringement.
Apple privacy scanning: could this affect businesses?
All Apple devices would potentially face the same level of scrutiny, including those issued by employers to staff. This could put some businesses at risk, as many companies cannot legally share the level of access that Apple appears to be expecting. “Professional services like a law firm would have to notify clients,” explains Toni Vitale, partner at Gateley Legal. “It’s the same thing that you would do in a privacy notice: to tell the client who is going to have access to data about them. I’ve seen lots of privacy notices. I don’t think I’ve seen any that say IT service providers like Apple and Facebook have access to your data and we may share that data with them.”
The technology also runs the risk of wrongfully implicating someone, says Paul Bernal, professor of IT law at the University of East Anglia. “The problem is people potentially getting caught out when they’ve not been doing anything wrong,” he says. “For example, if an image is caught that isn’t actually a child abuse image.”
So-called ‘shadow IT’, where staff use personal devices for work purposes, also increases the risk that businesses may be implicated in any nefarious behaviour, Bernal says. “People will be using home devices for much more than they would use work devices for,” he explains. “They’ll be browsing the net, they’ll be researching about holidays. There’s more of a chance that they could accidentally do something that potentially gets them caught up in this dragnet.”
Such risks could lead companies to carry out heightened surveillance of their own, warns Bernal. “If a company feels they might be held liable, they might think that they should be watching employees more carefully,” he says. “I think that’s bad for employer-employee trust.”
Vitale adds that making Apple a de facto regulator of online content is problematic in itself. “With a government regulator, or even a charity that’s associated with a government, they’ve got to maintain their charitable goals, they are answerable to their trustees,” he argues. “They aren’t answerable to their shareholders and don’t have to make a profit. There is no democracy in Apple appointing itself as a regulator.”