The Information Commissioner’s Office (ICO) has issued social media company Snap, which runs Snapchat, with a preliminary enforcement notice over what it believes is a failure to properly assess risks posed by its generative AI chatbot.
An ICO investigation has provisionally found that the company failed to “adequately identify and assess” the risks to several million UK users of the My AI chatbot, including children aged 13-17. The notice sets out steps the ICO might require Snap to take, though any final action will depend on the company’s representations.
If a final enforcement notice is adopted, Snap could be required to stop processing data in connection with the My AI chatbot, meaning the product could not be offered to UK users pending an “adequate risk assessment.”
The company has told Tech Monitor it will work with the regulator to demonstrate its commitment to user privacy in My AI.
My AI programmed to abide by rules
According to Snap, My AI is a chatbot that can answer questions for users of Snapchat. It launched in February 2023, and example use cases listed on its support page include trivia, advice on buying presents or planning for a hiking trip. It is powered by OpenAI’s ChatGPT technology, which the company says has “additional safety enhancements and controls unique” to the app.
“We’re constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content,” the company says on its website. Snap encourages users to send feedback from My AI to help train it. As of May 2023, Snapchat had 21 million monthly active users in the UK.
Snap told Tech Monitor that My AI is programmed to abide by certain ‘guidelines’ intended to ensure the information provided to users is age-appropriate and not harmful. It also said that parents and guardians can see whether their children have communicated with My AI in the past seven days through Snap’s Family Centre.
Back in August, Snap announced updates to the app to comply with the European Union’s Digital Services Act. As part of the upgrades, the company said it was giving users the ability to control what they see on the app, as well as allowing them to opt out of personalised Discover content.
Snap also said it would build an integration with the European Commission’s Transparency API, which it said would provide certain information about enforcement decisions made about EU-based accounts or content. It added that targeting and optimisation tools would “no longer be available for advertisers to personalise ads for Snapchatters in the EU and UK under the age of 18.”
Risk assessment not up to scratch on data protection
According to the ICO, the risk assessment Snap conducted before launching the chatbot did not adequately assess the data protection risks posed by the generative AI technology, particularly to children. The ICO’s findings are provisional, and no conclusion has yet been drawn as to whether there has been a breach of data protection law.
However, John Edwards, the information commissioner, said that the provisional findings suggest a “worrying failure” by Snap: “We have been clear that organisations must consider the risks associated with AI, alongside the benefits,” Edwards said. “Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”
Snap is one of many tech vendors rushing to incorporate generative AI into their products after interest in the technology boomed. Tech leaders in large organisations have also been grappling with the benefits and risks of using AI systems, with some banning their staff from using generative AI altogether. Earlier this year the ICO offered advice to businesses planning to use ChatGPT and other generative AI chatbots, warning that companies must “innovate in a way that respects people’s privacy”.
In response to the provisional notice, a Snap spokesperson told Tech Monitor: “We are closely reviewing the ICO’s provisional decision. Like the ICO we are committed to protecting the privacy of our users.
“In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.”