Changes to Meta’s privacy policy enabling the company to use European user data to train its AI platforms may breach data protection regulations and ignore legal precedents, according to privacy activists.

Facebook’s parent company has begun informing European users of privacy policy changes coming into effect on 26 June. The changes would see public information shared by its users, such as Facebook and Instagram posts, photos, captions and comments, used to “develop and improve” the company’s AI models, without the need for users to opt in. The change would affect an estimated 400 million users.

Meta claims “Legitimate Interests”

Meta has said that such a move is necessary in order to make its emerging suite of generative AI products available in Europe and the UK. “These features and experiences need to be trained on information that reflects the diverse cultures and languages of the European communities who will use them,” the company wrote in a blog post. “We’re committed to doing this responsibly and believe it’s important that people understand how we train the models that power our generative AI products.”

However, questions are being raised about the legality of such moves under the UK’s Data Protection Act and the EU’s General Data Protection Regulation. Meta has cited the legal basis of “Legitimate Interests” for processing certain first- and third-party data in the European region and the UK, removing the requirement for users to opt in to making their data available. Users can object via a form in Meta’s privacy centre, an approach the company claims is “consistent with how other tech companies are developing and improving their AI experiences in Europe”.

Privacy group noyb (None of Your Business) has filed complaints in 11 European countries, asking data protection authorities (DPAs) to launch an urgency procedure to stop this change before it comes into force later this month. DPAs in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain received the requests on behalf of local data subjects, arguing that the lack of any information about the purposes of the AI technology being trained goes against GDPR requirements.

“The opposite of GDPR compliance”

“Meta is basically saying that it can use ‘any data from any source for any purpose and make it available to anyone in the world’, as long as it’s done via ‘AI technology’,” said noyb founder Max Schrems. “This is clearly the opposite of GDPR compliance. ‘AI technology’ is an extremely broad term. Much like ‘using your data in databases’, it has no real legal limit. Meta doesn’t say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalised advertising or even a killer drone. Meta also says that user data can be made available to any ‘third party’ – which means anyone in the world.”

Meta says the change adheres to European law and does not diverge from how its Big Tech peers have developed their AI products. “We are confident that our approach complies with privacy laws, and our approach is consistent with how other tech companies are developing and improving their AI experiences in Europe (including Google and Open AI),” a Meta spokesperson told Reuters. The company previously cited “legitimate interest” to justify processing personal data for behavioural advertising, a claim that was rejected by the European Court of Justice.

In the US, Meta AI has access to public user data and private chat conversations on Facebook, Instagram, and WhatsApp, with no way for users to fully opt out of sharing their information.