The parent company of Facebook and Instagram has announced it will delay the launch of Meta AI in Europe, claiming that current regulatory demands would leave users with “a second-rate experience”. This hardline position is a direct response to serious concerns raised over its plans to use public content to train its large language models (LLMs) without requiring user opt-in.

On Friday, Meta’s lead regulator, the Irish Data Protection Commission (DPC), requested a delay to LLM training using European users’ Facebook and Instagram content on behalf of the European DPAs. Meta labelled this request as “a step backwards for European innovation and competition in AI development”, saying the move would only delay bringing the benefits of AI to people in Europe.

Its aggressive stance does not appear to have spooked the DPC, however. It said in a statement: “The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. This decision followed intensive engagement between the DPC and Meta.”   

Meta AI makes its case for European reform

Meta’s AI products, including Llama, its open source LLM, and the Meta AI assistant, are already available in other parts of the world, where the company has come up against fewer regulatory roadblocks. The company argues that to ready such platforms for a European launch, the models underpinning them would need to be trained “on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe”.

Meta had begun informing its more than 400 million European users of intended privacy policy reforms coming into effect on 26 June, which would enable its AI models to access Facebook and Instagram posts, photos, captions and comments without the need for user permission. The company cited the legal basis of ‘Legitimate Interests’ for circumventing data protection requirements, but concerns were raised surrounding the GDPR compliance of such a move. Privacy group noyb (None of Your Business) filed complaints in 11 European countries, asking national DPAs to launch an urgency procedure to halt the Meta reforms.

“We welcome this development but will monitor it closely,” noyb chair Max Schrems said of the DPC’s request. “So far, there has been no official change to the Meta privacy policy that would make this commitment legally binding. The cases we have filed are ongoing and will require an official decision.”

What next for Meta AI in Europe?

With all parties at something of an impasse, it will be interesting to see how long Meta maintains its stance of not bringing its AI products to Europe, a key market for the tech giant. Indeed, some more cynical critics were quick to observe that the announcement came on Friday evening, a traditional time to bury bad news and limit the impact on its share price.

“We are committed to bringing Meta AI, along with the models that power it, to more people around the world, including in Europe,” the company said in a statement. “But, put simply, without including local information we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.”

The company described the continent as being “at a crossroads” and said it risked missing out on capabilities and innovations available in other parts of the world: “As Europe stands at the threshold of society’s next major technological evolution, some activists are advocating extreme approaches to data and AI. Let’s be clear: those positions don’t reflect European law, and they amount to an argument that Europeans shouldn’t have access to — or be properly served by — AI that the rest of the world has. We deeply disagree with that outcome.”

Schrems, unsurprisingly, was not entirely sympathetic to such a stance. “The Meta press release reads a bit like collective punishment,” he said. “’If one European insists on his or her rights, the whole continent will not get our shiny new products.’ But Meta has every opportunity to deploy AI based on valid consent – it just chooses not to do so.”