Italy’s data protection regulator has warned one of the country’s largest media firms that it must adhere to EU data protection laws in its partnership with OpenAI. According to Reuters, the Garante per la protezione dei dati personali informed GEDI, the publisher of newspapers La Repubblica and La Stampa, that sharing any personal data from the company’s archives under a prospective agreement with the ChatGPT creator would likely contravene EU data protection laws.

The warning pertains to a deal signed in September between OpenAI and GEDI, which allows the former’s chatbot services to source and cite Italian-language content from the latter’s media outlets and to improve the overall accuracy of its AI systems. “The digital archives of newspapers contain the stories of millions of people, with information, details and even extremely sensitive personal data that cannot be licensed without due care for use by third parties to train artificial intelligence,” the regulator said in a statement.

The Garante warned that any violations could potentially result in sanctions against GEDI. It emphasised the need for strict oversight of such sensitive information when used by third parties in AI applications.

GEDI responded by clarifying that its agreement with OpenAI does not involve the sale of personal data, and said it hoped a constructive dialogue about the partnership could continue with the Italian data regulator. “The project has not been launched,” said GEDI. “No editorial content has been made available to OpenAI at the moment and will not be until the reviews underway are completed.”

Regulatory pressures on AI development in Europe

The GEDI-OpenAI case underscores growing tensions between European regulators and major AI firms. The EU’s Artificial Intelligence Act (EU AI Act), which came into force on 1 August 2024, establishes a comprehensive framework for AI systems, categorising them by risk levels and imposing stringent requirements. While proponents argue the Act prioritises transparency, accountability, and privacy, critics claim it imposes excessive restrictions that could stifle innovation.

Advocates for AI companies, including OpenAI and Google, have highlighted incidents like Italy’s temporary ban on ChatGPT in 2023 as evidence of Europe’s increasingly challenging regulatory environment. They argue that strict measures, such as limits on data sharing and constraints on model training, could make Europe less attractive for AI development compared to other regions, such as the US and China.

The Italian regulator’s scrutiny of the GEDI-OpenAI partnership mirrors these broader EU attitudes. While such interventions aim to ensure compliance with the General Data Protection Regulation (GDPR), they are often viewed as emblematic of a more cautious approach to AI innovation on the continent. Critics warn this regulatory caution risks slowing progress in a field where other regions are advancing more rapidly.

In addition, the EU AI Act has drawn criticism from industry leaders for its rigid framework. In May 2023, OpenAI’s CEO, Sam Altman, expressed concerns that the company might “cease operating” in the EU if the regulations proved unworkable. Similarly, in September 2024, executives from major firms, including Meta Platforms, signed an open letter warning that the EU’s strict and inconsistent tech regulations could hinder AI development, arguing that such policies risk making Europe less competitive and innovative than other markets.

Read more: What is the EU AI Act?