
ChatGPT blocked in Italy over privacy concerns

OpenAI has come under fire from industry and privacy bodies over the way it handles and processes user data.

By Ryan Morrison

OpenAI’s successful natural language AI platform ChatGPT has been blocked in Italy. The company has been ordered by Garante Privacy (GPDP), the Italian data protection authority, to cease collecting and processing Italian users’ data until it complies with personal data protection regulations such as GDPR.

OpenAI launched ChatGPT in November 2022 and it reached over 100 million active monthly users in January. (Photo by rarrarorro/Shutterstock)

GPDP argues that OpenAI provides a “lack of information to users and all interested parties” over what data is collected, and that it lacks a legal basis to justify the collection and storage of the personal data used to train the algorithm and models that power ChatGPT.

GPDP also raised concerns over the absence of any age-filtering technology to prevent the use of the tool by minors and to ensure they are not exposed to “absolutely unsuitable answers with respect to their degree of development and self-awareness”.

The investigation could be the first of many in the EU: because OpenAI has no legal entity in Europe, any individual national regulator can investigate the impact of its data collection. The company has 20 days to respond to the order and could face fines of up to 4% of annual turnover.

OpenAI hasn’t disclosed what training data was used to make the latest iteration of its foundation model GPT-4, but previous generations were built on data scraped from the internet including Reddit and Wikipedia. The latest update also introduces a web browser, allowing ChatGPT to find information on the live internet for the first time.

GPDP also highlighted a recent data breach, in which conversation history titles were leaked to other users and, some claim, exposed personal details and payment information, as cause for pausing further data collection.

Potential for significant fines

If OpenAI is found to have processed user data unlawfully, data protection authorities across Europe could order that data deleted, including data used in the training of the underlying model. This could force OpenAI to retrain GPT-4 and make it unavailable both via the API and within ChatGPT itself. All of this comes before the EU AI Act enters into force, although that legislation makes little provision for the regulation of foundation and general-purpose AI systems like ChatGPT.


A translation of the original Italian note from GPDP states that the action is being taken because of “the lack of information to users and all interested parties whose data are collected by OpenAI, but above all the absence of a legal basis that justifies the collection and massive storage of personal data, in order to ‘train’ the algorithms underlying the operation of the platform”.

The Italian regulator has form, blocking the Replika chatbot earlier this month over concerns it posed “too many risks to children and emotionally vulnerable individuals”. The virtual friend tool is not able to process the personal data of Italian users until an investigation is complete.

Italy isn’t the only country critical of these types of tools. In the US, the Center for AI and Digital Policy (CAIDP) has lodged a complaint with the FTC over the way OpenAI uses data. It wants the US regulator to order OpenAI to freeze development on its GPT models, claiming GPT-4 fails to satisfy any of the standards set out by the commission, including the need to be transparent, explainable, fair and empirically sound.

“CAIDP urges the commission to initiate an investigation into OpenAI and find that the commercial release of GPT-4 violates Section 5 of the FTC Act, the FTC’s well-established guidance to businesses on the use and advertising of AI products, as well as the emerging norms for the governance of AI that the United States government has formally endorsed and the Universal Guidelines for AI that leading experts and scientific societies have recommended,” the organisation wrote in its FTC complaint.

“The FTC is already looking at LLMs and the impact of generative AI; and its Section 5 powers clearly apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or primary purpose,” explained Ieuan Jolly, New York-based Linklaters partner and chair of its US TMT & Data Solutions Practice. “Generative AI and synthetic media based on chatbots that simulate human activity fall squarely within the type of tools that have the capability to engage in deceptive practices, for example, software that creates voice clones or deepfake videos.

“We’ve seen how fraudsters can use these AI tools and chatbots to generate realistic but fake content quickly and cheaply, targeting specific groups or individuals through fake websites, posts, profiles and executing malware and ransomware attacks – and the FTC has previously taken action in similar cases. The challenge is how to regulate a product that merely has the capability for deceptive production, as all generative technology can have, while permitting technological progress.”

Could lead to other complaints

This follows calls from more than 1,000 tech leaders and commentators, including Steve Wozniak and Elon Musk, for a “pause” in the development of the next generation of large language models until ethical guardrails can be introduced.

Edward Machin, senior lawyer in the Ropes & Gray data, privacy and cybersecurity practice, said it is sometimes easy to forget that ChatGPT only went live in November last year and has been widely used for a matter of weeks, meaning most users haven’t had time to stop and consider the privacy implications of their data being used to train the algorithm.

“Although they may be willing to accept that trade, the allegation here is that users aren’t being given the information to allow them to make an informed decision, and more problematically, that in any event there may not be a lawful basis to process their data,” he said. “The decision to stop a company processing personal data is one of the biggest weapons in the regulator’s armoury and can be more challenging for a company to deal with than a financial penalty. I suspect that regulators across Europe will be quietly thanking the Garante for being the first to take this step and it wouldn’t be surprising to see others now follow suit and issue similar processing bans.”

Ryan Carrier, executive director of ForHumanity, said there have been calls, including from OpenAI CEO Sam Altman, for independent audits of AI systems but that to date nothing has happened. “ForHumanity has a GDPR certification scheme for AI, Algorithmic, Autonomous Systems that has been submitted to national data protection authorities in the UK and EU – much of this angst could be avoided by establishing compliance-by-design capacity at OpenAI.”

Read more: UK at odds with Elon Musk and other experts on AI regulation
