April 4, 2023 (updated 6 April 2023, 8:36am)

UK ICO offers advice on generative AI as more European countries mull ChatGPT bans

Many regulators are pondering whether to ban the chatbot after Italy put a block on it over GDPR concerns.

By Matthew Gooding

UK data watchdog the Information Commissioner’s Office (ICO) has warned businesses deploying and developing generative AI systems like ChatGPT to ensure that protecting customer information is central to their plans. The advice comes as more European countries consider whether to ban ChatGPT while its publisher OpenAI answers questions about how it collects and processes data.

The UK ICO has offered advice to businesses using generative AI systems like ChatGPT. (Photo by Giulio Benzin/Shutterstock)

In a blog post published on Monday, Stephen Almond, the ICO’s director of technology and innovation, set out eight questions businesses should ask themselves before incorporating AI into workflows that involve customer data.

“It is important to take a step back and reflect on how personal data is being used by a technology that has made its own CEO ‘a bit scared’,” Almond wrote, referring to comments from OpenAI CEO Sam Altman about his own company’s systems.

He continued: “It doesn’t take too much imagination to see the potential for a company to quickly damage a hard-earned relationship with customers through poor use of generative AI. But while the technology is novel, the principles of data protection law remain the same – and there is a clear roadmap for organisations to innovate in a way that respects people’s privacy.”

Generative AI has enjoyed a boom in popularity since the launch of ChatGPT, OpenAI’s powerful natural-language chatbot, which now runs on the company’s recently released GPT-4 large language model (LLM). Microsoft has been incorporating the technology, which can answer questions in detailed and usually accurate prose, into its Office 365 suite, while other companies such as Google and Salesforce have been queuing up to launch their own AI-powered productivity tools based on LLMs.

How should businesses approach generative AI to safeguard data?

However, a backlash against ChatGPT has already started. On Friday Tech Monitor reported that Italy had blocked use of the chatbot until OpenAI could guarantee that the way it collects and stores data on Italian citizens is compatible with the EU’s GDPR.

Italy’s data authority, Garante Privacy (GPDP), said OpenAI provides a “lack of information to users and all interested parties” about what data is collected, and lacks a legal basis to justify the collection and storage of the personal data used to train the algorithms and models that power ChatGPT.


Almond said that “organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach”.

The “data protection by design and default” approach is part of UK GDPR, and mandates businesses to “integrate or ‘bake in’ data protection into your processing activities and business practices, from the design stage right through the lifecycle”.

Almond added: “This isn’t optional – if you’re processing personal data, it’s the law. Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources.”

The blog post goes on to list eight points organisations must consider if they wish to use generative AI or build their own models, covering transparency, unnecessary data processing and the impact of using AI in automated decision-making.

It also encourages tech leaders using generative AI to consider their role as a data controller. “If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor,” Almond said.

European countries consider ChatGPT bans

The ICO’s advice is in line with the UK’s general approach to regulating ChatGPT and other AI systems. Last week the government published a white paper setting out a light-touch, pro-innovation approach to AI regulation, and said it had no plans to launch a dedicated regulator. But other European countries are considering whether to follow Italy’s lead and ban the chatbot.

France and Ireland’s privacy regulators have contacted GPDP to find out more about the basis for Italy’s ban, Reuters reported on Monday. “We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” a spokesperson for Ireland’s Data Protection Commissioner said.

Meanwhile, Germany’s data commissioner, Ulrich Kelber, told the Handelsblatt newspaper that his country could instigate a ban similar to Italy’s.

Potential privacy violations by generative AI are “just a tip of the iceberg of rapidly unfolding legal troubles,” according to Dr Ilia Kolochenko, founder of pen-testing platform ImmuniWeb and a member of the Europol Data Protection Experts Network.

“After the pompous launch of ChatGPT last year, companies of all sizes, online libraries and even individuals – whose online content could, or had been, used without permission for training of generative AI – started updating terms of use of their websites to expressly prohibit collecting or using their online content for AI training,” Kolochenko said.

“Even individual software developers are now incorporating similar provisions to their software licenses when distributing their open-sourced tools, restricting tech giants from stealthily using their source code for generative AI training, without paying the authors a dime.”

He added: “Contrasted to contemporary privacy legislation that currently has no clear answer whether and to what extent generative AI infringes privacy laws, website terms of service and software licenses fall under the well-established body of contract law, having an abundance of case law in most countries.

“In jurisdictions where liquidated damages in contract are permitted and enforceable, violations of website’s terms of use may trigger harsh financial consequences in addition to injunctions and other legal remedies for breach of contract, which may eventually paralyse AI vendors.”

Read more: ChatGPT is giving the rest of the world AI FOMO
