June 19, 2023 (updated 20 Jun 2023 10:32am)

ICO urges companies to use privacy-enhancing technologies 

The data watchdog is pushing for safe deployment of AI and other data technology that puts privacy protections first.

By Ryan Morrison

The Information Commissioner’s Office (ICO) has urged companies to exercise caution when dealing with AI, and to deploy privacy-enhancing technologies (PETs) when handling personal information. The new guidance from the data watchdog comes as the government pushes for faster adoption of AI to grow the economy and boost productivity.

The ICO says privacy-enhancing technologies can make it easier to share information securely. (Photo by Ascannio/Shutterstock)

One of the measures designed to improve data security practices is the introduction of new guidelines for the use of PETs. Published by the ICO, the guidelines are aimed at data protection officers and those working with large personal data sets across finance, healthcare, research and government.

PETs can be used to make it easier to share personal and sensitive information safely, securely and anonymously. They work by creating versions of the data that can be anonymised, shared, linked and analysed without giving direct access to the information itself. An example use case could be financial institutions sharing data with a third party that monitors for financial crimes, including fraud and money laundering.
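The ICO's guidance does not prescribe particular tools, but one common PET pattern behind the kind of data sharing described above is pseudonymisation with a keyed hash: direct identifiers are replaced by tokens that a third party can link across records but cannot reverse. A minimal sketch (the function and field names here are illustrative, not from the guidance):

```python
import hashlib
import hmac


def pseudonymise(record: dict, secret_key: bytes, id_field: str = "account_id") -> dict:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always produces the same token, so a recipient can
    link records belonging to one account without ever seeing the real
    identifier. Only the holder of secret_key can reproduce the mapping.
    """
    token = hmac.new(secret_key, record[id_field].encode(), hashlib.sha256).hexdigest()
    out = dict(record)
    out[id_field] = token
    return out


# Hypothetical transaction shared with a fraud-monitoring partner.
key = b"kept-within-the-sharing-institution"
txn = {"account_id": "GB29NWBK60161331926819", "amount": 2500, "flagged": True}
safe = pseudonymise(txn, key)
```

Because the token is deterministic for a given key, the partner can still analyse patterns across transactions; because the key is never shared, the original account number stays protected.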

PETs are for life, says ICO

John Edwards, UK Information Commissioner, said any organisation that shares large volumes of data, particularly special category data, should move towards using PETs over the next five years. “PETs enable safe data sharing and allow organisations to make the best use of the personal data they hold, driving innovation.”

These tools effectively build a secure environment for data from the ground up, allowing as little information as possible to be shared, gathered and retained. They do so while still complying with data protection laws and fraud prevention guidelines.

Edwards is meeting with other G7 data protection specialists to discuss how these sorts of techniques can be used to improve the flow of information across borders. “Together with our G7 counterparts, we are focused on facilitating and driving international support for responsible and innovative adoption of PETs by researching and addressing barriers to adoption with clear guidance and examples of best practice,” he said.

This includes an exploration of other emerging technologies including the rapid development and deployment of generative AI. The aim is to ensure organisations across the world are innovating in a way that respects people’s information and privacy.


ICO warns against AI deployment

This isn’t the first time the ICO has spoken out about the risks that AI and large-scale data gathering pose to privacy protection. Last week, the watchdog warned that businesses need to address the privacy risks of generative AI before rushing to adopt the technology.

Stephen Almond, executive director of regulatory risk at the ICO, said the organisation would be monitoring the situation. This includes regular and tougher checks on whether groups deploying generative AI are compliant with data protection laws. “Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the cost of their services, but they must not be blind to the privacy risks.”

He added: “We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout.”

The announcements come as governments around the world grapple with how to handle artificial intelligence in a safe and secure way. Italy and other parts of Europe have previously looked to deploy GDPR against companies like OpenAI and other chatbot providers over the way data is collected and used in both training and output.

The UK has established the Foundation Model AI Taskforce. The £100m group is chaired by investor Ian Hogarth and has been tasked with developing and exploring new tools for safe AI. Some of this work is likely to include rules around data, security and privacy, similar to work carried out by the ICO.

The government is keen to drive the adoption of artificial intelligence throughout the economy, particularly in public services. Chancellor Jeremy Hunt is said to be keen to deploy AI in a way that increases the productivity of civil servants without increasing cost. This, according to a report in the FT, would allow him to reduce taxes before the next general election in 2024.

Read more: UK AI taskforce: Sunak appoints investor and entrepreneur as chair
