September 18, 2023, updated 19 September 2023, 10:07am

Competition watchdog the CMA outlines principles for selling foundation AI models

Artificial intelligence is already becoming ubiquitous throughout the economy, and the CMA hopes to get ahead of the curve.

By Ryan Morrison

The UK’s Competition and Markets Authority (CMA) has published principles to govern the sale and deployment of foundation AI models such as those underpinning services like ChatGPT from OpenAI. Rishi Sunak’s government has put regulation of AI in the hands of individual sector regulators, and the competition watchdog’s report says it expects developers to be accountable, diverse and transparent when building products with AI.

The CMA principles are designed to put the regulator ahead of the curve before AI becomes ubiquitous throughout the economy. (Photo by Gorodenkoff/Shutterstock)

Regulators around the world are wrestling with how to balance the potential benefits of foundation AI models against the risks they pose. The new report from the CMA followed months of work, including engagement with businesses developing, deploying and maintaining foundation models, as well as with academics and industry organisations. It also draws on an analysis of the latest AI research.

The CMA says the principles will inform its wider approach to the development and use of AI once the Digital Markets, Competition and Consumers Bill receives royal assent. The bill is currently going through parliament and will give the regulator greater powers.

A number of high-profile companies are already deploying foundation models in their products including Microsoft, through its Copilot brand, Salesforce and Google. Much of the discussion around regulation has focused on the next generation of frontier models such as GPT-5, Claude 3 or Google’s currently in-development Gemini model.

The CMA principles focus on current and future use cases for AI. The regulator says foundation models carry significant potential to spur innovation and drive economic growth, transforming the way we live and work. The report highlights several ways people and businesses can benefit from their safe and effective use, including new products and services, easier access to information, scientific breakthroughs and lower prices.

However, the report also warns that “changes can happen quickly and have a significant impact on people, businesses, and the UK economy”. It explains that, without appropriate safeguards and with weak competition, people and businesses could be harmed, for example through greater degrees of misinformation or AI-enabled fraud. “A handful of firms could use FMs to gain or entrench positions of market power and fail to offer the best products and services and/or charge high prices,” the CMA added.

The principles do not cover copyright, intellectual property, online safety, data protection or security, as these were outside the scope of the initial review, though the CMA expects them to be considered in future. Instead, the principles set out a path developers should follow to ensure they stay within consumer protection law.


CMA AI principles have ‘laudable aims’

Many of the principles mirror those set out in the UK government’s AI white paper, published earlier this year as a guide to individual regulators. Among the first is accountability, which places responsibility for outputs provided to consumers on the developers and deployers of a model. Other key principles include ready access to key inputs without unnecessary restrictions, a diversity of business models across open and closed source, and sufficient choice for businesses in how they use the models.

The CMA also draws on a need for fair dealing with end users and businesses purchasing access to models, ensuring no anti-competitive self-preferencing, tying or bundling. This could for example require OpenAI to ensure its API for developers has all the same functionality as its own published products like ChatGPT or DALL-E 2.

One of the core principles of the white paper, and a goal of the upcoming Bletchley AI Safety Summit hosted by the Department for Science, Innovation and Technology, is transparency. The CMA says consumers and businesses must be given information on the risks and limitations of models. Sarah Cardell, CEO of the CMA, said the speed at which AI is becoming part of everyday life for people and businesses is “dramatic”. “There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted,” warned Cardell.

She said that there “remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy”, and added: “While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary.”

Gareth Mills, partner at law firm Charles Russell Speechlys, said the principles demonstrate a “laudable willingness to engage proactively with the rapidly growing AI sector”, with the aim of protecting consumers as early as possible. “The principles contained in the report are necessarily broad and it will be intriguing to see how the CMA seeks to regulate the market to ensure that competition concerns are addressed,” he said.

Mills continued: “The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating against the potential for AI technologies to have adverse consequences for consumers.

“As the utilisation of the technologies grows, the extent to which there is any inconsistency between competition objectives and government strategy will be fleshed out.”

Read more: UK launches ‘AI for Development’ scheme at UN event
