
CMA launches review into foundation AI models – other UK regulators could follow

The watchdog is basing its review of the AI market on the principles set out in the UK's AI framework, including safety and fairness.

By Ryan Morrison

The Competition and Markets Authority (CMA) has launched a full review into the market for foundation AI models, including the large language models that power chat tools like ChatGPT. It follows the introduction of the UK's AI framework in March, which put authority in the hands of individual regulators, and more investigations into the impact of AI are likely to follow. One expert said it is important that Big Tech does not completely dominate the sector, and that mechanisms are put in place to support UK AI start-ups.

The success of large language models from companies like OpenAI has sparked a review of the market (Photo: Koshiro K / Shutterstock)

The CMA says it will measure the AI market leaders against the five overarching principles set out in the AI framework: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The review will run until early September and will inform any future regulation of foundation models, including the development of competition and consumer protection principles. The CMA has issued a call for evidence to understand how companies develop AI products and what consumer protections, covering safety, security and transparency, they build in.

The review is driven by the accelerated growth of foundation model AI. Before November 2022, adoption was a slow trickle largely confined to start-ups, but the launch of ChatGPT gave rise to a flurry of major developments, including Microsoft and Google integrating generative AI into their productivity tools and Salesforce going "all in" with Einstein GPT in its CRM software.

While there is plenty of choice among large language models, including self-hosted commercial models from companies such as Databricks, the bulk of development is happening on top of a handful of massive models from the likes of Microsoft-backed OpenAI and Google.
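In practice, "building on top of" one of these models usually means calling it through the provider's hosted API rather than running the model yourself. Below is a minimal, purely illustrative sketch of such an integration, assuming the pre-1.0 openai Python client; the API key, model name and prompts are placeholders, not details from this article.

```python
# Illustrative sketch only: a product feature built on top of a hosted
# foundation model via its API. Assumes the pre-1.0 "openai" Python client;
# key, model name and prompts are placeholders.
import openai

openai.api_key = "sk-..."  # placeholder: set your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # one of the handful of hosted foundation models
    messages=[
        {"role": "system", "content": "You are a customer-support assistant."},
        {"role": "user", "content": "Summarise this complaint in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

The structural point for the CMA is that whichever company hosts the model sits in the middle of every integration like this one, which is what concentrates market power in a handful of providers.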

The CMA says its review will examine how the competitive markets for foundation models and their use could evolve, what opportunities and risks these scenarios could bring for competition and consumer protection, and which principles can best guide the ongoing development of these markets.

Other regulators are also investigating the impact of generative AI on their own sectors. The Information Commissioner's Office (ICO) confirmed in an email to Tech Monitor that it is examining how generative AI models process personal data during training and deployment. "This is also an area we will be considering with our fellow regulators in the Digital Regulation Cooperation Forum as part of our 2023/24 work plan," a spokesperson explained.


Space for start-ups on the AI scene

Accenture's global lead for Responsible AI, Ray Eitel-Porter, told Tech Monitor that the rise of generative AI has propelled businesses into faster action on the responsible use of the technology. "We're seeing that organisations with responsible AI foundations in place have been able to quickly set guardrails for the new risks of generative AI, and we're also seeing a strong commitment to compliance," he said.

Accenture recently partnered with Salesforce to create a new "acceleration hub" to help organisations scale generative AI in a way that is safe and robust. It followed a Gartner survey which found companies were actively looking to roll out generative AI but were worried about data risks.

Ekaterina Almasque, general partner at deep tech venture capital firm OpenOcean, told Tech Monitor the review was welcome, as fair competition will always have a positive effect on innovation, but warned that "steps must be taken to make it easier for early-stage AI start-ups" to compete with the hyperscale cloud providers, which currently host the most popular AI models on their platforms.

“The current barriers to entry in AI – namely the acquisition of talent, the paucity of open-source LLMs and high costs in server time to train or fine-tune models – risk stifling our domestic start-up ecosystem if we do not find a new way forward. 

“To train AI models, you need three things. A high volume of data, a high quality of data, and the ability to access both without transgressing IP law or individual privacy. Hyperscalers possess enormous amounts of user data from the other sides of their business, granting them a great advantage over start-ups with far more limited access to training data. Steps must be taken to make it easier for early-stage AI start-ups to find their niche, compete, and create models to solve business problems.”

Read more: Slack GPT: Salesforce messaging platform gets AI capabilities
