February 8, 2023 (updated 9 March 2023, 9:48am)

How do you regulate advanced AI chatbots like ChatGPT and Bard?

General purpose artificial intelligence tools will provide new challenges for regulators, which they may struggle to meet.

By Ryan Morrison

“AI will fundamentally change every software category,” said Microsoft CEO Satya Nadella on Tuesday, as he announced that OpenAI’s generative AI technology was coming to the Bing search engine to offer users what Microsoft hopes will be a richer search experience.

The success of OpenAI’s ChatGPT and the upcoming release of Google’s Bard means the debate over AI regulation has ramped up. (Photo Illustration by Jonathan Raa/NurPhoto via Getty Images)

But how to regulate tools, such as OpenAI’s chatbot ChatGPT, that can generate any type of content from a few words, and are trained on the world’s knowledge, is a question that is puzzling policymakers around the world. The solution will involve assessing risk, one expert told Tech Monitor, and certain types of content will need to be more closely monitored than others.

Within two months of launch, ChatGPT became the fastest-growing consumer product in history, with more than 100 million monthly active users in January alone. It has prompted some of the world’s largest companies to pivot to, or speed up, AI rollout plans and has given a new lease of life to the conversational AI sector.

Microsoft is embedding conversational AI in its browser, search engine and broader product range, while Google is planning to do the same with the chatbot Bard and other integrations into Gmail and Google Cloud, several of which it showcased at an event in Paris today.

Other tech giants such as China’s Baidu are also getting in on the act with chatbots of their own, while start-ups and smaller companies including Jasper and Quora are bringing generative and conversational AI to the mainstream consumer and enterprise markets.

This comes with real risks, from widespread misinformation and harder-to-spot phishing emails through to misdiagnosis and malpractice if the tools are used for medical information. There is also a high risk of bias if the data used to train the model isn’t diverse. While Microsoft has a retrained model that is more accurate, and other providers such as AI21 are working on verifying generated content against live data, the risk of “real looking but completely inaccurate” responses from generative AI remains high.
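The verification idea mentioned above can be illustrated with a minimal sketch. This is not AI21’s actual implementation, which has not been published in detail; the fact table, function names and matching logic here are all hypothetical, and a real system would query live sources rather than a static dictionary.

```python
# Hypothetical sketch: label generated claims as verified or unverified by
# checking them against a trusted reference source. A production system
# would query live data, not a hard-coded table.

TRUSTED_FACTS = {
    "chatgpt launch": "November 2022",
    "bard developer": "Google",
}

def verify_claim(topic: str, generated_value: str) -> bool:
    """Return True only if the generated value matches the reference."""
    reference = TRUSTED_FACTS.get(topic.lower())
    return reference is not None and reference == generated_value

def annotate(claims: dict[str, str]) -> dict[str, str]:
    """Label each generated claim as 'verified' or 'unverified'."""
    return {
        topic: "verified" if verify_claim(topic, value) else "unverified"
        for topic, value in claims.items()
    }
```

Anything the model asserts that cannot be matched against the reference is flagged rather than silently passed through, which is one way to reduce “real looking but completely inaccurate” output.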

Last week, Thierry Breton, the EU commissioner for the internal market, said that the upcoming EU AI act would include provisions targeted at generative AI systems such as ChatGPT and Bard. “As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks,” Breton told Reuters. “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data.”


Breton and his colleagues will have to act fast, as new AI rules drawn up in the EU and elsewhere may not be ready to cope with the challenges posed by these advanced chatbots.

AI regulation: developers will need to be ‘ethical by design’

Analytics software provider SAS outlined some of the risks posed by AI in a recent report, AI & Responsible Innovation. Author Dr Kirk Borne said: “AI has become so powerful, and so pervasive, that it’s increasingly difficult to tell what’s real or not, and what’s good or bad”, adding that this technology is being adopted faster than it can be regulated.

Dr Iain Brown, head of data science at SAS UK & Ireland, said governments and industry both have a role to play in ensuring AI is used for good, not harm. This includes the use of ethical frameworks to guide the development of AI models and strict governance to ensure fair, transparent and equitable decisions from those models. “We test our AI models against challenger models and optimise them as new data becomes available,” Brown explained.
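The champion/challenger testing Brown describes can be sketched in a few lines. The details below are illustrative, not SAS’s actual process: the toy models, the accuracy metric and the promotion rule are all assumptions, standing in for whatever evaluation criteria a real governance framework would specify.

```python
# Hypothetical champion/challenger evaluation: the current (champion) model
# is kept unless a challenger scores better on fresh held-out data.

from typing import Callable, Sequence

Model = Callable[[float], int]  # toy model: maps one feature to a class label

def accuracy(model: Model, data: Sequence[tuple[float, int]]) -> float:
    """Fraction of examples the model labels correctly."""
    correct = sum(1 for x, y in data if model(x) == y)
    return correct / len(data)

def select_model(champion: Model, challenger: Model,
                 fresh_data: Sequence[tuple[float, int]]) -> Model:
    """Promote the challenger only if it outperforms the champion."""
    if accuracy(challenger, fresh_data) > accuracy(champion, fresh_data):
        return challenger
    return champion
```

Re-running this selection as new data arrives is what “optimise them as new data becomes available” amounts to in practice: the deployed model is continually re-justified against alternatives rather than trusted indefinitely.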

Other experts believe the companies producing the software will be responsible for mitigating the risks it poses, with only the highest-risk activities facing tighter regulation.

Edward Machin, data, privacy and cybersecurity associate at law firm Ropes & Gray, told Tech Monitor it is inevitable that technology like ChatGPT, which seemingly appeared overnight, will move faster than regulation, especially in an area like AI which is already difficult to regulate. “Although regulation of these models is going to happen, whether it is the right regulation, or at the right time, remains to be seen,” he says.

“Providers of AI systems will bear the brunt of the legislation, but importers and distributors – in the EU at least – will also be subject to potentially onerous obligations,” Machin adds. This could put some developers of open-source software in a difficult position. “There is also the thorny question of how liability will be handled for open-source developers and other downstream parties, which may have a chilling effect on willingness of those folks to innovate and conduct research,” Machin says.

AI, privacy and GDPR

Aside from the overall regulation of AI, there are also questions around the copyright of generated content and around privacy, Machin continues. “For example, it’s not clear whether developers can easily – if at all – address individuals’ deletion or rectification requests, nor how they get comfortable with scraping large volumes of data from third-party websites in a way that likely breaches those sites’ terms of service,” he says.

Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University, who works on regulation of AI and with the Alan Turing Institute, said some of these models will come under GDPR, and this could lead to orders being issued to delete training data or even the algorithms themselves. It may also spell the end of widescale scraping of the internet, currently used to power search engines like Google, if website owners lose out on traffic to AI searches.

The big problem, says Edwards, is the general purpose nature of these models. Because the technology is designed for multiple use cases, it is difficult to judge what an end user will do with it, which makes it hard to regulate under the EU AI Act, drafted as it is on the basis of risk. She said the European Commission is trying to add rules to govern this type of technology but is likely to do so after the act becomes law, which could happen this year.

Enforcing algorithmic transparency could be one solution. “Big Tech will start lobbying to say ‘you can’t put these obligations on us as we can’t imagine every future risk or use’,” says Dr Edwards. “There are ways of dealing with this that are less or more helpful to Big Tech, including making the underlying algorithms more transparent. We are in a head-in-the-sand moment. Incentives ought to be towards openness and transparency to better understand how AI makes decisions and generates content.”

“It is the same problem you get with much more boring technology, that tech is global, bad actors are global and enforcement is incredibly difficult,” she said. “General purpose AI doesn’t match the structure of the AI act which is what the fight is over now.”

Adam Leon Smith, CTO of AI consultancy DragonFly, has worked on technical AI standardisation with UK and international standards development organisations and acted as the UK industry representative to the EU AI standards group. “Regulators globally are increasingly realising that it is very difficult to regulate technology without consideration of how it is actually being used,” he says.

He told Tech Monitor that accuracy and bias requirements can only be considered in the context of use, with risks, rights and freedoms requirements also difficult to consider before it reaches widescale adoption. The problem, he says, is that large language models are general-purpose AI.

“Regulators can force transparency and logging requirements on the technology providers,” Leon Smith says. “However, only the user – the company that operates and deploys the LLM system for a particular purpose – can understand the risks and implement mitigations like humans in the loop or ongoing monitoring.”
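A human-in-the-loop mitigation of the kind Leon Smith mentions can be sketched as a deployment gate: outputs scored above a risk threshold are routed to a reviewer instead of being returned directly. Everything here is an illustrative assumption, not any vendor’s real API; the keyword-based risk scorer in particular stands in for whatever risk assessment a deployer would actually use.

```python
# Hypothetical human-in-the-loop gate: high-risk LLM outputs are diverted
# to a human reviewer rather than returned to the user directly.

from typing import Callable

def risk_score(text: str) -> float:
    """Toy risk heuristic: flag outputs that resemble medical advice."""
    risky_terms = ("diagnosis", "dosage", "prescription")
    return 1.0 if any(term in text.lower() for term in risky_terms) else 0.1

def deploy_with_review(generate: Callable[[str], str],
                       review: Callable[[str], str],
                       prompt: str,
                       threshold: float = 0.5) -> str:
    """Return the model's draft directly, or the reviewer's output if risky."""
    draft = generate(prompt)
    return review(draft) if risk_score(draft) >= threshold else draft
```

The point of the sketch is that the gate sits with the deployer, not the model provider: only the organisation operating the system for a particular purpose knows which outputs count as high-risk in its context.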

AI regulatory debate looming

It is a large-scale debate that is looming over the European Commission and hasn’t even started in the UK, but one that regulators such as data watchdog the Information Commissioner’s Office and its counterpart for financial markets, the Financial Conduct Authority, will have to tackle. Eventually, Leon Smith believes, as regulators increase their focus on the issue, AI providers will start to list the purposes for which the technology “must not be used”, including issuing legal disclaimers before a user signs in to put them outside the scope of “risk-based regulatory action”.

Current best practices for managing AI systems “barely touch on LLMs, it is a nascent field that is moving extremely quickly,” Leon Smith says. “A lot of work is necessary in this space and the firms providing such technologies are not stepping up to help define them.”

OpenAI’s CTO Mira Murati this week said that generative AI tools will need to be regulated. “It is important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” she said in an interview with Time.

But beyond the AI vendors, she said “a tonne more input into the system” is needed, including from regulators and governments. She added that it’s important the issue is considered quickly. “It’s not too early,” Murati said. “It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”

Read more: ChatGPT update will improve chatbot’s factual accuracy
