April 12, 2023 (updated 13 Apr 2023, 8:57am)

Biden Administration ramps up efforts to regulate ChatGPT

One country's efforts will not contain the language model's misuse unless regulation is worldwide, researchers have warned Tech Monitor.

By Claudia Glover

The Biden Administration has announced new measures to regulate AI tools like ChatGPT, as the complexity and popularity of tools based on large language models continue to boom. Though the technology is difficult to control, researchers suggest government bodies should regulate the AI industry to protect the public from its misuse. To regulate the chatbot effectively, however, rules must be implemented on a global scale.

The Biden Administration ramps up efforts to regulate chatbot ChatGPT. (Photo by Anthony Ricci/Shutterstock)

Dangers introduced by the chatbot include medical self-misdiagnosis, as well as misleading legal advice and general misinformation. To mitigate this, the information ChatGPT draws from could be curated by sector rather than taken from the whole web. Such curation would create valuable datasets, however, which would themselves pose a cybersecurity risk, researchers warn.

Biden Administration ramps up efforts to regulate ChatGPT

A request for comment (RFC) has been launched by the US Department of Commerce’s National Telecommunications and Information Administration (NTIA) in a bid to gather information on how AI systems such as ChatGPT could be regulated. Information gathered from the RFC will be used to inform the Administration’s ongoing research into “a cohesive and comprehensive federal government approach to AI-related risks and opportunities,” states a press release.

As the use of large language models (LLMs) and other general-purpose AI becomes increasingly widespread, the risks their ubiquity poses to humans are coming to light. “Companies have a responsibility to make sure their AI products are safe before making them available,” said the NTIA.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” explained Alan Davidson, assistant secretary of commerce for communications and information and NTIA administrator. 

Many governments have been trying to draw up regulations that steer the sweeping use of generative AI models towards benign ends. The UK made reference to regulating AI-powered search services in its Online Safety Bill, for example. Lead lawmakers in the European Parliament suggested in February that AI systems generating complex texts without human oversight should be added to a “high risk” list, in an effort to stop ChatGPT from distributing misinformation at scale.

Researchers across industries are reacting with increasing alarm to the ability of GPT-4, the latest version of OpenAI’s foundation AI model, to pass high-level examinations with flying colours. In March it passed a simulated bar examination with a score around the top 10% of test takers, said OpenAI, the company behind the chatbot.


ChatGPT has scored 60% or higher on the United States Medical Licensing Examination, with responses that are coherent and contain real insight, according to a study published on 9 February in the open-access journal PLOS Digital Health.

However, medical professionals and AI experts have cast aspersions on the use of the LLM for self-diagnosis. Benjamin Tolchin, a neurologist and ethicist at Yale University, told Scientific American that patients have started to use ChatGPT instead of Google as a self-diagnosis tool, and that the generative AI has been getting the diagnoses wrong.

What would Biden’s ChatGPT regulation look like?

Chatbots have a number of pitfalls, including uncertainty about the accuracy of the information they give people, threats to privacy, and racial and gender bias in the text the algorithms draw from. Tolchin also questions how patients will interpret the information. There is a new potential for harm that did not exist with simple Google searches or symptom checkers, Tolchin told the magazine.

Oxford University’s Sharma agreed that governments need to act quickly, before ChatGPT becomes incorporated into the diagnostic process. “GPs in the UK might use ChatGPT as an assistive technology, to do the initial screening for them, for example. That needs to be appropriately regulated so that it’s right, rather than using information from the internet. That’s a critical use case,” Sharma told Tech Monitor.

To ensure that chatbots are useful tools for practitioners in industries such as medicine and law, they must be trained on tailored information rather than on the whole internet. Such bespoke, industry-wide data lakes would be a treasure trove of valuable data, however, explained Sharma.

“To start with, in the medical use case, ChatGPT should not have any vulnerabilities because that database should not be hacked. It will contain a high level of personally identifiable information,” he said. OpenAI launched a bug bounty programme yesterday that will reward white-hat hackers who uncover vulnerabilities in ChatGPT with up to $20,000, in an effort to mitigate such risks.

This could be why some countries have banned its use outright, pausing the tool’s development and buying time to consider how it can be used and misused. Italy banned the LLM in March, and Germany has reportedly been considering following suit.

Such measures will be useless unless worldwide regulation comes into play, warns Sharma. “We need to maintain the standard across the planet. Every continent has their own challenges for regulation so there will definitely be chances for misuse. Cybercriminals can embed the ChatGPT API in any social media post or payload. It’s the internet, right?”
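Sharma’s warning about portability is easy to illustrate: a handful of lines against OpenAI’s public chat completions API is all it takes to embed ChatGPT in any script, bot or payload. The sketch below is a minimal illustration, not code from the article; the API key placeholder, function name and prompt are assumptions.

```python
# Minimal sketch of how trivially ChatGPT can be embedded in any software.
# Illustrative only: the key placeholder, function name and prompt are assumptions.
import requests

API_KEY = "sk-..."  # an ordinary OpenAI API key is the only barrier to entry


def ask_chatgpt(prompt: str) -> str:
    """Send one prompt to OpenAI's chat completions endpoint and return the reply."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


print(ask_chatgpt("Draft a persuasive social media post about a new product."))
```

Because integration is this cheap, a ban in one jurisdiction restricts local access but does little to stop the model being wired into software, and misused, anywhere else.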

Read more: UK AI regulation white paper dodges ChatGPT questions
