March 8, 2023, updated 20 Mar 2023 3:12pm

OpenAI challenged to enter ChatGPT into new AI regulatory sandbox

OpenAI is on a 'path to AGI', or artificial general intelligence, that will require strong guidelines and boundaries, claims AI campaign group ForHumanity.

By Ryan Morrison

Microsoft-backed artificial intelligence start-up OpenAI has been urged to join a new regulatory sandbox to test the limits and potential guardrails on its general AI models and tools such as ChatGPT. The challenge came from ethical AI campaign group ForHumanity in an open letter, declaring “a regulatory sandbox provides the ideal process to share governance widely and fairly.”

OpenAI has been challenged to use a sandbox as a means of complying with the EU AI Act (Photo by Ascannio/Shutterstock)

The concept of regulatory sandboxes isn't new; they have been widely used around the world to test ideas around data security, energy systems and, most recently, fintech applications. Often run by regulators, they provide a way to trial a new service or product with some rules removed or added in order to test boundaries and viability.

One example of a regulatory sandbox in the realm of AI is the one set up by the UK's Information Commissioner's Office (ICO) to explore and experiment with products and services related to data protection. The ICO's sandbox is aimed at helping companies develop innovative data protection solutions while managing privacy risks.

ForHumanity founder and executive director Ryan Carrier emphasised the importance of OpenAI's engagement with a regulatory sandbox in the open letter, citing its potential to allow governance to be shared fairly and broadly. He believes this would provide a necessary means of testing the limits and potential guardrails of OpenAI's general AI models and tools, such as ChatGPT, as well as ensuring they comply with the upcoming EU AI Act.

The EU AI Act is a proposed piece of legislation aimed at regulating the use of artificial intelligence across the European Union. The act seeks to promote the ethical and trustworthy development and use of AI while also ensuring safety, privacy and fundamental rights. The European Commission and European Parliament are each working on their own versions of the final draft. It isn't clear at this stage how the act will handle general AI, although tech companies are lobbying for it to be based on final use rather than the model itself.

Published on LinkedIn, the open letter follows a research note from ChatGPT-maker OpenAI on the future of artificial intelligence and the "path to AGI", or artificial general intelligence, seen as the point where AI can think like a human across a wide range of cognitive tasks.

AI is a transformative technology

OpenAI founder Sam Altman argues that AGI has the potential to transform many aspects of society and that "careful planning and collaboration is needed to ensure its safe and beneficial development", including engaging a broad community of stakeholders, such as policymakers and civil society organisations, in that development.

Engaging with a third party on creating a regulatory sandbox would fulfil this ambition, says Carrier, including through the adoption of certification systems to demonstrate a tool is compliant with regulation. This would “enable OpenAI to build compliance capabilities for the requirements of the law,” he said.


Tech Monitor has approached OpenAI for comment but had not received a response at the time of writing.

In his research note on the "path to AGI", Altman declares the need for "careful planning and collaboration" if the value of advanced artificial intelligence technology is to be both widely used and accepted by the public and by enterprise. "To maximise the benefits and minimise the risks of AGI, it is essential to consider its implications for society at large," he added.

“With the exception of prohibited technologies, ForHumanity supports the beneficial and ethical use of all technology, and our work endeavours to support and enable OpenAI, and others, to maximise risk mitigation, for all stakeholders, through Independent Audit of AI Systems (IAAIS),” wrote Carrier.

Careful consideration of risk

"Engaging in Independent Audit of ChatGPT is a robust solution for navigating massive risks with the very tools that are likely a portion of the foundation of AGI you referred to in Planning for AGI and Beyond," the ForHumanity letter says. "In the regulatory sandbox, together, we can test and prove compliance with the EU AI Act."

As part of its proposal for a sandbox, ForHumanity suggests three key tools. The first is a comprehensive risk management framework that fully integrates leading standards with a range of diverse inputs and multi-stakeholder feedback. This includes human risk assessors responsible for identifying risky inputs and indicators during the design, development and deployment phases of an algorithm's lifecycle. It would provide a "robust beginning to governance".

This would then lead to the second tool: the creation of an OpenAI ethics committee trained in algorithm ethics and operating to a public code of ethics. This, says Carrier, is critical. "OpenAI attracts talented and expert data scientists and model developers to build its systems, but do you have a team of experts governing the ethics that are embedded in ChatGPT and other models?"

The final tool is one developed by ForHumanity known as Systemic Societal Impact Analysis (SSIA), designed to foster self-awareness about the societal impact of products and developments. This is a requirement of the EU Digital Services Act, which OpenAI tools have to conform to in addition to the upcoming AI Act and GDPR.

"These tools are examples of the comprehensive audit criteria that ForHumanity has established to provide independent auditors the ability to assure and certify compliance for High-Risk AI under the EU AI Act," Carrier explained. "Working in a regulatory sandbox to test, research and build assured compliance with laws and regulation established by a democratic society (the EU), deploying rules established by 'other organisations' that operate globally to advance safety with aligned incentives towards good outcomes, seems to agree exceptionally well with your stated goals."

Read more: This is how GPT-4 will be regulated
