View all newsletters
Receive our newsletter - data, insights and analysis delivered to you

OpenAI commits to ‘superalignment’ research

The company predicts the first superintelligent AI will emerge by the end of the decade, and will be able to outthink humans.

By Ryan Morrison

Artificial intelligence lab OpenAI is launching a new “alignment” research division, designed to prepare for the rise of artificial superintelligence and ensure it doesn’t go rogue. This future type of AI is expected to have greater than human levels of intelligence including reasoning capabilities. Researchers are concerned that if it is misaligned to human values, it could cause serious harm.

OpenAI says it is going beyond the threat of AGI and looking to future superintelligences (Photo: Camilo Concha/Shutterstock)
OpenAI says it is going beyond the threat of AGI and looking to future superintelligences (Photo: Camilo Concha/Shutterstock)

Dubbed “superalignment”, OpenAI, which makes ChatGPT and a range of other AI tools, says there needs to be both scientific and technical breakthroughs to steer and control AI systems that could be considerably more intelligent than the humans that created it. To solve the problem OpenAI will dedicate 20% of its current compute power to running calculations and solving the alignment problem.

AI alignment: Looking beyond AGI

OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote a blog post on the concept of superalignment, suggesting that the power of a superintelligent AI could lead to the disempowerment of humanity or even human extinction. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the pair wrote.

They have decided to look beyond artificial general intelligence (AGI), which is expected to have human levels of intelligence, and instead focus on what comes next. This is because they believe AGI is on the horizon and superintelligent AI is likely to emerge by the end of this decade, with the latter presenting a much greater threat to humanity.

Current AI alignment techniques, used on models like GPT-4 – the technology that underpins ChatGPT – involve reinforcement learning from human feedback. This relies on human ability to supervise the AI but that won’t be possible if the AI is smarter than humans and can outwit its overseers. “Other assumptions could also break down in the future, like favorable generalisation properties during deployment or our models’ inability to successfully detect and undermine supervision during training,” explained Sutsker and Leike.

This all means that the current techniques and technologies will not scale up to work with superintelligence and so new approaches are needed. “Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence,” the pair declared.

Superintelligent AI could out-think humans

OpenAI has set out three steps to achieving the goal of creating a human-level automated alignment researcher that can be scaled up to keep an eye on any future superintelligence. This includes providing a training signal on tasks that are difficult for humans to evaluate – effectively using AI systems to evaluate other AI systems. They also plan to explore how the models being built by OpenAI generalise oversight tasks that it can’t supervise.

Content from our partners
The hidden complexities of deploying AI in your business
When it comes to AI, remember not every problem is a nail
An evolving cybersecurity landscape calls for multi-layered defence strategies

There are also moves to validate the alignment of systems, specifically automating the search for problematic behaviour externally and within systems. Finally the plan is to test the entire pipeline by deliberately training misaligned models, then running the new AI trainer over them to see if it can knock it back into shape, a process known as adversarial testing.

“We expect our research priorities will evolve substantially as we learn more about the problem and we’ll likely add entirely new research areas,” the pair explained, adding the plan is to share more of the roadmap as this evolution occurs.

The main goal is to achieve the core technical challenges of superintelligence alignment – known as superalignment – in four years. This plays to the prediction that the first superintelligence AI will emerge within the next six to seven years. “There are many ideas that have shown promise in preliminary experiments,” according to Sutsker and Leike. “We have increasingly useful metrics for progress and we can use today’s models to study many of these problems empirically.”

AI safety is expected to become a major industry in its own right. Nations are also hoping to capitalise on the future need to align AI to human values. The UK has launched the Foundation Model AI Taskforce with a £100m budget to investigate AI safety issues and will host a global AI summit later this year. This is likely to focus on the more immediate risk from current AI models, as well as the likely emergence of artificial general intelligence in the next few years.

Read more: Japan targets light touch AI regulation

Topics in this article : ,
Websites in our network
Select and enter your corporate email address Tech Monitor's research, insight and analysis examines the frontiers of digital transformation to help tech leaders navigate the future. Our Changelog newsletter delivers our best work to your inbox every week.
  • CIO
  • CTO
  • CISO
  • CSO
  • CFO
  • CDO
  • CEO
  • Architect Founder
  • MD
  • Director
  • Manager
  • Other
Visit our privacy policy for more information about our services, how Progressive Media Investments may use, process and share your personal data, including information on your rights in respect of your personal data and how you can unsubscribe from future marketing communications. Our services are intended for corporate subscribers and you warrant that the email address submitted is your corporate email address.
THANK YOU