Artificial intelligence lab OpenAI is launching a new “alignment” research division, designed to prepare for the rise of artificial superintelligence and ensure it doesn’t go rogue. This future type of AI is expected to have greater than human levels of intelligence including reasoning capabilities. Researchers are concerned that if it is misaligned to human values, it could cause serious harm.
Dubbed “superalignment”, OpenAI, which makes ChatGPT and a range of other AI tools, says there needs to be both scientific and technical breakthroughs to steer and control AI systems that could be considerably more intelligent than the humans that created it. To solve the problem OpenAI will dedicate 20% of its current compute power to running calculations and solving the alignment problem.
AI alignment: Looking beyond AGI
OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote a blog post on the concept of superalignment, suggesting that the power of a superintelligent AI could lead to the disempowerment of humanity or even human extinction. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the pair wrote.
They have decided to look beyond artificial general intelligence (AGI), which is expected to have human levels of intelligence, and instead focus on what comes next. This is because they believe AGI is on the horizon and superintelligent AI is likely to emerge by the end of this decade, with the latter presenting a much greater threat to humanity.
Current AI alignment techniques, used on models like GPT-4 – the technology that underpins ChatGPT – involve reinforcement learning from human feedback. This relies on human ability to supervise the AI but that won’t be possible if the AI is smarter than humans and can outwit its overseers. “Other assumptions could also break down in the future, like favorable generalisation properties during deployment or our models’ inability to successfully detect and undermine supervision during training,” explained Sutsker and Leike.
This all means that the current techniques and technologies will not scale up to work with superintelligence and so new approaches are needed. “Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence,” the pair declared.
Superintelligent AI could out-think humans
OpenAI has set out three steps to achieving the goal of creating a human-level automated alignment researcher that can be scaled up to keep an eye on any future superintelligence. This includes providing a training signal on tasks that are difficult for humans to evaluate – effectively using AI systems to evaluate other AI systems. They also plan to explore how the models being built by OpenAI generalise oversight tasks that it can’t supervise.
There are also moves to validate the alignment of systems, specifically automating the search for problematic behaviour externally and within systems. Finally the plan is to test the entire pipeline by deliberately training misaligned models, then running the new AI trainer over them to see if it can knock it back into shape, a process known as adversarial testing.
“We expect our research priorities will evolve substantially as we learn more about the problem and we’ll likely add entirely new research areas,” the pair explained, adding the plan is to share more of the roadmap as this evolution occurs.
The main goal is to achieve the core technical challenges of superintelligence alignment – known as superalignment – in four years. This plays to the prediction that the first superintelligence AI will emerge within the next six to seven years. “There are many ideas that have shown promise in preliminary experiments,” according to Sutsker and Leike. “We have increasingly useful metrics for progress and we can use today’s models to study many of these problems empirically.”
AI safety is expected to become a major industry in its own right. Nations are also hoping to capitalise on the future need to align AI to human values. The UK has launched the Foundation Model AI Taskforce with a £100m budget to investigate AI safety issues and will host a global AI summit later this year. This is likely to focus on the more immediate risk from current AI models, as well as the likely emergence of artificial general intelligence in the next few years.