Some of the largest AI labs in the world have launched a new forum to regulate the development of large language models (LLMs). OpenAI, Microsoft, Google and Anthropic are the founding members and will focus on future LLMs, not those in use today.
The companies say the Frontier Model Forum could be a precursor to a new industry body aimed at the safe and responsible development of advanced AI systems. According to the group, this will include collaborating on AI safety research, identifying standards, and sharing information with policymakers and the wider industry.
Like the voluntary agreement the companies made with US President Joe Biden at the White House last week, the new group is focused on future systems at the frontier of research. This will include any model more powerful than those in use today, such as GPT-4 from OpenAI, Claude 2 from Anthropic and PaLM from Google.
Since the launch of ChatGPT by OpenAI in November last year, LLMs and generative AI have captured global attention. They are being widely used in enterprise and throughout the economy but are also facing increased scrutiny from regulators and campaigners.
Most of the regulatory focus, from the EU AI Act to the UK’s AI White Paper, is on ways to regulate the AI models in use today, but the founders of the Frontier Model Forum are trying to shift attention to next-generation systems. This includes systems dubbed artificial general intelligence: models capable of human-level understanding.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,” Microsoft president Brad Smith said in a statement.
OpenAI says the new industry body won’t lobby governments on regulation, focusing instead on safety research. It isn’t clear whether this abstention from lobbying will extend to the member companies themselves, as reports suggest OpenAI has lobbied the EU to water down parts of its AI Act.
Frontier Model Forum board and working group announced
The forum says its next steps will include creating an advisory board and funding a working group and executive board to move standards creation forward. “The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards,” OpenAI said.
Other organisations developing so-called frontier AI models have been invited to join the forum and contribute to its AI safety research. Its core objectives include advancing safety research, setting responsible development standards that minimise risk, and identifying best practices.
While there apparently won’t be direct lobbying, the forum says it will collaborate with policymakers, academics, civil society and companies to share knowledge about the trust and safety risks of these next-generation models.
It isn’t clear why Amazon, Inflection and Meta aren’t part of the forum, despite being among the companies that signed up to the White House AI principles, which include putting models through vulnerability and security testing before they go live.
Membership criteria for the Frontier Model Forum
Members of the forum must be developing their own frontier models, demonstrate a commitment to model safety, and be willing to contribute to advancing the forum’s efforts. A frontier model is defined as a large-scale machine-learning model that exceeds the capabilities of today’s most advanced AI.
Anna Makanju, vice-president of global affairs at OpenAI, said advanced AI technologies hold the potential to “profoundly benefit society” but to achieve this potential requires oversight and governance. “It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” she said. “This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
The group praised the work already being undertaken by national governments on AI safety, including establishing guardrails to mitigate risk from advanced AI. However, it says further work is needed to evaluate the potential of frontier models and ensure they are deployed responsibly.
Dario Amodei, CEO of Anthropic, said AI has the potential to fundamentally change how the world works, adding: “We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”