US Commerce Secretary Gina Raimondo has announced the creation of the US AI Safety Institute Consortium (AISIC). According to Raimondo, the new organisation will sit within the existing US AI Safety Institute (USAISI) and convene academics, developers, government officials, civil society figures and industry representatives to advise on the suitability of new guardrails for AI products and services. Over 200 organisations have joined AISIC, including Nvidia, OpenAI, Apple, JPMorgan and Amazon.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said Raimondo. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”
New US AI Safety Institute Consortium part of Biden’s consensus-driven approach to AI regulation
The announcement of AISIC is the latest in a series of AI safety initiatives pushed by the Biden administration. Though considered relatively light-touch in its regulatory attitude toward AI compared to other jurisdictions like the EU or even the UK, the Biden administration has nonetheless engaged heavily with major technology companies to set common standards on safety and transparency in the sector. These efforts include promoting watermarking as a method for the public to identify AI-created content, and participation in international conferences like last year's AI Summit at Bletchley Park to agree on international norms for the use of advanced models.
The creation of AISIC is seemingly in keeping with the Biden administration’s broader policy of striking consensus between academia and major technology companies on AI safety norms before pushing ahead with new regulations. “We need to ensure aligned approaches in the development and science of safe and trustworthy AI,” said the director of the US National Institute of Standards and Technology, Laurie Locascio, at an event in Washington D.C. on Wednesday. “The consortium is a critical pillar of the [US AI Safety] Institute and it will ensure that the Institute’s research… [is] integrated into the broad community.”
The US AI Safety Institute Consortium has already attracted several heavyweight corporate supporters, including Amazon, which has also pledged $5m in compute credits to its parent organisation USAISI. This, said the firm’s senior vice-president for global public policy David Zapolsky, would “enable the development of tools and methodologies that organisations can use to evaluate the safety of their foundation models.”