The US Artificial Intelligence Safety Institute, part of the US Department of Commerce’s National Institute of Standards and Technology (NIST), has announced agreements with Anthropic and OpenAI to collaborate on artificial intelligence (AI) safety research, testing, and evaluation.

Formalised through memoranda of understanding, the agreements grant the US AI Safety Institute access to major new AI models from both AI companies before and after their public release.

Furthermore, the agreements will facilitate joint research on assessing capabilities, identifying safety risks, and developing methods to mitigate those risks.

“With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said US AI Safety Institute director Elizabeth Kelly. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Anthropic and OpenAI join growing number of firms committed to safer AI

The US AI Safety Institute also intends to work closely with its partners at the UK AI Safety Institute to provide feedback to Anthropic and OpenAI on potential safety improvements to their models.

According to NIST, these assessments will enhance its efforts in AI by enabling in-depth collaboration and exploratory research on advanced AI systems across various risk areas.

The evaluations carried out under these agreements will support the safe, secure, and trustworthy development of AI. This aligns with the US administration’s Executive Order on AI and the voluntary commitments made by leading AI model developers.

Separately, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) on the same day.

The bill introduces safeguards to protect society from the misuse of AI in conducting cyberattacks on critical infrastructure, developing chemical, nuclear, or biological weapons, or facilitating automated crime.

The new agreements come as the companies face increasing regulatory scrutiny over the safe and ethical use of AI technologies. Earlier this month, the UK Competition and Markets Authority (CMA) formally opened an investigation into e-commerce giant Amazon’s partnership with Anthropic.

Prior to that, in July 2024, the CMA had begun examining Google parent company Alphabet’s previously announced partnership with Anthropic.

In June 2024, the US Federal Trade Commission (FTC) and the Department of Justice (DoJ) opened antitrust investigations into Nvidia, OpenAI, and Microsoft.

In a separate development, OpenAI is reportedly negotiating a new funding round that could value the company at more than $100bn, according to the Wall Street Journal. The round is expected to be led by venture capital firm Thrive Capital, which is anticipated to contribute around $1bn.

Microsoft, already a significant investor in the ChatGPT developer, is also likely to participate in the round, the report notes.

Written by Refna Tharayil
