OpenAI has announced that its Safety and Security Committee (SSC) will now act as an independent body to oversee the company’s safety and security processes for artificial intelligence (AI) model development and deployment. The decision comes after the firm commissioned the SSC to conduct a 90-day review of its safety and security-related processes and safeguards; one of the review’s recommendations was to make the committee fully independent.

“The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches,” said OpenAI in a statement. This will, it added, include granting the SSC the authority to delay a product release until relevant safety concerns have been addressed. The committee will also engage regularly with OpenAI’s security and safety teams, said OpenAI, alongside providing “[p]eriodic briefings on safety and security matters… to the full Board.”

Results of SSC’s 90-day review of OpenAI revealed

OpenAI confirmed that it would be adopting all five recommendations made by the SSC after its 90-day review. Beyond making the committee independent, these include a general tightening of security measures for AI models, greater transparency, collaboration with external organisations, and the unification of the company’s safety frameworks for model development and monitoring.

Chaired by Zico Kolter, director of the machine learning department at Carnegie Mellon University, the SSC includes Quora co-founder and CEO Adam D’Angelo, retired general and former US Cyber Command chief Paul Nakasone, and former Sony executive vice president and general counsel Nicole Seligman.

OpenAI also said it is actively working with government agencies to advance AI safety research, and is considering establishing an Information Sharing and Analysis Centre (ISAC) for the wider AI sector. This centre would share threat intelligence and cybersecurity information among entities in the AI industry to increase resilience against cyber threats. The American company has also formed agreements with the US and UK AI Safety Institutes to work together on research concerning emerging AI safety risks and standards for reliable AI.

According to OpenAI, its GPT-4o system card and o1-preview system card provide details on the safety work completed before each model’s release. This includes results from external red teaming and evaluations within the Preparedness Framework, as well as an overview of the steps taken to address risk areas.

Challenges with o1

One of the latest safety evaluations conducted by the SSC has been for OpenAI’s latest model series, o1. Nicknamed ‘Strawberry’, the series has received mixed reviews, with some experts praising its advanced problem-solving capabilities. Others, however, have pointed out significant shortcomings, including the models struggling with basic reasoning tasks such as word puzzles and multi-hop questions that require sequential thinking.

o1’s propensity to pause and consider its answers has also been criticised. Although marketed as a unique selling point for the series and a response to market concerns about LLMs routinely hallucinating responses, the feature has, some critics point out, significantly slowed response times to user inquiries, with some customers waiting almost two minutes for an answer to a simple request.
