Confusion continues to surround the future of OpenAI following the sacking of CEO Sam Altman last Friday and the subsequent rebellion by staff against the company’s board of directors. Altman could yet return to head up the AI lab, but the questions the episode raises about the company’s governance may trouble tech leaders deploying its technology.
Altman’s shock departure from the ChatGPT developer on Friday led to an outpouring of support from staff, with 747 of its 770-strong team signing an open letter calling for the company’s board to quit and for Altman to be reinstated. Meanwhile, Microsoft, which has a multi-billion-dollar investment in OpenAI, announced on Monday that Altman and co-founder Greg Brockman would be joining the tech giant to head up a new advanced AI business unit.
However, it has been reported that Altman could still return to OpenAI, and Microsoft CEO Satya Nadella appeared to leave the door open for this to happen in broadcast interviews on Monday evening. “[We’re] committed to OpenAI and Sam, irrespective of what configuration,” Nadella told CNBC.
OpenAI’s board appears to be standing firm in the face of employee opposition, having hired former Twitch chief executive Emmett Shear as interim CEO to oversee the transition period. If staff do decide to leave, they will not be short of options, with Microsoft having offered to match the pay of any OpenAI team member who wants to jump ship. Salesforce CEO Marc Benioff and Mustafa Suleyman, the former Google DeepMind executive now running a new AI venture, Inflection AI, have also issued open invitations to any staff from the lab looking for alternative employment.
Will OpenAI’s customers look elsewhere after Sam Altman’s departure?
OpenAI’s customers are also not short of alternatives, and many have reportedly been evaluating rival providers in case the AI lab’s team decides to depart en masse.
More than 100 OpenAI customers have contacted AWS-backed rival Anthropic about switching providers since the Altman news broke, according to a report in The Information. Others have been exploring Google Cloud and another AI vendor, Cohere.
OpenAI has not disclosed how many businesses have signed up for ChatGPT’s paid tier, ChatGPT Plus, or ChatGPT Enterprise, a version for big businesses launched in August. But last week it temporarily suspended new sign-ups to ChatGPT Plus due to a surge in usage that followed a raft of new product announcements at its first DevDay developer conference.
The current situation presents a quandary for tech leaders, according to Beatriz Valle, senior technology analyst at GlobalData. “Some IT buyers, specifically those with live projects already in production, have been caught completely unawares and are in a bit of a bind,” she says.
But switching to another company’s AI models is not without its trade-offs, Valle says, as the technologies on offer from Anthropic or Cohere are “simply not as advanced as OpenAI’s”. However, she says, “In terms of governance and privacy they may offer better specific features, so it’s simply a matter of choice.”
OpenAI’s travails highlight the need for effective AI regulation
Altman’s fallout with the OpenAI board is thought to stem from disagreements over the pace at which the company’s technology is being commercialised, and the increasing importance of its relationship with Microsoft. Its board sits within a non-profit structure, and as a result is “completely independent and not beholden to any shareholders”, Valle says.
She continues: “This is a company that was originally created as a non-profit to develop AI ‘for the benefit of humanity’, and the current standoff with Microsoft was a matter of time – not if but when.”
However the Altman situation is resolved, the problems at OpenAI highlight the need for effective AI regulation, Valle says. Governments around the world are taking differing approaches to policing artificial intelligence, with the European Union developing an overarching AI Act, while other territories, such as the UK, are pursuing a light-touch approach to try to encourage innovation. Earlier this month, Prime Minister Rishi Sunak launched an AI Safety Institute, which he says will test next-generation AI models to try to pinpoint possible problems.
With the EU AI Act still in development, it is possible the bloc will end up “simply introducing codes of practice that are non-binding for most types of AI”, Valle says. But she believes there is a need for “external regulatory oversight” that supports vendors like OpenAI in “deploying the technology safely without stifling innovation”.
She adds: “Looking the other way and hoping for the best has never been a good approach and I don’t think self-regulation in commercial settings is a realistic prospect; profit-seeking behaviour will always prevail.”