Sam Altman, 38, is by most accounts the leading figure in the world of AI. Like many tech leaders before him whose legacies are well established – Steve Jobs, Jack Dorsey, Mark Zuckerberg, Michael Dell and Bill Gates, to name a few – Altman dropped out of Stanford University in 2005 after just one year of studying computer science. That same year, he co-founded the location-based social networking app Loopt, which was sold to banking company Green Dot in 2012 for $43.4m.
Altman then rose from partner to president of the start-up accelerator Y Combinator and its umbrella organisation, YC Group. He left the accelerator in 2019 to become CEO of OpenAI, which he had co-founded in 2015 with current company president Greg Brockman. Other founding members included Elon Musk, Trevor Blackwell, John Schulman and Vicki Cheung.
Altman co-founded OpenAI with a seemingly honourable mission: to ensure that “artificial general intelligence benefits all of humanity”. In other words, it aimed – in theory – to put artificial general intelligence (AGI) to good use before any competitor built machines smarter than humans that could come to harm us.
Sam Altman and ‘effective altruism’
Unlike most of his AI peers, Sam Altman is – or perhaps was – a relatively trusted public face. Not only does he regularly speak to the press and public, but he also openly expresses concerns about AI ethics and the delicate balance between innovation and regulation. His AI regulation world tour in the summer of 2023 confirmed his determination to at least present himself as a conscientious tech leader.
Altman has, in the past, been accused by some of being a closet “accelerationist” intent on pushing AI research and development as fast as possible with as few restrictions as possible. The OpenAI chief executive denied this recently on the New York Times Hard Fork podcast. “I think what differentiates me [from] most of the AI companies is [that] I think AI is good,” said Altman. “I am a believer that this is a tremendously beneficial technology and that we have got to find a way, safely and responsibly, to get it into the hands of the people [and] to confront the risks, so that we get to enjoy the huge rewards.”
Altman endorses effective altruism (EA), a philosophy based on the idea of “doing good better”, not least by promoting a balance between the innovation and regulation of AI. EA’s core value is maximising the impact of problem-solving – in other words, applying a capitalist, cost-benefit approach to charity and philanthropy. Its effectiveness as a philosophy is open to debate: one of its loudest advocates, crypto entrepreneur Sam Bankman-Fried, was recently convicted of running an $8bn financial fraud scheme.
In his private life at least, Altman is an enthusiastic practitioner of EA’s “quick-and-safe” logic: he is a vegetarian and a prepper. In 2016, he told the New Yorker: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
What happened between Sam Altman and OpenAI?
On paper, it almost sounds as if Altman’s view of AI could be the solution to all our problems and suffering. His recent dispute with OpenAI, however, demonstrated the practical limitations of that argument.
On 17 November, an official statement released by OpenAI announced Altman’s sacking, stating that he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”. Brockman, removed as board chairman in the same announcement, resigned in protest hours later.
After five days in exile, both were back as OpenAI leaders under a new board of directors, after some 95% of OpenAI employees threatened to quit if the pair weren’t reinstated. The employees’ open letter also called for the resignation of the entire board, as the signatories felt they were “unable to work for or with people that lack competence, judgement and care”.
While the details of the disagreement remain unknown, rumours suggest it may be related to Altman’s exploration of a chip venture to rival Nvidia.
Another factor that could have carried weight in Altman’s ousting is the safety concerns raised within the company only days before the board’s decision. Some OpenAI researchers reportedly wrote to the board, alarmed at the rumoured capabilities of a newly developed AI model known as Q* – pronounced ‘Q star’ – in solving complicated maths problems. OpenAI’s interim CEO Emmett Shear – who lasted just four days in the role before Altman returned – denied that Altman’s sacking had anything to do with Q* safety concerns, but the episode could once again put OpenAI’s commitment to AI alignment, and Altman’s principles, to the test. OpenAI has yet to publicly release any details of Q*, or even confirm its existence.
After his abrupt ouster, Altman was initially replaced by CTO Mira Murati, but two days later the board appointed former Twitch boss Shear as its new CEO. Shear then rapidly became, as his Twitter bio puts it, “interim ex-CEO of OpenAI”.
What does the OpenAI saga mean for the future of AI?
Earlier this year, at the Wisdom 2.0 conference in San Francisco – an annual gathering dedicated to human well-being in relation to technological growth – Altman said humans should work together to define the limits of AI. It remains unclear, however, where he would draw that line.
Although Altman is often characterised as an accelerationist, he said he recognises the importance of AI alignment – that is, balancing the growth of AGI with humanity’s best interests, usually by slowing down (or at least not rushing) AI development. However, while he agreed that more safety measures should be in place, he nonetheless refused to sign the Elon Musk-backed open letter calling for a pause on AI development.
Judging by the number of CEOs who led the world’s most famous AI company over just five days (Altman, Murati, Shear, then Altman again), slowing down to avoid chaos does not appear to be OpenAI’s priority either.
Consequently, tech journalist Kara Swisher advocates regulation of the tech industry by elected officials rather than by a small group of powerful people “who have their own self-interest at heart”. For her, “it’s always the people that are the problem, not the machines.”
Beatriz Valle, senior technology analyst at GlobalData, told Tech Monitor that the problems at OpenAI highlight the need for effective international regulation, to support technological development “without stifling innovation”.