December 1, 2023 (updated 7 Dec 2023, 11:08am)

Who is Sam Altman, the man who co-founded OpenAI “for the benefit of humanity”?

Sam Altman is a contentious figure in the divisive discourse around the dangers of AI. But between innovation and regulation, AI’s growth is inherently tumultuous – no matter the intentions.

By Livia Giannotti

Sam Altman, 38, is by most accounts the leading figure in the world of AI. Like many tech leaders before him whose legacies are well established – Steve Jobs, Jack Dorsey, Mark Zuckerberg, Michael Dell and Bill Gates, to name a few – Altman dropped out of Stanford University after just one year of studying computer science. The same year, in 2005, he co-founded the location-based social networking app Loopt, which was sold to banking company Green Dot in 2012 for $43.4m.

Sam Altman co-founded OpenAI “for the benefit of humanity”. (Photo by Dia TV/Shutterstock)

Altman then rose from partner to president of the start-up accelerator Y Combinator and, later, its parent YC Group. He left the accelerator in 2019 to become CEO of OpenAI, which he had co-founded in 2015 with the company’s current president, Greg Brockman. Other founding members included Elon Musk, Trevor Blackwell, John Schulman and Vicki Cheung.

Altman co-founded OpenAI with a seemingly honourable mission: to ensure that “artificial general intelligence benefits all of humanity”. In other words, the company aimed – in theory – to build artificial general intelligence (AGI) for good before a competitor created machines smarter than humans that could come to harm us.

Sam Altman and ‘effective altruism’

Unlike most of his AI peers, Sam Altman is – or perhaps was – a relatively trusted public face. Not only does he regularly speak to the press and public, but he also openly expresses concerns about AI ethics and the important balance between innovation and regulation. His AI regulation world tour in the summer of 2023 confirmed his dedication to at least present as a conscientious tech leader.

Altman has, in the past, been accused by some of being a closet “accelerationist” intent on pushing AI research and development as fast as possible with as few restrictions as possible. The OpenAI chief executive denied this recently on the New York Times Hard Fork podcast. “I think what differentiates me [from] most of the AI companies is [that] I think AI is good,” said Altman. “I am a believer that this is a tremendously beneficial technology and that we have got to find a way, safely and responsibly, to get it into the hands of the people [and] to confront the risks, so that we get to enjoy the huge rewards.”

Altman endorses effective altruism (EA), a philosophy based on the idea of “doing good better”, not least in his promotion of a balance between AI innovation and regulation. EA’s core value is to maximise the impact of problem-solving – in other words, to adopt a capitalist approach to charity and support. Its effectiveness as a philosophy is open to debate: one of its loudest advocates, crypto entrepreneur Sam Bankman-Fried, was recently convicted of running an $8bn financial fraud scheme.

In his private life at least, Altman is an enthusiastic practitioner of EA’s “quick-and-safe” logic: he is a vegetarian and a prepper. In 2016, he told the New Yorker: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”


What happened between Sam Altman and OpenAI?

On paper, it almost sounds like Altman’s view of AI could be the solution to all our problems and sufferings. His recent dispute with OpenAI, however, demonstrated the practical limitations of this argument.

On 17 November, an official statement from OpenAI announced Altman’s dismissal as CEO and Brockman’s removal as board chairman, stating that the former “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”.

After five days in exile, both were back as OpenAI leaders under a new board of directors after around 95% of OpenAI employees threatened to quit unless the pair were reinstated. The employees’ open letter also called for the resignation of all the then board members, as the signatories felt they were “unable to work for or with people that lack competence, judgement and care”.

While the details of the disagreements remain unknown, there are rumours that they could be related to Altman exploring how to set up a chip venture to rival Nvidia.

Another factor that could have carried weight in Altman’s ousting is the safety concerns raised within the company only days before the board’s decision. Some OpenAI researchers reportedly wrote to the board, alarmed at the rumoured ability of the company’s newly developed AI model Q* – pronounced ‘Q star’ – to solve complicated maths problems. While OpenAI’s interim CEO Emmett Shear – who lasted just four days in the role before Altman returned – denied that Altman’s sacking had anything to do with Q* safety concerns, the model could once again put OpenAI’s commitment to AI alignment, and Altman’s principles, to the test. OpenAI has yet to publicly release any details of Q*, or even confirm its existence.

After his abrupt ouster, Altman was initially replaced by CTO Mira Murati, but two days later the board appointed former Twitch boss Shear as its new CEO. Shear then rapidly became, as his Twitter bio puts it, “interim ex-CEO of OpenAI”.

What does the OpenAI saga mean for the future of AI?

Earlier this year, at the Wisdom 2.0 conference in San Francisco – an annual gathering dedicated to human well-being in relation to technological growth – Altman said humans should work together to define the limits of AI. But it remains unclear where he would draw that line.

Although Altman is widely seen as an accelerationist, he says he recognises the importance of aligning AI – that is, balancing the growth of AGI with humanity’s best interests, usually by slowing down (or at least not rushing) AI development. However, while he agreed that more safety measures should be in place, he nonetheless refused to sign an open letter backed by Elon Musk calling for a pause in AI development.

To judge by the number of CEOs who have led the most famous AI company in the world over just five days (Altman, Murati, Shear, Altman again), slowing down to avoid chaos does not seem to be OpenAI’s priority either.

Tech journalist Kara Swisher, for her part, argues that the tech industry should be regulated by elected officials rather than by a small group of powerful people “who have their own self-interest at heart”. For her, “it’s always the people that are the problem, not the machines.”

Beatriz Valle, senior technology analyst at GlobalData, told Tech Monitor that the problems at OpenAI highlight the need for effective international regulation, to support technological development “without stifling innovation”.

Read more: UK and allies launch cybersecurity guidelines for AI developers
