December 20, 2023, updated 3 January 2024 5:06pm

What is OpenAI?

The answer has changed many times in recent years, as the AI powerhouse has struggled to reconcile alignment with the profit imperative.

By Livia Giannotti

OpenAI is a research and deployment organisation specialising in artificial general intelligence and generative models, with its signature product so far being the pioneering AI-powered chatbot ChatGPT. The company was founded in 2015 by Sam Altman, Elon Musk and others, including Trevor Blackwell, Vicki Cheung and John Schulman. 

OpenAI was initially created for the “benefit of humanity”. (Photo by Dennis Diatel/Shutterstock)

The stated aim behind OpenAI was – and is still supposed to be – to create AI models that “benefit all of humanity”. For this reason, the organisation emphasises its commitment to developing products according to the principles of AI “alignment”, a movement that advocates the prioritisation of transparency, safety and trust in the creation of new artificial intelligence programs.

Recent months, however, have led critics to doubt whether this commitment of OpenAI’s can be reconciled with the profit imperative. Indeed, the firing and re-hiring of Altman in November 2023 demonstrated to many observers of the firm that its corporate governance framework is unable to shoulder the lofty burden of creating AI programs for the wider benefit of mankind. 

What does OpenAI do?

OpenAI created the first GPT (generative pre-trained transformer) large language model in 2018. While the model was received as a pioneering success, studies later revealed significant flaws in the performance of its successor, GPT-3, on specific tasks, including the generation of racist and inaccurate answers. Nevertheless, the GPT architecture served as the foundation for all of the company’s subsequent models.

OpenAI became one of the most important tech companies in the world in 2022 after launching ChatGPT, a pioneering, free-to-use chatbot based on a large language model. The bot was trained to answer user queries on the widest range of topics covered by any such system at the time, taking into account context, previous answers, level of detail and language. The sophistication of ChatGPT has also raised concerns about intellectual property, liability, algorithmic bias and misinformation.

While the release of ChatGPT – and upgraded versions such as GPT-4 – marked the beginning of a new era for AI, performance analysis has also revealed weaknesses in the bot’s answers. For example, a Purdue University study found that more than half of the answers ChatGPT gave to software engineering questions were wrong, making it a potentially unreliable source.

While OpenAI has released other generative models – such as Whisper and Codex – it is also known for cutting-edge multimodal systems such as the text-to-image model DALL·E 3, capable of generating sophisticated images from textual prompts. DALL·E 3 was integrated into ChatGPT’s paid tiers, Plus and Enterprise (which offer more capable systems), in October 2023. 


OpenAI’s research

OpenAI’s research projects span from models able to summarise, generate and classify text to generative image modelling and original music composition.

One of the intricacies of OpenAI lies in the balance between its commercial arm – the part that is responsible for products such as ChatGPT – and its research arm. 

In its first four years of existence, OpenAI focused mostly on research rather than the development of AI products, as its main mission was to pursue a “safe and beneficial” artificial general intelligence (AGI). In 2015, OpenAI pledged to “freely collaborate with others” by ensuring total transparency in its research, patents and code. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” said OpenAI in its introductory statement. “Since our research is free from financial obligations, we can better focus on a positive human impact.”

Zachary Lipton, an assistant professor of Machine Learning and Operations Research at Carnegie Mellon University specialising in AI, told Tech Monitor in December 2023 that “among research outfits, OpenAI has always had a pragmatic bent. They’ve been militantly unpretentious about pursuing ideas that work, rather than ideas that appear mathematically elegant.” 

Lipton said that “for example, they caught on rather early that training larger and larger language models was yielding interesting capabilities. Executing those large training runs required a massive amount of data curation, filtering, and systems engineering, and plenty of hacks.” 

When OpenAI started commercialising its products in 2020 with its OpenAI API, it also announced that “commercialising the technology helps [them] pay for [their] ongoing AI research, safety and policy efforts.” 
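The API in question exposes OpenAI's models over simple HTTPS endpoints. As a rough illustration of the shape of such a request – the model name, prompt and parameter values below are made up for this sketch, and a real call requires an API key sent in an `Authorization: Bearer` header – the body of a text-completion request looks broadly like this:

```python
import json

# Illustrative request body for a text-completion call to the OpenAI API.
# Field names follow the public completions endpoint; the model name,
# prompt and parameter values here are assumptions for this sketch.
request_body = {
    "model": "davinci",  # assumed model name
    "prompt": "Summarise the history of OpenAI in one sentence.",
    "max_tokens": 64,    # cap on the length of the generated answer
    "temperature": 0.7,  # higher values produce more varied output
}

# The body is sent as JSON (with an API key in the Authorization header)
# to the completions endpoint at api.openai.com.
print(json.dumps(request_body, indent=2))
```

Developers pay per token processed, which is how the commercial arm funds the research the announcement refers to.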

However, with the release of ChatGPT, it became clear that OpenAI had shifted its focus from its research arm towards its commercial one – a shift that also strayed from the organisation’s founding principle of transparency. For instance, OpenAI has not been entirely transparent about its newest model, GPT-4. In a paper accompanying the release, OpenAI said that, given “both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

How much is OpenAI worth?

From non-profit to capped-profit

The difficulty of balancing research and development has also shaped OpenAI’s economic model.

When the organisation was founded in 2015, the focus was on research and safety – meaning profit was not a part of the initial equation. OpenAI was created as a non-profit, accompanied by an endowment of $1 billion made by founding members. As such, OpenAI’s focus was on its mission – to “benefit all of humanity” – rather than on financial gain.

After four years of cutting-edge AI research, the organisation decided the non-profit model was no longer fit for purpose, as it had reached a point where it needed to “invest billions of dollars” in its work. To avoid becoming a purely profit-driven company, and to keep serving its mission while still increasing its “ability to raise capital”, OpenAI changed its legal structure.

In 2019, the organisation announced it was creating a “capped-profit” subsidiary, meaning it could now make a profit, capped at 100 times any investment, while remaining owned and controlled by the non-profit arm. It also stopped releasing its work as open source, thereby revisiting most of its foundational principles. 
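The arithmetic of the cap is straightforward – an investor's total return cannot exceed 100 times the original stake, with anything above that ceiling reverting to the non-profit. A minimal sketch, using hypothetical figures:

```python
# Sketch of the "capped-profit" return rule: an investor keeps at most
# cap_multiple times their original investment; the excess goes to the
# non-profit. All figures below are hypothetical.

def capped_return(investment: float, gross_return: float,
                  cap_multiple: int = 100) -> float:
    """Amount an investor may keep under a profit cap."""
    cap = investment * cap_multiple
    return min(gross_return, cap)

# A $10m stake that would gross $1.5bn is capped at $1bn (100 x $10m).
print(capped_return(10_000_000, 1_500_000_000))  # 1000000000
```

Below the cap, returns pass through unchanged; the rule only binds once a stake has appreciated a hundredfold.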

“Now, a good portion of their research efforts are mobilised towards the incremental improvements of their offerings, i.e., many researchers are engaged in the efforts that will contribute directly to the capabilities of the next GPT and DALL-E models,” Lipton said. Additionally, “a good portion of their researchers are engaged in their long-term AI safety efforts.”

OpenAI’s distinctive governance model

Today, the non-profit arm of OpenAI (focused on research) owns and controls the for-profit arm (focused on commercialisation). In practice, however, the balance between the two has tilted sharply towards the latter: while the commercial arm is privately valued at $86bn, the research arm generated only $44,485 in revenue in 2022.

This shift in the balance between research and development has resulted in a stronger focus on profit and less attention to research on safety, as highlighted by the OpenAI saga of November 2023: one of the reasons behind Altman’s ousting is believed to have been his disagreements with the board over the commercialisation of a potentially harmful AI product – going, again, against OpenAI’s foundational values.

However, Altman came back. And while the chaos alone was enough to prompt the question of whether the organisation should continue to operate as a profit/non-profit hybrid, his reinstatement – along with the resignation of several board members – can be read as a clear shift in strategy, further and further away from OpenAI’s founding ideals.

Read more: Which companies are working on LLMs and ChatGPT alternatives?
