OpenAI is preparing to launch its new reasoning AI model, dubbed o3-mini, within weeks. The Microsoft-backed AI company’s CEO Sam Altman announced that the model would launch with both API access and an update to ChatGPT. “Thank you to the external safety researchers who tested o3-mini”, posted Altman on the social media platform X. “We have now finalised a version and are beginning the release process; planning to ship in ~a couple of weeks.”

The o3-mini is part of OpenAI’s o3 series, a follow-up to the o1 reasoning models released in September 2024. The o1 series was designed to handle complex scientific, coding, and mathematical tasks. According to OpenAI, the new o3 models, including the smaller, task-specific o3-mini, aim to deliver even greater capabilities.

OpenAI originally outlined plans for o3-mini in December 2024, alongside the larger o3 model. At the time, the company highlighted its goal of creating AI systems capable of tackling increasingly sophisticated challenges. These models are expected to outperform existing solutions and attract new users and investors in the competitive generative AI sector.

In addition to the upcoming release, OpenAI recently introduced a beta feature called “Tasks” within ChatGPT. The move marks the company’s entry into the virtual assistant market, positioning it as a competitor to established products such as Apple’s Siri and Amazon’s Alexa. The growing functionality of ChatGPT, which initially launched in late 2022, reflects OpenAI’s efforts to broaden its AI applications. By October 2024, OpenAI had secured $6.6bn in funding, buoyed by new product developments and its growing user base.

Competitors ramp up language model innovations

OpenAI’s advancements come amidst significant activity in the AI space by other major players. In October 2024, Meta unveiled a series of AI models, including the Self-Taught Evaluator and Meta Lingua, as part of its efforts to enhance AI across multiple domains. These models address areas such as language, reasoning, perception, and alignment.

NVIDIA has also pushed forward with innovations in AI hardware. The company introduced foundation models capable of running locally on its RTX AI PCs powered by GeForce RTX 50 Series GPUs. These systems, built on NVIDIA’s Blackwell architecture, incorporate FP4 compute technology to improve memory efficiency and performance, enabling generative AI workloads to run on consumer-grade devices.

Furthermore, in April 2024, Microsoft announced Phi-3 mini, a compact large language model (LLM) in its new Phi-3 model family. According to a paper published on arXiv, the Phi-3 mini model is small enough to operate on a smartphone while matching the performance of larger LLMs. The paper also introduced two larger models in the Phi-3 series, named Phi-3 Small and Phi-3 Medium.

Read more: IBM rolls out Granite 3.0 AI models for business use cases