OpenAI has announced the release of GPT-4o mini, described as a more cost-effective model in the company’s artificial intelligence (AI) lineup.
The new model is priced at 15 cents per million input tokens and 60 cents per million output tokens, making it more affordable than its predecessor, GPT-3.5 Turbo, said the AI research company.
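Those per-million-token rates translate into small per-request charges. The following is a minimal sketch of the arithmetic; the function name and the example token counts are illustrative, not part of any OpenAI SDK, and the rates are the published figures above.

```python
# Estimate a request's cost at GPT-4o mini's published rates:
# $0.15 per million input tokens, $0.60 per million output tokens.
INPUT_RATE_USD = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE_USD = 0.60 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated charge in US dollars for one request."""
    return input_tokens * INPUT_RATE_USD + output_tokens * OUTPUT_RATE_USD

# Example: a 2,000-token prompt producing a 500-token reply
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000600
```

At these rates, a million-token prompt with a million-token completion would cost 75 cents in total, which is the basis of OpenAI's cost-effectiveness claim.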
The GPT-4o mini achieves an 82% score on the massive multitask language understanding (MMLU) benchmark and leads in chat preferences on the Large Model Systems Organization (LMSYS) leaderboard.
It currently supports text and vision in the API, with support for text, image, video, and audio inputs and outputs planned.
The model is designed to facilitate a variety of applications due to its affordability and low latency, said OpenAI.
It is said to be capable of managing tasks that involve chaining or parallelising multiple model calls, processing large volumes of context, or delivering quick text responses in real time.
GPT-4o mini has a context window of 128K tokens, supports up to 16K output tokens per request, and has a knowledge cutoff of October 2023.
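In practice, those two limits constrain how a request must be budgeted: the prompt and the requested completion must together fit in the context window, and the completion alone must stay within the output cap. A minimal sketch of that check follows; the token counts are assumed to come from an external tokenizer, and the function name is illustrative, while the limits are the figures OpenAI reports.

```python
# Check whether a request fits GPT-4o mini's reported limits:
# a 128K-token context window and 16K output tokens per request.
CONTEXT_WINDOW = 128_000
MAX_OUTPUT_TOKENS = 16_000

def fits_limits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if the prompt plus the requested completion fit the model's limits."""
    if requested_output_tokens > MAX_OUTPUT_TOKENS:
        return False
    return prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW

print(fits_limits(100_000, 16_000))  # → True  (116K total fits in 128K)
print(fits_limits(120_000, 16_000))  # → False (136K total exceeds 128K)
```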
In terms of performance, GPT-4o mini is claimed by OpenAI to excel in textual intelligence and multimodal reasoning, performing well on several academic benchmarks.
It scores 82% on MMLU, a text-based reasoning benchmark, and shows strong results in mathematical reasoning and coding tasks with scores of 87% on MGSM and 87.2% on HumanEval, said the company.
Safety features are integrated throughout the development process of GPT-4o mini, from pre-training to post-training.
The model employs techniques such as reinforcement learning from human feedback (RLHF) to align with OpenAI’s safety policies and improve the accuracy and reliability of responses.
GPT-4o mini is now available via the Assistants API, Chat Completions API, and Batch API.
OpenAI has made GPT-4o mini available to users of ChatGPT Free, Plus, and Team, with Enterprise users gaining access in the coming week.