OpenAI is planning a major overhaul of its developer programme. The company plans to reduce the cost of its APIs in a bid to convince more companies to use its tools to build apps. It comes as analysts predict that generative AI will have a “cold shower” next year due to rising costs and lower demand as the hype begins to wane.

OpenAI is opening up more of its technology to third-party developers and reducing the cost of API calls. (Photo by rafapress/Shutterstock)

The changes at the AI lab could make API calls up to 20 times cheaper. This is partly down to a cut in the cost per token, but also comes through improved memory storage, Reuters reports.

OpenAI and its rivals are increasingly looking to monetise their products, with tech giants like Google and Microsoft also adding incentives such as developer platforms and governance resources to make traceability easier.

The company is also planning to offer developers the vision capabilities recently introduced in ChatGPT, which allow users to send images, rather than just text, to the AI for analysis. This has potential applications in everything from entertainment to medicine, where it could be combined with fine-tuned data to build custom tools. The update will also include a stateful API, allowing applications built with GPT-4 to remember the conversation history of an inquiry, so developers no longer need to resend the full history to the OpenAI API with each new query. This in turn will reduce the amount of usage developers pay for, cutting overall costs.
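To see why a stateful API cuts costs, consider how billed tokens accumulate today. With a stateless chat API, every request must resend the entire conversation so far, so input tokens grow with each turn; a stateful API that stores history server-side would bill only the new message. The sketch below is purely illustrative (the token counts are invented and no real API is called):

```python
def stateless_cost(turn_tokens):
    """Tokens billed when the full history is resent on every turn."""
    billed, history = 0, 0
    for t in turn_tokens:
        history += t          # the new message joins the running history
        billed += history     # the whole history is sent (and billed) again
    return billed

def stateful_cost(turn_tokens):
    """Tokens billed when the server remembers prior turns."""
    return sum(turn_tokens)   # only each new message is sent

turns = [200, 150, 180, 220, 170]   # assumed token counts for five user turns
print(stateless_cost(turns))  # 2750
print(stateful_cost(turns))   # 920
```

Even over five short turns, the stateless approach bills roughly three times as many tokens; the gap widens the longer a conversation runs, which is where the reported savings would come from.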

At the moment, a one-page document analysed and processed by GPT-4 costs about $0.10, but with the memory changes and token cost reductions that could drop to below $0.01 if developers implement the new API changes properly.
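The figures broadly check out as back-of-envelope arithmetic. The sketch below assumes a one-page document of roughly 3,000 tokens and GPT-4's launch-era input price of $0.03 per 1,000 tokens; both numbers are assumptions for illustration, not taken from the article:

```python
page_tokens = 3000                     # assumed token count for one page
price_per_1k = 0.03                    # assumed GPT-4 input price, USD per 1,000 tokens
cost_now = page_tokens / 1000 * price_per_1k
cost_after = cost_now / 20             # the reported up-to-20x reduction
print(round(cost_now, 2))    # 0.09  -- close to the ~$0.10 cited
print(round(cost_after, 4))  # 0.0045 -- below the $0.01 cited
```

A 20-fold reduction takes the per-page cost from around ten cents to well under a cent, consistent with the figures reported.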

The ChatGPT creator says it is also working to make its future models fully multimodal, which would allow them to analyse video and audio in addition to images and text. It is also integrating DALL-E 3, its image generation model, into ChatGPT and making it available as an API.

Potential ‘cold shower’ for GenAI

The new developer-friendly features will reportedly be released at the OpenAI developer conference on 6 November. This event will be used to encourage companies to utilise GPT-4 and other models to build autonomous agents and chatbots for a range of systems and industries.

This is the latest salvo in OpenAI’s attempt to court the enterprise market that previously saw the launch of a data-secure version of ChatGPT for enterprise. The Microsoft-backed lab is facing growing competition from Anthropic, which recently took investment from Amazon, and Google’s own DeepMind as well as increasing scrutiny from government regulators.

The enterprise market is seen as the path to profit for OpenAI, which hopes to hit $1bn in revenue by the close of next year, up from $200m this year. This includes gaining revenue from other companies building products on its models, as well as building its own products such as ChatGPT and DALL-E 3.

An insider at OpenAI told Reuters that the company has struggled to win over developers and other companies. Competition in this market is stiffer than in the consumer and hobbyist sector, with big players such as IBM and OpenAI’s own investor Microsoft actively courting enterprise customers.

It isn’t clear what the long-term prospects for this market are. Most analysts predict significant revenue growth and a continued impact on the enterprise market, but not everyone agrees. Allied Market Research suggests the market could reach $191.8bn by 2032, growing at a rate of 34.1% a year from 2023 to 2032.

However, analyst house CCS Insight believes growth may be more sluggish. In its hype predictions for 2024 and beyond, released this week, the CCS team said of generative AI: “The hype of 2023 has ignored several obstacles that will slow progress in the short term.”

They explained: “The cost of deployment is a prohibitive factor for many organizations and developers. Additionally, future regulation and the social and commercial risks of deploying generative AI in certain scenarios result in a period of evaluation prior to rollout.”

Read more: Will OpenAI really build its own chips?