February 23, 2023 (updated 17 Mar 2023, 9:16am)

Nvidia launches generative AI and supercomputing cloud services as part of ‘new business model’

Demand for products is 'through the roof' says Nvidia CEO, thanks to the popularity of ChatGPT and other AI systems.

By Matthew Gooding

GPU maker Nvidia is launching a new service which will allow customers to access its DGX AI supercomputer in the cloud. The company also revealed a new suite of cloud-based tools for building generative AI systems. Such models have experienced a boom in popularity since the launch of OpenAI's ChatGPT chatbot, and Nvidia has been cashing in.

Nvidia CEO Jensen Huang has announced new cloud AI products. (Photo by MANDEL NGAN / AFP)

CEO Jensen Huang revealed details of the new products on the company’s earnings call late on Wednesday, which saw it report revenue of $6.05bn, slightly up on the expected $6.01bn, but down 21% year-on-year.

Though the global economic slowdown is hitting the company’s bottom line, it expects to bring in $6.5bn in the current quarter, up on the previous estimate of $6.33bn, thanks largely to the generative AI craze fuelling demand for its chips.

Nvidia brings DGX supercomputer to the cloud

Huang revealed details of the new services, which will be officially launched at the company’s GTC Developer Conference in March, on his call with investors.

He said the company’s DGX supercomputer hardware for building AI systems would now be available virtually in the cloud. “Nvidia DGX Cloud, the fastest and easiest way to have your own DGX AI supercomputer, [you can] just open your browser,” Huang said. “Nvidia DGX Cloud is already available through Oracle Cloud Infrastructure and Microsoft Azure, Google Cloud Platform and others are on the way.”

The Nvidia CEO also used the call to talk about the company’s new AI cloud services, which he said would be offered directly by Nvidia to customers, and hosted on major cloud platforms such as Azure and Google Cloud.

Huang said the move would “offer enterprises easy access to the world’s most advanced AI platform, while remaining close to the storage, networking, security and cloud services offered by the world’s most advanced clouds”. He explained: “Customers can engage Nvidia AI cloud services at the AI supercomputer, acceleration library software, or pre-trained AI model layers.”


Customers, he said, would be able to use the platform for training and deploying large language models and other AI workloads. Nvidia will also offer pre-trained generative AI models, NeMo and BioNeMo, which Huang described as "customisable AI models" for enterprise customers "who want to build proprietary generative AI models and services for their businesses".

He added: “With our new business model, customers can engage Nvidia’s full scale of AI computing across their private to any public cloud.”

Interest in generative AI goes ‘through the roof’ – Nvidia CEO

Nvidia’s announcement may not go down well with some of its biggest customers such as Google and Microsoft, both of which buy Nvidia chips to power their own AI products and services.

The company currently dominates the market for GPUs, the chips used to accelerate AI workloads. It is therefore no surprise that the success of ChatGPT, and the emergence of rival generative AI models such as Google's Bard, has given a big boost to the company's finances as more and more businesses seek to buy its chips.

Huang said the demand for GPUs and other processors to power generative AI systems is a major driver behind the company's improved financial outlook. Describing the onset of such systems as a "new era of computing", Huang said: "This type of computer is utterly revolutionary in its application because it's democratised programming to so many people."

Referring to the ability of such models to write accurate code, he added: “This is an AI model that can write a programme for any programme. For this reason, everybody who develops software is either alerted or shocked into alert or actively working on something that is like ChatGPT to be integrated into their application or integrated into their service.”

He added: “The activity around the AI infrastructure that we build on Hopper [the company’s GPU microarchitecture] and the activity around inferencing using Hopper and [another Nvidia architecture] Ampere to inference large language models, has just gone through the roof in the last 60 days. And so there’s no question that whatever our views are of this year as we enter the year has been fairly, dramatically changed as a result of the last 60-90 days.”

Read more: This is how GPT-4 will be regulated
