May 30, 2023 (updated 31 May 2023, 9:43am)

Nvidia launches GPU-first server platform as its value soars past $1 trillion

The CPU era is coming to an end, declares Jensen Huang as his company pushes more GPUs in data centres.

By Ryan Morrison

Nvidia has unveiled a new server design platform called MGX. Announced at the Computex conference in Taiwan, MGX is claimed to provide a GPU-first architecture that system engineers can use to build a variety of servers geared towards AI and high-performance computing. It is the latest in a series of AI-focused moves by the company, which have led to its value topping $1trn.

Nvidia says its MGX platform comes in a variety of sizes and cooling methods and is GPU-first. (Photo courtesy of Nvidia)

The rapid rise of artificial intelligence throughout the economy, driven in part by the success of ChatGPT from OpenAI, has led to ever-growing demand for GPU-based compute power.

To meet this demand, Nvidia says a new architecture is required. MGX allows for 100 server variations, and early adopters include ASUS, Gigabyte, QCT and Supermicro. Nvidia promises MGX will cut the development time of a new system by two-thirds, to just under six months, and reduce costs by three-quarters compared with other platforms.

“Enterprises are seeking more accelerated computing options when architecting data centres that meet their specific business and application needs,” said Kaustubh Sanghani, vice president of GPU products at Nvidia. “We created MGX to help organisations bootstrap enterprise AI, while saving them significant amounts of time and money.”

The platform starts with a system architecture that has been optimised for accelerated computing. Engineers can then select the processing units that best fit their needs. It has also been built to work across data centres and in cloud platforms, Nvidia explained.

Move to GPU-centred compute

Nvidia has cashed in on the AI revolution, with the vast majority of the most popular models trained using its hardware. The company’s A100 GPU – and its recently launched successor, the H100 – are being snapped up by AI labs around the world in their thousands.

Last week, Nvidia reported record quarterly results and forecast revenue $4bn higher than expected for the current period. The news sent the company’s share price soaring, and today its market cap surpassed $1trn for the first time when markets opened after the holiday weekend.


Jensen Huang, Nvidia’s CEO, said during his keynote at Computex that existing CPU-centred servers aren’t up to the task of housing multiple GPUs and NICs. He told delegates a new architecture was necessary because existing designs aren’t built to cope with the amount of heat produced by Nvidia’s accelerators.

The MGX architecture allows for air or water cooling and comes in a range of form factors, making it more sustainable and customisable. It is available in 1U, 2U and 4U chassis options and can work with any Nvidia GPU, the company’s new Grace Hopper superchip or x86 CPUs.

Huang said the era of the CPU was coming to an end. He claimed that CPU performance gains had plateaued and that we are now moving into an era dominated by GPUs and accelerator-assisted compute. Under the new architecture, he said, the effort required to train a large language model can be reduced.

He cited a hypothetical 960-server, CPU-based system that today would cost $10m and consume 11GWh of electricity to train an LLM. Using the new architecture, he said, two Nvidia-powered MGX servers packed with GPUs and costing $400,000 could do the same job while consuming just 0.13GWh. He added that a $34m Nvidia setup with 172 servers could train 150 large language models while using the same power as the 960-server CPU-first system of today.
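Taken at face value, Huang’s figures imply roughly a 25-fold reduction in cost and an 85-fold reduction in energy per trained model. The back-of-the-envelope sketch below simply recomputes those ratios from the numbers quoted in the keynote, which should be treated as illustrative round figures rather than measured results.

```python
# Rough comparison using the figures Huang cited on stage.
# All numbers are illustrative claims from the keynote, not measured results.

cpu_system = {"cost_usd": 10_000_000, "energy_gwh": 11.0, "servers": 960}
mgx_system = {"cost_usd": 400_000, "energy_gwh": 0.13, "servers": 2}

# Claimed savings for training a single LLM.
cost_ratio = cpu_system["cost_usd"] / mgx_system["cost_usd"]
energy_ratio = cpu_system["energy_gwh"] / mgx_system["energy_gwh"]
print(f"Claimed cost reduction:   {cost_ratio:.0f}x")    # 25x
print(f"Claimed energy reduction: {energy_ratio:.0f}x")  # ~85x

# The iso-power claim: a $34m, 172-server MGX setup training 150 LLMs
# on the same 11GWh budget as the 960-server CPU system.
iso_cost_usd = 34_000_000
llms_trained = 150
print(f"Cost per LLM at iso-power: ${iso_cost_usd / llms_trained:,.0f}")  # ~$226,667
```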

This is all driven by the growing demand for AI, which Huang described as a leveller and a way to “end the digital divide”. He was referring to its ability to create code, and explained: “There’s no question we’re in a new computing era. Every single computing era you could do different things that weren’t possible before, and artificial intelligence certainly qualifies.”

Read more: Why Google’s AI supercomputing breakthrough won’t worry Nvidia
