Unveiling a number of new Nvidia products and developments, CEO Jensen Huang said the company was in a position “to power this new industrial revolution”.
Huang announced the GB200 at Nvidia’s GPU Technology Conference (GTC). The chip consists of two Blackwell graphics processing units (GPUs) paired with one Grace central processing unit (CPU). Nvidia’s GPUs have powered much of the recent growth in generative AI.
The chip will power Nvidia’s new Blackwell AI computer system, which is designed to run trillion-parameter generative AI models.
The CEO claimed the Blackwell GPUs will drive huge increases in computing power for large language models. The Blackwell GPU has 208bn transistors, compared with just 80bn on the previous-generation H100, and can reportedly complete some tasks 30 times faster than its predecessor. The chip is expected to be used by Amazon, Google, Microsoft and OpenAI in cloud-computing services and their own AI offerings.
“Generative AI is the defining technology of our time,” Huang said. “Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realise the promise of AI for every industry.”
“Sitting on a goldmine” with AI
Huang’s keynote covered several announcements of new developments and products from the AI giant, including a range of new chips, updated chatbot applications, robots for industrial use and digital twins that Nvidia says will transform weather forecasting with AI.
In addition to the GB200, Huang unveiled a new set of software tools, including microservices designed to improve system efficiency and enable businesses to embed AI models into their work processes. The chief executive also revealed a new line of chips for cars that allows chatbots to run inside the vehicle. Chinese electric vehicle (EV) makers Xpeng and BYD have announced they will use these chips.
Huang asserted that the enterprise IT industry is “sitting on a goldmine”: the huge volumes of data held within large enterprises, which chatbots built on these technological developments can draw on to unlock new opportunities.
Huang also detailed a new series of chips for building humanoid robots, alongside the release of Project GR00T (or “Generalist Robot 00 Technology”), a new collection of application programming interfaces (APIs) for robots designed to learn by observing humans. Huang invited several of these robots to join him on stage, highlighting advances in robotics capabilities that can be applied to industrial uses.
“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” Huang said. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”
AI work can begin at an “unprecedented scale”
Other revelations in the keynote involved the use of AI in weather forecasting, as Nvidia announced a new digital twin cloud platform that it claims will help weather experts and meteorologists produce more detailed simulations. The computing giant announced its new Earth-2 APIs at the event, claiming their use will help to address the $140bn in global losses caused by climate change-related extreme weather events. Huang said this work could now begin at an “unprecedented scale”.
In the future, “We’re going to manufacture everything digitally first, and then we’ll manufacture it physically”, said Huang.
A future of tough AI chip competition
Nvidia was founded in 1993 and was originally known for making computer chips that process graphics, particularly for computer games. Its market share began to grow long before the current AI boom, when the company started adding features to its chips that supported machine-learning development.
Nvidia’s market capitalisation has since grown to $2.2tn, making it the third most valuable company in the world behind Microsoft and Apple.