Chip designer Nvidia has launched new software and hardware which it hopes will give it a foothold in the nascent quantum computing ecosystem. The announcement, made at the company’s GTC 2023 developer conference this afternoon, was followed by the unveiling of a string of new products aimed at maintaining its position as the dominant company powering artificial intelligence systems such as ChatGPT.
Nvidia has launched DGX Quantum, which it says pairs graphics processing unit (GPU)-powered hardware with CUDA Quantum, an open-source platform for building quantum algorithms in the traditional programming languages C++ and Python. The system can then help run those algorithms across quantum and classical machines, depending on which is more efficient at solving the problem at hand.
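For a sense of what building an algorithm with CUDA Quantum looks like, the sketch below constructs a simple two-qubit Bell-state circuit with the platform's Python kernel-builder interface and samples it on the default backend. The calls shown reflect the early open-source releases of CUDA Quantum and should be read as illustrative rather than definitive; later versions of the API may differ.

```python
# Illustrative sketch using CUDA Quantum's Python kernel-builder API
# (early open-source releases; details may differ in later versions).
import cudaq

# Build a two-qubit Bell-state circuit.
kernel = cudaq.make_kernel()
qubits = kernel.qalloc(2)
kernel.h(qubits[0])               # put qubit 0 into superposition
kernel.cx(qubits[0], qubits[1])   # entangle qubit 0 with qubit 1
kernel.mz(qubits)                 # measure both qubits

# Sample the circuit; by default this runs on a classical simulator,
# but the same kernel can be dispatched to a GPU simulator or a QPU.
counts = cudaq.sample(kernel, shots_count=1000)
print(counts)  # expect roughly equal counts of '00' and '11'
```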
Quantum computers have the potential to far outperform classical machines, but the technology is still in its early stages and existing quantum systems are often unreliable and difficult to scale. A hybrid classical/quantum approach could help the technology go mainstream.
DGX Quantum can aid quantum computing R&D
Nvidia’s CUDA platform is the go-to route for developers building AI systems to access the company’s GPUs, which give those systems a significant performance boost. The company’s A100 GPU has become the chip of choice for training large language models (LLMs) such as OpenAI’s GPT-4, the technology that powers ChatGPT.
CUDA Quantum will aim to play a similar role in quantum machine development, backed by an Nvidia Grace Hopper supercomputing system connected via PCIe to Quantum Machines’ OPX+ quantum control platform. The company claims this will enable sub-microsecond latency between GPUs and quantum processing units (QPUs).
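In practice, retargeting is how a CUDA Quantum program moves between classical and quantum processors. The sketch below illustrates the idea, rebuilding the kernel from the earlier example for completeness: the "nvidia" target is CUDA Quantum's GPU-accelerated simulator, while the hardware target named here stands in for a physical QPU and would require credentials from the relevant provider.

```python
import cudaq

# Same Bell-state kernel as in the earlier sketch.
kernel = cudaq.make_kernel()
qubits = kernel.qalloc(2)
kernel.h(qubits[0])
kernel.cx(qubits[0], qubits[1])
kernel.mz(qubits)

# Run on Nvidia's GPU-accelerated statevector simulator...
cudaq.set_target("nvidia")
print(cudaq.sample(kernel, shots_count=1000))

# ...then point the identical kernel at a physical QPU backend.
# The target name is illustrative only; real hardware targets
# require an account and credentials with the QPU provider.
cudaq.set_target("quantinuum")
print(cudaq.sample(kernel, shots_count=1000))
```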
“Quantum-accelerated supercomputing has the potential to reshape science and industry with capabilities that can serve humanity in enormous ways,” said Tim Costa, director of HPC and quantum at Nvidia. “Nvidia DGX Quantum will enable researchers to push the boundaries of quantum-classical computing.”
CUDA Quantum has been made open source, and Nvidia says it was developed with input from a number of quantum computing companies.
“All quantum today is research, not production, and that isn’t going to change next week,” said Costa in an interview with Reuters.
Nvidia continues its AI push
Meanwhile, Nvidia is set on maintaining its dominance in the AI space. “The iPhone moment of AI has started,” declared CEO Jensen Huang in a typically understated fashion, as he delivered his keynote speech at GTC earlier today.
Among the AI-related announcements from Nvidia was the official launch of DGX Cloud, which will enable users to access supercomputing services over the cloud. Nvidia said it is working with cloud providers to host the service, which launches first on Oracle Cloud, where clients can access a DGX-based supercomputer running on 32,000 Nvidia GPUs.
The new service will not come cheap, though. Nvidia says a single “instance” – a cluster of eight A100s or H100s, the company’s newest GPU – starts at $36,999 per month.
Elsewhere, Nvidia released a service called AI Foundations, which will help companies train their own LLMs on proprietary data. This is likely to be particularly useful for enterprises looking to use LLMs within their organisations without sharing information with other businesses.
Huang said his company is also working with ASML, the world’s biggest maker of the advanced lithography machines used in chipmaking, and leading chip manufacturer TSMC on methods to speed up the production of AI semiconductors.