Nvidia will not be involved in formulating a major new industry AI networking standard, it has emerged. In a joint announcement, Meta, Microsoft, Google and others said they were forming a new consortium, named the “Ultra Accelerator Link” (UALink) group, to establish a uniform communications standard for AI accelerators. While chip giants including Broadcom, Intel and AMD were also named as participants, Nvidia – which boasts an 80% market share for the GPUs used to train many AI applications – remained conspicuously absent from that list. 

“The work being done by the companies in UALink to create an open, high performance and scalable accelerator fabric is critical for the future of AI,” said Forrest Norrod, general manager of AMD’s Data Center Solutions Group. “Together, we bring extensive experience in creating large-scale AI and high-performance computing solutions that are based on open standards, efficiency and robust ecosystem support.” Tech Monitor has reached out to Nvidia for comment. 

UALink intended to rival Nvidia’s NVLink solution

Training large language models typically requires the deployment of several GPUs within a data centre environment. A robust, low-latency networking system is needed to allow these GPUs to communicate with each other – one that has been provided, in large part, by Nvidia’s NVLink solution.

UALink, meanwhile, is designed to be an “open ecosystem” to help scale up AI accelerator operations. Its designers claimed that, by using an open protocol, UALink users would be able to more easily expand the number of GPUs or AI accelerators in a single pod and harness improvements in overall performance as a result. Pods, in turn, could be knitted together using Ultra Ethernet, a new standard backed by a consortium populated by many of the same companies as UALink. 

As demand for generative AI services continues to grow, said UALink’s members in a joint statement, an “industry specification becomes critical to standardize the interface for AI and Machine Learning, HPC (high-performance computing), and Cloud applications for the next generation of AI data centres and implementations.” 

Nvidia success being challenged

The formation of UALink could be seen as another shot across Nvidia’s bow by its rivals and erstwhile customers, both of which have grown increasingly concerned at the Santa Clara-based firm’s domination of the GPU market and its thriving data centre business. This success is only likely to continue as market enthusiasm for generative AI models – which are usually trained and modified in AI accelerator facilities – continues. Last week, Nvidia posted quarterly revenues of $26bn, up 18% on the previous quarter. Its data centre business, meanwhile, grew 427% annually. 

In response, rival chipmakers AMD and Intel have announced their own lines of AI-centred products to compete more effectively against Nvidia. Hyperscaler cloud providers, meanwhile, continue to dabble in designing their own chips better suited to the idiosyncrasies of their respective systems. This includes Google, which earlier this month unveiled ‘Trillium’, its latest tensor processing unit, or TPU.