Nvidia’s Tesla AI supercomputing platform powers 13 of the top systems measured by the Green500.
The Green500 list, which was released today, ranks the world’s most energy-efficient high performance computing (HPC) systems.
The list shows vendors such as HPE powering deep learning, artificial intelligence and other advanced workloads. The number one system, Tsubame 3.0, is deployed at the GSIC Centre at the Tokyo Institute of Technology.
The new Tsubame 3.0 system, powered by Nvidia’s Tesla P100 GPUs, reached 14.1 gigaflops per watt. That is 50 percent higher efficiency than the previous number one system, Nvidia’s own SaturnV, which now ranks tenth on the latest list.
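A quick back-of-the-envelope check of those figures (the variable names here are illustrative, not from the article):

```python
# Sanity-check the quoted efficiency numbers.
tsubame_eff = 14.1   # gigaflops per watt, Tsubame 3.0 (from the Green500 figure above)
improvement = 0.50   # "50 percent higher" than the previous number one

# Implied efficiency of the previous leader, Nvidia's SaturnV
saturnv_eff = tsubame_eff / (1 + improvement)
print(f"Implied SaturnV efficiency: {saturnv_eff:.1f} gigaflops per watt")  # 9.4
```

An implied figure of roughly 9.4 gigaflops per watt for SaturnV is consistent with the 50 percent claim.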
HPE built the number one system on the Green500 list around Nvidia’s Tesla P100 SXM2 GPUs. The remaining systems are housed at Yahoo Japan, Japan’s National Institute of Advanced Industrial Science and Technology, Japan’s RIKEN Centre for Advanced Intelligence Project, the University of Cambridge and the Swiss National Supercomputing Centre (CSCS).
Other Nvidia-powered systems in the top 13 are housed at E4 Computer Engineering, the University of Oxford and the University of Tokyo.
Each of the 13 systems uses Nvidia’s Tesla P100 data centre GPU accelerators, and four of them are based on the Nvidia DGX-1 AI supercomputer.
The DGX-1-based systems, which combine Tesla GPU accelerators with a fully optimised AI software stack, include Raiden at RIKEN, Jade at the University of Oxford, and a hybrid cluster deployed at a major social media and technology company.
Nvidia also announced that, according to its performance data, its Tesla GPUs have improved performance on HPC applications by more than three times over the earlier Kepler architecture, released in 2012. Even as gains from chip scaling have slowed in recent years, this represents a significant boost over what Moore’s Law would have predicted.
The company says this progress puts exascale computing within reach, and will deliver the speed, efficiency and AI computing capability of the Summit supercomputer, which is expected to be installed at the Oak Ridge Leadership Computing Facility later this year.
Jeff Nichols, associate laboratory director of the Computing and Computational Science Directorate, Oak Ridge National Laboratory said: “Oak Ridge’s pre-exascale supercomputer, Summit, is powered by NVIDIA Volta GPUs that provide a single unified architecture that excels at both AI and HPC. We believe AI supercomputing will unleash breakthrough results for researchers and scientists.”
According to Nvidia, Summit is expected to deliver 200 petaflops of performance from its Tesla V100 GPU accelerators, with advanced AI computing capabilities generating over 2 exaflops of half-precision tensor operations. The world’s current fastest system, China’s Sunway TaihuLight, delivers 93 petaflops.
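To put those units side by side (1 exaflop = 1,000 petaflops; this is a simple unit conversion, not an additional Nvidia claim):

```python
# Compare the quoted throughput figures in petaflops.
summit_pflops = 200          # Summit's projected performance, petaflops
taihulight_pflops = 93       # Sunway TaihuLight, current number one
tensor_pflops = 2 * 1000     # 2 exaflops of half-precision tensor operations

print(f"Summit vs TaihuLight: {summit_pflops / taihulight_pflops:.2f}x")  # 2.15x
print(f"Tensor ops vs quoted petaflops: {tensor_pflops / summit_pflops:.0f}x")  # 10x
```

So Summit’s projected figure is a little over twice TaihuLight’s, while its half-precision tensor throughput is an order of magnitude higher again.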
Ian Buck, general manager of accelerated computing at Nvidia, said: “Researchers taking on the world’s greatest challenges are seeking a powerful, unified computing architecture to take advantage of HPC and the latest advances in AI.
“Our AI supercomputing platform provides one architecture for computational and data science, providing the most brilliant minds a combination of capabilities to accelerate the rate of innovation and solve the unsolvable.”
Nvidia’s Tesla V100 GPU accelerators for PCIe-based systems will be available later this year for purchase from Nvidia reseller partners and manufacturers such as HPE.