NVIDIA have released nine new high performance computing containers as part of NVIDIA GPU Cloud.

The American technology company’s announcement coincides with this week’s International Supercomputing Conference (ISC), which is taking place in Frankfurt, Germany.

Summit, the IBM-built supercomputer at the U.S. Department of Energy’s Oak Ridge National Laboratory, was announced as number one on the TOP500 list, making it the world’s fastest supercomputer.

The Summit supercomputer is used to tackle problems in energy, advanced materials, AI and other domains.

Other Oak Ridge supercomputers to have held the TOP500 top spot include Jaguar (November 2009 and June 2010) and Titan (November 2012).

Among the new additions are CHROMA, CANDLE, PGI and VMD, which join the eight containers (including NAMD, GROMACS and ParaView) launched at last year’s Supercomputing Conference in Denver, Colorado.

Using the PGI compilers container, developers will be able to build high performance computing (HPC) applications on NVIDIA GPU Cloud (NGC) that specifically target multicore CPUs and NVIDIA Tesla GPUs.
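
The PGI compilers are best known for their OpenACC directive support, which lets the same source code be built for either a multicore CPU or a Tesla GPU simply by changing the target flag. Below is a minimal sketch in C, assuming the PGI toolchain inside the container; the file name and build commands are illustrative assumptions, not taken from NVIDIA’s announcement.

/* saxpy.c - a minimal OpenACC sketch.
   Hypothetical build commands with the PGI C compiler in the container:
     pgcc -acc -ta=tesla     -Minfo=accel saxpy.c -o saxpy_gpu   (NVIDIA Tesla GPUs)
     pgcc -acc -ta=multicore -Minfo=accel saxpy.c -o saxpy_cpu   (multicore CPUs)   */
#include <stdio.h>
#include <stdlib.h>

/* y = a*x + y; the OpenACC pragma lets the compiler parallelise the loop
   for whichever target was selected at compile time */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);
    printf("y[0] = %.1f\n", y[0]);   /* expect 5.0 */

    free(x);
    free(y);
    return 0;
}

The same source builds for both targets; only the -ta flag changes, which is the workflow the PGI container is intended to support.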

More than 27,000 users have already registered to access NVIDIA’s container registry.

Containers simplify the complexities of installing and deploying frameworks: users can access the latest application versions with simple pull and run commands.

NVIDIA have said in a recent blog post that the need for containers is not restricted to deep learning but extends to supercomputing as well.

They said: “Supercomputing has a dire need to simplify the deployment of applications across all the segments. That’s because almost all supercomputing centres use environment modules to build, deploy, and launch applications.

“The complexity of such installs in supercomputing limits users from accessing the latest features and enjoying optimised performance, in turn delaying discoveries.”

Users of NVIDIA GPU Cloud can access the latest versions of high performance computing applications, as well as deep learning frameworks that NVIDIA themselves update and optimise across the entire software stack.

The NGC containers are tested and supported on NVIDIA GPUs in cloud services such as Amazon Web Services, Google Cloud Platform, and Oracle Cloud Infrastructure, as well as on GPU-powered workstations and NVIDIA DGX systems.