Teams from the Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory have jointly won the Gordon Bell prize, an award given by the Association for Computing Machinery for outstanding work in computer science.
Both prize-winning teams used the Summit supercomputer to carry out their research.
The Oak Ridge National Laboratory (ORNL) team won the sustained performance category with their submitted paper, “Attacking the Opioid Epidemic: Determining the Epistatic and Pleiotropic Genetic Architectures for Chronic Pain and Opioid Addiction.”
The ORNL team crafted a new algorithm named “CoMet,” or the Custom Correlation Coefficient method, which compares variations of the same genes across a chosen population. Through Summit's exascale-level computing, the ORNL team is able to analyze and compare millions of genomes in a short period of time.
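The core idea of comparing gene variants pairwise across a population can be illustrated with a minimal sketch. Note the 0/1/2 allele encoding, the sample names, and the `similarity` metric below are illustrative assumptions for this article, not CoMet's actual Custom Correlation Coefficient implementation:

```python
# Illustrative sketch only: this is NOT CoMet's actual algorithm.
# Each genome is encoded as a vector of allele counts (0, 1 or 2)
# at the same gene loci, an assumption made for illustration.
from itertools import combinations

genomes = {
    "sample_a": [0, 1, 2, 0, 1],
    "sample_b": [0, 1, 2, 1, 1],
    "sample_c": [2, 0, 0, 2, 2],
}

def similarity(x, y):
    """Fraction of loci where two genomes carry the same variant."""
    matches = sum(1 for a, b in zip(x, y) if a == b)
    return matches / len(x)

# Compare every pair of genomes in the population.
for (name1, g1), (name2, g2) in combinations(genomes.items(), 2):
    print(f"{name1} vs {name2}: {similarity(g1, g2):.2f}")
```

The real workload applies comparisons like this across millions of genomes, which is why the pairwise, data-parallel structure maps so well onto a GPU-heavy machine like Summit.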
ORNL computational biologist Dan Jacobson stated in a blog post: “Machines like Summit are poised to turbocharge our understanding of genomes at a population scale, enabling a whole new range of science that was simply not possible before it arrived.”
“The techniques we develop for one domain will often help us make discoveries in another domain. Thus, we can use the same tools, in combination with phenotypes and genomes to discover the complex genetic architectures responsible for opioid addiction in people or cell wall construction in plants.”
Summit Supercomputer
The Summit supercomputer was developed and built by IBM, NVIDIA and Mellanox, following a United States Department of Energy contract award of $325 million in 2014.
Located at Oak Ridge National Laboratory in Tennessee, Summit occupies a space equivalent to two tennis or basketball courts.
It is now considered the fastest computer in the world, capable of 200 petaflops of computational power. One petaflop is a quadrillion (a thousand trillion) floating-point operations per second.
Summit is powered in part by 27,648 NVIDIA Volta Tensor Core GPUs, together capable of performing three exaops, or three billion billion calculations per second. Each Tesla V100 GPU contains 21.1 billion transistors on just 815 mm² of silicon.
Exascale-level computing refers to a computer system that can perform a billion billion, or 10¹⁸, operations per second.
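These unit prefixes can be sanity-checked with simple arithmetic (a quick back-of-envelope check, not part of the original reporting):

```python
# Unit prefixes as powers of ten.
PETA = 10**15  # one petaflop: a quadrillion operations per second
EXA = 10**18   # one exaop: a billion billion operations per second

summit_flops = 200 * PETA  # Summit's rated 200 petaflops

print(EXA // PETA)        # petaflops per exaflop
print(summit_flops / EXA) # Summit's rated speed expressed in exaflops
```

This shows an exaflop is a thousand petaflops, so Summit's rated 200 petaflops is 0.2 exaflops, which is why the mixed-precision exaop figures reported below stand out.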
Computational scientist and ORNL team member Wayne Joubert commented in a blog post: “Although Tensor Cores weren’t designed with genomics data analysis in mind, as scientists we wondered if we could adapt our application to take advantage of the high performance offered by this NVIDIA feature.”
“In this case, we found a way to recast our problem to fit the hardware without losing accuracy and the results are pretty exciting. In one hour on Summit, we can solve a problem that would take 30 years on a desktop computer.”
While carrying out their research, the ORNL team recorded Summit breaking the exascale barrier with a peak of 2.36 exaops, nearly 2.36 billion billion calculations a second. Based on results reported so far, this is the fastest scientific computation ever carried out by a computer.
The team from Lawrence Berkeley National Laboratory, which shares the prize with ORNL, also broke the exascale barrier, achieving a peak speed of 1.13 exaops while using a deep-learning tool to identify extreme weather patterns in high-resolution climate simulations.
In their paper, “Exascale Deep Learning for Climate Analytics,” they showed that accurate datasets can be constructed for weather patterns such as atmospheric rivers and tropical cyclones.
Their use of Summit to process large amounts of meteorological data represents one of the first successes in scaling a deep-learning application to high-performance computing.