
The race to exascale and HPE

By John Oates

You’re going to hear a lot more about exascale computing in the next few years. The US government is funding a selection of industry partners to create the next generation of supercomputer.

The numbers here are staggering. Existing high performance computing clusters are measured in petaflops – that is a quadrillion operations per second. A quadrillion is a 1 followed by fifteen zeros.

Exascale is defined as a machine that can perform a thousand petaflops, or a billion billion operations per second – a 1 followed by eighteen zeros.
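For a sense of scale, here is a quick back-of-the-envelope sketch in Python; the numbers are simply the unit definitions above, not benchmarks of any real system:

```python
# Unit definitions from the article: petaflop = 10^15 ops/sec, exaflop = 10^18.
PETAFLOP = 10**15
EXAFLOP = 10**18

print(f"1 exaflop = {EXAFLOP // PETAFLOP:,} petaflops")  # -> 1,000 petaflops

# How long would an exascale machine take to match a year's worth of work
# from a one-petaflop system?
SECONDS_PER_YEAR = 365 * 24 * 3600
ops_in_a_petaflop_year = PETAFLOP * SECONDS_PER_YEAR
print(f"{ops_in_a_petaflop_year / EXAFLOP / 3600:.1f} hours")  # -> about 8.8 hours
```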

The race to exascale has several competing projects running in Europe, Japan, China and India. They are all taking different routes towards the same goal, and whichever wins, all of them will contribute a huge amount to HPC research.

The challenges are the same as those of running any enterprise data centre – power, cooling, software and memory. But the scale on which these systems operate is entirely different. Moore’s Law is no longer relevant.

But the greatest technical challenge for a true exascale machine is that to function it needs a completely new sort of memory, and a far faster way to get data to and from it.

Supercomputers are measured and placed on the Top500 list but the next step is a different order of improvement, not just an evolution of existing technologies.


 

What is it?

 

Exascale is a genuinely different type of computing and it promises to tackle entirely different problems. It will allow big data projects to function on a different level to today’s systems. For certain types of system, such as weather and climate, there is limited utility in analysing only part of the system: for true insight you need to embrace all the available data, because everything interacts with everything else.

Research into climate change, pharmaceuticals and disease, and engineering will all be transformed by the ability to analyse vast quantities of data and create extremely complex models.

As data volumes continue to grow, exascale computing will allow big data analysis to fulfil its early promise.

 

How will the first systems be built?

 

Earlier this summer the United States Department of Energy made a grant of $258m through the Exascale Computing Project for research into the hardware, software and applications needed to create the world’s first exascale computer. The aim is to have the first machine ready by 2021. China has promised to have a machine ready by 2020.

The Exascale Computing Project funding is intended to deliver a machine which can run real applications, not just demonstrate raw computing power.

The money is going to six companies: Advanced Micro Devices, Cray Inc., Hewlett Packard Enterprise, IBM, Intel Corp. and NVIDIA Corp. These firms must themselves fund at least 40 per cent of their total project cost, bringing the total investment to at least $430 million.

The machine will be based on research on memory-centred computing which came out of HPE’s The Machine project.

This builds on HPE’s announcement earlier this summer of the world’s largest single-memory machine. That prototype was built with 160 terabytes of memory, spread across 1,280 ARM cores linked by a photonic memory fabric. This gives the machine the equivalent of instant access to all the data in 160 million books.
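As a rough sanity check of that comparison – the ratio of roughly a megabyte of text per book is assumed here for illustration, not a figure HPE quotes:

```python
# Sanity-check the "160 million books" comparison using the figures above.
# The ~1 MB-per-book ratio is an assumption for illustration only.
memory_bytes = 160 * 10**12    # 160 TB (decimal terabytes)
books = 160 * 10**6            # 160 million books
arm_cores = 1280

print(f"{memory_bytes / books / 10**6:.0f} MB of memory per book")               # -> 1 MB
print(f"~{memory_bytes / arm_cores / 2**30:.0f} GiB of shared memory per core")  # -> ~116 GiB
```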

 

What happens next?

 

High-performance computing machines will need to be 10 times faster and more energy efficient than today’s fastest supercomputers in order to hit exascale speeds.
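To make that concrete, here is an illustrative Python calculation; the ~100-petaflop and ~15 MW starting points are assumed ballpark figures for a current leadership-class system, not numbers from the article:

```python
# Illustrative only: what "10 times faster and more energy efficient" implies.
# The starting figures below are assumptions, not measurements.
current_flops = 100 * 10**15          # assumed ~100-petaflop system
current_power_watts = 15 * 10**6      # assumed ~15 MW power draw
efficiency = current_flops / current_power_watts     # flops per watt

target_flops = 10**18                 # one exaflop
power_needed = target_flops / (efficiency * 10)      # with 10x better efficiency

print(f"Today: ~{efficiency / 10**9:.1f} gigaflops per watt")
print(f"An exaflop at 10x that efficiency still draws ~{power_needed / 10**6:.0f} MW")
```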

They will need processors which run faster and cooler, and are smaller, than existing chips.

The key to the US approach is HPE’s Memory-Driven Computing, an architecture that puts memory, not processing, at the centre of the computing platform. The Machine is HPE’s biggest ever research project and is not designed for commercial release but rather to inform the company’s whole computing portfolio.

Bill Mannel, vice president and general manager, HPC Segment Solutions, HPE said: “Our novel Memory-Driven Computing architecture combined with our deep expertise in HPC and robust partner ecosystem uniquely positions HPE to develop the first U.S. exascale supercomputer and deliver against the PathForward program’s goals.”

HPE believes that its new memory fabric and low-energy photonics interconnects will help create the path to exascale machines. The company is also continuing to explore nonvolatile memory options that could attach to the memory fabric, significantly increasing the reliability and efficiency of exascale systems.

The system will use one unified protocol to address both near and distant memory devices, helping to remove the memory latency and bandwidth restrictions of current supercomputers.

The system will be built on open standards and be based on the Gen-Z (www.genzconsortium.org) chip-to-chip protocol. Gen-Z provides a memory-semantic chip-to-chip communications protocol that allows for the tight coupling of many devices including CPUs, GPUs, FPGAs, DRAM, NVM, system interconnects and a host of other devices, all sharing a common address space. This allows for the creation of ‘memory centric’ system designs which offer dramatic improvements in application performance and power efficiency.
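Gen-Z itself is a chip-to-chip hardware protocol, so it cannot be demonstrated in application code. The Python sketch below is only a loose software analogy: an mmap-ed file stands in for fabric-attached memory, to contrast explicit block-style I/O with memory-semantic, load/store-style access to a shared address space. The file name and sizes are purely illustrative:

```python
# Loose analogy only: an mmap-ed file stands in for a pool of fabric-attached
# memory shared across devices; Gen-Z hardware works nothing like a local file.
import mmap
import os

POOL = "fabric_pool.bin"              # hypothetical backing store
with open(POOL, "wb") as f:
    f.write(b"\x00" * 4096)           # carve out a 4 KiB region

with open(POOL, "r+b") as f:
    # Block-I/O style: explicit seek/write calls for every access.
    f.seek(128)
    f.write(b"hello")
    f.flush()

    # Memory-semantic style: map the region once, then treat it like RAM.
    with mmap.mmap(f.fileno(), 4096) as region:
        region[256:261] = b"world"                  # a plain store
        print(region[128:133], region[256:261])     # plain loads -> b'hello' b'world'

os.remove(POOL)
```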

More from HPE here: https://news.hpe.com/u-s-dept-of-energy-taps-hewlett-packard-enterprises-machine-research-project-to-design-memory-driven-supercomputer-2/

 

Even assuming the hardware challenges can be met, the ECP will still need to create a new world of programming which can make proper use of the vast resources available. The group will use existing software where possible, but alter it to work on highly parallel systems.
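As a toy illustration of the kind of rework involved – real exascale codes lean on MPI, OpenMP and GPU kernels rather than Python, so this only shows the general pattern of decomposing a serial job into independent pieces:

```python
# Toy pattern only: split a serial calculation into independent chunks,
# process them in parallel, and combine the partial results.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work on one independent slice of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Serial version: one core walks the whole data set.
    serial = sum(x * x for x in data)

    # Parallel version: eight workers each take an interleaved slice.
    chunks = [data[i::8] for i in range(8)]
    with Pool(processes=8) as pool:
        parallel = sum(pool.map(partial_sum, chunks))

    assert serial == parallel
```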

Alongside operating systems, the ECP is also looking for applications which will support key national initiatives, including oil and gas exploration and production, aerospace engineering, medicinal chemistry (such as pharmaceuticals and protein structure) and basic science.

 

The future of exascale computing

 

Assuming the ECP’s project succeeds, we are likely only a handful of years away from exascale machines being available to enterprise data centres. As part of a hybrid infrastructure they will transform aspects of business technology and cyber security.

By offering almost instant analysis of huge data lakes, they could change the way we defend corporate networks, because it will become possible to monitor accurately what is going on in real time.

They will also allow completely different kinds of big data analysis, especially of unstructured and chaotic data sets like those created by social media. This will require a different strategic view of technology’s role within the organisation, and it might also deliver the sort of artificial intelligence which can transform how a business functions.

 

You can keep up to speed with the project via the ECP website here: https://exascaleproject.org
