It has built the world’s largest single-memory computer, with 160 terabytes of memory addressed by 1,280 ARM cores linked over a photonic memory fabric.
Meg Whitman, CEO of Hewlett Packard Enterprise, said: “The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day. To realize this promise, we can’t rely on the technologies of the past, we need a computer built for the Big Data era.”
Putting memory at the centre of the platform also sidesteps the predicted end of Moore’s Law – the computing maxim that processor power doubles every two years as ever smaller transistors are crammed onto a chip. That growth, which has held steady since 1965, is starting to slow, limited by the laws of physics.
The Machine is the largest research and development project in Hewlett Packard’s – and now HPE’s – history.
The prototype can simultaneously work with data equivalent to 160m books.
Kirk Bresniker, chief architect at Hewlett Packard Labs, told CBR: “One of the rules to keep us on track was that we had to create a system of meaningful scale, something which would make people stop and think about computing in a completely fresh way.”
Bresniker said the decision to go with ARM chips, not typically used in servers, was partly reflective of the Labs culture. He said: “Part of the function of the Labs is to learn and understand. We know a lot about putting Intel and AMD chips into servers and supercomputers – with SGI we know a lot about putting Intel onto fabric memory. We purposely chose ARM to learn more from the process.”
The 160TB of memory is spread across 40 physical nodes linked by a high-performance fabric. The system runs a Linux-based operating system on ThunderX2, a second-generation, dual-socket-capable ARMv8-A system on a chip, with X1 photonics links.
The hardware half fills a standard rack and needs no special cooling.
HPE expects to be able to scale this prototype to an exabyte-scale single-memory machine. Beyond that, HPE hopes to scale to an almost limitless memory pool of 4,096 yottabytes – 250,000 times the size of the existing digital universe.
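Those figures hang together on a back-of-the-envelope check: 4,096 yottabytes divided by 250,000 implies a “digital universe” of roughly 16 zettabytes, in line with industry estimates at the time. A quick sketch of the unit arithmetic (illustrative only, with SI decimal units assumed):

```python
# Back-of-the-envelope check of HPE's scaling figures (illustrative only;
# decimal SI units assumed throughout).
TB = 10**12   # terabyte, in bytes
ZB = 10**21   # zettabyte, in bytes
YB = 10**24   # yottabyte, in bytes

prototype = 160 * TB     # the current prototype's memory pool
target    = 4096 * YB    # HPE's hoped-for upper bound

# 4,096 YB / 250,000 implies a "digital universe" of about 16.4 ZB.
digital_universe = target / 250_000
print(digital_universe / ZB)   # → 16.384

# How many of today's prototypes would make up the target pool.
print(target // prototype)     # → 25,600,000,000,000
```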
Bresniker said: “When I joined HP in 1989 we did it all – we built our own chips and even had our own database software. You can’t be that vertically integrated any more, this is a conversation we’re having with the whole industry.”
The Machine will not ship as a product but will influence HPE’s, and other companies’, future hardware strategy.
Bresniker said its appearance in enterprise data centres depends on the supply chain, on silicon development, on the agreement of specifications and on the speed of software development.
He expects scientific high-performance computing in universities to be an early adopter, partly because the field is already at the limits of existing computing infrastructure and can accept both the risk profile and the need for code to be optimised.
Currently, the prototype runs a programme that analyses huge log files, looking for signs of advanced persistent threats.
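HPE has not published that code, but the general approach – holding an entire log corpus in memory and scanning it for anomalous access patterns – can be sketched in a few lines of Python. The log format, regex, and threshold below are invented for illustration:

```python
import re
from collections import Counter

# Illustrative sketch only: scan auth-style log lines held in memory for
# hosts with an unusually high number of failed logins, one crude signal
# associated with advanced persistent threats. The log format and the
# threshold are invented for this example.
FAILED = re.compile(r"FAILED LOGIN .* from (?P<host>\S+)")

def suspicious_hosts(log_lines, threshold=3):
    """Return hosts whose failed-login count meets the threshold."""
    failures = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            failures[match.group("host")] += 1
    return {host: n for host, n in failures.items() if n >= threshold}

logs = [
    "2017-05-16 01:02 FAILED LOGIN for root from 10.0.0.8",
    "2017-05-16 01:02 FAILED LOGIN for root from 10.0.0.8",
    "2017-05-16 01:03 FAILED LOGIN for admin from 10.0.0.8",
    "2017-05-16 01:04 login ok for alice from 10.0.0.9",
]
print(suspicious_hosts(logs))   # → {'10.0.0.8': 3}
```

The point of the memory-centric design is that a scan like this can run over the whole dataset in place, rather than streaming it from disk in chunks.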
Bresniker said the company is committed to genuinely operating in the open and collaborating with the open source community. He said the platform provides an opportunity to exploit existing software skills but also to look at problems in entirely new ways.
Software development has always been both constrained and directed by Moore’s Law – chip design and software upgrades have followed the same path. If memory-centred computing becomes the norm, we will need to think about software development in completely different ways.