Cloud has a significant role to play in business; Nyberg doesn’t question this, but he believes that it has its shortcomings.
"What a lot of companies are finding is that once you get above a certain utilisation rate, it’s actually cheaper to own the machine yourself."
Believing that almost all of Cray’s large machines in industry, and to some degree in research and academia, could be described as private cloud resources, he says: "Once you reach a certain utilisation level, those things flip over. You’re starting to hear that more as the cloud market matures: a lot of organisations start moving over, experimenting, and discovering exactly that."
What drives the usage and acquisition of supercomputers, and High Performance Computing in general, is the return on investment.
"They are able to do things with these machines that they couldn’t do with anything else, including cloud.
"That counts for things like weather services, so the Met Office has come out and said they are going to have an ROI for the system they bought in the order of $2bn to the UK society and economy."
In the end, he believes that cloud architectures are not designed to run the types of problem that, for example, a computational fluid dynamics analysis of a car or a jet engine presents.
"They are not designed to do that at that scale. So the types of problem that a company, or even an organisation like the weather service, needs to run in order to remain competitive, to really advance their product design and their overall business – those types of problem, cloud is not architected to run."
One of the areas which Nyberg discussed as relying on supercomputing is weather forecasting. With technology such as phased array weather radars comes more data. These arrays are capable of doing things such as full 3D volume scans up to 60km, at 100-metre resolution, every 30 seconds.
Nyberg explained that the largest centres in the world, such as ECMWF (the European Centre for Medium-Range Weather Forecasts), use around 60-65 different sources of satellite data and around 35-40 million observations a day.
To deal with these problems, Nyberg said: "For the most part we install massively parallel machines. They typically will use a commodity microprocessor in them, but they’ll have hundreds of thousands of cores, and that’s really where the trick is."
The data problem grows, but the requirement to execute in the same amount of time remains; this demands scalability.
"It’s that scalability aspect that’s absolutely critical…everything is designed to scale up to use hundreds of thousands of cores simultaneously on one or several problems."
The other key aspect is the communication within the model: while all these cores are processing simultaneously, they are also talking to each other.
"One of the critical aspects to be able to scale is the communication fabric within the system, the interconnect as we call it. So if the bandwidth is insufficient, or the latency to talk to a very remote processor is too long, then that becomes the performance bottleneck and ultimately limits the scalability."
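Nyberg’s point can be illustrated with a toy strong-scaling model. All of the numbers and the function below are hypothetical, chosen only to show the shape of the effect, and are not drawn from any real Cray system or from the article:

```python
# Toy model of strong scaling (illustrative sketch only, not a real
# performance model): compute time shrinks as cores are added, but a
# fixed per-step communication cost (latency * messages) does not.

def time_to_solution(work, cores, per_core_rate, latency, msgs_per_step):
    """Estimated wall-clock seconds for a job split across `cores`."""
    compute = work / (cores * per_core_rate)   # parallel part scales down
    communicate = latency * msgs_per_step      # interconnect cost does not
    return compute + communicate

# Hypothetical numbers: 1e14 operations, 1e9 ops/s per core,
# 5-microsecond interconnect latency, 100,000 messages per step.
for cores in (1_000, 10_000, 100_000):
    t = time_to_solution(1e14, cores, 1e9, latency=5e-6, msgs_per_step=100_000)
    print(f"{cores:>7} cores: {t:.2f} s")
```

Going from 1,000 to 100,000 cores cuts the compute term by 100x, but the communication term stays at 0.5 s, so the overall speedup falls short of 100x; with a slower interconnect the gap widens, which is exactly the bottleneck Nyberg describes.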
The typical life-span of the technology is three to four years, with large industrial organisations replacing a third of their equipment every three years, so it is a constant upgrade churn.
Cray is constantly developing its technology, and it is seeing a drive in interest around GPUs and the many-core architecture from Intel.
Cray is also expecting to release the next iteration of Xeon Phi called Knights Landing, either later this year or early next.
It is working on a large US government contract with NERSC, the National Energy Research Scientific Computing Center; the system will be composed entirely of Xeon Phi processors, and Nyberg expects it to be one of the largest systems in the world.