The launch last week by Google Cloud Platform (GCP) of a new range of virtual machines powered by AMD’s Epyc Milan processors reflects an upheaval underway in the server chip market. Long dominated by Intel, the market is now seeing rival manufacturers and the cloud providers themselves get in on the act, developing silicon specifically for use in the data centre. This increased competition is a boon for enterprise buyers, which are benefiting from greater choice and lower cloud prices.

The new GCP VMs, known as Tau, will offer better performance and pricing than rival services, according to the company’s announcement. Google says they are ideal for “scale-out applications” such as web serving and the processing of large data logs. Snap and Twitter both feature in the press release, extolling the virtues of Tau for their businesses.
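Provisioning one of these VMs follows the standard Compute Engine workflow; the sketch below is a minimal illustration only, assuming the Tau family is exposed as `t2d` machine types, with placeholder zone, image and instance names:

```shell
# Hypothetical example: create a Tau-family VM on Compute Engine.
# The instance name, zone and image below are placeholders; the
# t2d machine-type name is an assumption about how Tau is exposed.
gcloud compute instances create demo-tau-vm \
    --zone=us-central1-a \
    --machine-type=t2d-standard-8 \
    --image-family=debian-11 \
    --image-project=debian-cloud
```

The choice of machine type only determines the underlying hardware; once created, the instance is managed like any other Compute Engine VM.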

The AMD Epyc Milan is powering new VMs for Google Cloud Platform. (Photo courtesy AMD press office)

AMD is unsurprisingly pleased to have persuaded GCP to use the Epyc Milan, which was released earlier this year. “We have entered a high-performance computing megacycle led by the accelerated digital transformation of businesses and industries that is re-shaping cloud computing,” said AMD president and CEO Dr Lisa Su. “We work extremely closely with strategic partners like Google Cloud to ensure our AMD Epyc processors are ideally suited to meet the growing customer demand for more compute, more performance and more scalability.”

The changing server chip landscape

Intel has enjoyed near-total dominance in the server chip space, but has seen AMD gradually eat into its lead over recent years. And while it looks like a two-horse race for the time being, both companies face emerging competitors.

For example, Arm's low-power chip designs are well suited to the changing needs of cloud providers. These companies are increasingly looking for bespoke solutions to meet their clients' demands, especially for AI and high-performance computing. "The main theme that's unfolding in cloud computing is the need for faster throughput, lower power draw data centres," says Mike Orme, who covers the semiconductor industry in his role as thematic research consultant at GlobalData. "This will increasingly call for low-power system-on-chips involving multiple functions on the same chip, or multiple chips in 3D packages, rather than monolithic motherboard-based chips."

Last month it was revealed that Oracle Cloud is using Arm-based semiconductors in its cloud servers, while AWS already has its own Graviton chips, based on blueprints from the British chip design giant.

Meanwhile, the cloud giants are becoming more self-reliant when it comes to semiconductors. Google is developing its own custom silicon, and earlier this year set up a chip design division in Israel headed by Uri Frank, whom it recruited from Intel as VP of engineering. With Amazon, Microsoft and Alibaba all reportedly developing chips of their own, Intel and AMD could soon find themselves with fewer customers. "This will progressively reduce the reliance [of the cloud hyperscalers] on merchant market suppliers," Orme argues.

Is competition in the server chip market good for businesses?

The tie-up between Google and AMD gives cloud customers another option when it comes to how their VMs in the cloud are powered. "Customers of AWS, Azure and Google Cloud Platform do have some choice as to what processors underlie their virtual machines," says Jean Atelsek, research analyst in the cloud transformation and digital economics unit at 451 Research. "While they probably don’t care about whether the chips are from AMD or Arm or Intel, they do care about the price and performance of the resources they’re using, and this is where silicon is having an impact."

Indeed, this increased competition has already had significant cost implications. 451 Research's cloud price index has been tracking cloud costs since 2015, and the last year saw the price of cloud compute plummet. "In 2020 we recorded the steepest annual drop yet in cloud compute pricing, down 8%-15%," Atelsek says. "This was primarily driven by the broad roll-out of new-generation chips."

And chips are likely to become an even more important differentiator for cloud providers as businesses make decisions about where to deploy complex workloads. "The choice of chipset makes an even bigger difference when dealing with compute-intensive applications, including those that apply machine learning to proprietary data to help companies increase efficiency, make decisions and improve the customer experience," Atelsek says. "Cloud providers are custom-designing processors optimised for these workloads, which represents a growing opportunity for them."