AMD has pulled the covers off its highly anticipated new 7nm CPU series: the company’s second-generation EPYC Rome range of server chips.
The processor lineup has broken 70 performance world records, the company revealed as it unveiled the new CPUs and touted some impressive customers, including Twitter, Microsoft and Google.
The processors are designed for data centre use and demanding workloads. They scale to 64 cores with 128 threads per socket, clock up to 3.4GHz with a 256MB cache, and include built-in security features designed to prevent data spilling from kernel space in side-channel attacks.
Twitter’s Jennifer Fraser, a senior engineering director, said the company achieved 45 percent more cores per rack and 25 percent lower total cost of ownership with second-gen EPYC, and that it will deploy the chips widely in 2019.
IDC’s Ashish Nadkarni said: “AMD has delivered a viable, very capable and in many ways a superior alternative processor/SoC for cloud, enterprise and high performance computing workloads.”
AMD has been on a roll in recent months, with both Microsoft and Sony committing to custom AMD SoCs to power their next-generation game consoles, and Samsung signing a major multi-year GPU licensing deal.
AMD CEO Lisa Su said in an earnings call in early July that AMD has four times more enterprise and cloud customers actively engaged on deployments ahead of launch than it did for its first generation of EPYC processors.
“As a result, it will ramp significantly faster.”
AMD’s Infinity Architecture decouples two streams, the company said: “eight dies for the processor cores, and one I/O die that supports security and communication outside the processor”. Because the die design is not monolithic, AMD can advance the CPU cores on new process nodes while letting the I/O circuitry develop at its own rate, meaning new capabilities can be brought to market faster.
AMD EPYC is currently the only x86-architecture server processor to support PCIe 4.0 (Peripheral Component Interconnect Express, the latest interface standard for connecting high-speed components), the company boasted.
PCIe 4.0 delivers double the I/O performance over PCIe 3.0.
Users can run 128 lanes of I/O (input/output) to double the network bandwidth that ties together HPC clusters.
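The doubling claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the published PCIe link rates (8 GT/s per lane for 3.0, 16 GT/s for 4.0, both with 128b/130b encoding); the lane count matches EPYC's 128 lanes, but the calculation itself is illustrative and not taken from AMD's announcement.

```python
# Rough per-direction bandwidth comparison between PCIe generations.
# Link rates and encoding are from the PCIe 3.0/4.0 specs; the rest is
# simple arithmetic for illustration.
GT_PER_S = {"PCIe 3.0": 8, "PCIe 4.0": 16}  # giga-transfers per second per lane
ENCODING = 128 / 130                         # 128b/130b: usable bits per bit on the wire
LANES = 128                                  # lanes available per EPYC socket

for gen, gts in GT_PER_S.items():
    gb_per_lane = gts * ENCODING / 8         # GB/s per lane, one direction
    total = gb_per_lane * LANES
    print(f"{gen}: {gb_per_lane:.2f} GB/s per lane, ~{total:.0f} GB/s across {LANES} lanes")
```

Running this shows roughly 126 GB/s aggregate for PCIe 3.0 against roughly 252 GB/s for PCIe 4.0, i.e. the doubled I/O performance cited above.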
For other application needs and in virtualised environments, they can connect faster to GPU accelerators, NVMe drives, and use integrated disk controllers to access spinning disks without the typical bottleneck of a PCIe RAID controller.
Traditional CPUs typically must scale up to a 2-socket server to overcome an imbalance of resources. The new processors are powerful enough that a single 1-socket server satisfies most workload needs, AMD claims.
This helps to reduce the data centre footprint and cut capital, power and cooling expenses, but perhaps most importantly for users, it can halve licensing costs for “per-socket” software like VMware vSphere.
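The licensing argument is simple arithmetic, sketched below with entirely hypothetical figures (the per-socket price and fleet size are invented for illustration, not drawn from VMware's or AMD's pricing):

```python
# Hypothetical per-socket licensing comparison: a workload that once needed
# 2-socket servers is assumed to fit on a single 64-core EPYC socket.
LICENCE_PER_SOCKET = 5_000   # hypothetical annual licence cost per socket ($)
SERVERS = 10                 # hypothetical fleet size

two_socket_bill = SERVERS * 2 * LICENCE_PER_SOCKET
one_socket_bill = SERVERS * 1 * LICENCE_PER_SOCKET
print(f"2-socket fleet: ${two_socket_bill:,}/yr")
print(f"1-socket fleet: ${one_socket_bill:,}/yr")
```

With per-socket pricing, dropping from two sockets to one per server halves the licence bill regardless of the figures chosen, which is the point AMD is making.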
As launches go, it left analysts impressed. AMD will be watched closely to see if it can scale consistently and smoothly. Intel meanwhile may be sweating a little at some of the metrics reported today.
This article is from the CBROnline archive: some formatting and images may not be present.