It’s a candy shop for CIOs as AMD’s EPYC 4 lands in servers galore
Good news for IT leaders with the budget for a hardware refresh: a host of powerful new data centre servers is hitting the market as Dell, HPE, Lenovo, Supermicro and others roll out products built on AMD’s new EPYC 4 CPUs, which boast some impressive performance and energy efficiency gains as a result.
AMD pulled the dust sheets off its eagerly anticipated “Genoa” range of data centre CPUs this week. The EPYC 4 series is designed for cloud, enterprise data centre and high performance computing (HPC) workloads. Each processor can pack up to 96 cores, meaning customers can deploy fewer, more powerful servers.
The major cloud service providers (CSPs) also touted new instances powered by the hardware.
AMD noted that cloud customers increasingly “bring many different types of workloads to the IaaS, which need to provide different VM sizes with varying vCPU, memory, and network bandwidth options.
“This requires identifying and providing configurations that can support a wide range of workloads deployed as IaaS and that can be quickly reconfigured at the software level for PaaS deployments.”
AMD cited a case study from the bank DBS, which said it had cut power consumption by 50% by running Dell PowerEdge servers with EPYC chips, making “unprecedented use of virtualization in a private cloud deployment with wide adoption of management automation for a much larger number of virtual machines compared to the physical ones used by DBS’s previous infrastructure… DBS now has 99 percent virtualization”, it claimed.
*Full table with the new series’ specs and pricing at bottom.
AMD EPYC 4: Specs in brief
The new EPYC 4 series delivers up to 2.8X more performance and uses up to 54% less power, AMD said.
The AMD EPYC 4 series features support for DDR5 memory and PCIe Gen 5 – critical for AI and ML applications.
The fourth generation AMD EPYC processors also support CXL 1.1+ for memory expansion.
(CXL 1.1+ is an interconnect for high-bandwidth, low-latency connectivity between a host processor and devices such as accelerators, memory buffers, and smart I/O devices. Companies like MemVerge and Astera Labs this week also pushed out new products to support EPYC 4-powered servers, with software that enables memory to be dynamically pooled, tiered, and shared – placing the hottest data in the fastest tier.)
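The hot/cold tiering idea mentioned above can be illustrated with a minimal sketch: track access counts per page and keep the most-accessed pages in the fast local DRAM tier, spilling the rest to a slower CXL-attached tier. This is purely illustrative and is not the API of MemVerge, Astera Labs, or any other vendor’s pooling software.

```python
# Illustrative hot/cold memory tiering: hottest pages go to the fast
# (local DRAM) tier, colder pages to the slower (CXL-attached) tier.
# Hypothetical sketch only; not any vendor's actual API.
from collections import Counter


class TieredPlacer:
    def __init__(self, fast_capacity_pages):
        self.fast_capacity = fast_capacity_pages
        self.accesses = Counter()  # page id -> access count

    def record_access(self, page):
        self.accesses[page] += 1

    def placement(self):
        """Return (fast_tier, slow_tier) sets, hottest pages first."""
        ranked = [p for p, _ in self.accesses.most_common()]
        fast = set(ranked[:self.fast_capacity])
        slow = set(ranked[self.fast_capacity:])
        return fast, slow


placer = TieredPlacer(fast_capacity_pages=2)
for page in ["a", "a", "a", "b", "b", "c"]:
    placer.record_access(page)
fast, slow = placer.placement()
# "a" and "b" are accessed most often, so they land in the fast tier
```

Real tiering software works on hardware access telemetry and migrates pages dynamically, but the ranking-by-heat principle is the same.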
“I’ll take you to the candy shop” — AMD EPYC 4 servers galore land
Dell was among those announcing new AMD EPYC 4 servers. The next generation of Dell PowerEdge servers is available in one- and two-socket configurations.
The OEM claimed tests showed that customers can expect up to a 121% performance improvement, as well as up to 33% more front drive count for 2U servers and up to 60% higher front drive count for 1U servers.
They’re available in “limited configurations” this November, with “planned full global availability in February 2023.”
HPE meanwhile touted a number one ranking in 28 benchmarks for its new HPE ProLiant Gen11 servers.
It boasted “56% better performance on decision support systems on Microsoft SQL Server 2022 Enterprise Edition… 60% faster transaction processing speed on SAP across industries, such as distribution, financial services, manufacturing, and retail”.
These new servers will be available through a pay-as-you-go consumption model with HPE GreenLake.
It was not immediately clear how badly current supply chain issues will impact availability of this flurry of new hardware goodies from all of the OEMs above, and others. Potential buyers are best off asking a trusted partner rather than a journalist on this one we suspect.
For the chip geeks: Under the hood of EPYC 4
AMD EPYC 4 series processors offer a unified 32MB of L3 cache per Core Complex Die (CCD): with this generation, up to eight cores per CCD share that unified cache.
Each processor has up to 12 Universal Memory Controllers (UMC). Each UMC or memory channel can support up to two DIMMs per channel (DPC), for a maximum of 24 DIMMs per socket.
A single processor can support 6TB of DDR5 memory.
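The 6TB figure follows directly from the channel and DIMM counts above, assuming the largest 256GB DDR5 DIMMs:

```python
# Back-of-the-envelope check of the per-socket memory maximum:
# 12 channels x 2 DIMMs per channel x 256 GB DIMMs = 6 TB.
channels = 12
dimms_per_channel = 2
dimm_size_gb = 256  # largest DDR5 DIMM assumed here

max_dimms = channels * dimms_per_channel            # 24 DIMMs per socket
max_capacity_tb = max_dimms * dimm_size_gb / 1024   # 6.0 TB
```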
AMD notes that “while 12 memory channels are most common and generally provide the best performance, the I/O Die (IOD) also has the flexibility to support 4, 6, or even 8 memory channel configurations.”
The Core Complex Dies (CCDs) connect to memory, I/O, and each other through the I/O Die (IOD). Each CCD connects to the IOD via a dedicated high-speed, or Global Memory Interconnect (GMI) link.
The IOD also contains the memory channels, PCIe Gen5 lanes, and Infinity Fabric links. All dies, or chiplets, interconnect with each other via AMD’s Infinity Fabric technology. The fabric clock (FCLK) can now run at up to 2400 MHz and can thus be coupled 1:1 with DDR5-4800 memory DIMMs, whose memory clock (MEMCLK) also runs at 2400 MHz.
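The coupling works out because DDR5 transfers data on both clock edges, so a DDR5-4800 DIMM (4800 MT/s) runs its memory clock at half the transfer rate:

```python
# DDR5 is double data rate: a DDR5-4800 DIMM does 4800 MT/s on a
# 2400 MHz memory clock (MEMCLK). Running the fabric clock (FCLK)
# at the same 2400 MHz gives 1:1 coupled operation.
transfer_rate_mt_s = 4800
memclk_mhz = transfer_rate_mt_s // 2  # 2400 MHz
fclk_mhz = 2400
coupled = fclk_mhz == memclk_mhz      # 1:1 fabric/memory ratio
```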
Dual-socket systems typically have 64 PCIe Gen5 lanes per socket, which maintains the same total of 128 lanes per system as the single-socket platform. Some two-socket systems may be configured with a total of 160 PCIe lanes; this is generally reserved for systems focused on I/O-intensive workloads.
All memory and I/O connect to the single IOD, but can be abstracted into logical quadrants, with each quadrant having 3 memory channels and 32 I/O lanes.
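The quadrant split is simply the socket’s resources divided four ways:

```python
# Dividing a socket's 12 memory channels and 128 PCIe Gen5 lanes
# across four logical quadrants gives 3 channels and 32 lanes each.
memory_channels = 12
pcie_lanes = 128
quadrants = 4

channels_per_quadrant = memory_channels // quadrants  # 3
lanes_per_quadrant = pcie_lanes // quadrants          # 32
```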
There’s a full architecture overview here.
Details for CSPs from AMD are here.
| Model | Cores | Default TDP | cTDP | Base (GHz) | Boost (GHz) | 4th Gen EPYC™ 1kU Pricing (USD) |
|---|---|---|---|---|---|---|