Dell has a $4.5 billion AI server backlog - smashes Q3 record

"GPUs devour data. I mean, you have to feed the beast..."

Enterprise as well as cloud customers are snapping up Dell’s “AI servers” (ones packing GPU firepower rather than CPU) – with Dell reporting an AI server backlog of $4.5 billion and over 2,000 enterprise customers. 

Many are keen to get their hands on the company’s NVIDIA Blackwell servers: “We saw in Q3 a shift and a pretty rapid shift of the orders moving towards our Blackwell design,” said Dell COO Jeff Clarke.

(The OEM has promised a new NVIDIA GB200 Grace Blackwell NVL4 superchip-powered Dell PowerEdge XE server, designed for the Dell IR7000 rack, supporting up to 144 GPUs in a 50 OU standard rack.)

Clarke added in an earnings call Q&A: “Is our business still weighted towards the Tier 2 CSPs in building out those digital native platforms?

“Without question. But… enterprise continues to be a larger portion of the opportunity for us. It's a larger portion of the pipeline, and we're only in the very, very early innings of enterprises figuring out how to deploy AI."

See also: $300 million cloud bill triggered a rethink - and a shopping spree on modular hardware

“They understand that it's highly disruptive. It drives higher levels of innovation, higher levels of productivity. They're in their experimentation, some of advanced proof-of-concept. And as they migrate through that, the opportunity is immense,” the Dell President and COO added. 

Dell reported Q3 earnings late Thursday, with revenue up 10% to $24.4 billion. Q3 net income was up 11% to $1.5 billion. Servers and networking revenue was $7.4 billion, up 58% – a Q3 record for the company.

Dell AI servers

“AI server” is a catch-all term for Dell’s range of air-cooled and liquid-cooled PowerEdge servers – which ship with various GPUs and accelerators, up to 32 DDR5 memory DIMM slots, up to eight U.2 drives and up to 10 front-facing PCIe Gen 5 expansion slots for intensive inferencing workloads.

COO Jeff Clarke said on a call: “Beyond the AI servers, we like the profit pools that surround them, like power management and distribution, cooling solutions, network switches, network cables, optics, storage, deployment, maintenance, professional services and financial services.”

Customers are “focusing on consolidation and power efficiency by modernizing their data centers with more efficient and dense 16G servers, freeing up valuable floor space and power that will support their AI infrastructure,” noted CFO Yvonne McGill on the earnings call.

Clarke added: “The opportunity is beyond the node into full rack scale integration. It's the networking opportunity, the storage opportunity, mundane things like cooling and how you actually build very efficient cooling subsystems to take the energy density out, how do we do power distribution, power management, putting telemetry in, doing power management, all… are opportunities for us to expand our margins.”

“The AI opportunity for storage is immense simply because GPUs devour data. I mean, you have to feed the beast, and they're not very effective without a lot of information. Storage, it typically tends to be unstructured information. So we think scale out, file and object capabilities for training, tuning and inference are essential. We think parallel file systems for this kind of end-use transient data and these large training environments is essential. And remember, 80% of the data is on-prem. So we think AI is driving new needs in the storage architecture, which really drive to a three tier architecture. So the ability to scale, the ability to drive efficient deployment of storage, the ability to be flexible and high performance are all things required to meet these high performance modern AI workloads."
