NVIDIA reported $26 billion in revenue for the quarter and $14.8 billion in net income, up 262% and 628% year-on-year respectively, as customers from cloud service providers through to automotive companies scrambled to build out their AI infrastructure on the company’s platforms.
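For readers who want to sanity-check those growth rates, a quick sketch (assuming the percentages are simple year-on-year increases) backs out the implied year-ago baselines:

```python
# Back out the implied year-ago figures from the reported growth rates.
# Assumes 262% and 628% are simple year-on-year increases,
# i.e. current = prior * (1 + growth); reported GAAP baselines may
# differ slightly due to rounding in the headline numbers.

def implied_prior(current_usd_bn: float, growth_pct: float) -> float:
    """Return the implied year-ago value given the current value and % growth."""
    return current_usd_bn / (1 + growth_pct / 100)

revenue_prior = implied_prior(26.0, 262)      # ~ $7.2 billion
net_income_prior = implied_prior(14.8, 628)   # ~ $2.0 billion

print(f"Implied year-ago revenue:    ${revenue_prior:.1f}B")
print(f"Implied year-ago net income: ${net_income_prior:.1f}B")
```

That puts the implied year-ago quarter at roughly $7.2 billion in revenue and $2.0 billion in net income, indicating the scale of NVIDIA’s business before the generative AI build-out accelerated.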
The blockbuster earnings came as CEO Jensen Huang said demand for its new H200 and Blackwell platforms would outstrip supply into next year, and shrugged off suggestions that rapid generation-on-generation performance gains might leave customers worried about hugely expensive AI infrastructure assets depreciating just as quickly.
“If you're 5% into the build-out versus if you're 95% into the build-out, you're going to feel very differently,” admitted Huang. “We want our customers to see our road map for as far as they like, but they're early in their build-out anyways… time is really, really valuable to them.
“As in all technology races, the race is so important… Time to train matters a great deal… getting started three months earlier is everything. [That’s] why we're standing up Hopper systems like mad right now.”
Inference drove about 40% of NVIDIA’s record $22.6 billion data center revenue (roughly $9 billion), said CFO Colette Kress, who attributed growth to “strong and accelerating demand for generative AI training and inference on the Hopper platform. [Beyond cloud] generative AI has expanded to consumer internet companies, and enterprise, sovereign AI, automotive and healthcare customers, creating multiple multibillion-dollar vertical markets.”
Speaking on the May 22 earnings call for fiscal Q1 2025, and taking a question about whether future low product utilisation might slow growth, Huang said he had no such concerns at all.
As well as strong cloud service provider growth, Huang pointed to “a long line of generative AI startups, some 15,000-20,000 startups in all different fields, from multimedia to all kinds of design tool applications, productivity applications, digital biology, the moving of the AV industry to video so that they can train end-to-end models to expand the operating domain of self-driving cars. The list is just quite extraordinary. We're racing actually.
“Customers are putting a lot of pressure on us to deliver the systems… I haven't even mentioned all of the Sovereign AIs…” he added.
NVIDIA’s new Blackwell platform is now “in full production and forms the foundation for trillion-parameter-scale generative AI,” added Huang, saying that “we have 100 different computer system configurations that are coming this year for Blackwell… so you're going to see liquid-cooled versions, air-cooled versions, x86 versions, Grace versions…”
“Video Transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion-dollar revenue opportunity across on-prem and cloud consumption,” the company said. (Automotive revenue was $329 million for the quarter, up 17% from the previous quarter.)
The company’s new H200 units are currently in production, with shipments on track for Q2. They “nearly double” the inference performance of the H100. “For example, using Llama 3 with 70 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time.
“That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over four years,” the company said, encouraging customers to see such investments as potentially highly revenue-generative.
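NVIDIA did not disclose the server price or per-token price behind that claim, but the arithmetic is straightforward to reconstruct. Below is a minimal sketch assuming a hypothetical HGX H200 server price of $300,000 and a hypothetical Llama 3 API price of $0.70 per million tokens; only the throughput and concurrency figures come from NVIDIA’s statement.

```python
# A sketch of the token economics behind NVIDIA's "$1 in, $7 out" claim.
# Only the throughput (24,000 tokens/s) and concurrency (2,400 users) come
# from NVIDIA's statement; the server price and per-token price below are
# hypothetical assumptions chosen for illustration.

SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_second = 24_000        # per HGX H200 server (NVIDIA's figure)
concurrent_users = 2_400          # NVIDIA's figure
server_price_usd = 300_000        # ASSUMPTION: illustrative server price
usd_per_million_tokens = 0.70     # ASSUMPTION: illustrative Llama 3 API price

# Per-user throughput implied by NVIDIA's own numbers.
tokens_per_user = tokens_per_second / concurrent_users    # 10 tokens/s/user

# Total tokens served over four years at full utilisation.
tokens_4yr = tokens_per_second * SECONDS_PER_YEAR * 4     # ~3.0 trillion

# Revenue over four years and the resulting revenue-per-dollar ratio.
revenue_4yr = tokens_4yr / 1e6 * usd_per_million_tokens
revenue_per_dollar = revenue_4yr / server_price_usd

print(f"Throughput per user:  {tokens_per_user:.0f} tokens/s")
print(f"Tokens over 4 years:  {tokens_4yr:.2e}")
print(f"Revenue over 4 years: ${revenue_4yr:,.0f}")
print(f"Revenue per $1 spent: ${revenue_per_dollar:.1f}")
```

At those assumed numbers the ratio lands near the $7 NVIDIA quotes, but the result is highly sensitive to utilisation, server pricing and token pricing, none of which the company disclosed.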