$NVDA: $200 Billion in Data Center Revenue Next Year?
By Tae Kim | Wednesday, July 10
Blackwell Ramp. Earlier this year, Andrej Karpathy, Tesla’s former director of AI and an OpenAI co-founder, posted on social media that “money can’t buy happiness.” But Nvidia’s AI chips? They're a different story.
By now, nearly everyone knows about Nvidia's H100 AI graphics processing unit. It has become the hottest commodity in technology. The GPU drove the company's data center sales in the latest quarter to $22.6 billion, up from $4.3 billion a year earlier. And that came despite supply shortages for most of the past year.
As groundbreaking as the H100 was, the latest Wall Street channel checks suggest Nvidia's updated GPUs, based on the company's new Blackwell architecture, are going to make AI scientists even happier and drive another considerable step up in Nvidia's sales.
In March, Nvidia CEO Jensen Huang painted a bright outlook for the new chip that has 208 billion transistors and cost the company $10 billion to develop. “Blackwell will be the most successful product launch in our history,” he said during a keynote address at the GTC developers conference in San Jose.
At the conference, he revealed an alphabet soup of Blackwell products, including the B100, B200, GB200 Superchip, HGX B200 server board, and GB200 NVL72 server rack system. That lineup is expected to be released on a staggered schedule this year.
But the star is the GB200 NVL72 AI server system. The NVL72 stitches together 36 GB200 Superchips, with each GB200 connecting two Blackwell GPUs to an Nvidia Grace CPU so that they can work more efficiently together. This means each NVL72 system has 72 Blackwell GPUs all linked together for an unprecedented density of computing power.
It also means the price for Nvidia's highest-end AI server system is rising. Last year, Nvidia priced a two-system package of 16 H100 GPUs at $400,000. According to KeyBanc, the GB200 NVL72 is likely to cost $3.8 million, a far cry from the roughly $32 Nvidia received per videogame graphics chip some 25 years ago.
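Rough per-GPU arithmetic, using only the figures cited above (the $400,000 H100 package price and KeyBanc's $3.8 million NVL72 estimate, neither of which is an official Nvidia list price), shows what the jump looks like on a per-chip basis:

```python
# Back-of-the-envelope per-GPU pricing from the article's figures.
# These are analyst/reported estimates, not official Nvidia list prices.

h100_package_price = 400_000   # two-system package of 16 H100 GPUs
h100_gpus = 16
nvl72_price = 3_800_000        # KeyBanc's estimated GB200 NVL72 price
nvl72_gpus = 72                # 36 GB200 Superchips x 2 Blackwell GPUs each

price_per_h100 = h100_package_price / h100_gpus      # $25,000 per H100
price_per_blackwell = nvl72_price / nvl72_gpus       # ~$52,800 per Blackwell GPU

print(f"H100: ${price_per_h100:,.0f} per GPU")
print(f"NVL72 Blackwell: ${price_per_blackwell:,.0f} per GPU")
```

On those assumptions, each Blackwell GPU in an NVL72 carries roughly double the implied price of an H100, with the rest of the system price reflecting the Grace CPUs, networking, and rack integration.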
The high price point doesn’t seem likely to hurt demand. KeyBanc analyst John Vinh estimates the NVL72 will account for 60% to 70% of Nvidia’s GB200 server rack volume versus less-expensive configurations. “Feedback this past quarter indicates demand for the GB200 next year is greater than what we had initially heard last quarter and could support over $200 billion in data center revenue in 2025,” he wrote.
Some $200 billion in revenue for Nvidia's data center segment next year would be an incredible feat, up from $48 billion last year and ahead of the $140 billion Wall Street currently expects. It would also be in the same neighborhood as the revenue Apple generates from the iPhone, which Wall Street pegs at $210 billion for calendar year 2025.
Customers are clamoring for the NVL72 because it’s far more efficient than prior models, saving companies money on overall AI model training and queries.
Nvidia says the GB200 NVL72 provides up to 30 times the performance versus the same number of H100 GPUs for large-language model inference—the process of generating answers from those AI models—while also reducing the cost of power consumption per unit of compute. The new system is also four times faster at training AI models than Nvidia’s prior version.
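A back-of-the-envelope comparison illustrates why the economics can work despite the sticker shock. Assuming Nvidia's claimed 30x inference speedup versus an equal number of H100s holds, and using the price figures cited earlier (both assumptions, not verified benchmarks):

```python
# Illustrative price-performance sketch. Assumes Nvidia's claimed 30x LLM
# inference speedup vs. the same number of H100 GPUs, and the article's
# price estimates; real-world gains depend heavily on workload.

nvl72_price = 3_800_000                      # KeyBanc estimate for one NVL72
h100_price_each = 400_000 / 16               # $25,000, from the H100 package
equivalent_h100_cost = 72 * h100_price_each  # 72 H100s: $1.8 million

price_ratio = nvl72_price / equivalent_h100_cost  # ~2.1x the hardware cost
inference_speedup = 30                            # Nvidia's claimed gain

# Inference throughput per dollar improves by roughly speedup / price ratio
cost_efficiency_gain = inference_speedup / price_ratio
print(f"~{cost_efficiency_gain:.0f}x more inference throughput per dollar")
```

Under those assumptions, a customer pays about twice as much for the hardware but gets on the order of 30 times the inference throughput, which is the arithmetic behind Huang's sales pitch.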
When Huang quips "The more you buy, the more you save" during presentations, audiences often laugh, assuming it's mainly a joke. But Blackwell is proving the catchphrase has an element of truth.
The question now is how long the high demand will last. Late last month, Dario Amodei, the CEO of AI start-up Anthropic, said AI models continued to scale at impressive rates. Amodei noted that the latest cutting-edge models now cost $100 million to build, with some $1 billion models under development. He expects models costing $10 billion to $100 billion to be created by 2027.
Nvidia AI data center revenue is likely to keep rising until the larger AI models show diminishing returns in capabilities. Until then, there’s no end in sight.
Write to Tae Kim at tae.kim@barrons.com or follow him on X at @firstadopter.