NVIDIA reported record third-quarter fiscal 2026 revenue of $57 billion on November 19, marking a 22% sequential increase and 62% year-over-year growth. The surge was fuelled by sustained demand for accelerated computing and AI infrastructure. Data centre revenue reached $51.2 billion, rising 66% compared to the same period last year.
CEO Jensen Huang noted that demand for NVIDIA’s Blackwell architecture continues to surpass supply, stating that Blackwell sales are off the charts and cloud GPUs are sold out. He highlighted that compute demand is rapidly accelerating across both training and inference, with each expanding at an exponential pace. Huang described the current phase as the industry having “entered the virtuous cycle of AI.”
During the earnings call, CFO Colette Kress noted that NVIDIA’s record Q3 data centre revenue of $51.2 billion represented a 66% year-over-year increase, calling it “a significant feat at our scale.” She credited the surge to the rapid ramp-up of GB300 GPUs, strong demand for networking products, and the continued expansion of AI deployments across hyperscalers and model builders.
Kress added that “the clouds are sold out, and our GPU installed base, spanning new and previous generations including Blackwell, Hopper, and Ampere, is fully utilised.”
Addressing concerns about a potential AI bubble, Huang rejected the notion. “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” he said.
Blackwell’s GB300 contributed to “roughly two-thirds of total Blackwell revenue,” surpassing the GB200 as customers rapidly shifted to the newer platform.
Huang also underscored the architecture’s performance leap, noting that Blackwell Ultra delivers “5x faster time to train vs Hopper.” On DeepSeek R1 benchmarks, it provides “10x higher performance per watt and 10x lower cost per token compared to the H200.”
The next-generation Rubin platform is still on track for a 2026 ramp-up, with NVIDIA already having received the first silicon. “Our ecosystem will be ready for a fast Rubin ramp,” Kress noted.
Expanding AI Partnerships
NVIDIA revealed that it is now engaged in AI factory projects involving a total of five million GPUs, covering cloud providers, sovereign AI programs, enterprise deployments, and major supercomputing centres.
The company pointed to major large-scale deployments such as xAI’s Colossus 2 gigawatt-scale data centre, its expanded partnership with AWS, and a project with HUMAIN that aims to deploy up to 150,000 NVIDIA AI accelerators.
NVIDIA also announced a new strategic agreement with Anthropic, which is adopting NVIDIA’s architecture for the first time and has committed up to one gigawatt of compute for its upcoming systems.
Huang noted that NVIDIA’s infrastructure supports a wide range of workloads, saying, “We run every AI model: OpenAI, Anthropic, xAI, Gemini, science models, biology models, robotics models.”