Microsoft CEO Satya Nadella said the company has full access to OpenAI’s system-level intellectual property, a strategic advantage that enables Microsoft to advance its own silicon roadmap while still relying heavily on NVIDIA’s GPUs to support large-scale AI workloads.
In the interview, Nadella explained that Microsoft receives every component of OpenAI's accelerator-related IP, with the sole exception of consumer hardware. When pressed on the extent of this access, he replied succinctly, "All of it."
OpenAI and Broadcom recently revealed a multi-year partnership to co-develop and deploy 10 gigawatts of OpenAI-designed AI accelerators and networking systems, signaling a significant leap in OpenAI's infrastructure expansion.
Nadella noted that Microsoft had previously shared its own IP with OpenAI while collaborating on supercomputer development, establishing a two-way exchange of technology. “We built it for them and they benefited from it and now as they innovate, even at the system level, we get access to all of it,” he said.
Nadella said this IP pipeline enables Microsoft to advance Maia at a measured, strategic pace, even as rivals like Google accelerate their custom chip programs. He emphasized that in-house silicon efforts only thrive when supported by strong internal model demand. “If you build your own vertical thing, you better have your own model and you have to generate your own demand for it or subsidise the demand for it,” he explained.
He pointed out that major cloud competitors operate under the same realities. “Even Google’s buying NVIDIA, and so is Amazon,” he said. “It makes sense because NVIDIA keeps innovating, it’s a versatile platform, all models run on it, and customer demand remains strong.”
He said Microsoft’s strategy draws on its long history of deploying successive generations of compute hardware. “We had a lot of Intel, then we introduced AMD, and then we introduced Cobalt. That’s how we scaled it,” he noted, emphasizing that Microsoft is already accustomed to running large, mixed-silicon environments.
Nadella said Microsoft will keep a tight integration between its MAI models and its silicon roadmap, ensuring each microarchitecture evolves in step with the company’s own workloads. At the same time, Microsoft plans to move swiftly using NVIDIA hardware to meet immediate performance needs.
Nadella said Microsoft will begin by deploying the systems OpenAI develops, and then extend those designs across its wider infrastructure.