Nvidia and OpenAI announced a strategic partnership under which Nvidia intends to invest up to $100 billion as OpenAI builds out at least 10 gigawatts of Nvidia-powered AI datacenters over the next several years. The first 1 GW of systems, based on Nvidia’s upcoming “Vera Rubin” platform, is slated to come online in the second half of 2026. The capital will be deployed in tranches as each gigawatt is built, and it accompanies OpenAI’s large-scale purchases of Nvidia AI systems.
Executives framed the deal as the hardware backbone for OpenAI’s next wave of models and products. Nvidia CEO Jensen Huang called it the “next leap forward,” while OpenAI said the collaboration will “empower people and businesses at scale.” The companies also highlighted ongoing work with a wider partner network, including Microsoft, Oracle, SoftBank and the Stargate project, to assemble what they describe as the world’s most advanced AI infrastructure.
The tie-up lands amid intensifying geopolitical headwinds. China has reportedly told major tech firms to halt purchases of Nvidia AI chips, and Chinese authorities have cited anti-monopoly concerns, developments that could constrain Nvidia’s China revenue even as global AI demand soars.
For OpenAI, scale is the draw: the company says ChatGPT now reaches roughly 700 million weekly active users, underscoring the compute needs behind training and serving frontier models.
Why it matters: If completed as outlined, this would be one of the largest private infrastructure programs in tech, further entrenching Nvidia’s platform at the center of AI and giving OpenAI priority access to scarce cutting-edge compute. Expect regulatory scrutiny (competition, national security, energy use) and industry ripple effects as rivals race to lock in long-term compute supply.