OpenAI has officially added Google Cloud to its list of infrastructure providers, marking its shift from a single-vendor arrangement to a multi-cloud strategy. The move responds directly to soaring demand for ChatGPT and API services, with ChatGPT serving half a billion weekly active users.
Strategic Shift Toward Multiple Cloud Providers
OpenAI now counts Google Cloud alongside Microsoft Azure, Oracle, and CoreWeave in its infrastructure mix. This diversification enables the company to tap into various hardware pools and negotiate more favorable terms. It also spreads risk, ensuring that no single outage can disrupt service for millions of users.
Under the new arrangement, Microsoft holds a “right of first refusal”: OpenAI must offer new capacity deals to Microsoft first, but can turn to other providers if Microsoft declines, giving it far more flexibility than the prior exclusivity.
Leadership and the Stargate Project
OpenAI CEO Sam Altman has been vocal about compute constraints, saying as early as March 2025 that GPUs were “melting” under demand. To address that, the company oversees a massive deployment known as the Stargate Project.
The Stargate Project is a $500 billion AI infrastructure initiative designed to develop custom systems necessary for next-generation models. While OpenAI manages operations, SoftBank funds a significant share of the build-out. This partnership strikes a balance between technical control and deep-pocketed support.
After months of negotiations, the Google Cloud deal was finalized in May 2025. That approval lifted the barriers of the prior exclusivity and set the stage for immediate deployment.
Infrastructure Demands and Performance Gains
The push to diversify comes as ChatGPT serves 500 million weekly active users, stretching the capacity limits of any single cloud. By adding Google’s specialized accelerators, OpenAI boosts responsiveness and reliability.
In particular, Google’s Cloud TPU lineup brings notable gains. Its flagship model, the TPU v5p, offers a 2x improvement in FLOPS and a 3x improvement in high-bandwidth memory over its predecessor. These upgrades cut inference latency and allow larger models to be trained more quickly.
Additionally, Google introduced Ironwood, its seventh-generation TPU, designed for inference. Ironwood’s architecture optimizes for real-time AI services, aligning perfectly with ChatGPT’s requirement for low-latency responses.
Talent and Competitive Dynamics
The AI landscape remains fiercely competitive. Google’s DeepMind and OpenAI raced to release the best large language models, and Microsoft publicly named OpenAI as a competitor in 2024. Even as partners, they vie for top talent and market share. Google hopes that powering ChatGPT will showcase its cloud prowess to other AI leaders.
CoreWeave also plays a key role by supplying GPU capacity that Google resells to OpenAI. This layered supply chain highlights how specialized providers collaborate in the AI era. It also shows that partnerships can blur lines—competitors on one front can be infrastructure allies on another.
Capital Expenditure and Financial Implications
In March 2025, OpenAI signed an $11.9 billion deal with CoreWeave. Two months later, in May 2025, it added a $4 billion contract with CoreWeave. Those commitments underpin the deep compute backbone now spanning multiple clouds.
Financially, OpenAI’s scale is evident in its revenue. By June 2025, the firm reached a run rate of $10 billion, up sharply from $5.5 billion six months earlier. Meanwhile, Google Cloud also hit a high-water mark, generating $43 billion in revenue in 2024, about 12% of Alphabet’s total. The OpenAI deal thus bolsters Google’s bottom line as it challenges Amazon and Microsoft.
Geographic Reach and Technical Roadmap
OpenAI will route ChatGPT traffic through Google’s facilities in key markets, including the US, Japan, the Netherlands, Norway, and the UK. This global span cuts latency and improves local compliance. It also fits with the company’s broader plan to roll out enterprise and government services in regions with specific data rules.
The multi-cloud setup bolsters resilience, letting OpenAI switch traffic away from a provider that faces an outage or maintenance window. At the same time, researchers continue to work on custom AI chips. Those in-house designs aim to trim costs and further reduce dependence on public clouds.
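The failover idea described above can be sketched in a few lines. This is purely an illustrative assumption, not OpenAI’s actual routing system; the provider names and the simple health flag are stand-ins for real health checks and traffic managers.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    healthy: bool = True  # stand-in for a real health check

def pick_provider(providers, preferred):
    """Return the preferred provider if healthy, else the first healthy fallback."""
    by_name = {p.name: p for p in providers}
    primary = by_name.get(preferred)
    if primary and primary.healthy:
        return primary.name
    for p in providers:
        if p.healthy:
            return p.name
    raise RuntimeError("no healthy provider available")

# Hypothetical provider pool mirroring the article's list
providers = [
    Provider("azure"),
    Provider("google-cloud"),
    Provider("coreweave"),
    Provider("oracle"),
]

print(pick_provider(providers, "azure"))   # prints "azure" while healthy
providers[0].healthy = False               # simulate an Azure outage
print(pick_provider(providers, "azure"))   # prints "google-cloud" (first healthy fallback)
```

In practice this decision sits behind load balancers and DNS-level traffic steering, but the core logic is the same: prefer the primary, fall back in priority order when a health check fails.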
Key Takeaways:
- OpenAI expanded beyond Microsoft by adding Google Cloud to its infrastructure providers.
- High user demand (500 million weekly active users) drove the shift to a multi-cloud approach.
- Major deals include $11.9 billion and $4 billion contracts with CoreWeave.
- OpenAI’s annualized revenue run rate hit $10 billion by June 2025.
- Google’s TPUs—v5p and Ironwood—offer hefty performance gains for AI workloads.
- Services will run across five regions to boost reliability and compliance.