OpenAI is keeping its faith in Nvidia’s graphics processing units, choosing not to deploy Google’s in-house AI chips at scale despite recent speculation about a potential pivot. As confirmed by an OpenAI spokesperson to Reuters, the company is conducting early tests with Google’s Tensor Processing Units (TPUs) but has “no plans to deploy them at scale” at this time. This clarification comes after reports suggested OpenAI might look to Google’s silicon to help meet the surging demand for AI compute power.
The context behind this decision is telling. While it’s standard practice for AI labs to experiment with a variety of chips, moving an entire production workload to a new hardware platform is a massive undertaking. It requires overhauling system architecture and software support, a process that can’t be rushed. For now, OpenAI continues to rely heavily on Nvidia GPUs, with AMD chips also in the mix to support its expanding AI infrastructure.
Interestingly, OpenAI has inked a deal with Google Cloud to access additional computing capacity—a move that surprised many given the fierce rivalry between the two in the generative AI race. However, the bulk of OpenAI’s compute still comes from GPU servers provided by CoreWeave, a specialized cloud provider.
Google, for its part, has been aggressively pushing its TPUs into the broader market. Historically reserved for internal use, these chips have recently attracted high-profile customers, including Apple, Anthropic, and Safe Superintelligence, the latter two being competitors to OpenAI founded by former OpenAI leaders.
OpenAI’s current approach is pragmatic: test everything, but don’t commit until the technology and economics make sense. In parallel, the company is developing its custom AI processor, with a key “tape-out” milestone—the point when a chip design is finalized and sent to manufacturing—expected later this year. Industry reports suggest this in-house chip, being developed with Broadcom and TSMC, could launch as early as Q4 2025.
For now, Nvidia remains the cornerstone of OpenAI’s hardware stack. But as the AI arms race intensifies, expect the landscape to keep shifting—one chip at a time.