According to Bloomberg, OpenAI's ambitious "Stargate" artificial intelligence (AI) project is set to build its first AI data center in Abilene, Texas, within the coming months. The facility is expected to deploy tens of thousands of NVIDIA's GB200 chips by the end of next year.
Sources reveal that the Abilene data center is anticipated to install up to 64,000 GB200 chips by the end of 2026, with the deployment happening in phases. The first phase, set for this summer, will see 16,000 chips installed.
This scale of deployment represents a significant investment, with costs reaching into the billions of dollars. While NVIDIA has not officially disclosed the pricing for the GB200 chip, CEO Jensen Huang mentioned last year that lower-performance B200 chips are priced between $30,000 and $40,000 per unit.
OpenAI has confirmed that it is collaborating with Oracle to design and deliver the Abilene data center, with Oracle responsible for procuring and operating the supercomputers being built at the site. Oracle has not commented on the matter, and NVIDIA has declined to provide further details.
In addition to Texas, OpenAI and its partner SoftBank are exploring potential sites for additional Stargate data centers in Pennsylvania, Wisconsin, and Oregon. Salt Lake City is also under consideration, as Oracle already operates some cloud computing facilities in the area.
The Stargate project, a joint venture between OpenAI, SoftBank, Oracle, and MGX, was announced in late January. The initiative is set to invest $500 billion over the next four years, with an initial investment of $100 billion. SoftBank and OpenAI have committed $19 billion each, securing a 40% stake apiece in the joint venture, while Oracle and MGX will contribute $7 billion each.
Globally, tech companies are racing to develop AI-focused data centers, and the Stargate project joins a growing list of initiatives. Recently, Elon Musk's xAI struck a $5 billion deal with Dell to supply AI servers for its Memphis data center. Meta also plans to have compute equivalent to 600,000 H100 GPUs by the end of 2024. Meanwhile, CoreWeave, an AI-focused cloud services provider, recently disclosed that it operates over 250,000 NVIDIA AI GPUs across 32 data centers, according to its IPO filings earlier this month.