On October 13, OpenAI and semiconductor design leader Broadcom announced a major partnership to jointly develop custom AI accelerators for large-scale data centers with a total computing capacity of 10 gigawatts (GW). Under this collaboration, OpenAI will design its own AI accelerators and systems, while Broadcom will support the development, networking, and deployment processes.
According to OpenAI, creating in-house AI chips allows the company to embed its extensive experience from developing frontier models directly into the hardware, unlocking greater performance and intelligence. These custom-designed racks will feature Broadcom's Ethernet and connectivity solutions and will be installed across OpenAI's facilities and partner data centers worldwide to meet the surging global demand for AI computing power.
Both companies have signed a long-term agreement outlining the joint development and supply of AI accelerators. Following the announcement, Broadcom's stock price jumped over 10% in pre-market trading. Although the financial details remain undisclosed, industry insiders suggest this collaboration may be tied to the rumored $10 billion cloud infrastructure order that Broadcom mentioned during its recent earnings call.
OpenAI executives estimate that deploying 1GW of AI computing capacity costs around $50 billion at current prices, meaning the 10GW project could represent a total investment of up to $500 billion. Servers typically account for over 60% of data center costs, and AI chips for more than 70% of AI server costs. This implies that the AI chips alone in this project could be worth over $200 billion.
However, by designing its own chips, partnering with Broadcom for back-end design, and relying on TSMC for manufacturing, OpenAI expects significant cost savings compared to sourcing chips directly from NVIDIA or AMD. This cost efficiency is one of the main drivers behind OpenAI's move toward self-developed AI hardware.
According to CNBC, the partnership between OpenAI and Broadcom has been in quiet development for about 18 months. Their custom systems are designed to integrate networking, storage, and computing resources optimized specifically for OpenAI's workloads. Reports also indicate that OpenAI's in-house AI chips are optimized for inference performance and will utilize Broadcom's Ethernet stack for high-speed connectivity. The companies plan to begin deploying these racks by late 2026.
OpenAI President Greg Brockman revealed that the company is even using its own AI models to accelerate chip design and improve development efficiency. This 10GW accelerator project is part of OpenAI's broader vision to expand its computing capabilities—CEO Sam Altman has hinted that the 10GW goal is just the beginning. Currently, OpenAI operates at around 2GW of computing power, which has been sufficient to scale ChatGPT, develop video generation model Sora, and conduct advanced AI research.
In recent months, OpenAI has also signed massive infrastructure agreements: a $300 billion cloud deal with Oracle, a $22 billion partnership with CoreWeave, a $100 billion chip procurement agreement with NVIDIA, and a 6GW AI chip deal with AMD. If all these collaborations proceed as planned, OpenAI's total computing capacity could surpass 30GW, setting a new benchmark for large-scale AI development.