
NVIDIA GB300 AI Servers to Begin Shipping in H2 2025

2025-07-02 11:17:56 | Mr. Ming

According to industry sources, NVIDIA's next-generation Blackwell GB300 AI servers are expected to begin shipping in the second half of 2025, with their shipment value at contract manufacturers projected to surpass that of Apple's upcoming iPhone. Analysts anticipate that the GB300 could become the most powerful AI server platform in the world this year.

Contract manufacturers in Taiwan are racing to secure production orders for the GB300, reflecting its strategic importance in the global AI supply chain. Key assembly partners reportedly include Foxconn, Quanta Computer, Wiwynn, and Inventec—all of which are preparing to commence shipments in the coming months.

Foxconn is said to have landed the largest share of GB300 orders, particularly for the highest-end configurations. These premium servers will be equipped with up to 72 Blackwell Ultra GPUs per rack, making them among the most advanced AI systems available globally.

Other major manufacturing partners such as Quanta, Wistron, Wiwynn, and Inventec are also gearing up for large-scale deliveries. Quanta began shipping the GB200 series in Q2 2025 and is currently testing the GB300 with its clients; the company is expected to start GB300 shipments by September, alongside Wiwynn and Inventec.

Industry insiders suggest that the GB300's critical role in supporting global AI infrastructure—particularly for generative AI applications—is driving suppliers to prioritize its production over other consumer electronics, including the next iPhone.

Amid tight GPU supply and soaring prices, the market is also seeing rising interest in alternative AI chips. These include custom-developed accelerators from hyperscalers like Amazon and Alphabet, as well as designs from leading chip developers such as Broadcom and Marvell.

Additionally, reports indicate that OpenAI has begun shifting part of its computing demand to Google's Tensor Processing Units (TPUs), largely due to the high cost associated with NVIDIA's AI platforms.
