NVIDIA GB300 to Feature Samsung HBM3E

2025-10-11 14:06:43 | Mr. Ming

According to South Korean media reports, NVIDIA CEO Jensen Huang recently sent a letter to Samsung Group Chairman Lee Jae-yong, officially announcing that NVIDIA's next-generation AI accelerator, the GB300, will use Samsung Electronics' 12-layer HBM3E (High Bandwidth Memory).

This marks Samsung's return to NVIDIA's core supply chain after roughly 19 months, breaking the effective duopoly that SK Hynix and Micron had held in HBM supply, and is considered a key milestone for Samsung's HBM business.

The global HBM market is rapidly transitioning to the sixth generation, HBM4: Micron and SK Hynix have already shipped samples, and some customers plan to begin mass production in 2025. Industry experts suggest that Samsung's 12-layer HBM3E for the GB300 is primarily a matter of technical validation and supply chain positioning. Actual supply volumes are expected to be limited, mainly serving small-scale initial production of the GB300 while helping Samsung further verify the stability of its 12-layer stacking process.

Sources indicate that NVIDIA and Samsung are in final negotiations over supply volume, pricing, and the delivery schedule for the 12-layer HBM3E. Meanwhile, Samsung is undergoing quality testing for HBM4 and is expected to complete it one to two months ahead of schedule. If progress continues smoothly, customer sampling could begin in the first half of 2025, putting Samsung in direct competition with Micron's and SK Hynix's HBM4 mass-production timelines.

The communication between Jensen Huang and Lee Jae-yong also touched on mutual cooperation, with the two parties discussing Samsung's intention to purchase 50,000 GPUs from NVIDIA. While the specific GPU models have not been disclosed, market analysts suggest these GPUs will likely serve two main purposes:

1. Samsung's internal AI transformation: In recent years, Samsung has accelerated its use of AI in semiconductor design, consumer electronics R&D, and smart manufacturing. These 50,000 GPUs would support its internal AI computing clusters, helping improve chip process yields and smartphone imaging algorithms.

2. The OpenAI collaboration data center: Earlier reports indicated that Samsung plans to collaborate with OpenAI to build a large AI data center in Pohang, South Korea, to train Korean-language models and develop AI solutions. These GPUs could serve as the core computing infrastructure for that facility.
