AMD has officially confirmed that its latest AI accelerators, the MI350X and MI355X, will be powered by 12-high HBM3E memory chips from Samsung Electronics and Micron Technology. This marks a significant milestone for Samsung, which has previously faced certification delays for HBM with other major AI platforms.
The announcement was made at the Advancing AI 2025 conference held on June 12 at the San Jose Convention Center. While Samsung has supplied HBM to AMD in the past, this is the first formal acknowledgment of Samsung's HBM3E being integrated into AMD's AI hardware roadmap.
The MI350X and MI355X accelerators share the same core architecture, differing primarily in thermal design, which determines their maximum operating speeds. Both models feature a substantial 288GB of HBM3E memory, a 12.5% increase over the 256GB in the MI325X and a notable jump from the 192GB in the MI300X.
At the system level, platforms combining eight GPU units will utilize up to 2.3TB of HBM3E memory. AMD also unveiled a strategy targeting high-density server racks that integrate up to 128 GPUs, requiring substantial volumes of HBM3E to meet computational demands—presenting an opportunity for large-scale memory deployment.
Looking ahead, HBM4 is expected to enter full-scale production next year. AMD's next-generation MI400 series, slated for release in 2026, will include 432GB of HBM4 per GPU. The upcoming Helios rack, configured with 72 MI400 GPUs, will offer a staggering 31TB of HBM4 memory, delivering 10× the AI computing performance of current-generation MI355X systems.
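The system-level totals quoted above follow directly from the per-GPU capacities. A back-of-envelope check (using decimal terabytes, 1 TB = 1000 GB, which is how the article's figures round out) confirms the 2.3TB and 31TB numbers:

```python
def system_memory_tb(gpus: int, gb_per_gpu: int) -> float:
    """Total memory in decimal TB (1 TB = 1000 GB) for a multi-GPU system."""
    return gpus * gb_per_gpu / 1000

# Figures from the article: 288GB HBM3E per MI350X/MI355X, 432GB HBM4 per MI400.
mi350_platform = system_memory_tb(8, 288)   # 8-GPU MI350-series platform
helios_rack = system_memory_tb(72, 432)     # 72-GPU MI400 Helios rack

print(f"8x MI350X/MI355X: {mi350_platform:.2f} TB HBM3E")  # 2.30 TB
print(f"72x MI400 Helios: {helios_rack:.2f} TB HBM4")      # 31.10 TB
```

The 8×288GB platform comes to 2.3TB, and the 72×432GB Helios rack to roughly 31TB, matching the figures AMD cited.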
According to AMD CEO Lisa Su, the MI400-powered Helios system will match the computing power of NVIDIA's Vera Rubin rack, while offering 1.5× the memory capacity and bandwidth—a compelling advantage in AI infrastructure scalability.
With the JEDEC HBM4 standard now finalized, production is ramping up. Samsung Electronics and SK hynix are preparing to begin mass production by the end of 2025. Notably, Samsung aims to regain its leadership position in the high-performance memory market with a 6th-generation 10nm-class (1c) process, in contrast to the 5th-generation (1b) nodes used by SK hynix and Micron.
According to market research firm Omdia, SK hynix led the global DRAM market in Q1 2025 with a 36.9% share, followed by Samsung at 34.4%, and Micron at 25%. Samsung's upcoming HBM4 strategy is seen as pivotal to reshaping the competitive landscape in next-gen AI memory technologies.