
Micron Begins Mass Production of HBM4 for NVIDIA

2026-03-17 10:40:05 | Mr. Ming

According to reports, Micron Technology has begun mass production of its 36GB 12-Hi HBM4 memory, developed specifically for the NVIDIA Vera Rubin GPU platform. At NVIDIA GTC 2026, the company also revealed that its PCIe 6.0 data-center SSD and a new SOCAMM2 memory module have entered mass production, making Micron the first company to deliver all three production-ready products designed for the Vera Rubin ecosystem.

The 36GB 12-Hi HBM4 stack delivers pin speeds above 11 Gb/s and bandwidth exceeding 2.8 TB/s. Based on Micron's internal power modeling, the new HBM4 provides 2.3× higher bandwidth and over 20% better power efficiency than the company's HBM3E in the same 36GB 12-Hi configuration. The improvement reflects the growing demand for faster and more efficient memory in large-scale AI training and inference platforms.
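The bandwidth figure can be checked with back-of-the-envelope arithmetic. The 2048-bit per-stack interface width below is an assumption based on the general direction of the HBM4 standard, not a figure stated in the article:

```python
# Rough check of the HBM4 per-stack bandwidth claim.
PIN_SPEED_GBPS = 11.0   # per-pin data rate in Gb/s (from the article)
BUS_WIDTH_BITS = 2048   # assumed interface width per HBM4 stack

bandwidth_gbps = PIN_SPEED_GBPS * BUS_WIDTH_BITS  # total gigabits per second
bandwidth_tbs = bandwidth_gbps / 8 / 1000         # convert to terabytes per second
print(f"{bandwidth_tbs:.3f} TB/s")                # prints "2.816 TB/s"
```

Under that assumption, 11 Gb/s per pin across 2048 pins works out to roughly 2.8 TB/s, consistent with the stated figure.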

Micron has also shipped 48GB 16-Hi HBM4 stacked memory samples to customers. By adding four additional memory layers compared with the 12-Hi design, the 16-Hi configuration increases the capacity of a single HBM device by about 33%, pointing toward higher-density memory setups that future AI accelerators are expected to adopt.
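The 16-Hi capacity claim follows directly from the per-layer density of the 12-Hi stack, assuming the same DRAM die is used in both configurations (a reasonable but unstated assumption):

```python
# Sanity-check of the 16-Hi capacity claim: same per-layer density,
# four more DRAM layers than the 12-Hi stack.
layer_gb = 36 / 12              # 3 GB per DRAM layer in the 12-Hi stack
cap_16hi = layer_gb * 16        # capacity of a 16-Hi stack
increase_pct = (cap_16hi - 36) / 36 * 100
print(f"{cap_16hi:.0f} GB, +{increase_pct:.0f}%")  # prints "48 GB, +33%"
```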

Earlier, Micron confirmed that its Micron 9650 PCIe 6.0 SSD had entered mass production, marking one of the first PCIe 6.0 SSDs to reach this stage. The drive supports sequential read speeds up to 28 GB/s and 5.5 million random read IOPS. Compared with PCIe 5.0 solutions, it delivers double the read performance while also doubling performance per watt. The SSD is designed for liquid-cooled data-center environments handling AI training, inference, and agent-based workloads, and it is optimized for the Nvidia BlueField4 STX reference architecture.

At the same time, Micron introduced a 192GB SOCAMM2 memory module built for the NVIDIA Vera Rubin NVL72 system and the standalone NVIDIA Vera CPU platform. The company's SOCAMM2 lineup spans capacities from 48GB to 256GB. When used with the Vera Rubin platform, each CPU can support up to 2TB of memory with bandwidth reaching 1.2 TB/s, providing the large memory capacity required for next-generation AI computing workloads.
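One plausible reading of the 2TB-per-CPU figure is eight modules of the largest 256GB capacity; the module count per CPU is an inference, not something the article states:

```python
# Illustrative check: how many of the largest SOCAMM2 modules would
# be needed to reach the stated 2 TB per CPU. The module count is an
# assumption for illustration only.
MODULE_GB = 256                      # top SOCAMM2 capacity in the lineup
TARGET_GB = 2 * 1024                 # 2 TB expressed in GB
modules = TARGET_GB // MODULE_GB
print(modules)                       # prints "8"
```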
