On April 17, 2025, the global semiconductor standards body JEDEC officially released the HBM4 (High Bandwidth Memory 4) specification, marking a major leap forward in memory technology. The next-generation standard introduces significant advances in bandwidth, channel architecture, and power efficiency, and is designed to meet the growing performance demands of generative AI, high-performance computing (HPC), advanced graphics processing, and data center servers.
HBM4 retains the vertically stacked DRAM architecture that has defined every HBM generation, while delivering substantial improvements over HBM3 in design flexibility, data rate, and energy efficiency. With a 2048-bit interface running at per-pin speeds of up to 8 Gb/s, a single HBM4 stack achieves a total bandwidth of up to 2 TB/s.
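The headline bandwidth figure follows directly from the interface width and per-pin rate. A quick sketch of the arithmetic (illustrative only; the constants are the spec figures quoted above):

```python
# Peak per-stack HBM4 bandwidth from the figures in the specification:
# a 2048-bit interface with each pin signaling at up to 8 Gb/s.
INTERFACE_WIDTH_BITS = 2048   # data pins toggling per transfer
PER_PIN_RATE_GBPS = 8         # maximum per-pin data rate, Gb/s

peak_gbps = INTERFACE_WIDTH_BITS * PER_PIN_RATE_GBPS   # 16,384 Gb/s
peak_gBps = peak_gbps // 8                             # 2,048 GB/s
peak_tBps = peak_gBps / 1000                           # ~2 TB/s

print(f"Peak bandwidth: {peak_gBps} GB/s (~{peak_tBps:.1f} TB/s)")
```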
One of the most notable upgrades is the expansion of independent channels per stack—from 16 in HBM3 to 32 in HBM4—each now containing two pseudo-channels. This architectural shift enhances memory parallelism and flexibility, enabling faster and more efficient data access.
HBM4 also sets new standards in power efficiency. It supports configurable VDDQ levels (0.7 V, 0.75 V, 0.8 V, or 0.9 V) and VDDC levels (1.0 V or 1.05 V), letting system designers trade speed against power consumption. Backward compatibility with HBM3 controllers is maintained, allowing seamless integration and support for systems that require mixed operation with both HBM3 and HBM4.
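Because the rails are restricted to discrete levels, a configuration check reduces to set membership. The helper below is purely hypothetical (it is not part of any real tool or API) and only encodes the voltage options listed above:

```python
# Hypothetical configuration check, not a real API: verifies that a chosen
# (VDDQ, VDDC) pair uses only the discrete levels HBM4 defines.
VALID_VDDQ = {0.7, 0.75, 0.8, 0.9}   # I/O rail options, volts
VALID_VDDC = {1.0, 1.05}             # core rail options, volts

def is_valid_rail_config(vddq: float, vddc: float) -> bool:
    """Return True only if both rails are at spec-defined levels."""
    return vddq in VALID_VDDQ and vddc in VALID_VDDC

print(is_valid_rail_config(0.75, 1.0))   # True
print(is_valid_rail_config(0.85, 1.0))   # False: 0.85 V VDDQ is not defined
```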
Another important enhancement is the implementation of Directed Refresh Management (DRFM), which helps mitigate row-hammer vulnerabilities while improving reliability, availability, and serviceability (RAS). This is especially critical for maintaining data integrity and system stability in enterprise-level deployments.
In terms of capacity, HBM4 supports stack configurations of 4, 8, 12, and 16 DRAM layers, with die densities of 24 Gb or 32 Gb. A 16-high stack using 32 Gb dies enables a single stack to deliver up to 64 GB of memory, addressing the growing need for high-capacity solutions across diverse workloads.
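The 64 GB figure is the product of the maximum stack height and die density. A short check of the arithmetic, using only the configurations listed above:

```python
# Per-stack capacity for each spec-defined (stack height, die density) pair.
STACK_HEIGHTS = (4, 8, 12, 16)      # DRAM dies per stack
DIE_DENSITIES_GBIT = (24, 32)       # per-die density, gigabits

for height in STACK_HEIGHTS:
    for density in DIE_DENSITIES_GBIT:
        capacity_gbyte = height * density // 8   # 8 bits per byte
        print(f"{height}-high x {density} Gb dies -> {capacity_gbyte} GB")

max_capacity = max(STACK_HEIGHTS) * max(DIE_DENSITIES_GBIT) // 8   # 64 GB
```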
Architecturally, HBM4 also separates the command and data buses, reducing latency and boosting concurrency, which is particularly beneficial for AI and HPC applications. Additional improvements in physical interface design and signal integrity support the faster data rates and higher channel efficiency.
The HBM4 standard was developed through close collaboration among key industry players, including Samsung, Micron, and SK Hynix. These companies have played a pivotal role in shaping the standard and are expected to begin showcasing HBM4-compatible products soon. Samsung has announced plans to begin mass production of HBM4 in 2025, aligning with the surging demand from AI chipmakers and hyperscale cloud service providers.