
According to industry sources, Micron plans to supply both HBM3E and HBM4 in parallel starting in 2026, with the flexibility to adjust the product mix based on customer demand. This dual-track strategy is designed to support ongoing demand for fifth-generation HBM3E in AI accelerators, while preparing for the expected surge in sixth-generation HBM4 adoption.
Micron is working closely with multiple partners across the HBM customer ecosystem to drive strong revenue growth in 2026. The company now expects the global HBM market to expand from around USD 35 billion in 2025 to roughly USD 100 billion by 2028—two years earlier than previously forecast. Notably, Micron has already locked in both volume and pricing for its HBM3E and HBM4 products for 2026, with HBM4 shipments scheduled to begin in the second quarter of that year.
Addressing the current memory supply tightness, Micron acknowledged that shortages will be difficult to ease in the near term. Capacity expansion is being driven mainly by process node transitions, with the 1γ node set to become the primary engine of supply growth in 2026. The 1γ process has now reached mature yields and is expected to account for the majority of bit shipments in the second half of 2026.
At the same time, Micron is accelerating its global manufacturing footprint. Construction of its advanced memory manufacturing site in New York State began on January 16, 2024, with phased production expected between 2027 and 2030. In Japan, the Hiroshima facility has added cleanroom space to support advanced process nodes. The company's HBM advanced packaging plant in Singapore is scheduled to begin contributing output in 2027, while the assembly and test facility in India is targeting mass production in 2026.
On the competitive front, SK hynix and Samsung Electronics have already started delivering final HBM4 samples to NVIDIA, signaling the transition toward commercial deployment. While Micron may trail slightly in early HBM4 shipments, securing a position in AMD's MI400 HBM4 supply chain could provide meaningful momentum for its business.
In the LPDDR segment, Micron has successfully sampled its SOCAMM2 solution, which raises capacity to 192GB by combining the 1γ process with an improved thermal design. Samsung plans to begin mass production of its SOCAMM2 in early 2026, while SK hynix, constrained by capacity, is not expected to reach volume production until the second quarter of 2026. This timeline gives Micron an early advantage in the AI server LPDDR market.
As AI workloads reshape computing architectures, the memory industry is undergoing a fundamental shift in both demand and supply. Large language model training continues to push HBM bandwidth requirements higher, while AI servers are rapidly adopting DDR5 and LPDDR, turning memory into both a critical performance bottleneck and a must-have resource. In parallel, DDR5, LPDDR, CXL memory, and enterprise SSDs are forming a layered memory hierarchy across AI data centers.
To capitalize on these trends, Micron plans to raise its fiscal 2026 capital expenditure to USD 20 billion, focused on expanding HBM and 1γ production capacity and reinforcing its long-term competitiveness in the memory market.