On December 29, credible sources reported that NVIDIA has placed substantial orders for HBM3E memory with major suppliers SK Hynix and Micron, positioning itself for its next generation of AI products. The scale of these purchases, however, has made it harder for other manufacturers to secure the limited supply of HBM memory.
Reports indicate that NVIDIA has committed prepayments of roughly 700 billion to 1 trillion Korean won (approximately 3.85 to 5.5 billion Chinese yuan) for HBM3E memory, a substantial financial commitment. Given strong industry demand, the final procurement amount is expected to exceed the initial estimate: the upfront payment alone comes to as much as approximately 775 million USD, and the total purchase could surpass 1 billion USD.
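As a quick sanity check on how these figures relate, the reported won amounts can be converted at late-2023 ballpark exchange rates. The rates below (about 1,300 won per US dollar and 180 won per yuan) are assumptions for illustration, not figures from the report:

```python
# Assumed late-2023 exchange rates (illustrative, not from the report)
KRW_PER_USD = 1300
KRW_PER_CNY = 180

def krw_to_usd(krw: float) -> float:
    return krw / KRW_PER_USD

def krw_to_cny(krw: float) -> float:
    return krw / KRW_PER_CNY

# Reported prepayment range: 700 billion to 1 trillion won
low, high = 700e9, 1e12

print(f"USD: {krw_to_usd(low)/1e6:.0f}M - {krw_to_usd(high)/1e6:.0f}M")
print(f"CNY: {krw_to_cny(low)/1e9:.2f}B - {krw_to_cny(high)/1e9:.2f}B")
```

At these rates the top of the range works out to roughly 770 million USD, which lines up with the quoted upfront payment of approximately 775 million USD.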
Industry insiders indicate that SK Hynix and Micron have each received prepayments in the 700 billion to 1 trillion won range from NVIDIA, earmarked for the supply of cutting-edge High Bandwidth Memory (HBM) products.
Notably, in Micron's recent earnings report, CEO Sanjay Mehrotra disclosed that strong demand for high-bandwidth memory (HBM), driven by the surge in generative AI applications, has left Micron's HBM capacity fully sold out for all of 2024.
NVIDIA's sizable prepayment has prompted memory chip manufacturers, including SK Hynix and Micron, to expand their HBM production capacity. SK Hynix, in particular, plans to channel the prepayment from NVIDIA into expanding its Through-Silicon Via (TSV) facilities, having brought a new TSV line online in the third quarter of last year. Micron is likewise expected to increase its investment in TSV facilities.
Separately, Samsung Electronics has reportedly completed compatibility testing of its HBM3 and HBM3E products with NVIDIA and signed significant supply contracts. As AI chip development accelerates, companies such as Intel and AMD are also stepping up their AI chip efforts, further fueling sustained demand for HBM.
At present, NVIDIA plans to debut HBM3E in its upcoming Blackwell AI GPUs, with rumors pointing to a market introduction in the second quarter of 2024. Promising a marked improvement in performance per watt through a compact chip design, Blackwell is poised to raise the industry benchmark. In addition, NVIDIA's Hopper H200 GPU is slated to feature the world's fastest HBM3E memory, underscoring how critical this memory is to NVIDIA's foothold in the AI and HPC markets.
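To put the "fastest HBM3E" claim in perspective, per-stack bandwidth can be estimated from the interface width and per-pin data rate. The 1024-bit interface is standard for HBM; the 9.6 Gb/s per-pin rate below is an assumed figure for fast HBM3E parts, used here only for illustration:

```python
# Back-of-the-envelope HBM3E bandwidth estimate (illustrative assumptions)
INTERFACE_BITS = 1024   # bits per HBM stack interface (HBM standard width)
PIN_RATE_GBPS = 9.6     # assumed per-pin data rate for fast HBM3E, in Gb/s

# Bandwidth per stack in GB/s: width (bits) * rate (Gb/s) / 8 bits-per-byte
per_stack_gbs = INTERFACE_BITS * PIN_RATE_GBPS / 8

print(f"~{per_stack_gbs:.0f} GB/s per stack")
```

Under these assumptions a single stack delivers on the order of 1.2 TB/s, which is why multi-stack HBM3E configurations push aggregate GPU memory bandwidth into the multi-terabyte-per-second range.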
NVIDIA, with optimistic revenue expectations for its data center business, is well positioned ahead of its rivals. Aiming to generate up to 300 billion USD in AI-driven sales by 2027, the company's primary objective is to secure a steady supply of AI GPUs for its customers. Its substantial HBM3E purchases are expected to give the HBM industry considerable momentum, particularly in facility expansion. With order backlogs already a challenge for SK Hynix and Micron, proactive capacity expansion will be essential to meet the growing demand.