Morgan Stanley's recent AI supply chain analysis reveals a shift in demand for NVIDIA's products, with orders transitioning from the H100 to the H200 and B100. Projections for 2024 put H100 sales at approximately 400,000 units.
The report points to a strong trajectory for NVIDIA's GPU module shipments, which are expected to more than double, surpassing 4 million units in 2024, up from 1.8 million in 2023. A new production line in Mexico is expected to significantly boost shipments of NVIDIA GPU carrier boards next year.
On the Chinese market, the report notes that monthly production capacity for NVIDIA's China-specific H20 is expected to reach 200,000 to 300,000 units in the first half of 2024. However, given the chip's reduced performance and concerns about potential further regulatory restrictions, Chinese buyers are actively exploring domestic alternatives rather than opting for the downgraded H20.
The NVIDIA H20, part of the Hopper architecture family, features an expanded 96 GB of HBM3 memory and 4.0 TB/s of GPU memory bandwidth. On the compute side, it delivers 296 TFLOPS of FP8 performance and 148 TFLOPS of FP16 performance, roughly 1/13th the computational power of the H200, currently regarded as NVIDIA's flagship AI chip.
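For context, the roughly 1/13 figure is consistent with a simple back-of-the-envelope comparison, assuming the H200's published FP8 Tensor Core throughput of about 3,958 TFLOPS (with sparsity); the report itself does not state the H200 figure, so this is a sketch rather than the report's own calculation:

```latex
% Rough sanity check of the ~1/13 claim.
% Assumption: H200 FP8 Tensor Core throughput ~3,958 TFLOPS (with sparsity),
% per NVIDIA's published specifications; H20 FP8 = 296 TFLOPS as reported.
\[
  \frac{\text{H200 FP8}}{\text{H20 FP8}}
  \approx \frac{3958\ \text{TFLOPS}}{296\ \text{TFLOPS}}
  \approx 13.4
\]
```

The same ratio holds at FP16 if the H200's sparse FP16 figure (about 1,979 TFLOPS) is compared against the H20's 148 TFLOPS; whether the two chips' figures are quoted on an identical sparsity basis is not specified in the report.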