NVIDIA and AMD are ramping up their investments in the high-performance computing (HPC) market, leading to increased demand for TSMC's advanced CoWoS and SoIC packaging capacities through this year and next.
TSMC anticipates robust growth in AI applications, with revenue from server AI processors expected to more than double this year. These processors are projected to contribute a low-teens percentage of the company's total revenue in 2024. Over the next five years, the compound annual growth rate (CAGR) of server AI processors is estimated to reach 50%, accounting for over 20% of TSMC's revenue by 2028.
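As a rough sanity check on these projections, a 50% CAGR compounded over the four years from 2024 to 2028 can be compared against growth in total revenue. The starting share and the total-revenue growth rate below are illustrative assumptions, not figures from the report; only the 50% CAGR and the "low-teens" share come from the article.

```python
# Illustrative sanity check of the projected AI-processor revenue share.
# ai_share_2024 ("low-teens") and total_cagr are assumptions for the sketch.
ai_share_2024 = 0.13   # assumed starting share of total revenue in 2024
ai_cagr = 0.50         # reported ~50% CAGR for server AI processor revenue
total_cagr = 0.20      # assumed growth rate of TSMC's overall revenue

# Four compounding periods take 2024 to 2028.
share_2028 = ai_share_2024 * (1 + ai_cagr) ** 4 / (1 + total_cagr) ** 4
print(f"Implied 2028 AI share of revenue: {share_2028:.0%}")
```

Under these assumptions the implied 2028 share lands comfortably above the 20% threshold the projection cites, so the two figures are mutually consistent.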
Strong AI demand has driven global cloud service providers such as Amazon AWS, Microsoft, Google, and Meta to escalate investments in AI servers. This has increased orders for AI chips from NVIDIA and AMD, resulting in high demand for TSMC's advanced packaging technologies, including CoWoS and SoIC. All of TSMC's advanced packaging capacity for 2024 and 2025 has already been booked.
To meet the growing needs of its clients, TSMC is expanding its advanced packaging capacity. By the end of this year, TSMC's CoWoS monthly capacity is expected to reach between 45,000 and 50,000 wafers, a substantial increase from roughly 15,000 wafers per month at the end of 2023. By the end of 2025, monthly capacity could reach 50,000 wafers.
SoIC monthly capacity is forecast to reach 5,000 to 6,000 wafers by the end of this year, up from about 2,000 wafers at the end of 2023, and is projected to expand further to 10,000 wafers per month by the end of 2025. With major clients having booked all available capacity, TSMC's utilization rates are expected to remain high.
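The scale of this build-out is easier to see as expansion multiples. The short sketch below derives them from the capacity figures reported above, using range midpoints where a range is given:

```python
# Expansion multiples implied by the reported capacity figures
# (wafers per month; midpoints used where a range is reported).
cowos_2023 = 15_000
cowos_2024 = 47_500                 # midpoint of 45,000-50,000
soic_2023, soic_2024, soic_2025 = 2_000, 5_500, 10_000  # 5,500 = midpoint of 5,000-6,000

print(f"CoWoS 2023 -> 2024: {cowos_2024 / cowos_2023:.1f}x")
print(f"SoIC  2023 -> 2024: {soic_2024 / soic_2023:.1f}x")
print(f"SoIC  2024 -> 2025: {soic_2025 / soic_2024:.1f}x")
```

In other words, CoWoS capacity roughly triples in a single year, while SoIC nearly triples in 2024 and close to doubles again in 2025.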
NVIDIA's current production focus, the H100 chip, uses TSMC's 4nm process with CoWoS advanced packaging. The chip incorporates SK Hynix's high-bandwidth memory (HBM) in a 2.5D packaging approach.
NVIDIA's next-generation Blackwell architecture AI chip will also use TSMC's 4nm-class technology, moving to the enhanced N4P process. The chip integrates higher-capacity HBM3e memory built to newer specifications, promising a significant boost in computational capability over the H100 series.
Meanwhile, AMD's MI300 series AI accelerators use TSMC's 5nm and 6nm processes. AMD stacks its compute chiplets (CPU and GPU dies) vertically on base I/O dies using TSMC's SoIC advanced packaging, and the assembled stack is then combined with HBM using CoWoS. This two-stage approach adds complexity, and yield risk, to the packaging flow.