
AMD Announces Instinct MI450 Built on 2nm Process

2025-10-10 12:00:10  Mr.Ming

Recently, AMD announced its next-generation Instinct MI450 accelerators, built on the CDNA 5 architecture and manufactured using TSMC's cutting-edge N2 process. This marks the first time AMD has adopted a 2nm process for its AI GPUs, a strategic move aimed at boosting performance and efficiency as it competes against NVIDIA's upcoming Rubin GPU platform.

AMD CEO Dr. Lisa Su shared her excitement in an interview: “We're thrilled about the MI450 series. It leverages 2nm technology with the most advanced manufacturing capabilities and delivers rack-scale solutions. Building this required the dedication of our entire team, and we're incredibly proud of what we've achieved.”

The current Instinct MI350 series, based on CDNA 4 and manufactured with TSMC's N3 process, entered mass production in 2025. Transitioning to a 2nm-class process for next-generation AI and HPC GPUs is a natural evolution, paving the way for higher density, lower power consumption, and greater computational throughput.

TSMC's N2 technology introduces a major full-node advancement, delivering 10–15% higher performance or 25–30% lower power compared with N3E, while improving transistor density by around 15%. Its Gate-All-Around (GAA) transistors and Design-Technology Co-Optimization (DTCO) allow chip developers to push design efficiency and performance to new levels.
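To make those node-level ranges concrete, here is a minimal back-of-the-envelope sketch in Python. The baseline power figure is purely illustrative and not an AMD or TSMC specification; only the quoted percentage ranges come from the article above.

```python
# Rough illustration of TSMC's quoted N2-vs-N3E ranges.
# The baseline value below is hypothetical, chosen only to make the
# percentages concrete; it is not an AMD or TSMC figure.

n2_perf_gain = (0.10, 0.15)    # 10-15% higher performance at the same power
n2_power_cut = (0.25, 0.30)    # 25-30% lower power at the same performance
n2_density_gain = 0.15         # ~15% more transistors per unit area

baseline_power_w = 100.0       # hypothetical power of an N3E compute block

# Same performance, lower power on N2:
best = baseline_power_w * (1 - max(n2_power_cut))
worst = baseline_power_w * (1 - min(n2_power_cut))
print(f"Iso-performance power on N2: {best:.0f}-{worst:.0f} W "
      f"(vs. {baseline_power_w:.0f} W on N3E)")

# Same power, higher performance on N2:
print(f"Iso-power speedup on N2: {1 + n2_perf_gain[0]:.2f}x to {1 + n2_perf_gain[1]:.2f}x")

# Density improvement:
print(f"Relative transistor density on N2: {1 + n2_density_gain:.2f}x")
```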

While NVIDIA's Rubin GPU will reportedly use a customized N3P process, AMD's move to 2nm gives it a clear manufacturing edge. AMD's Helios rack-scale platform, equipped with 72 Instinct MI450 GPUs, is said to feature up to 51 TB of HBM4 memory and a massive 1,400 TB/s of memory bandwidth, surpassing NVIDIA's NVL144 system in memory capacity and throughput. However, NVIDIA's solution still leads in FP4 performance (3,600 PFLOPS vs. AMD's 1,440 PFLOPS), meaning real-world efficiency will depend on workload and interconnect optimizations such as AMD's UALink.
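For readers who want to sanity-check the rack-level comparison, the short Python sketch below simply divides the figures quoted above across the 72 GPUs in a Helios rack. The per-GPU numbers it prints are derived averages based on those reported rack totals, not AMD-published per-GPU specifications.

```python
# Back-of-the-envelope per-GPU figures derived from the rack-level numbers
# cited in this article; these are simple averages, not official AMD specs.

gpus_per_rack = 72
rack_hbm4_tb = 51           # reported total HBM4 capacity per Helios rack (TB)
rack_bandwidth_tbs = 1400   # reported total memory bandwidth per rack (TB/s)

hbm_per_gpu_gb = rack_hbm4_tb * 1000 / gpus_per_rack
bandwidth_per_gpu_tbs = rack_bandwidth_tbs / gpus_per_rack

print(f"HBM4 per MI450 (average): {hbm_per_gpu_gb:.0f} GB")
print(f"Memory bandwidth per MI450 (average): {bandwidth_per_gpu_tbs:.1f} TB/s")

# Rack-level FP4 throughput comparison quoted in the article:
amd_fp4_pflops = 1440
nvidia_fp4_pflops = 3600
print(f"NVL144 FP4 advantage over Helios: {nvidia_fp4_pflops / amd_fp4_pflops:.1f}x")
```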

Notably, OpenAI is expected to be among the first adopters of AMD's MI450 accelerators, with deployments starting in the second half of next year. Dr. Su revealed that the collaboration will roll out in phases and could generate billions in incremental revenue once fully operational — marking a milestone for AMD's long-term investments in AI infrastructure and data center innovation.
