
AMD Unveils MI430X GPU with 432GB HBM4 & 19.6TB/s

2025-11-21 14:18:31 Mr.Ming

On November 19, AMD unveiled its next-generation Instinct MI400 series of AI accelerators, starting with the MI430X, and highlighted the product line's design direction and key specifications.

The MI430X GPU is engineered to handle both AI and high-performance computing (HPC) workloads, leveraging AMD's next-generation CDNA™ architecture. It offers up to 432GB of HBM4 memory and 19.6TB/s of memory bandwidth, addressing the bottlenecks commonly encountered when training large language models or running complex simulations. Its support for FP4, FP8, and FP64 precision ensures balanced performance across AI and scientific applications, delivering substantial computing power for researchers, engineers, and AI innovators.
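To put those headline figures in perspective, here is a minimal back-of-envelope sketch showing how quickly such an accelerator could stream its entire memory once. It assumes peak bandwidth is sustained end to end, which real workloads rarely achieve; the constants simply restate the capacity and bandwidth quoted above.

```python
# Back-of-envelope sketch using the headline figures above.
# Real throughput depends on access patterns and kernel efficiency.
HBM_CAPACITY_GB = 432   # HBM4 capacity per MI430X (per AMD)
BANDWIDTH_TBPS = 19.6   # peak memory bandwidth (per AMD)

def full_sweep_time_ms(capacity_gb: float, bandwidth_tbps: float) -> float:
    """Time to read the entire HBM stack once at peak bandwidth, in milliseconds."""
    # capacity_gb / (bandwidth_tbps * 1000 GB/s) gives seconds; scale to ms.
    return capacity_gb / (bandwidth_tbps * 1000) * 1000

print(f"{full_sweep_time_ms(HBM_CAPACITY_GB, BANDWIDTH_TBPS):.1f} ms")
# → roughly 22 ms to stream all 432GB once at peak bandwidth
```

A sweep time on the order of tens of milliseconds is why high memory bandwidth matters for bandwidth-bound work such as large-model inference, where every token can require touching most of the model weights.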

Coupled with AMD ROCm™ software, the MI430X offers full-stack compatibility and scalability in data-center and supercomputing environments. ROCm's integration with frameworks such as PyTorch, TensorFlow, and JAX optimizes training and inference performance across thousands of GPUs.

AMD confirmed that the Discovery supercomputer at Oak Ridge National Laboratory in the U.S. and Europe's Alice Recoque supercomputer are already deploying MI430X accelerators. Discovery combines MI430X GPUs with next-gen AMD EPYC "Venice" CPUs on the HPE Cray GX5000 platform, enabling U.S. researchers to train, fine-tune, and deploy large-scale AI models while advancing energy research, materials science, and generative AI computing.

Alice Recoque, Europe's new exascale supercomputer, integrates MI430X GPUs with EPYC "Venice" CPUs on Eviden's BullSequana XH3500 platform, delivering exceptional double-precision HPC and AI performance. Its architecture leverages massive memory bandwidth and energy efficiency to accelerate scientific breakthroughs while meeting strict sustainability goals.

Looking ahead, AMD also teased the Instinct MI455X, positioned to compete with NVIDIA's Rubin series, with a focus on large-model training, inference speed, and energy efficiency.
