
NVIDIA Unveils GB200 NVL4: 2 Grace CPUs, 4 GPUs

2024-11-19 16:39:43 Mr.Ming

November 19, 2024: NVIDIA, a global leader in artificial intelligence (AI) hardware and solutions, has officially launched two new AI platforms, the Blackwell GB200 NVL4 and the Hopper H200 NVL. Both mark a significant step forward in AI computational performance and scalability.

The Blackwell GB200 NVL4 module is an expanded version of the GB200 Grace Blackwell Superchip AI platform. Designed for high-performance computing, it combines two Grace CPUs and four Blackwell GPUs on a single board. This configuration provides a powerful four-GPU NVLink domain with 1.3 terabytes of coherent memory. The result is a single-server solution delivering 2.2 times the simulation performance and 1.8 times the training and inference performance of its predecessor. NVIDIA's partners are expected to roll out NVL4-based solutions in the coming months.

Meanwhile, the Hopper H200 NVL, a PCIe-based platform, is available now. It allows up to four GPUs to be interconnected in an NVLink domain, offering seven times the bandwidth of a standard PCIe interconnect. The platform is optimized for hybrid high-performance computing (HPC) and AI workloads, giving data centers added flexibility and efficiency.
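The seven-fold bandwidth claim can be sanity-checked with a quick back-of-the-envelope calculation. The figures below are assumptions drawn from publicly listed specs, not from this article: roughly 900 GB/s of bidirectional NVLink bandwidth per Hopper GPU versus roughly 128 GB/s bidirectional for a PCIe Gen5 x16 link:

```python
# Rough sanity check of the "7x PCIe bandwidth" claim.
# Assumed figures (not stated in the article):
#   NVLink, Hopper generation: ~900 GB/s bidirectional per GPU
#   PCIe Gen5 x16 link:        ~128 GB/s bidirectional
nvlink_gbps = 900
pcie5_x16_gbps = 128

ratio = nvlink_gbps / pcie5_x16_gbps
print(f"NVLink vs PCIe Gen5 x16: ~{ratio:.1f}x")  # ~7.0x
```

Under those assumptions the ratio works out to almost exactly seven, consistent with the article's figure.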

The Hopper H200 NVL boasts impressive specifications, including 1.5 times the HBM memory, 1.7 times the large language model (LLM) inference performance, and 1.3 times the HPC performance of the previous generation. Each GPU features 114 SMs, totaling 14,592 CUDA cores, 456 Tensor Cores, and close to 3 PFLOPS of FP8 compute (with FP16 accumulation). The card is equipped with 141 GB of HBM3e memory and supports a configurable TDP of up to 600 watts.
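The stated 1.5x HBM multiplier is easy to verify arithmetically. The per-GPU capacities below are assumptions based on NVIDIA's published product specs, not figures from this article (94 GB for the prior-generation H100 NVL, 141 GB for the H200 NVL):

```python
# Checking the article's "1.5x the HBM memory" claim.
# Assumed per-GPU capacities (not stated in the article):
#   H100 NVL: 94 GB HBM3
#   H200 NVL: 141 GB HBM3e
h100_nvl_hbm_gb = 94
h200_nvl_hbm_gb = 141

ratio = h200_nvl_hbm_gb / h100_nvl_hbm_gb
print(f"HBM capacity ratio: {ratio:.2f}x")  # 1.50x
```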

On the power consumption side, the GB200 NVL4 solution is projected to draw approximately 6,000 watts, compared with the 2,700 watts of the two-GPU Superchip module. These advancements underscore NVIDIA's commitment to accelerating AI computation capabilities.
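A crude per-GPU division of the article's power figures (ignoring the Grace CPUs' share of each budget, so treat the results as an upper bound per GPU) shows the NVL4 module's budget scales slightly more than linearly with GPU count:

```python
# Per-GPU power implied by the article's figures.
# Naive split: divides total module power by GPU count only,
# ignoring the Grace CPUs' (unknown) share of each budget.
nvl4_watts, nvl4_gpus = 6000, 4        # GB200 NVL4: 2 CPUs + 4 GPUs
superchip_watts, superchip_gpus = 2700, 2  # GB200 Superchip: 1 CPU + 2 GPUs

print(f"NVL4 per GPU:      {nvl4_watts / nvl4_gpus:.0f} W")            # 1500 W
print(f"Superchip per GPU: {superchip_watts / superchip_gpus:.0f} W")  # 1350 W
```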

NVIDIA's dominance in the AI landscape was recently underscored by record-breaking results in the MLPerf v4.1 training and inference benchmarks. While the new Blackwell platform sets new standards, the company continues to refine the Hopper platform through ongoing optimizations. Looking ahead, NVIDIA has accelerated its AI roadmap to an annual release cadence, with future platforms such as Blackwell Ultra and Rubin already planned.
