At Computex 2025, NVIDIA CEO Jensen Huang took the stage to announce a powerful lineup of hardware, software, and strategic collaborations aimed at strengthening the company's leadership in AI computing.
Huang introduced the upcoming GB300 AI system, the desktop-grade DGX Spark AI workstation, and NVLink Fusion, an interconnect technology that opens NVIDIA's high-speed NVLink fabric to other semiconductor manufacturers for the first time.
Highlighting Taiwan's vital role in the global technology ecosystem, Huang emphasized that “when entering new markets, one must start at the heart of the computing ecosystem.”
NVIDIA confirmed that the next-generation GB300 system will launch in Q3 2025, succeeding the current GB200 Grace Blackwell platform, which is already in use by major cloud providers such as Amazon and Microsoft. Like its predecessor, GB300 tightly couples CPUs and GPUs to accelerate AI model training and inference.
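The keynote did not cover GB300's programming model, but a rough sense of what tight CPU-GPU coupling means for software comes from CUDA's existing managed-memory API, where a single pointer is valid on both the CPU and the GPU. The sketch below is illustrative only; the kernel, array size, and values are arbitrary.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: scale an array in place on the GPU.
__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr;

    // Managed memory: one pointer usable from both CPU and GPU.
    // On tightly coupled CPU-GPU systems the hardware link keeps this
    // shared view cheap; on discrete systems the driver migrates pages.
    cudaMallocManaged(&x, n * sizeof(float));

    for (int i = 0; i < n; ++i) x[i] = 1.0f;      // CPU writes
    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  // GPU updates the same buffer
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    printf("x[0] = %.1f\n", x[0]);                // CPU reads the result
    cudaFree(x);
    return 0;
}
```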
In a major shift, NVIDIA is opening its proprietary NVLink technology, until now reserved for its own chips, to third-party chipmakers through NVLink Fusion. This high-speed interconnect provides high-bandwidth, low-latency communication between processors and accelerators, a critical capability for managing complex AI workloads.
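NVLink Fusion itself is a hardware and licensing announcement, and no separate public API was described at the keynote. As a rough illustration of how applications already exploit NVLink between GPUs today, here is a minimal CUDA sketch that enables peer-to-peer access between two devices; transfers then ride NVLink when the GPUs are linked and fall back to PCIe otherwise.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("Peer access demo needs at least two GPUs.\n");
        return 0;
    }

    // Ask the driver whether GPU 0 can directly address GPU 1's memory.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    printf("GPU0 -> GPU1 peer access: %s\n", canAccess ? "yes" : "no");

    if (canAccess) {
        // Map GPU 1's memory into GPU 0's address space. Subsequent
        // cudaMemcpyPeer calls or direct loads/stores use NVLink when
        // the GPUs are linked, and PCIe otherwise.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
    }
    return 0;
}
```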
Partners such as Marvell Technology, MediaTek, Qualcomm, and Fujitsu are now gaining access to NVLink Fusion. This move enables greater flexibility in data center design, allowing operators to combine NVIDIA CPUs with third-party accelerators—or vice versa—while maintaining high-speed interconnectivity through NVIDIA's network fabric.
In addition, NVIDIA announced new AI collaborations with key industry players. MediaTek, Marvell, and Alchip will co-develop custom AI chips optimized for NVIDIA's ecosystem. Meanwhile, Qualcomm and Fujitsu are integrating their own processors with NVIDIA accelerators, signaling a broader shift toward heterogeneous AI systems built on NVIDIA's platform.
Huang also outlined NVIDIA's future product roadmap, which includes the forthcoming Blackwell Ultra, the Rubin platform slated for 2026, and the Feynman processors expected in 2028. These chips are designed to support the industry's evolution from training massive foundation models to large-scale AI application deployment.