
Eight Tech Giants Form UALink Group

2024-05-31 10:18:24 | Mr.Ming

Eight leading technology companies, including Intel, Google, Microsoft, and Meta, have formed a new industry group called the Ultra Accelerator Link (UALink) Promotion Group. This group aims to guide the development of components that connect AI accelerator chips in data centers.

Announced on Thursday, the UALink Promotion Group's members also include AMD, Hewlett Packard Enterprise, Broadcom, and Cisco. The group has introduced a new industry standard for connecting the growing number of AI accelerator chips in servers. AI accelerators, ranging from GPUs to custom-designed chips, are essential for enhancing the training, fine-tuning, and operation of AI models.

The proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators within a single computing "pod," which the group defines as one or several racks in a server. Based on open protocols, including AMD's Infinity Architecture, UALink 1.0 will allow direct loads and stores between the memory attached to AI accelerators, improving speed and reducing data-transfer latency compared with existing interconnect specifications.

In the third quarter, the group plans to establish the UALink Consortium to oversee future development of the UALink specification. UALink 1.0 will be made available to companies that join the consortium at that time, and an updated, higher-bandwidth revision, UALink 1.1, is scheduled for release in the fourth quarter of 2024.

The first products built on UALink are expected to launch "in the coming years," according to Forrest Norrod, AMD's head of data center solutions.

The primary beneficiaries of UALink, besides AMD and Intel, appear to be Microsoft, Meta, and Google, which have spent billions of dollars on Nvidia GPUs to power their cloud services and train their ever-expanding AI models. All three are looking to diversify away from a supplier they see as overly dominant in the AI hardware ecosystem.

Google has custom TPUs for training and running AI models, as well as its new Axion CPU. Amazon has several AI chip families, Microsoft joined the race last year with its Maia and Cobalt chips, and Meta is also developing its own line of accelerators.

Microsoft and its partner OpenAI reportedly plan to spend at least $100 billion on a supercomputer to train AI models, equipped with future Cobalt and Maia chips. These chips will require a solution to connect them, potentially provided by UALink.
