ABOUT UALINK
As Artificial Intelligence (AI) models continue to grow, data centers require more compute and memory to efficiently execute training and inference on these large models. The Consortium Promoter Members have collaborated to form this new industry standard, UALink, to create an optimized scale-up ecosystem and provide an open solution for distributing these models across multiple AI accelerators.
The UALink Consortium is an open industry standard group formed to develop technical specifications that facilitate direct load, store, and atomic operations between AI accelerators (e.g., GPUs). The specifications focus on a low-latency, high-bandwidth fabric connecting hundreds of accelerators in a pod, and on simple load/store semantics with software coherency. The UALink 1.0 Specification draws on the Promoter Members' experience developing and deploying a broad range of accelerators.
​
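UALink defines the fabric and protocol rather than a programming API, but the load/store model it targets can be pictured with today's GPU peer-access programming. The following is a minimal CUDA sketch and is not part of the UALink specification: the device indices, buffer size, and kernel are illustrative, and CUDA's peer-access calls merely stand in for the kind of direct accelerator-to-accelerator memory access a UALink pod is meant to carry.

```cuda
// Minimal CUDA sketch (illustrative only; not a UALink API).
// It shows the style of semantics described above: one accelerator
// directly loads and stores memory that resides on a peer accelerator,
// with coherency handled in software via explicit synchronization.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale_remote(float *remote_buf, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Direct load and store to memory physically located on the peer
        // device; the interconnect carries the access, no staging copy.
        remote_buf[i] *= factor;
    }
}

int main() {
    const int n = 1 << 20;

    // Hypothetical topology: device 0 will access memory owned by device 1.
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, /*device=*/0, /*peerDevice=*/1);
    if (!can_access) {
        printf("Peer access between device 0 and device 1 is unavailable.\n");
        return 0;
    }

    // Allocate and initialize the buffer on device 1.
    float *buf_on_dev1 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&buf_on_dev1, n * sizeof(float));
    cudaMemset(buf_on_dev1, 0, n * sizeof(float));

    // Device 0 maps device 1's memory and operates on it in place.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(/*peerDevice=*/1, 0);
    scale_remote<<<(n + 255) / 256, 256>>>(buf_on_dev1, n, 2.0f);

    // Software coherency in practice: the stores issued by device 0 are
    // only known to be visible after an explicit synchronization point.
    cudaDeviceSynchronize();

    cudaSetDevice(1);
    cudaFree(buf_on_dev1);
    return 0;
}
```

The pattern to note is one accelerator loading and storing another accelerator's memory directly, with visibility enforced by explicit software synchronization rather than hardware cache coherence.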
UALink officially incorporated as an organization in 2024, and its first specification will provide interconnectivity specifically for scale-up networks. The initial 200Gbps UALink 1.0 specification enables the connection of up to 1K accelerators within an AI pod and is based on the IEEE P802.3dj PHY layer. The specification will be available to Contributor Members in 2024 and will be released to the public during the first quarter of 2025. Please contact us at admin@ualinkconsortium.org if you have any questions about the Consortium or if you are interested in Membership.