About UALink

Our Mission

To unite the ecosystem in delivering UALink, an open standard for AI scale-up networking that meets the escalating computing demands of AI applications.

Our Vision

Unleashing the full potential of open, optimized, high-performance scale-up connectivity to enable transformative AI solutions.

Industry Demand

As Artificial Intelligence (AI) models continue to grow, data centers require ever-increasing amounts of compute and memory to execute training and inference efficiently.

The UALink Consortium was formed to develop technical specifications that facilitate direct load, store, and atomic operations between AI Accelerators (e.g., GPUs). We are developing a new industry standard, working to establish an optimized scale-up ecosystem, and investing in an open solution that enables advanced models to run across multiple AI accelerators.

The UALink 1.0 Specification taps into the expertise of our Promoter Members, who are actively developing and deploying a broad range of accelerators. The technology centers on a low-latency, high-bandwidth fabric for hundreds of accelerators in a pod, along with simple load and store semantics and software-managed coherency. The initial specification enables the connection of up to 1K accelerators within an AI pod and is based on the IEEE P802.3dj PHY Layer.
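The specification defines the link and transaction behavior rather than a host programming API, so the sketch below is only a conceptual illustration of what direct load, store, and atomic access to another accelerator's memory looks like, using CUDA peer-to-peer access between two NVLink-connected GPUs in one server as a stand-in. The device indices, kernel name, and sizes are hypothetical, and the example assumes a system with two peer-capable NVIDIA GPUs; it is not UALink code.

// Conceptual sketch only -- not UALink code. CUDA peer-to-peer access between
// two GPUs in one server stands in for the idea of issuing loads, stores, and
// atomics directly against memory that lives on another accelerator.
#include <cstdio>
#include <cuda_runtime.h>

// Runs on GPU 0; peer_buf and peer_counter point at memory allocated on GPU 1.
__global__ void touch_peer_memory(int *peer_counter, int *peer_buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int v = peer_buf[i];          // direct load from peer memory
        peer_buf[i] = v + 1;          // direct store to peer memory
        atomicAdd(peer_counter, 1);   // atomic update in peer memory
    }
}

int main()
{
    const int n = 1024;

    // Bail out if the two devices cannot address each other directly.
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (!can_access) {
        printf("GPU 0 cannot access GPU 1 directly on this system\n");
        return 1;
    }

    // Allocate a buffer and a counter in GPU 1's memory.
    cudaSetDevice(1);
    int *buf = nullptr, *counter = nullptr;
    cudaMalloc(&buf, n * sizeof(int));
    cudaMalloc(&counter, sizeof(int));
    cudaMemset(buf, 0, n * sizeof(int));
    cudaMemset(counter, 0, sizeof(int));

    // Map GPU 1's memory into GPU 0's address space, then launch on GPU 0 so
    // its loads, stores, and atomics reach GPU 1's memory directly.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    touch_peer_memory<<<(n + 255) / 256, 256>>>(counter, buf, n);
    cudaDeviceSynchronize();

    // Read the counter back to the host to confirm the atomics landed.
    int host_counter = 0;
    cudaMemcpy(&host_counter, counter, sizeof(int), cudaMemcpyDefault);
    printf("peer atomic count: %d (expected %d)\n", host_counter, n);

    cudaSetDevice(1);
    cudaFree(buf);
    cudaFree(counter);
    return 0;
}

The point of the sketch is the memory model, not the transport: the remote buffer is read, written, and atomically updated with ordinary loads, stores, and atomics, while keeping caches consistent is left to software rather than to a hardware coherence protocol, which is the general trade-off the phrase "software coherency" refers to.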

The Consortium officially incorporated in 2024, and its first specification was made publicly available in April 2025.

Questions? Contact admin@ualinkconsortium.org.

Interested in joining? Learn more.