Microsoft has announced it has connected massive data centers in Wisconsin and Atlanta, approximately 700 miles and five states apart, through a high-speed fiber-optic network to operate as a unified system.
The announcement Wednesday morning marks the debut of what the company is calling its AI “superfactory,” a new class of data centers built specifically for artificial intelligence. The facilities are designed to train and run advanced AI models across connected sites, a setup Microsoft describes as the world’s first “planet-scale AI superfactory.”
Unlike traditional cloud data centers that run millions of separate applications for different customers, Microsoft states the new facilities are designed to handle single, massive AI workloads across multiple sites. Each data center houses hundreds of thousands of Nvidia GPUs connected through a high-speed architecture known as an AI Wide Area Network, or AI-WAN, to share computing tasks in real time.
Microsoft indicates it is using a new two-story data center design to pack GPUs more densely and minimize latency, a strategy enabled in part by a closed-loop liquid cooling system.
By linking sites across regions, the company says it can pool computing capacity, redirect workloads dynamically, and spread the massive power requirements across the grid, so the system does not depend on the energy resources available in any one part of the country.
This unified supercomputer is intended to train and run the next generation of AI models for key partners such as OpenAI, as well as Microsoft’s own internal models.
The new approach underscores the rapid pace of the AI infrastructure race among the world’s largest technology companies. Microsoft spent more than $34 billion on capital expenditures in its most recent quarter, much of it on data centers and GPUs, to keep up with what it perceives as soaring AI demand.
Amazon is taking a similar approach with its new Project Rainier complex in Indiana, a cluster of seven data center buildings spanning more than 1,200 acres. Meta, Google, OpenAI and Anthropic are making similar multibillion-dollar investments, collectively committing hundreds of billions into new facilities, chips, and systems to train and deploy AI models.
Some analysts and investors see echoes of a technology bubble in the rush to build AI infrastructure, warning that the spending could outpace returns if business customers do not realize sufficient value from AI in the near term. Microsoft, Amazon and others maintain the demand is genuine, not speculative, pointing to long-term contracts as evidence.
The concept of an AI “superfactory” represents a fundamental architectural shift in how hyperscale computing infrastructure is conceived and operated. Traditional data centers, while physically large and technologically sophisticated, function as discrete facilities serving localized or regionally distributed workloads. The superfactory model instead treats geographically distant facilities as components of a single computing system, analogous to how individual processors in a server work together despite physical separation.
The 700-mile separation between the Wisconsin and Atlanta facilities creates significant technical challenges that Microsoft’s AI-WAN architecture must overcome. Network latency, the time required for data to travel between locations, increases with distance and can disrupt tightly coordinated computing tasks. High-speed fiber-optic connections and sophisticated networking protocols are essential to maintain the real-time coordination necessary for distributed AI training, where model parameters must be continuously synchronized across computing nodes.
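As a rough, back-of-envelope illustration (the distance and fiber parameters here are approximations, not figures Microsoft has disclosed), the physics of light traveling through glass alone sets a latency floor for a link of that length:

```python
# Back-of-envelope propagation delay for a ~700-mile fiber link.
# Illustrative assumptions: straight-line distance, and light in optical
# fiber traveling at roughly two-thirds the speed of light in vacuum.
# Real fiber routes are longer than the straight-line path, so actual
# latency would be higher.

MILES_TO_KM = 1.609
distance_km = 700 * MILES_TO_KM          # ~1,126 km
fiber_speed_km_per_s = 200_000           # ~2/3 of c, typical for silica fiber

one_way_ms = distance_km / fiber_speed_km_per_s * 1_000
round_trip_ms = 2 * one_way_ms

print(f"One-way propagation delay: ~{one_way_ms:.1f} ms")       # ~5.6 ms
print(f"Round-trip propagation delay: ~{round_trip_ms:.1f} ms")  # ~11.3 ms
```

Even a few milliseconds per round trip matters when GPUs exchange updates many times per second, which is why cross-site training schemes generally try to keep the links saturated and overlap communication with computation.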
The focus on “single, massive AI workloads” rather than diverse customer applications reflects the distinctive computational patterns of large language model training and inference. Training advanced AI models requires coordinating calculations across enormous numbers of GPUs simultaneously, with constant communication to update model weights. This differs fundamentally from traditional cloud computing, where independent applications run in isolation with minimal inter-process communication.
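To make that contrast concrete, here is a minimal, self-contained sketch of the synchronization step at the heart of data-parallel training: each worker computes gradients on its own shard of data, then all workers average their gradients (an “all-reduce”) before anyone updates the shared weights. This is a toy simulation in plain Python, not Microsoft’s system or any specific framework’s API.

```python
# Toy illustration of data-parallel gradient synchronization (all-reduce).
# Each "worker" stands in for a GPU (or an entire site); in a real system the
# averaging is performed by collective-communication libraries over the network.

def local_gradients(worker_id: int, num_params: int) -> list[float]:
    """Pretend each worker computed gradients on its own slice of the data."""
    return [(worker_id + 1) * 0.01 * (p + 1) for p in range(num_params)]

def all_reduce_mean(per_worker_grads: list[list[float]]) -> list[float]:
    """Average gradients elementwise across workers -- the step that forces
    constant cross-node (and, in a superfactory, cross-site) communication."""
    num_workers = len(per_worker_grads)
    return [sum(g[p] for g in per_worker_grads) / num_workers
            for p in range(len(per_worker_grads[0]))]

def sgd_step(weights: list[float], grads: list[float], lr: float = 0.1) -> list[float]:
    """Apply the identical averaged update on every worker."""
    return [w - lr * g for w, g in zip(weights, grads)]

weights = [0.0, 0.0, 0.0, 0.0]
for step in range(3):
    grads = [local_gradients(w_id, len(weights)) for w_id in range(4)]
    synced = all_reduce_mean(grads)        # every worker must wait here
    weights = sgd_step(weights, synced)    # all workers stay in lockstep
    print(f"step {step}: weights = {[round(w, 4) for w in weights]}")
```

Traditional multi-tenant cloud workloads have no equivalent of that all-reduce barrier: each application simply proceeds on its own, which is why conventional data centers never needed this kind of cross-site coordination.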
The hundreds of thousands of Nvidia GPUs housed in each facility represent extraordinary concentrations of computing power and capital investment. Individual high-end AI GPUs cost tens of thousands of dollars, meaning the GPU investment alone in these facilities likely exceeds billions of dollars before accounting for networking equipment, cooling systems, power infrastructure, and building construction.
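A quick, purely illustrative calculation shows how those numbers compound. The GPU count and unit price below are hypothetical round figures consistent with “hundreds of thousands” of accelerators costing “tens of thousands of dollars” each; they are not disclosed Microsoft figures.

```python
# Hypothetical back-of-envelope estimate of GPU capital cost per site.
# Both inputs are illustrative assumptions, not reported values.
gpus_per_site = 200_000        # "hundreds of thousands" of GPUs
cost_per_gpu = 35_000          # "tens of thousands of dollars" per high-end AI GPU

gpu_capex = gpus_per_site * cost_per_gpu
print(f"GPU hardware alone: ~${gpu_capex / 1e9:.0f} billion per site")  # ~$7 billion
```

Networking equipment, cooling systems, power infrastructure, and building construction all come on top of that figure.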
The two-story data center design Microsoft describes represents physical architecture optimization for GPU density. Traditional data center layouts prioritize flexibility to accommodate diverse equipment types and customer needs. Purpose-built AI facilities can instead optimize for the specific thermal, power, and networking requirements of dense GPU installations, potentially achieving higher utilization of the building footprint.
The closed-loop liquid cooling system addresses the extraordinary heat generation from densely packed GPUs. Air cooling, standard in traditional data centers, becomes insufficient when power density reaches levels common in AI facilities. Liquid cooling, while more complex and expensive, can remove heat more efficiently and allow tighter equipment spacing.
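The scale of the cooling problem can be sketched with the basic heat-transfer relation Q = ṁ · c_p · ΔT, which relates the heat carried away to the coolant flow rate and its temperature rise. The rack power and temperature figures below are generic illustrations, not Microsoft’s specifications.

```python
# Illustrative coolant flow needed to remove heat from one high-density rack.
# Assumed values (not Microsoft's specs): a 100 kW rack and a 10 °C rise in
# coolant temperature across the rack.
rack_power_watts = 100_000        # heat to remove, in watts (joules per second)
specific_heat_water = 4186        # J/(kg·°C) for water
delta_t_celsius = 10              # coolant temperature rise across the rack

flow_kg_per_s = rack_power_watts / (specific_heat_water * delta_t_celsius)
flow_liters_per_min = flow_kg_per_s * 60  # 1 kg of water is roughly 1 liter

print(f"Required coolant flow: ~{flow_kg_per_s:.1f} kg/s "
      f"(~{flow_liters_per_min:.0f} liters per minute per rack)")
```

Circulating a couple of kilograms of water per second through a closed loop is far easier than moving the enormous volume of air needed to carry the same heat, which is why dense GPU racks push operators toward liquid cooling.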



