Particle.news

Microsoft Links Atlanta and Wisconsin Fairwater Sites Into First AI 'Superfactory'

The company pitches a fungible fleet linked over an AI Wide Area Network that pools hundreds of thousands of Nvidia GPUs for both model training and inference.

Overview

  • Microsoft says the linked Fairwater hubs function as one system via an AI Wide Area Network that can dynamically allocate compute across sites.
  • The two‑story facilities use direct‑to‑chip liquid cooling to pack accelerators densely, reduce latency, and curb water consumption.
  • Microsoft states the capacity will support partners including OpenAI, Mistral AI, and Elon Musk’s xAI alongside its own models.
  • Reporting cites roughly 120,000 miles of new fiber connecting Fairwater locations, enabling near‑light‑speed data transfers across regions.
  • Industry reporting indicates the Atlanta site will deploy Nvidia GB200 NVL72 rack systems, a high‑density configuration not yet detailed by Microsoft.