Overview
- Microsoft says the linked Fairwater hubs function as one system via an AI Wide Area Network that can dynamically allocate compute across sites.
- The two‑story facilities use direct‑to‑chip liquid cooling, which lets Microsoft pack accelerators densely, shorten interconnect latency between them, and curb water consumption.
- Microsoft states the capacity will support partners including OpenAI, Mistral AI, and Elon Musk’s xAI alongside its own models.
- Reporting cites roughly 120,000 miles of new fiber connecting Fairwater locations, carrying data between regions at close to the speed of light.
- Industry reporting indicates the Atlanta site will deploy Nvidia GB200 NVL72 rack systems, a high‑density rack‑scale configuration Microsoft has not yet publicly detailed.