Particle.news

South Korea Secures 260,000 Nvidia GPUs as SK Lays Out HBM and Data-Center Push

Analysts say success will hinge on securing power and skilled workers to run the planned AI infrastructure.

Overview

  • At APEC in Gyeongju, Nvidia committed to deploying more than 260,000 Blackwell GPUs in Korea, with over 50,000 designated for government and domestic cloud infrastructure and phased deliveries extending through the decade.
  • SK hynix detailed an HBM roadmap, targeting HBM4/HBM4E supply starting in 2026 and HBM5/HBM5E between 2029 and 2031, alongside plans to bring the M15X fab in Cheongju online next year and the Yongin mega cluster online in 2027.
  • SK Telecom said its AWS-partnered Ulsan AI data center will open at 100 megawatts with space for about 60,000 GPUs and that it intends to expand capacity toward a 1‑gigawatt target.
  • SK Group emphasized a shift toward co‑designing with customers as a "full‑stack AI memory creator" and highlighted ecosystem partnerships with Nvidia, OpenAI and AWS, with leaders citing extraordinary HBM demand such as OpenAI's reported request for 900,000 wafers per month.
  • Editorials and columns warn that operating roughly 260,000 GPUs will require on the order of 1 gigawatt of additional power and a larger talent base, and they flag regulatory hurdles and potential U.S. policy shifts as execution risks.
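
The roughly 1‑gigawatt figure cited in the editorials can be sanity-checked with back-of-envelope arithmetic. The per-GPU draw, system overhead, and PUE values below are illustrative assumptions for this sketch, not vendor specifications:

```python
# Rough check of the ~1 GW power estimate for 260,000 GPUs.
# All per-unit figures are assumptions for illustration only.

GPU_COUNT = 260_000
WATTS_PER_GPU = 1_200     # assumed board power for a Blackwell-class GPU
SYSTEM_OVERHEAD = 1.8     # assumed multiplier for CPUs, networking, storage
PUE = 1.3                 # assumed power usage effectiveness (cooling, losses)

total_watts = GPU_COUNT * WATTS_PER_GPU * SYSTEM_OVERHEAD * PUE
total_gigawatts = total_watts / 1e9
print(f"Estimated draw: {total_gigawatts:.2f} GW")
# → Estimated draw: 0.73 GW
```

Even with these conservative assumptions the total lands in the high hundreds of megawatts, so the editorials' "on the order of 1 gigawatt" framing is plausible.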