SK Group Pledges to Ease AI Memory Bottleneck With New Fabs and Deeper Partnerships

The company set near-term capacity milestones aimed at shifting the AI race toward efficiency.

Overview

  • Chairman Chey Tae-won said SK will accelerate output to meet AI-driven demand, with SK hynix’s M15X fab in Cheongju ready to begin operations in 2026.
  • The Yongin semiconductor cluster is targeted to reach capacity equivalent to 24 Cheongju fabs by 2027 to bolster memory supply.
  • SK hynix CEO Kwak Noh-jung unveiled an HBM roadmap that starts with 16-layer HBM4/HBM4E in 2026 and advances to HBM5/HBM5E between 2029 and 2031, as the company moves to become a full-stack AI memory creator.
  • SK is tightening collaboration with OpenAI, Nvidia and AWS, with OpenAI’s Stargate project reported to seek roughly 900,000 HBM wafers per month and both sides exploring AI data centers in Korea.
  • Nvidia was cited as planning to supply up to 260,000 GPUs in South Korea, including 50,000 for SK, as the companies discuss a manufacturing AI cloud leveraging the Omniverse platform.