Overview
- UMG and Nvidia will extend the Music Flamingo audio‑language model to analyze full tracks and surface music by emotion, structure, and cultural context.
- An artist incubator will let musicians, songwriters, and producers co‑design tools that the companies pitch as an antidote to generic “AI slop” outputs.
- The companies say safeguards will protect works, ensure attribution, and support rightsholder compensation, though distribution details have not been disclosed.
- Music Flamingo, introduced in 2025 in work with University of Maryland researchers, processes tracks up to roughly 15 minutes and aims to reason about music beyond surface‑level labels.
- The pact reflects a broader shift by major labels from litigation to licensing and partnerships, following recent settlements and deals involving AI platforms such as Udio and Suno.