Particle.news
Adobe Recasts Firefly and Creative Cloud With AI Assistants, Model Choice at MAX 2025

The company is pivoting to a multi‑model, assistant‑driven workflow to speed on‑brand creation across Creative Cloud.

Overview

  • Firefly Image Model 5 was unveiled with native 4‑megapixel output, improved photorealism for humans, and new layered and prompt‑based editing for precise, context‑aware changes.
  • Firefly’s Generate Soundtrack and Generate Speech launched in public beta, offering AI‑composed instrumental music and multilingual voiceovers with fine‑tuned emotion, including voices via ElevenLabs.
  • Photoshop’s Generative Fill now supports third‑party engines alongside Adobe’s own models, starting with Google’s Gemini 2.5 Flash and Black Forest Labs’ Flux; additional partners such as Topaz Labs and ElevenLabs are integrated across the Firefly platform.
  • Agentic AI assistants debuted in Adobe Express (public beta), with a similar assistant in Photoshop entering private beta; Adobe also previewed Project Moonlight, which links assistants across apps and draws style cues from creators’ social channels.
  • Premiere Pro’s AI Object Mask and Lightroom’s Assisted Culling entered public beta, a web‑based Firefly Video Editor moved into private beta, and a new partnership enables direct YouTube Shorts publishing from the Premiere mobile app.