Particle.news

Black Forest Labs Launches FLUX.2, a Production-Grade Image Model Optimized for RTX

New FP8 checkpoints, paired with ComfyUI weight streaming, cut VRAM needs by about 40%, broadening access on GeForce RTX PCs.

Overview

  • The family spans managed tiers (pro, flex) available through BFL Playground and APIs, plus a 32B open-weight dev checkpoint downloadable on Hugging Face.
  • Core upgrades include photorealistic output at up to 4MP, cleaner typography, direct pose control, stronger prompt adherence, and multi-reference inputs for consistent characters, products, or styles.
  • NVIDIA reports the full model needs about 90GB of VRAM, or roughly 64GB in low-VRAM mode, underscoring the reliance on quantization, offload strategies, or hosted endpoints for many users.
  • NVIDIA’s FP8 checkpoints reduce memory demands and improve throughput by around 40%, enabling more practical runs on GeForce RTX GPUs at comparable quality.
  • ComfyUI adds native templates plus weight streaming to offload parameters to system memory, offering broader access with some performance trade-offs.
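To see why precision matters so much for a 32B-parameter checkpoint, a back-of-the-envelope estimate of the weight footprint at each numeric format is useful. This is a weights-only sketch; NVIDIA's ~90GB figure for the full model also covers activations, text encoders, and framework overhead, which this calculation deliberately ignores.

```python
# Rough VRAM estimate for model weights alone (illustrative only; real
# usage adds activations, encoder weights, and framework overhead).
# The 32B parameter count comes from the FLUX.2 dev checkpoint; the
# bytes-per-parameter values are standard for each numeric format.

def weight_gb(params: float, bytes_per_param: float) -> float:
    """Gigabytes needed to hold `params` weights at a given precision."""
    return params * bytes_per_param / 1024**3

PARAMS = 32e9  # FLUX.2 dev: 32 billion parameters

for name, nbytes in [("FP32", 4), ("BF16", 2), ("FP8", 1)]:
    print(f"{name}: ~{weight_gb(PARAMS, nbytes):.0f} GB for weights")
```

Running this shows the weights alone dropping from roughly 60GB at BF16 to roughly 30GB at FP8, which is the headroom that makes consumer RTX cards viable once the rest of the pipeline is offloaded or quantized as well.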
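The weight-streaming idea ComfyUI uses can be sketched in miniature: keep all layer weights in system memory and copy only the layer currently executing onto the device, so peak device memory is bounded by one layer rather than the whole model. This is a conceptual illustration, not ComfyUI's actual implementation; the `Layer` class, the placeholder execution step, and the byte counts are hypothetical.

```python
# Conceptual sketch of weight streaming (NOT ComfyUI's real code):
# weights live in system RAM, and each layer is copied to the device
# only while it runs, then freed before the next layer is loaded.
from typing import List

class Layer:
    """Hypothetical stand-in for one transformer block's weights."""
    def __init__(self, name: str, nbytes: int):
        self.name, self.nbytes = name, nbytes

def stream_forward(layers: List[Layer]) -> int:
    """Run layers one at a time; return peak device bytes held at once."""
    peak = 0
    for layer in layers:
        device_bytes = layer.nbytes  # copy this layer's weights to device
        peak = max(peak, device_bytes)
        # ... the layer's compute would execute here ...
        device_bytes = 0             # free the weights before the next layer
    return peak

# 16 hypothetical 2GB blocks: peak device use is one block, not all 32GB.
model = [Layer(f"block_{i}", 2 * 10**9) for i in range(16)]
print(stream_forward(model))
```

The trade-off the article mentions falls out of this structure: every layer's weights cross the PCIe bus each forward pass, so throughput drops in exchange for the lower VRAM ceiling.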