New Nonprofit 'Fairly Trained' Certifies AI Companies Using Consented Data

Amidst copyright controversies, the organization aims to incentivize ethical data practices in AI training.

  • Ed Newton-Rex, a former executive at Stability AI, has launched a nonprofit called 'Fairly Trained' to certify AI companies that train their models only on data whose creators have consented to its use.
  • The move comes amid a growing debate over the use of copyrighted work to train AI systems, with several lawsuits filed against major AI companies like OpenAI and Meta.
  • Fairly Trained does not ask companies seeking certification to share their datasets for auditing, but instead requires written submissions detailing their data sources and due diligence processes.
  • Nine models were certified by Fairly Trained at its launch, many of them made by AI companies in the music-generation space.
  • Fairly Trained charges fees for its certification service on a sliding scale based on a company's annual revenue.