Particle.news


US Judge Rules AI Training on Copyrighted Books Fair Use but Flags Pirated Works

The decision allows Claude to be trained on protected texts under the fair use doctrine, with potential damages for pirated copies to be determined in a civil trial

Anthropic, valued at $61.5 billion and largely backed by Amazon, was founded in 2021 by former engineers of OpenAI, the company that developed ChatGPT.
Under the ruling, using books to train an AI model does not constitute a violation of US copyright law.
[Image: Dario Amodei, CEO of Anthropic, in Davos, Switzerland, January 23, 2025]

Overview

  • Federal Judge William Alsup held that training Anthropic’s Claude model on copyrighted books constitutes fair use, noting AI’s learning mirrors human reading
  • The ruling exempted lawfully acquired texts but found the use of millions of pirated book copies to be a copyright violation
  • A civil trial has been scheduled to quantify damages for unauthorized works, with penalties of up to $150,000 per title
  • Anthropic, valued at $61.5 billion and backed by Amazon, said the outcome affirms its responsible innovation approach and industry practices
  • A parallel fair use decision for Meta this week signals a growing legal precedent likely to shape how AI firms source training data