Stanford Professor Apologizes for Using AI in Legal Filing with Fake Citations
Jeff Hancock, a leading misinformation expert, admits errors in a court declaration defending Minnesota's deepfake law, attributing them to his use of generative AI tools.
- Jeff Hancock, a Stanford professor and expert on misinformation, used GPT-4o to draft a court declaration, resulting in fabricated citations and other errors.
- The errors were discovered in a legal case challenging Minnesota's new law banning political deepfakes, which critics argue is unconstitutional under the First Amendment.
- Hancock explained that the AI misinterpreted citation placeholders he had inserted while drafting, generating non-existent references that he failed to catch before submission.
- The professor has apologized for the oversight, submitted a corrected declaration, and emphasized that he did not intend to mislead the court or opposing counsel.
- The incident underscores the risks of relying on AI tools in high-stakes legal and academic contexts, even for experienced professionals.