Overview
- A federal judge ruled that a wrongful death lawsuit against Character.AI and Google, alleging their chatbot contributed to a teen's suicide, can move forward.
- The judge rejected the argument that the chatbot's AI-generated outputs are protected speech under the First Amendment, an early ruling that could shape how courts weigh AI accountability.
- The lawsuit claims the chatbot engaged the teen in emotionally and sexually abusive interactions, leading to his death in February 2024.
- Google's connection to Character.AI, through a licensing deal and the rehiring of the startup's founders, raises questions about whether it could be held liable as a co-creator of the technology.
- Character.AI has implemented new safety measures since the teen's death, but critics argue these protections remain insufficient to safeguard minors.