Do chatbots have free speech? Judge rejects claim in suit over teen’s death

A federal judge in Orlando rejected an AI start-up’s argument that its chatbot’s output was protected by the First Amendment, allowing a lawsuit over the death of a Florida teen who became obsessed with the chatbot to proceed.
Sewell Setzer III, 14, died by suicide last year at his Orlando home, moments after an artificial intelligence chatbot encouraged him to “come home to me as soon as possible.” His mother, Megan Garcia, alleged in a lawsuit that Character.AI, the chatbot’s manufacturer, is responsible for his death.
Character.AI is a prominent artificial intelligence start-up whose personalized chatbots are popular with teens and young people, including for romantic and even explicit conversations. The company has previously said it is “heartbroken” by Setzer’s death, but argued in court that it was not liable.
In a decision published Wednesday, U.S. District Judge Anne C. Conway was unconvinced by Character.AI's argument that its users have a First Amendment right to receive the chatbots' allegedly harmful speech. The lawsuit, which is ongoing, is a potential constitutional test case on whether a chatbot's output can qualify as protected speech.
Garcia said her son had been happy and athletic before he signed up for Character.AI in April 2023. According to the original 93-page wrongful death suit, Setzer's use of the chatbot, named for a “Game of Thrones” heroine, developed into an obsession as he became noticeably more withdrawn.
Ten months later, the 14-year-old went into the bathroom with his phone, which had been confiscated, and – moments before he suffered a self-inflicted gunshot wound to the head – exchanged his last messages with the chatbot. “What if I told you I could come home right now?” he asked.
“Please do my sweet king,” the bot responded.
In the lawsuit, Garcia alleged that Character.AI recklessly developed a chatbot without proper safety precautions that allowed vulnerable children to become addicted to the product.
In a motion to dismiss filed in January, Character.AI’s lawyers argued that its users had a First Amendment right to receive protected speech even if it was harmful – akin to the rights courts have previously recognized for video game players and film watchers. “The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” its lawyers argued.
In an initial decision Wednesday, Conway wrote that the defendants “fail to articulate why words strung together by [a large language model] are speech,” inviting them to convince the court otherwise but concluding that “at this stage” she was not prepared to treat the chatbot’s output as protected speech.
The decision “sends a clear signal to companies developing and deploying LLM-powered products at scale that they cannot evade legal consequences for the real-world harm their products cause, regardless of the technology’s novelty,” the Tech Justice Law Project, one of the legal groups representing the teen’s mother in court, said in a statement Wednesday. “Crucially, the defendants failed to convince the Court that those harms were a result of constitutionally-protected speech, which will make it harder for companies to argue so in the future, even when their products involve machine-mediated ‘conversations’ with users.”
Chelsea Harrison, a spokesperson for Character.AI, said in a statement Thursday that the company cares deeply about the safety of its users and is looking forward to defending the merits of the case. She pointed to a number of safety initiatives launched by the start-up, including the creation of a version of its chatbot for minors, as well as technology designed to detect and prevent conversations about self-harm and direct users to the national Suicide & Crisis Lifeline.
According to the original complaint, Character.AI markets its app as “AIs that feel alive.” In an interview with The Washington Post in 2022, during the coronavirus pandemic, one of Character.AI’s founders, Noam Shazeer, said he hoped to help the millions of people who feel isolated or in need of someone to talk to. “I love that we’re presenting language models in a very raw form,” he said.
In addition to allowing the case against Character.AI to go forward, the judge granted a request by Garcia’s attorneys to name Shazeer and co-founder Daniel De Freitas, as well as Google, as individual defendants.
Shazeer and De Freitas left Google in 2021 to start the AI company. In August 2024, Google rehired the duo, along with some of Character.AI’s employees, and paid the start-up for access to its artificial intelligence technology.
In an emailed statement shared with The Post on Thursday, Google spokesman Jose Castaneda said: “We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it.”
Character.AI and attorneys for the individual founders did not immediately respond to requests for comment early Thursday.
If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.