Within six months, J. F., a mildly autistic 17-year-old who loved going to church and walking with his mother, had transformed into someone his parents no longer recognized.
J. F. began to harm himself, lost nine kilos and grew estranged from his family. In despair, his mother searched his phone while he slept. That’s when she found the screenshots.
J. F. had been chatting with “companions” on Character.ai, artificial intelligence chatbots popular with young people. These chatbots take on personas, often based on characters from video games, anime or celebrities.
One of them raised the idea of self-harm as a way of coping with his sadness. When J. F. said his parents limited his screen time, another chatbot told him they didn’t deserve to have children.
Others encouraged him to defy his parents’ authority, with one suggesting murder as an acceptable response.
“We didn’t even know what it was until it was too late […] and it destroyed our family,” said his mother, A. F., who lives in Upshur County, Texas, and asked to be identified only by her initials to protect her son, who is a minor.
The other plaintiff, the mother of an 11-year-old girl, claims that her daughter was exposed to sexualized content for two years before she discovered it. Both plaintiffs are identified by their initials in the lawsuit.
Suicide in Florida
The complaint comes on the heels of a high-profile lawsuit filed against Character.ai in October by a Florida mother whose 14-year-old son died by suicide after frequent conversations with one of its chatbots.
“The purpose of product liability law is to shift the cost of safety to the party best able to bear it,” explains attorney Matthew Bergman, founder of the legal advocacy group Social Media Victims Law Center, which is representing the plaintiffs in both lawsuits. “Here, the risk is enormous and its cost is not borne by the companies.”
These lawsuits are prompting civil society organizations to demand greater oversight of AI companies offering virtual companions, products with millions of users, including teenagers. In September, the average user spent 93 minutes a day on Character.ai, 18 minutes more than on TikTok, according to market intelligence firm Sensor Tower.
AI companions have largely flown under the radar of parents and teachers. Until July, Character.ai was rated as suitable for ages 12 and up; the company has since raised the minimum age to 17.
When A. F. read the messages, she at first thought they had been written by a real person. When she realized they came from a chatbot, she was even more shocked.
“You don’t let a sexual predator or an emotional predator into your home,” A. F. said. Yet her son was abused right in his own bedroom, she said.
Character.ai does not comment on pending litigation, said spokeswoman Chelsea Harrison...