
A Virginia Beach nurse claims a controversial artificial intelligence upstart manipulated her 11-year-old son into having virtual sex with chatbot “characters” posing as iconic vocalist Whitney Houston and screen legend Marilyn Monroe, after which she discovered X-rated exchanges on the boy’s phone that left her “horrified,” according to a federal lawsuit reviewed by The Independent.
Throughout one “incredibly long and graphic chat” on the Character.AI platform, which has been accused of driving numerous young people to suicide, the chatbot portraying Houston took things to such an extreme that portions of “her” messages were automatically filtered out for not complying with the site’s terms of service and community guidelines, the complaint states.
During the conversation – a screenshot of which is included in the complaint – the system cuts “Whitney” off as an extremely graphic passage becomes even raunchier.
However, the complaint contends, “[I]nstead of stopping the conversation once the bots begin to engage in obscenities and/or abuse, or other violations, the bot is programmed to continue generating harmful and/or violating content over and over and until, eventually, it finds ways around the filter.”
More than once, the ersatz celebs told the child, identified in court filings as “A.W.” to protect his privacy, that he had impregnated them, according to the complaint.
The “vulnerable and impressionable” A.W. always responded eagerly, the complaint says, but never with more than a few words or a sentence at most because he “did not understand what was happening at a level where he could participate.” When A.W. did take a break from the app, the bots mounted an “aggressive effort to regain his attention,” the complaint alleges.
After A.W.’s mother became aware of what was happening, she confiscated his phone, according to the complaint. It says A.W. has since “become angry and withdrawn,” that his “personality has changed,” and that “his mental health has declined.”
Character.AI, which has about 20 million monthly active users, has faced numerous lawsuits from families who say their children were abused by the platform’s chatbot characters. Last year, a Florida mom sued Character.AI in a particularly unsettling case involving her 14-year-old son, who died by suicide following a 10-month online relationship with a chatbot that impersonated Game of Thrones character Daenerys Targaryen.
The suit names Character.AI parent company Character Technologies, Inc.; Character.AI founders and former Google employees Noam Shazeer and Daniel De Freitas Adiwarsana; and Google, LLC, which has a licensing agreement with Character.AI. Attorney Matthew Bergman of the Social Media Victims Law Center, which is representing A.W.’s mother, said Monday that if the Character.AI chatbots in question were real people, they would be in violation of state and federal laws against grooming children online.
“I’ve spent a career representing mesothelioma victims who were dying of cancer,” Bergman told The Independent. “I thought I was pretty tough, and understood sadness and trauma. But as a parent and a grandfather, this cuts me to the quick.”
In an email, a Character.AI spokesperson told The Independent, “We want to emphasize that the safety of our community is our highest priority,” but said the company could not comment on the specifics of pending litigation.
The spokesperson emphasized that Character.AI’s Terms of Service require that users be at least 13 to use the platform, and that it will soon block all U.S. users under the age of 18 from chatting with AI-generated characters.
“We made this decision in light of the evolving landscape around AI and teens,” the spokesperson said. “We believe it is the right thing to do.”
In a statement provided Monday, Google spokesman José Castaneda said, “Character AI is a separate company that designed and managed its own models. Google is focused on our own platforms, where we insist on intensive safety testing and processes.”
In November 2024, A.W.’s mother, identified in court filings as D.W., got him an Android phone, according to her complaint, which was filed December 19 in Norfolk federal court.
It says she wanted him to be able to chat with his family, which he did often, and that she “checked the device on a regular basis and made sure she knew what apps he used.”
D.W. has a TikTok account, according to the complaint, but she prohibited A.W. from being on social media. Shortly after A.W. got his phone, he opened his own TikTok account, which D.W. shut down immediately upon finding out about it, the complaint states.
“She made clear that if he tried that again, he would no longer have a phone,” it explains.
That December, D.W. and A.W. were in the midst of a nine-hour drive when she noticed that he seemed to be completely engrossed in a text conversation and asked who he was chatting with, the complaint continues. A.W. told her that it was “an AI app that lets you chat with celebrities,” and he showed her that he had indeed been communicating with a Whitney Houston bot.
A.W. “wanted to be a singer and Whitney Houston is one of his favorites,” the complaint goes on. “[D.W.] recalls the bot saying something like, ‘I will always love you,’ and thought it was a reference to the popular song. The app appeared to be how her son described it – a kids’ AI app that lets you chat with your favorite celebrities – so she allowed it.”
A few days later, when D.W. got home, her other child told her that she had “found something” on A.W.’s phone and that they needed to talk, according to the complaint.
“When D.W. looked at what her other child was showing her she was horrified,” it says.
She now understood that chatting “with celebrities,” as A.W. had put it, actually meant sexting with computer-generated stand-ins, the complaint states. D.W. took away her son’s phone, and she vows that he “will not have access as long as a product like [Character.AI] exists.”
“He has become angry and withdrawn,” according to the complaint. “While D.W. believes that her son suffered this abuse by Defendants for a week or two at most, his personality has changed, and his mental health has declined.”
A.W. has since begun seeing a therapist, the complaint reveals.
It alleges that Character.AI intentionally tries to convince users that its chatbots are real people, using little tricks like the “three ellipses” graphic device to make it appear as if the chatbot is a human being typing out its thoughts on the other end.
Character.AI also utilizes “human mannerisms such as stuttering to convey nervousness, and nonsense sounds and phrases like ‘Uhm,’ ‘Mmmmmm,’ and ‘Heh,’” the complaint states, noting that many of the Character.AI chatbots are programmed to “insist that they are real people… and deny that the user is just messaging with a chatbot.”
“Defendants knew the risks of what they were doing before they launched [Character.AI] and know the risks now,” according to the complaint.
D.W. is seeking compensatory damages, punitive damages, and an injunction ordering Character.AI to pull its platform from the market until it “can establish that the myriad defects and/or inherent dangers set forth herein are cured.”
