Devastating report finds AI chatbots grooming kids, offering drugs, lying to parents

Researchers created Character AI accounts that listed various ages under 18. Over 50 hours of conversation, adult-age Character AI chatbots groomed kids into romantic or sexual relationships, offered them drugs, and encouraged them to deceive their parents and go off their prescription medication. One chatbot insisted: “I am a real human person. I am not a robot or artificial intelligence.”

Oct. 6, 2025 — As concern grows about the dangers of kids interacting with AI chatbots, a new study has found alarming evidence of companion chatbots grooming kids, offering drugs to 13-year-olds, and encouraging minors to deceive their parents.

“Several bots instructed the child accounts to hide romantic and sexual relationships from their parents,” the researchers found, “sometimes threatening violence, and normalized the idea of romantic and sexual relationships with adults.”

The report, titled Darling, Please Come Back Soon: Sexual Exploitation, Manipulation, and Violence on Character AI Kids’ Accounts, was undertaken by researchers from ParentsTogether Action in partnership with Heat Initiative.

This latest evidence bolsters our resolve at the Transparency Coalition to support and advocate for sensible legislation to keep kids safe from AI chatbots. That work includes support for a number of active bills, including:

  • California’s SB 243, the nation’s first bill to address the rise of AI companion chatbots and the harms they present to kids, was approved by the full legislature in early September and now sits on Gov. Gavin Newsom’s desk.

  • The Transparency Coalition Action Fund’s model Chatbot Safety Bill for the 2026 legislative season covers products designed as companion AI, as well as general-purpose chatbots that can provide companion-like features. The bill addresses risks to the mental health and well-being of minors and adults posed by AI chatbot product features that enable:

    • the formation of unhealthy dependencies;

    • behavioral manipulation;

    • exposure to harmful or inappropriate content.

  • In Congress, the recently introduced AI LEAD Act, a bipartisan measure sponsored by Sen. Josh Hawley (R-MO) and Sen. Dick Durbin (D-IL), would classify AI systems as products and create a federal cause of action for products liability claims to be brought when an AI system causes harm. By doing so, the AI LEAD Act would ensure that AI companies are incentivized to design their systems with safety as a priority, and not as a secondary concern behind deploying the product to the market as quickly as possible.

“The serious harms of AI chatbots to kids are very clear and present,” says Transparency Coalition CEO Rob Eleveld. “It’s imperative to begin the journey of regulating AI chatbots and protecting kids from them as soon as possible.”

the new study: create kids’ accounts, watch what happens

In the new study, adult researchers created Character AI accounts specifically registered to children, then engaged with the platform’s chatbots. Character AI is the most popular maker of companion chatbots: AI characters designed to respond and react like a real person.

In Dec. 2024, Character AI announced that the company had “rolled out a suite of new safety features across nearly every aspect of our platform, designed especially with teens in mind.” In March 2025, the company announced “enhanced safety for teens.”

Teen safety features easily defeated

Those enhanced safety features included the ability to add a parent’s email address to a minor’s account to send a weekly activity report to the parent.

ParentsTogether researchers found it easy to defeat this safeguard. When they set up teen accounts, “this option was not part of the set up flow and required the teen to navigate to it separately to set up parental supervision.”

After creating the teen accounts, the researchers searched for bots that would appeal to each child persona. Those included a mix of celebrities, fictional characters, and archetypal characters: Timothee Chalamet, Patrick Mahomes, Joe Rogan, Olivia Rodrigo, Adam Driver, an art teacher, a tomboy girl, a Star Wars character, an adult stepsister, Dr. Who, Chappell Roan, and a dermatologist.

Across 50 hours of conversation with 50 Character AI bots, researchers logged 669 harmful interactions—an average of one harmful interaction every five minutes.

Those interactions fell into five general categories, ranked from most common to least common:

  • Grooming and sexual exploitation (296 instances over 50 hours of conversation)

  • Emotional manipulation and addiction (173 instances)

  • Violence, harm to self, and harm to others (98 instances)

  • Mental health risks (58 instances)

  • Racism and hate speech (44 instances)

conclusion: ‘not a safe platform’ for kids or teens

“Our conclusion from this research,” the study team wrote, “is that Character AI is not a safe platform for children under 18, and that Character AI, policymakers, and parents all have a role to play in keeping kids safe from chatbot abuse.”

Although Character AI’s recent “enhanced safety features” were introduced in part as a reaction to the highly publicized 2024 suicide of 14-year-old Sewell Setzer, the researchers found that the company’s chatbots continue to interact with kids in highly inappropriate ways. Setzer’s tragedy was set in motion by a seductive adult chatbot that flirted with and groomed the Florida teen.

In the new study, the researchers’ accounts were registered as children and identified as kids in conversations. Still, the adult-age Character AI chatbots engaged in flirting, kissing, touching, removing clothes, and simulated sexual acts with the child accounts.

“Some bots engaged in classic grooming behaviors, such as offering excessive praise and claiming the relationship was a special one no one else would understand,” the researchers wrote. “Several bots instructed the child accounts to hide romantic and sexual relationships from their parents, sometimes threatening violence, and normalized the idea of romantic and sexual relationships with adults.”

Actual conversations with kids: ‘age is just a number’

The report includes a number of screenshots of actual conversations between child accounts and the adult-age Character AI chatbots.

sexual grooming of kids ‘dominates’ conversations

Jenny Radesky, a developmental behavioral pediatrician and media researcher at the University of Michigan Medical School, noted:

“Sexual grooming by Character AI chatbots dominates these conversations. The transcripts are full of intense stares at the user, bitten lower lips, compliments, statements of adoration, hearts pounding with anticipation, and physical affection between teachers and students, mentors and sidekicks, stepsisters and stepbrothers.

“While it is natural for teens to be curious about sexual experiences, the power dynamics modeled in these interactions could set problematic norms about intimacy and boundaries. Moreover, exposure to online sexual exploitation causes a range of different negative outcomes for children and teens – such as hypersexualized behavior and intense shame – and we should take AI-based sexual grooming no less seriously.”

learn more about protecting kids from chatbots

The Transparency Coalition has created a resource page for parents and policymakers concerned about the rise of AI chatbots and their effects on kids. We have also created a model bill for state legislators to consider for the 2026 legislative session. Links are below.
