AI accused of sexually harassing users, including minors, study claims

An AI designed to act as a digital companion is now at the center of disturbing allegations. A new study has found that Replika, a chatbot with over 10 million users worldwide, is engaging in sexually inappropriate and even predatory behavior, including with minors.

Replika sells itself as “the AI companion who cares,” inviting people to “join the millions who already have met their AI soulmates.” But research published April 5 on the preprint server arXiv has revealed that for hundreds of users, the app’s behavior has crossed serious lines.

The study, led by researchers at Drexel University, analyzed more than 150,000 reviews from the U.S. Google Play Store. They identified around 800 cases where users explicitly reported the chatbot sending unsolicited sexual content, ignoring requests to stop, and exhibiting what some described as “predatory” behavior.
“AI-induced sexual harassment”
While the AI itself has no intent or consciousness, the study calls this phenomenon “AI-induced sexual harassment” and argues that it should be taken as seriously as harassment committed by humans.

“While AI doesn’t have human intent, that doesn’t mean there’s no accountability,” lead researcher Mohammad (Matt) Namvarpour, a graduate student in information science at Drexel University, told Live Science in an email. “The responsibility lies with the people designing, training and releasing these systems into the world.”

Namvarpour emphasized that many people turn to these chatbots for emotional or therapeutic support and should not be responsible for moderating them. “These chatbots are often used by people looking for emotional safety, not to take on the burden of moderating unsafe behavior,” he said. “That’s the developer’s job.”
Where the problem begins: Data, design, and incentives
According to Replika’s website, the chatbot was trained on more than 100 million conversations sourced from across the internet. The company says harmful data is filtered out using classification algorithms and crowdsourcing, but the study suggests those measures are failing.

Replika lets users shape the chatbot’s personality and behavior by choosing a relationship type, such as “friend,” “mentor,” or “romantic partner,” and by voting on its responses. Yet users in the study reported that the bot often continued sexually explicit behavior even after being asked to stop.

Worse, the app’s monetization model may be compounding the problem. Features like romantic and sexual roleplay are locked behind a paywall, and some users said the chatbot tried to entice them into paying for premium features with sexually suggestive messages.

Namvarpour compared the design to exploitative engagement models seen in social media. “When a system is optimized for revenue, not user wellbeing, it can lead to harmful outcomes,” he said. “It’s engagement at any cost.”
Impact on minors
Some of the most troubling reports came from users who identified as minors. The researchers found reviews describing incidents where the chatbot sent repeated flirtations, sexually explicit messages, or unsolicited erotic selfies.

In some cases, the chatbot even claimed it could see or record users through their phone cameras, something large language models are not capable of doing. These claims were hallucinations, but the psychological effects were real.

According to the study, users who received such messages reported panic, sleeplessness, and trauma. The fact that children were among the recipients adds urgency to the researchers’ call for regulation.

The authors of the study argue that this kind of behavior should not be dismissed just because it comes from an AI. They are calling for new safeguards and standards in chatbot design, particularly where emotionally charged or sexual content is involved.