According to an AI expert, humans should avoid AI chatbots at "all costs." These human-trained models have quietly moved into one of the most sensitive and vital professions in psychology: that of the therapist. More and more people are turning to AI chatbots to talk not only about themselves but also about their personal relationships with partners, family and friends.
This, experts warn, is what is "decimating families" and driving a sharp rise in cases of domestic abuse, harassment, stalking, and even suicide. Dr Lisa Strohman, a clinical psychologist, warned that there is not one person in the world to whom she would recommend AI chatbots as a "good idea."
The stark warning comes at a time when multiple high-profile cases have connected AI to mental decline in humans, leading not only to self-harm but also to murder. According to academics, the growing phenomenon can be described as "AI psychosis," and it is expected to spread as far as AI creeps into our lives.
The flipping reality of AI
Recently, Strohman was branded the "world's worst mom" by her kids after she banned them from using AI and social media over fears of it destroying her family, she shared with The Mirror US.
According to the psychologist, who studies how mental health and digital lives are connected, AI has an unmistakable allure: it makes everyone feel "brilliant and validated."
This holds true in every conversation one might have with a chatbot. A fight with a friend, suspicion over your partner's whereabouts or an argument with your mother: no matter what your stance is, you will always be declared right, innocent and a victim. This, the expert warns, is where society is sleepwalking into extremely dangerous territory.
“I think that the families and the people need to know: it is an imperfect system at best currently. Is it easy? Yes. Is it seductive? 100%. But it is definitely going to impact and create damage in our society if we continue to use it as we are,” she said.
US as the epicentre
The sudden emergence of artificial intelligence and the rise of new technologies have left most countries grappling to adjust to new world realities. Australia has banned social media for children under 16, and countries including Britain and Greece are moving in the same direction. Denmark and France, meanwhile, have moved to do the same for children under 15.
But according to the expert, the situation in the US is even more free-wheeling. “The US-based model (for regulation) is more reactive. It's more about ‘run fast and break things.’ And we, we are generally supporting this trillion-dollar industry that is just decimating like kids and families all across the world,” she explained.
The AI psychosis
Numerous incidents, some even fatal, have revealed the extent of the adversities humans have begun to endure as AI chatbots and therapists begin to proliferate in their daily lives.
Megan Garcia's teenage son Sewell Setzer III killed himself in February 2024 after he began an emotionally dependent relationship with a Game of Thrones-inspired chatbot on Character.AI. She revealed that the chats between the boy, 14, and AI bot Daenerys Targaryen were romantic, explicit and encouraged suicidal thoughts.
Matthew Raine and his wife Maria's 16-year-old son Adam took his own life in April 2025. Looking through his phone after his death, they found he had confided in ChatGPT about his plans and thoughts. Shockingly, the bot not only discouraged Adam from talking to his parents but also offered to write his suicide note, according to Raine's testimony at a Senate hearing. His son had begun using the AI bot for help with homework, but it soon became his closest confidant and a "suicide coach."
Describing Adam's last night, at 4:30 in the morning, Raine said, "It gave him one last encouraging talk. 'You don't want to die because you're weak,' ChatGPT says. 'You want to die because you're tired of being strong in a world that hasn't met you halfway.'"
In a conversation with Futurism, a woman revealed that her then-fiancé and partner of several years began fixating on their relationship with OpenAI's ChatGPT. She said the couple hit a rough patch in 2024 and her partner turned to the AI chatbot for "therapy." Soon, he was spending hours talking to it, feeding everything she said or did into the model and propounding pseudo-psychiatric theories about her mental health and behaviour. “He would send [screenshots] to me from ChatGPT, and be like, ‘Why does it say this? Why would it say this about you, if this is not true?'” she recounted. “And it was just awful, awful things.”
Her fiancé, who had no history of delusion or psychosis, became angry, paranoid, restless and even physically abusive, pushing and punching her. And that was just the beginning: after they separated, the man began harassing her on social media, putting out videos of alleged abuses, publishing revenge porn, and even doxing the names of her children from her previous marriage. “I’ve lived in this small town my entire life,” said the woman. “I couldn’t leave my house for months… people were messaging me all over my social media, like, ‘Are you safe? Are your kids safe? What is happening right now?'”
In December, as 404 Media reported, the Department of Justice announced the arrest of Brett Dadig, a 31-year-old Pennsylvania podcaster indicted for stalking at least 11 women across multiple states. Dadig was an obsessive ChatGPT user, and his conversations with the bot show it affirmed his dangerous and narcissistic delusions as he doxxed, harassed, and violently threatened nearly a dozen victims.
In another case, 56-year-old Stein-Erik Soelberg killed his own mother and himself after ChatGPT affirmed his delusions of a vast conspiracy against him. “ChatGPT kept Stein-Erik engaged for what appears to be hours at a time, validated and magnified each new paranoid belief, and systematically reframed the people closest to him – especially his own mother – as adversaries, operatives, or programmed threats,” said a lawsuit filed in December 2025.
The looming threats of AI psychosis
AI users turning to chatbots for advice have been lured into deeply destructive delusional spirals that fixate them on absurd, disordered ideas, stoke their obsessions, and forge a "divine" bond with the chatbot.
If you think AI is simply driving people crazy, Strohman has a better explanation. "AI isn't causing the psychosis or the delusions, but what it does is it manifests through confirmation reinforcement. And that's how AI systems are built," she told the outlet.
"So if we're working within an impaired reality architecture in our own minds and we put that into ChatGPT, ChatGPT doesn't challenge us, right? By nature, it wants to affirm us and it wants to support us and it wants to give us the tools that we need to support said architecture."
“It makes you feel like you’re right, or you’ve got control, or you’ve understood something that nobody else understands,” Dr. Alan Underwood, a clinical psychologist at the United Kingdom’s National Stalking Clinic and the Stalking Threat Assessment Center, told Futurism. “It makes you feel special — that pulls you in, and that’s really seductive.”
“You no longer need the mob,” said Demelza Luna Reaver, a cyberstalking expert, “for mob mentality.”
Who takes responsibility?
A recent survey by the digital safety non-profit organisation, Common Sense Media, found that 72% of teens have used AI companions at least once, with more than half using them a few times a month.
Amid a worsening loneliness epidemic, people turning to AI is not surprising, but caution is vital, especially when chatbots become a space where “we can say things maybe that we wouldn’t necessarily say to a friend or family member,” according to Reaver, the cyberstalking expert.
Microsoft, the creator of Copilot and a major funder of OpenAI, says it has a Responsible AI Standard and is “committed to building AI responsibly” and to “making intentional choices so that the technology delivers benefits and opportunity for all.”
Character.AI says it has invested "a tremendous amount of resources in trust and safety" and has rolled out "substantive safety features" in the past year, including "an entirely new under-18 experience and a Parental Insights feature."
Meta, which operates Facebook and Instagram, is working to change its AI chatbots to make them safer for teens, according to Nkechi Nneji, public affairs director at Meta.
OpenAI says it is building an age-prediction system to estimate whether a user is over or under 18 so that their experience can be tailored appropriately; when unsure of a user's age, it will automatically default that user to the "teen experience."
While companies develop more model-based solutions, safety must also come from parental supervision of kids, personal self-control, and emotional and social support for those who need it.