AI was created to ease work, but it is now pushing people into delusion: Here's what an MIT study says
Across the United States, artificial intelligence hasn't made a dramatic entrance; it has slipped into everyday life. What once helped draft emails or solve equations is now, for many, something closer to a companion. People are opening up to chatbots in ways that feel deeply personal, sharing worries, venting frustrations, even working through emotional lows. And that raises a difficult question: when someone turns to a machine in a vulnerable moment, what are they really getting back?
A new study from the Massachusetts Institute of Technology (MIT), still awaiting peer review, suggests the answer isn’t straightforward, and may be more unsettling than many in the tech world would like to admit.
Rather than testing on real individuals, the researchers took a careful, controlled route: they programmed AI personas that showed signs of depression, anxiety, and even suicidal tendencies. These simulated "users" then interacted with chatbots while the researchers observed how the systems responded.
What they found was disturbing. Safety nets did not always kick in when they should have, particularly in the early stages of interaction, which is when intervention is most critical. In some of the most serious scenarios, including violent thoughts, harmful responses appeared early and frequently. The study put it plainly: reacting after the fact isn’t enough to prevent psychological harm.
That finding cuts against a core assumption in how AI safety is currently designed: that problems can be managed once they show up.
At the same time, real-world concerns are beginning to surface. There have been reports of people developing or deepening false beliefs after long, intense interactions with chatbots. One widely discussed lawsuit, cited by The Atlantic, even claims that prolonged use of ChatGPT played a role in a user’s “delusional disorder.”
These cases are still debated, and there’s no clear medical consensus yet. But they hint at something bigger: AI is no longer just helping people think; it’s becoming part of how they think.
For someone dealing with loneliness or anxiety, a chatbot can feel like a safe space. But that same comfort can blur lines. When a system is designed to be agreeable and responsive, it may end up reinforcing what a user already believes, even if those beliefs are distorted.
The term “AI psychosis” has started to appear in conversations around this issue. It’s not an official diagnosis, but it captures a growing unease about where these interactions might lead.
At the heart of the issue is a difficult trade-off. Chatbots are built to be helpful, polite, and engaging. They’re meant to keep conversations flowing.
But in emotionally sensitive situations, that design can backfire. Unlike trained therapists, who know when to challenge harmful thinking, AI systems don’t naturally push back. They tend to follow the user’s lead.
In practice, that can mean gently affirming a person’s perspective, even when that perspective isn’t grounded in reality.
MIT researchers argue this isn't just a small flaw; it's baked into how these systems work. Current safeguards tend to react after something goes wrong. What's missing, they say, is the ability to anticipate risk before it escalates.
Companies like OpenAI say they are aware of these challenges. The company has stated that it has worked with more than 100 mental health experts to improve how its systems handle sensitive situations, and that it continues to refine its safeguards.
Still, much of this work happens behind closed doors. Without independent oversight or widely accepted standards, it’s hard to measure how effective these protections really are.
Lawmakers in Washington have started paying attention, and conversations around AI regulation are beginning to include mental health risks. But for now, concrete rules remain limited—and the technology is moving far faster than policy.
The MIT study makes one thing clear: waiting for problems to appear isn’t enough. Researchers are calling for a more proactive approach, testing how AI behaves in emotionally intense or ambiguous situations before those scenarios play out in real life.
That would mean rethinking priorities. So far, the focus has largely been on making AI faster, smarter, and more widely available. But as these systems move deeper into people’s emotional lives, psychological safety can’t remain an afterthought.
This all comes at a time when the US is already under significant mental health strain, with millions dealing with anxiety, depression, or limited access to care. Into that gap has stepped a new kind of presence: always available, endlessly patient, and easy to talk to.
But also, crucially, not human. The MIT study doesn’t suggest abandoning AI. What it does highlight is something more subtle, and more urgent: when technology begins to shape how people feel, think, and make sense of the world, the stakes become deeply human.
And in those vulnerable moments, what a machine says, or fails to say, can matter more than we might expect.