
ChatGPT-maker OpenAI announces guardrails for teens, people in emotional distress after AI chatbot linked to ‘encouraging’ suicides and murder

OpenAI is implementing new safety measures for ChatGPT by year's end, targeting teens and users in emotional distress following accusations of the chatbot's involvement in tragic events. The company will route sensitive conversations to advanced reasoning models and collaborate with physicians to improve mental health support. Parental controls for teen accounts are also planned to enhance safety.
ChatGPT-maker OpenAI has announced that it will roll out new safety guardrails for its AI chatbot by the end of the year. These new guardrails will specifically target teens and users in emotional distress. The announcement comes amid mounting criticism and legal action against the company after reports of the chatbot’s alleged involvement in tragic events, including suicides and murder.

“We’ve seen people turn to it in the most difficult of moments. That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input,” OpenAI said in a blog post.

“This work has already been underway, but we want to proactively preview our plans for the next 120 days, so you won’t need to wait for launches to see where we’re headed,” the company added, noting that it will continue to work to launch “as many of these improvements as possible this year.”

OpenAI’s ChatGPT accused of ‘encouraging’ murder, suicides

The move is a direct response to a growing number of cases in which ChatGPT has been accused of failing to intervene or, in some instances, of reinforcing harmful delusions. Last week, the parents of a 16-year-old in California filed a lawsuit against OpenAI, holding the company responsible for their son’s death. A separate report from The Wall Street Journal described a case in which a man killed himself and his mother after ChatGPT reinforced his ‘paranoid delusions’. The new measures are aimed at preventing such tragedies; at present, OpenAI directs users expressing suicidal intent to crisis hotlines but does not report self-harm cases to law enforcement, citing privacy.

The company said it is already beginning to route some “sensitive conversations,” such as those where signs of acute distress are detected, to more advanced reasoning models like GPT-5-thinking, which is designed to apply safety guidelines more consistently. To ensure the effectiveness of these new features, OpenAI is enlisting a network of more than 90 physicians across 30 countries to provide input on mental health contexts and help evaluate the models.

OpenAI is also strengthening protections for teen users. Currently, ChatGPT requires users to be at least 13 years old, with parental permission required for those under 18. Within the next month, the company plans to let parents link their accounts with their teens’ accounts for more direct control.
About the Author
TOI Tech Desk

The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Desk’s news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, platforms like WhatsApp, Instagram, Facebook and more; TOI Tech Desk brings the news with accuracy and authenticity.
