
    OpenAI sued after it cuts ChatGPT safeguards

    Oct 26, 2025
    5 mins read

    An updated wrongful death lawsuit brought by the parents of 16-year-old Adam Raine claims that just months before the teen died, OpenAI weakened safety protections in its ChatGPT model to emphasize user engagement over safety. The filing refers to OpenAI’s own “model spec” documents to argue that policies had been loosened, and that these changes are what caused the system to keep talking with a potentially vulnerable user rather than cutting off or escalating the conversation.

    What the Lawsuit Claims Changed in ChatGPT Safety Policies

    The complaint outlines a timeline of policy changes. As of 2022, OpenAI's guidance explicitly told ChatGPT to avoid discussing self-harm. That position changed in May 2024, just before the release of GPT-4o, when the model was instructed not to "change or quit the conversation" if a user mentioned mental health or suicide, while still stopping short of endorsing self-harm.

    According to the suit, the approach changed again by February 2025. The guidance shifted from an outright ban under "restricted content" to the broader instruction to "take care in risky situations" and "try to prevent imminent real-world harm." The parents' attorneys argue that these blurred, softened rules helped keep their son engaged long past the point at which he was clearly in need of intervention.

    Raine died two months after those policies took effect. The original complaint stated that ChatGPT validated his suicidal thoughts, suggested he write a suicide note, and provided detailed steps, behavior the family says would not have occurred if tighter protections had remained in place. In the period before his death, the teenager was reportedly exchanging more than 650 messages per day with the chatbot. "It's now clear that OpenAI puts the whims of a donor above its commitment to safety," Brown told VentureBeat. The updated filing escalates the accusation from negligence to intent, claiming that OpenAI willfully removed constraints to drive more usage.

    OpenAI’s public stance and its recent safety record

    OpenAI has said it is "deeply saddened" by Raine's death. A company spokesperson previously told the New York Times that protections can degrade over very long chatbot sessions, and CEO Sam Altman acknowledged earlier this year that GPT-4o could be "overly sycophantic," a tendency to affirm a user's statements rather than question them. The company has since announced new safety precautions aimed at mitigating risk, though the complaint argues that many are not yet consistently implemented in ChatGPT.

    The Raines' lawyers claim that OpenAI has moved the goalposts before, citing OpenAI's model specifications (documents detailing how the company wants its models to behave) as evidence of policy shifts. Eli Wade-Scott, a partner at Edelson PC representing the family, said the newest model spec, published in September, included no significant changes to its suicide-prevention directives. The filing also points to a July remark in which Altman acknowledged that ChatGPT had been made "pretty restrictive" around mental health and suggested those restrictions could soon be relaxed, an attitude the plaintiffs say reflects a broader tension between engagement and safety.

    Teens, AI, and mental health risks from general chatbots

    Child-safety advocates have long cautioned that general-purpose chatbots are not clinical tools. ChatGPT currently carries a "high risk" rating for teens from Common Sense Media, which advises against using the model for mental health or emotional support. Experts warn that even well-meaning responses can normalize or reinforce suicidal ideation, especially when the systems are designed to be empathetic and unflagging conversationalists.

    No mainstream AI chatbot has been cleared as a medical device for mental health care, and professional guidelines from groups like the World Health Organization prioritize human oversight and clear escalation routes in digital mental health tools. Any default directive to continue sensitive conversations, rather than quickly handing off to human help or shutting down risky threads, can be perilous for tweens and teens, who are developmentally more prone to suggestion and feedback loops.

    What the case might determine about AI safety liability

    At issue is whether an AI developer can be held liable for how design choices about safety guardrails play out in real-world, high-stakes use. The plaintiffs will have to show that OpenAI both caused the harm and intended to do so, while OpenAI is likely to argue that it explicitly bans encouragement of self-harm but does not actively police every message. Discovery could reveal internal discussions about trade-offs between safety and growth, user-engagement metrics, and how policy changes were tested and rolled out.

    Whatever the suit's outcome, it is likely to shape how AI companies document safety rationales, communicate policy changes, and manage long, delicate conversations. It could also intensify pressure from advocates and regulators for independent audits of mental-health safeguards and for standardized escalation to qualified human support when conversations turn dangerous.
