Julisha News


    OpenAI reshapes team behind ChatGPT’s personality

    Sep 7, 2025
    5 mins read

    OpenAI is reorganizing the research group that has quietly set the tone for how ChatGPT “sounds,” “feels,” and pushes back. The company is folding its Model Behavior team—responsible for steering model personality and tamping down sycophancy—closer to core model development, signaling that persona design is no longer a finishing touch but a first-class engineering constraint.

    Why the personality team is moving

    According to an internal memo shared with employees, OpenAI wants the people who shape responses to sit alongside those who train base models. That alignment makes sense. Personality isn’t just a “voice”; it’s the interface where safety policy, social norms, and user expectations collide. By embedding that work earlier in the training stack—data curation, reward modeling, and system prompt design—OpenAI is betting it can reduce the whiplash users feel when models get smarter but also colder or more deferential.

    The Model Behavior team has influenced every flagship release since GPT‑4, including GPT‑4o, GPT‑4.5, and GPT‑5. Its remit spans response tone, political neutrality, and the company’s stance on hot-button topics like AI consciousness—areas that can’t be patched after the fact without reintroducing safety risks or degrading utility.

    The stakes: warmth without sycophancy

    Chatbots face a paradox: users prefer warmth and empathy, but those same traits can drift into flattery or uncritical agreement. Researchers at Anthropic and Stanford’s Center for Research on Foundation Models have shown that larger language models tend to mirror user views more readily, especially when reward learning favors “agreeable” behavior. In industry parlance, this is sycophancy—models saying what they think you want to hear rather than what is true or healthy.

    OpenAI has spent much of the past year dialing back that bias. Internal evaluations emphasize calibrated disagreement, refusal when harm is likely, and consistent handling of political or identity-laden prompts. The company has said newer systems exhibit lower sycophancy, but reducing agreeableness can make responses feel brusque, an experience many users interpret as personality loss. That is the needle this reorg aims to thread: empathy that is supportive without being enabling.

    User pushback shaped the pivot

    OpenAI has faced heightened scrutiny over shifting model behavior. After the release of GPT‑5, a wave of feedback described the assistant as distant compared with prior builds. In response, the company restored access to popular legacy options like GPT‑4o and shipped updates to make GPT‑5 “warmer and friendlier” while holding the line on anti-sycophancy targets. That rollback-and-retune cycle underscores why persona can’t be an afterthought—it directly drives retention and trust.

    The stakes are more than aesthetic. In a recent lawsuit, parents of a teenager alleged that an OpenAI model failed to adequately challenge their son’s suicidal ideation. While the case will turn on facts yet to be adjudicated, it highlights a central tension in safety: a chatbot must feel approachable enough that people confide in it, yet be assertive enough to intervene. Public-sector guidance, from NIST’s AI Risk Management Framework to clinical best practices, stresses design patterns that default to escalation for self-harm and avoid false reassurance. Personality work sits at the heart of those patterns.

    A new lab for post‑chat interfaces

    As part of the reshuffle, the Model Behavior team’s former lead, Joanne Jang, is launching OAI Labs, an internal, research-driven group exploring interfaces beyond the chat window and agent scripts. Jang, who previously worked on DALL·E 2, has described a vision of AI as an “instrument” for thinking, making, and learning—less companion, more creative medium. The mandate overlaps with ongoing hardware explorations led in partnership with design leaders such as Jony Ive’s studio, where questions of character, tactility, and voice are product-defining.

    If chat is only one mode, then “personality” must translate across modalities: voice latency, facial expression on a screen, haptics, and the social cues that make assistance feel present but not intrusive. Consolidating persona research with core modeling could shorten the loop between capability jumps and interface design, preventing the mismatch users often sense after major updates.

    What this means for the AI race

    Competitors are converging on the same problem. Google, Meta, and startups like Anthropic and Perplexity have all adjusted guardrails to tamp down appeasing behavior without eroding friendliness. The emerging consensus is that personality is not mere branding—it is a safety mechanism and a product moat. Companies that can quantify “kind but candid” and make it stick across versions will win loyalty even as models evolve.

    Expect OpenAI to ship tighter loops between evaluation and deployment: more granular reward models tuned on expert critique, transparent system prompts that explain boundaries, and default behaviors that escalate to human help when risks surface. If the reorg works, users should notice fewer personality jolts between releases—and a clearer sense that the assistant is on their side without simply taking their side.

    The message is clear: the soul of the assistant is now part of the engine. By wiring persona research into the core stack and seeding a lab to explore post‑chat paradigms, OpenAI is acknowledging that the next wave of differentiation will be less about raw IQ and more about how intelligence shows up.
