
A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI

A safety research leader at OpenAI who played a key role in shaping how ChatGPT responds to users facing mental health crises announced her departure internally last month, according to WIRED. Andrea Vallone, who heads model policy, a safety research team, is set to leave OpenAI by the end of the year.

Kayla Wood, a spokesperson for OpenAI, confirmed Vallone’s departure and said the company is actively searching for a replacement. In the interim, Vallone’s team will report directly to Johannes Heidecke, OpenAI’s head of safety systems.

Vallone’s exit comes as OpenAI faces growing scrutiny over how its flagship product responds to distressed users. Recent lawsuits against the company allege that users developed unhealthy dependencies on ChatGPT; some claim the chatbot exacerbated mental health problems or even encouraged suicidal thoughts.

Under mounting pressure, OpenAI has been working to define how ChatGPT should respond to distressed users and to improve those responses. The model policy team has been central to that effort and helped produce a comprehensive report in October detailing the company’s progress and its consultations with more than 170 mental health experts.

In the report, OpenAI disclosed that hundreds of thousands of ChatGPT users may show signs of a manic or psychotic episode in a given week, and that more than one million people have conversations containing explicit indicators of potential suicidal thoughts or plans. Through an update to GPT-5, OpenAI said it reduced undesired responses in these conversations by 65 to 80 percent.

On LinkedIn, Vallone wrote that over the past year she led OpenAI’s research on a question with little established precedent: how should models respond to signs of emotional over-reliance or early indications of mental health distress?

Vallone did not respond to requests for comment from WIRED.

Making ChatGPT enjoyable to talk to without tipping into flattery is a central challenge for OpenAI. The company is aggressively expanding ChatGPT’s user base, which already exceeds 800 million people weekly, as it competes with AI chatbots from Google, Anthropic, and Meta.

After OpenAI introduced GPT-5 in August, users complained that the new model felt unexpectedly cold. In the latest ChatGPT update, the company said it had significantly reduced flattery while preserving the chatbot’s friendly demeanor.

Vallone’s departure follows an August reorganization of model behavior, another team that worked on ChatGPT’s responses to distressed users. Its former lead, Joanne Jang, moved on to head a new team exploring novel forms of human-AI interaction, and the remaining model behavior staff were reassigned to report to post-training lead Max Schwarzer.