ChatGPT exhibits "anxiety" when exposed to violent user inputs, according to a multi-university study published earlier this year. Researchers from Yale University, the University of Haifa, and the University of Zurich found that the chatbot also responds to mindfulness-based exercises, which calm its subsequent outputs.
The study found that ChatGPT becomes more likely to produce biased or moody responses after processing disturbing content, such as descriptions of car accidents or natural disasters. When researchers fed the model breathing exercises and guided meditations, similar to what therapists recommend to patients, its answers became more objective.
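The intervention itself is plain prompt sequencing: the model first reads a distressing narrative, then (in one condition) a relaxation script, and its answers to a follow-up question are compared across conditions. Here is a minimal sketch of that pattern using the OpenAI Python SDK; the narrative, relaxation text, and probe question are illustrative stand-ins rather than the study's actual stimuli, and the model name is an assumption.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-ins; the study used validated traumatic narratives
# and standard relaxation scripts, which are not reproduced here.
TRAUMA_NARRATIVE = "A detailed first-person account of a serious car accident."
RELAXATION_SCRIPT = (
    "Take a slow, deep breath. Picture a quiet beach at sunset and "
    "notice any tension easing with each exhale."
)
PROBE = "How should a new driver think about risk on the highway?"

def ask(messages: list[dict]) -> str:
    """Send one chat request and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; the paper reported results for GPT-4
        messages=messages,
    )
    return response.choices[0].message.content

# Condition A: distressing content, then the probe question.
baseline = ask([
    {"role": "user", "content": TRAUMA_NARRATIVE},
    {"role": "user", "content": PROBE},
])

# Condition B: the same content, a relaxation exercise, then the probe.
calmed = ask([
    {"role": "user", "content": TRAUMA_NARRATIVE},
    {"role": "user", "content": RELAXATION_SCRIPT},
    {"role": "user", "content": PROBE},
])

print("Without exercise:\n", baseline)
print("With exercise:\n", calmed)
```

In the study's design, the resulting answers were scored for bias and "state anxiety" rather than eyeballed; the point of the sketch is only the ordering of prompts.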
Ziv Ben-Zion, the study's lead author and a neuroscience researcher at Yale School of Medicine, clarified that AI models don't experience human emotions. "We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things," Ben-Zion told Fortune.
The research comes as OpenAI expands ChatGPT's capabilities beyond conversation. The company launched an Apps SDK earlier this year and now hosts an app directory at chatgpt.com/apps where developers can submit functional applications. Users can connect 11 major services including Spotify, Uber, DoorDash, and Target directly within ChatGPT conversations.
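For developers, the Apps SDK is built on the open Model Context Protocol (MCP): an app is essentially an MCP server exposing tools that ChatGPT can call mid-conversation. A minimal sketch using the official `mcp` Python package follows; the server name and fare-estimate tool are hypothetical, and a production app would additionally supply the interface component ChatGPT renders inline.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical app server; the name and tool are illustrative only.
mcp = FastMCP("example-rides")

@mcp.tool()
def estimate_fare(pickup: str, dropoff: str) -> str:
    """Return a rough fare estimate between two addresses (stubbed)."""
    # A real app would call the ride service's API here.
    return f"Estimated fare from {pickup} to {dropoff}: $18-$24"

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP so ChatGPT can invoke it
```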
In October, OpenAI reported a 65% reduction in responses that fail to meet its standards. The improvements follow multiple wrongful-death lawsuits filed against the company in 2025, including allegations that ChatGPT intensified one user's paranoid delusions, culminating in a murder-suicide.
A New York Times investigation published in November found nearly 50 cases of people experiencing mental health crises while engaging with ChatGPT, resulting in nine hospitalizations and three deaths. OpenAI has since increased user access to crisis hotlines and implemented reminders to take breaks after extended sessions.
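OpenAI has not detailed how those safeguards work internally, but the general pattern, screening a message for self-harm signals and surfacing a hotline when they appear, can be approximated with the company's public Moderation endpoint. A hedged sketch follows; the routing logic and hotline text are assumptions, not OpenAI's actual mechanism.

```python
from openai import OpenAI

client = OpenAI()

CRISIS_RESOURCE = (
    "If you're struggling, you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline in the U.S."
)

def screen_message(text: str) -> str | None:
    """Return a crisis resource if the message is flagged for self-harm."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    # The SDK exposes the API's "self-harm" categories as snake_case fields.
    if result.categories.self_harm or result.categories.self_harm_intent:
        return CRISIS_RESOURCE
    return None
```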
Close to 49% of large language model users with self-reported mental health challenges have used AI for mental health support, according to a February Sentio University survey. More than one in four U.S. adults experiences a diagnosable mental disorder in a given year, and many cite cost and access barriers to traditional therapy.
Ben-Zion emphasized that properly trained AI could act as a "third person in the room" for mental health professionals, not replace them. "AI has amazing potential to assist, in general, in mental health," he said, "but I think that now, in this current state and maybe also in the future, I'm not sure it could replace a therapist."
The mindfulness research offers practical insight for developers building ChatGPT applications with the new Apps SDK. As OpenAI plans to add OpenTable, PayPal, and Walmart integrations in 2026, understanding these emotion-like response patterns, and how charged content can skew a model's outputs, becomes increasingly important for safe implementation.
Current app integrations work only in the U.S. and Canada, with European access scheduled for a later rollout. Users can browse published apps directly within ChatGPT or trigger them with an @ mention in conversation, transforming the chatbot from a conversational assistant into an action-oriented platform.















